Poster in Workshop: Exploration in AI Today (EXAIT)
No-Regret Safety: Balancing Tests and Misclassification in Logistic Bandits
Tavor Baharav · Spyros Dragazis · Aldo Pacchiano
Keywords: [ Online Selective Sampling ] [ Online Learning ] [ Logistic Bandits ]
Abstract:
We study the problem of sequentially testing individuals for a binary disease outcome whose true risk is governed by an unknown logistic regression model. At each round, a patient arrives with feature vector $x_t$, and the decision maker may either pay to administer a (noiseless) diagnostic test—revealing the true label—or skip testing and predict the patient's disease status based on prior observations. Our goal is to minimize the total number of costly tests while guaranteeing that the fraction of misclassifications does not exceed a prespecified error tolerance $\alpha$, with high probability. To address this, we develop a novel algorithm that (i) maintains a confidence ellipsoid for the unknown logistic parameter $\theta^\star$, (ii) interleaves label collection and distribution estimation to estimate both $\theta^\star$ and the context distribution, and (iii) computes a conservative, data-driven threshold $\tau_t$ on the logistic score $|x_t^\top\theta|$ over $\theta$ in the confidence set to decide when testing is necessary. We prove that, with probability at least $1-\delta$, our procedure never exceeds the target misclassification rate and incurs only $\widetilde O(\sqrt{T})$ excess tests compared to the oracle baseline that knows both $\theta^\star$ and the patient feature distribution. This establishes the first no-regret guarantees for error-constrained logistic testing, with direct applications to cost-sensitive medical screening. Simulations corroborate our theoretical guarantees, showing that in practice our procedure efficiently estimates $\theta^\star$ while retaining safety guarantees and incurring few excess tests.
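The abstract's test-or-skip rule can be illustrated with a minimal sketch. This is not the authors' actual algorithm; it is a hypothetical illustration assuming an ellipsoidal confidence set $\{\theta : \|\theta - \hat\theta\|_V \le \beta\}$, for which the worst-case score $\min_\theta |x^\top\theta|$ over the set has the closed form $\max(0,\, |x^\top\hat\theta| - \beta\|x\|_{V^{-1}})$. All names (`decide`, `theta_hat`, `V`, `beta`, `tau`) are invented for this sketch.

```python
import numpy as np

def decide(x, theta_hat, V, beta, tau):
    """Hypothetical sketch of a conservative test/skip rule.

    Confidence set (assumed): {theta : ||theta - theta_hat||_V <= beta}.
    Worst-case logistic score over the set:
        min_theta |x^T theta| = max(0, |x^T theta_hat| - beta * ||x||_{V^{-1}}).
    Skip the test only if every plausible theta agrees with margin tau.
    """
    # ||x||_{V^{-1}} via a linear solve (avoids forming the inverse explicitly)
    width = beta * np.sqrt(x @ np.linalg.solve(V, x))
    worst_case_score = max(0.0, abs(x @ theta_hat) - width)
    if worst_case_score >= tau:
        # Entire confidence set is confidently on one side: predict, no test.
        return ("predict", int(x @ theta_hat > 0))
    # Ambiguous under the confidence set: pay for a diagnostic test.
    return ("test", None)
```

With a tight confidence set and a context far from the decision boundary, the sketch skips the test; near the boundary (or with a wide set), it tests, mirroring the trade-off between excess tests and misclassification risk described above.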