The \(k\)-nearest neighbour UCB algorithm for multi-armed bandits with covariates. (English) Zbl 1405.68303

Janoos, Firdaus (ed.) et al., Algorithmic learning theory 2018. Proceedings of the 29th international conference (ALT 2018), Lanzarote, Spain, April 7–9, 2018. [s.l.]: Proceedings of Machine Learning Research PMLR. Proceedings of Machine Learning Research (PMLR) 83, 725-752 (2018).
Summary: In this paper we propose and explore the \(k\)-nearest neighbour UCB algorithm for multi-armed bandits with covariates. We focus on a setting where the covariates are supported on a metric space of low intrinsic dimension, such as a manifold embedded within a high-dimensional ambient feature space. The algorithm is conceptually simple and straightforward to implement. The \(k\)-nearest neighbour UCB algorithm does not require prior knowledge of either the intrinsic dimension of the marginal distribution or the time horizon. We prove a regret bound for the \(k\)-nearest neighbour UCB algorithm which is minimax optimal up to logarithmic factors. In particular, the algorithm automatically takes advantage of both low intrinsic dimensionality of the marginal distribution over the covariates and low noise in the data, expressed as a margin condition. In addition, focusing on the case of bounded rewards, we give corresponding regret bounds for the \(k\)-nearest neighbour KL-UCB algorithm, an analogue of the KL-UCB algorithm adapted to the setting of multi-armed bandits with covariates. Finally, we present empirical results which demonstrate the ability of both the \(k\)-nearest neighbour UCB and the \(k\)-nearest neighbour KL-UCB algorithms to take advantage of situations where the data is supported on an unknown sub-manifold of a high-dimensional feature space.
For the entire collection see [Zbl 1405.68011].
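To illustrate the kind of arm-selection rule the summary describes, here is a minimal Python sketch of a \(k\)-nearest neighbour UCB step, assuming a Euclidean covariate space. The function name `knn_ucb_choose`, the confidence constant `c`, and the particular \(\sqrt{\log t / k}\) bonus are illustrative choices for exposition, not the paper's exact specification; at each round the arm's reward is estimated from the \(k\) past pulls of that arm whose covariates are nearest to the current one, and an exploration bonus is added.

```python
import numpy as np

def knn_ucb_choose(x, history, k=5, c=1.0, t=1):
    """Pick an arm by a k-nearest-neighbour UCB rule (illustrative sketch).

    x       : current covariate vector.
    history : dict mapping arm -> list of (covariate, reward) pairs
              observed on past rounds when that arm was pulled.
    k       : number of nearest past covariates used in the estimate.
    c, t    : exploration constant and current round (for the bonus).
    """
    best_arm, best_ucb = None, -np.inf
    for arm, obs in history.items():
        if len(obs) < k:
            return arm  # explore any arm with fewer than k observations
        xs = np.array([o[0] for o in obs])
        rs = np.array([o[1] for o in obs])
        dists = np.linalg.norm(xs - x, axis=1)
        idx = np.argsort(dists)[:k]          # k nearest past covariates
        mean = rs[idx].mean()                # local reward estimate
        bonus = c * np.sqrt(np.log(t) / k)   # optimism-in-face-of-uncertainty
        if mean + bonus > best_ucb:
            best_arm, best_ucb = arm, mean + bonus
    return best_arm
```

Because the estimate averages only over neighbours of the current covariate, the rule adapts to the local geometry of the data, which is the mechanism by which the algorithm can exploit a low-dimensional sub-manifold without knowing its dimension.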

MSC:

68T05 Learning and adaptive systems in artificial intelligence
62C25 Compound decision problems in statistical decision theory
62L10 Sequential statistical analysis
62L15 Optimal stopping in statistics