The paper proposes a causal feature selection method for contextual multi-armed bandits (MABs) in recommender systems. The key idea is to learn the causal relationships between the available features and the target recommendation outcome, and then use this knowledge to select the most relevant features.
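One way to make the idea concrete: rank each feature by how strongly it modulates the arm's causal effect on reward. The sketch below is an illustrative heuristic under simplifying assumptions (binary features, two arms, randomized logging), not the paper's exact method; the function name and signature are hypothetical.

```python
import numpy as np

def rank_features_by_effect_heterogeneity(X, a, r):
    """Rank features by how much they modulate the arm's causal effect on reward.

    Illustrative heuristic only. Assumes X is an (n, d) binary feature matrix,
    a is an (n,) binary arm indicator, and r is an (n,) reward vector collected
    under a randomized policy (so stratum-wise mean differences estimate effects).
    """
    scores = []
    for j in range(X.shape[1]):
        effects = []
        for v in (0, 1):  # stratify on the feature's value
            mask = X[:, j] == v
            treated = r[mask & (a == 1)]
            control = r[mask & (a == 0)]
            if len(treated) == 0 or len(control) == 0:
                effects.append(0.0)  # no data in this stratum
            else:
                # per-stratum causal effect of pulling the arm
                effects.append(treated.mean() - control.mean())
        # heterogeneity score: how much the effect changes across strata
        scores.append(abs(effects[1] - effects[0]))
    # indices of features, most effect-heterogeneous first
    return np.argsort(scores)[::-1]
```

Features whose strata show very different treatment effects are the ones worth keeping as context; features with a uniform effect add no decision-relevant signal.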
Sep 20, 2024: In this paper, we introduce model-free feature selection methods designed for the contextual MAB problem, based on the heterogeneous causal effect contributed by the ...
The results show that this feature selection method effectively identifies the important features, which lead to higher contextual MAB reward than unimportant ones.
Sep 24, 2024: Features (a.k.a. context) are critical for contextual multi-armed bandit (MAB) performance. In practice, in a large-scale online system, it is ...
Multi-armed bandits refer to a task in which a fixed amount of resources must be allocated among competing choices (arms) so as to maximize expected gain.
Multi-armed bandit methods allow us to scale to multiple treatments and to perform off-policy policy evaluation on logged data. The Thompson sampling strategy�...
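The Thompson sampling strategy mentioned above can be sketched for a Bernoulli bandit as follows. This is a minimal illustrative implementation with Beta(1, 1) priors, not the one from the cited work; the arm reward probabilities are hypothetical inputs.

```python
import random

def thompson_sampling(arm_probs, horizon=2000, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors (illustrative sketch).

    arm_probs: true success probability of each arm (used only to simulate rewards).
    Returns the total reward and the per-arm Beta posterior parameters.
    """
    rng = random.Random(seed)
    k = len(arm_probs)
    succ, fail = [1] * k, [1] * k  # Beta posterior parameters per arm
    total = 0
    for _ in range(horizon):
        # sample a plausible mean reward for each arm from its posterior
        draws = [rng.betavariate(succ[i], fail[i]) for i in range(k)]
        arm = max(range(k), key=draws.__getitem__)  # pull the most promising arm
        reward = 1 if rng.random() < arm_probs[arm] else 0
        succ[arm] += reward
        fail[arm] += 1 - reward
        total += reward
    return total, succ, fail
```

Because each arm is chosen with probability proportional to its posterior chance of being best, pulls concentrate on the highest-reward arm as evidence accumulates, which is what makes the strategy usable for off-policy evaluation on logged data.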
The multi-armed bandit framework provides a robust methodology for causal inference, particularly in dynamic environments like digital health.