DC4L: Distribution shift recovery via data-driven control for deep learning models

V. Lin, K. J. Jang, S. Dutta, M. Caprio, et al. — 6th Annual Learning for Dynamics & Control Conference, 2024, proceedings.mlr.press
Abstract
Deep neural networks have repeatedly been shown to be non-robust to the uncertainties of the real world, even to naturally occurring ones. The vast majority of current approaches have focused on data-augmentation methods to expand the range of perturbations that the classifier is exposed to during training. A relatively unexplored avenue that is equally promising involves sanitizing an image as a preprocessing step, depending on the nature of the perturbation. In this paper, we propose to use control for learned models to recover from distribution shifts online. Specifically, our method applies a sequence of semantic-preserving transformations to bring the shifted data closer in distribution to the training set, as measured by the Wasserstein distance. Our approach is to 1) formulate the problem of distribution shift recovery as a Markov decision process, which we solve using reinforcement learning, 2) identify a minimum condition on the data for our method to be applied, which we check online using a binary classifier, and 3) employ dimensionality reduction through orthonormal projection to aid in our estimates of the Wasserstein distance. We provide theoretical evidence that orthonormal projection preserves characteristics of the data at the distributional level. We apply our distribution shift recovery approach to the ImageNet-C benchmark for distribution shifts, demonstrating an improvement in average accuracy of up to 14.21% across a variety of state-of-the-art ImageNet classifiers. We further show that our method generalizes to composites of shifts from the ImageNet-C benchmark, achieving improvements in average accuracy of up to 9.81%. Finally, we test our method on CIFAR-100-C and report improvements of up to 8.25%.
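The third ingredient above, estimating the Wasserstein distance after an orthonormal projection, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of a random Gaussian basis orthonormalized by QR, the number of projection directions `k`, and the averaging of per-coordinate 1-D distances are all assumptions made here for concreteness.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def orthonormal_basis(dim, k, seed=0):
    """Random orthonormal basis of k directions in R^dim.

    Hypothetical construction: QR decomposition of a Gaussian matrix;
    the paper does not specify how its projection is chosen.
    """
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    return q  # shape (dim, k), columns are orthonormal

def projected_wasserstein(x, y, k=8, seed=0):
    """Estimate a distribution distance between samples x and y.

    Both samples are projected with the same orthonormal basis, then the
    1-D Wasserstein distances of the projected coordinates are averaged.
    x, y: arrays of shape (n_samples, dim).
    """
    q = orthonormal_basis(x.shape[1], k, seed)
    xp, yp = x @ q, y @ q
    return float(np.mean(
        [wasserstein_distance(xp[:, i], yp[:, i]) for i in range(k)]
    ))
```

In a recovery loop of the kind the abstract describes, a controller would apply candidate semantic-preserving transformations to the shifted batch and prefer those that decrease `projected_wasserstein` relative to a reference sample from the training set.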