VeLO

swMATH ID: 46620
Software Authors: Metz, Luke; Harrison, James; Freeman, C. Daniel; Merchant, Amil; Beyer, Lucas; Bradbury, James; Agrawal, Naman; Poole, Ben; Mordatch, Igor; Roberts, Adam; Sohl-Dickstein, Jascha
Description: VeLO: Training Versatile Learned Optimizers by Scaling Up. While deep learning models have replaced hand-designed features across many domains, these models are still trained with hand-designed optimizers. In this work, we leverage the same scaling approach behind the success of deep learning to learn versatile optimizers. We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates. Meta-trained with approximately four thousand TPU-months of compute on a wide variety of optimization tasks, our optimizer not only exhibits compelling performance, but optimizes in interesting and unexpected ways. It requires no hyperparameter tuning, instead automatically adapting to the specifics of the problem being optimized. We open-source our learned optimizer, meta-training code, the associated train and test data, and an extensive optimizer benchmark suite with baselines at velo-code.github.io.
Homepage: https://arxiv.org/abs/2211.09760
Source Code: https://github.com/google/learned_optimization/tree/main/learned_optimization/research/general_lopt
Keywords: Machine Learning; arXiv_cs.LG; Optimization and Control; arXiv_math.OC; Machine Learning; arXiv_stat.ML
Related Software: Adam; Equinox; XGBoost; Tensor2Tensor; soft-DTW; JuMP; Julia; AdaGrad; Autograd; OSQP; OptNet; learn2learn; PILCO; U-Net; functorch; JAXopt; PyTorch; Learn2Hop; ADADELTA; Theano
Cited in: 2 Documents
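
The description above characterizes the learned optimizer as a small neural network that ingests gradients and outputs parameter updates. The following is a minimal illustrative sketch of that idea in JAX: a toy per-parameter MLP (hypothetical names `init_lopt_params`, `lopt_update`) maps gradient, parameter, and momentum features to an update. It is not VeLO's actual architecture or the learned_optimization API; see the source code link above for the real implementation.

```python
# Illustrative sketch only: a toy "learned optimizer" in the spirit of the
# description above -- a small neural network that ingests gradient features
# and emits parameter updates. Not VeLO's actual architecture.
import jax
import jax.numpy as jnp


def init_lopt_params(key, n_features=3, hidden=8):
    """Weights of the tiny per-parameter MLP that plays the role of the optimizer."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (n_features, hidden)) * 0.1,
        "w2": jax.random.normal(k2, (hidden, 1)) * 0.1,
    }


def lopt_update(lopt_params, grad, param, momentum):
    """Map per-parameter features (gradient, parameter, momentum) to an update."""
    feats = jnp.stack([grad, param, momentum], axis=-1)   # (..., n_features)
    h = jnp.tanh(feats @ lopt_params["w1"])               # (..., hidden)
    step = (h @ lopt_params["w2"])[..., 0]                # (...,)
    new_param = param - 1e-3 * step                       # apply the predicted update
    new_momentum = 0.9 * momentum + grad                  # simple momentum feature
    return new_param, new_momentum


# Usage on a toy quadratic task, where the learned update rule stands in for
# a hand-designed optimizer such as SGD or Adam.
key = jax.random.PRNGKey(0)
lopt_params = init_lopt_params(key)   # in VeLO these weights are meta-trained
theta = jnp.ones(4)
mom = jnp.zeros(4)
loss = lambda p: jnp.sum(p ** 2)
for _ in range(10):
    g = jax.grad(loss)(theta)
    theta, mom = lopt_update(lopt_params, g, theta, mom)
```

In the sketch the MLP weights are random; in VeLO they are the product of the large-scale meta-training described in the abstract, which is what removes the need for per-problem hyperparameter tuning.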