Tune-A-Video: One-shot tuning of image diffusion models for text-to-video generation

JZ Wu, Y Ge, X Wang, SW Lei, Y Gu, Y Shi, W Hsu, Y Shan, X Qie, MZ Shou
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023 - openaccess.thecvf.com
Abstract
To replicate the success of text-to-image (T2I) generation, recent works employ large-scale video datasets to train a text-to-video (T2V) generator. Despite their promising results, such a paradigm is computationally expensive. In this work, we propose a new T2V generation setting, One-Shot Video Tuning, where only one text-video pair is presented. Our model is built on state-of-the-art T2I diffusion models pre-trained on massive image data. We make two key observations: 1) T2I models can generate still images that represent verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we introduce Tune-A-Video, which involves a tailored spatio-temporal attention mechanism and an efficient one-shot tuning strategy. At inference, we employ DDIM inversion to provide structure guidance for sampling. Extensive qualitative and numerical experiments demonstrate the remarkable ability of our method across various applications.
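
As a concrete illustration of the structure guidance mentioned in the abstract, below is a minimal sketch (not the authors' released code) of deterministic DDIM inversion: the source video's latent is mapped back toward noise by running the DDIM update in reverse order, and the resulting latent is then used as the starting point when sampling with an edited prompt. The names eps_model, alphas_cumprod, and cond are assumed, illustrative handles for a tuned noise-prediction network, its diffusion schedule, and a text embedding; they do not come from the paper.

    # Hedged sketch of deterministic DDIM inversion for structure guidance.
    # eps_model, alphas_cumprod, and cond are hypothetical placeholders.
    import torch

    @torch.no_grad()
    def ddim_invert(x0, eps_model, alphas_cumprod, timesteps, cond):
        """Map a clean latent x0 to a noisy latent x_T by running DDIM backwards.

        x0:             latent of the source video frames, e.g. shape (B, C, F, H, W)
        eps_model:      noise predictor eps_theta(x_t, t, cond)
        alphas_cumprod: 1-D tensor of cumulative alpha products, indexed by t
        timesteps:      increasing sequence of timesteps, e.g. [0, 20, 40, ..., 980]
        cond:           text embedding of the source prompt
        """
        x = x0
        for i in range(len(timesteps) - 1):
            t, t_next = timesteps[i], timesteps[i + 1]
            a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]

            eps = eps_model(x, t, cond)                          # predicted noise at step t
            pred_x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # implied clean latent

            # Deterministic DDIM update toward the next (noisier) timestep.
            x = a_next.sqrt() * pred_x0 + (1 - a_next).sqrt() * eps
        return x  # x_T: initial latent for sampling with the edited prompt

Sampling then runs the usual DDIM denoising from the returned latent under the edited prompt, so the generated video inherits the coarse structure of the source video, which is the role DDIM inversion plays at inference in the abstract above.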