Advancing plain vision transformer toward remote sensing foundation model

D Wang, Q Zhang, Y Xu, J Zhang, B Du, et al. - IEEE Transactions on Geoscience and Remote Sensing, 2022 - ieeexplore.ieee.org
Large-scale vision foundation models have made significant progress in visual tasks on natural images, with vision transformers (ViTs) being the primary choice due to their good scalability and representation ability. However, large-scale models in remote sensing (RS) have not yet been sufficiently explored. In this article, we resort to plain ViTs with about 100 million parameters and make the first attempt to propose large vision models tailored to RS tasks, investigating how such large models perform. To handle the large image sizes and arbitrarily oriented objects in RS images, we propose a new rotated varied-size window attention to replace the original full attention in transformers, which can significantly reduce the computational cost and memory footprint while learning better object representation by extracting rich context from the generated diverse windows. Experiments on detection tasks show the superiority of our model over all state-of-the-art models, achieving 81.24% mean average precision (mAP) on the DOTA-V1.0 dataset. The results of our models on downstream classification and segmentation tasks also show competitive performance compared to existing advanced methods. Further experiments show the advantages of our models in terms of computational complexity and data efficiency in transferring. The code and models will be released at https://github.com/ViTAE-Transformer/Remote-Sensing-RVSA.
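
The abstract describes replacing the ViT's full self-attention with a rotated varied-size window attention that, for each default window, predicts a scale, an offset, and a rotation before sampling keys and values from the transformed window. Below is a minimal PyTorch sketch of that idea; the class name, the way the five transform parameters are predicted from pooled window features, and the grid-sampling details are illustrative assumptions, not the authors' implementation (see https://github.com/ViTAE-Transformer/Remote-Sensing-RVSA for the released code).

```python
# Minimal sketch of a rotated varied-size window attention layer, assuming square
# windows, H and W divisible by the window size, and a simple per-window predictor
# of (scale_x, scale_y, offset_x, offset_y, angle) for every attention head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RotatedVariedSizeWindowAttention(nn.Module):
    def __init__(self, dim, num_heads=8, window_size=7):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.window_size = window_size
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One descriptor per default window, mapped to 5 transform parameters per head.
        self.transform = nn.Sequential(
            nn.AvgPool2d(window_size),
            nn.Conv2d(dim, num_heads * 5, kernel_size=1),
        )

    def forward(self, x):
        # x: (B, H, W, C)
        B, H, W, C = x.shape
        ws, nh, hd = self.window_size, self.num_heads, self.head_dim
        nWh, nWw = H // ws, W // ws

        qkv = self.qkv(x).reshape(B, H, W, 3, nh, hd).permute(3, 0, 4, 5, 1, 2)
        q, k, v = qkv[0], qkv[1], qkv[2]                     # each (B, nh, hd, H, W)

        # Predict per-window, per-head scale, shift, and rotation.
        t = self.transform(x.permute(0, 3, 1, 2)).reshape(B, nh, 5, nWh, nWw)
        sx, sy = 1 + t[:, :, 0].tanh(), 1 + t[:, :, 1].tanh()  # varied window size
        ox, oy = t[:, :, 2].tanh(), t[:, :, 3].tanh()          # window shift
        theta = t[:, :, 4]                                     # window rotation

        # Local sampling grid of a default window in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, ws, device=x.device)
        gy, gx = torch.meshgrid(ys, ys, indexing="ij")         # (ws, ws)
        # Default window centers in normalized image coordinates.
        cy = (torch.arange(nWh, device=x.device) + 0.5) / nWh * 2 - 1
        cx = (torch.arange(nWw, device=x.device) + 0.5) / nWw * 2 - 1
        cy, cx = torch.meshgrid(cy, cx, indexing="ij")         # (nWh, nWw)

        # Scale, rotate, and shift the local grid of each window.
        gx = gx.view(1, 1, 1, 1, ws, ws) * (sx / nWw)[..., None, None]
        gy = gy.view(1, 1, 1, 1, ws, ws) * (sy / nWh)[..., None, None]
        cos, sin = theta.cos()[..., None, None], theta.sin()[..., None, None]
        rx, ry = cos * gx - sin * gy, sin * gx + cos * gy
        sample_x = cx[None, None, ..., None, None] + rx + (ox / nWw)[..., None, None]
        sample_y = cy[None, None, ..., None, None] + ry + (oy / nWh)[..., None, None]
        grid = torch.stack([sample_x, sample_y], dim=-1)       # (B, nh, nWh, nWw, ws, ws, 2)
        grid = grid.permute(0, 1, 2, 4, 3, 5, 6).reshape(B * nh, nWh * ws, nWw * ws, 2)

        # Resample keys and values from the transformed windows.
        k = F.grid_sample(k.reshape(B * nh, hd, H, W), grid, align_corners=False)
        v = F.grid_sample(v.reshape(B * nh, hd, H, W), grid, align_corners=False)

        # Window-partition q, k, v and run standard attention inside each window.
        def windows(t):  # (B*nh, hd, H, W) -> (B*nh*nW, ws*ws, hd)
            t = t.reshape(B * nh, hd, nWh, ws, nWw, ws)
            return t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, hd)

        qw, kw, vw = windows(q.reshape(B * nh, hd, H, W)), windows(k), windows(v)
        attn = ((qw * self.scale) @ kw.transpose(-2, -1)).softmax(dim=-1)
        out = attn @ vw                                        # (B*nh*nW, ws*ws, hd)

        out = out.reshape(B, nh, nWh, nWw, ws, ws, hd)
        out = out.permute(0, 2, 4, 3, 5, 1, 6).reshape(B, H, W, C)
        return self.proj(out)
```

Because attention is restricted to windows of a fixed token count, the cost grows linearly with the number of windows rather than quadratically with the full token count, which is the efficiency argument made in the abstract; the learned scale, shift, and rotation let each head gather context beyond its default window, including from arbitrarily oriented structures.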