CEKD: Cross ensemble knowledge distillation for augmented fine-grained data

K Zhang, J Fan, S Huang, Y Qiao, X Yu, F Qin - Applied Intelligence, 2022 - Springer
Abstract
Data augmentation has proved effective in training deep models. Existing data augmentation methods tackle the fine-grained recognition problem by blending image pairs and fusing the corresponding labels according to the statistics of the mixed pixels, which introduces additional label noise that harms network performance. Motivated by this, we present a simple yet effective cross ensemble knowledge distillation (CEKD) model for fine-grained feature learning. We propose a cross distillation module that provides additional supervision to alleviate the noise problem, and a collaborative ensemble module to overcome the target conflict problem. The proposed model can be trained end-to-end and requires only image-level label supervision. Extensive experiments on widely used fine-grained benchmarks demonstrate the effectiveness of the proposed model. Specifically, with a ResNet-101 backbone, CEKD achieves accuracies of 89.59%, 95.96% and 94.56% on the three datasets respectively, outperforming the state-of-the-art API-Net by 0.99%, 1.06% and 1.16%.
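To make the abstract's two ideas concrete, here is a minimal PyTorch sketch of the general scheme it describes: mixing-based augmentation (which fuses labels and thereby injects label noise) combined with a cross-distillation term in which two peer networks supervise each other through softened predictions. This is not the authors' implementation; the function names, the Beta mixing coefficient, the temperature T, and the loss weighting are all assumptions, and only the high-level idea comes from the abstract.

```python
# Minimal sketch (not the authors' code): mixup-style augmentation plus a
# cross-distillation loss between two peer networks. Hyperparameters
# (alpha, T, kd_weight) are illustrative assumptions.
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=1.0):
    """Blend image pairs and fuse labels by a Beta-sampled coefficient lam.
    The fused soft target is the source of the label noise the paper targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[idx], y, y[idx], lam

def cross_distill_step(net_a, net_b, x, y, T=4.0, kd_weight=1.0):
    """One training step: each network learns from the mixed hard labels and,
    as extra supervision, from its peer's softened predictions."""
    x_mix, y1, y2, lam = mixup(x, y)
    logits_a, logits_b = net_a(x_mix), net_b(x_mix)

    def mixed_ce(logits):
        # Standard mixed cross-entropy over the two fused labels.
        return lam * F.cross_entropy(logits, y1) + (1 - lam) * F.cross_entropy(logits, y2)

    # KL divergence between temperature-softened distributions; each network
    # distills from a detached copy of its peer's output.
    kd_ab = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                     F.softmax(logits_b.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    kd_ba = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                     F.softmax(logits_a.detach() / T, dim=1),
                     reduction="batchmean") * T * T

    return mixed_ce(logits_a) + mixed_ce(logits_b) + kd_weight * (kd_ab + kd_ba)
```

At inference, a scheme of this kind would typically ensemble the peers, e.g. by averaging their softmax outputs; how CEKD's collaborative ensemble module combines the networks is described in the full paper, not the abstract.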