Kıvanç, Ş.G.; Şen, B.; Nar, F.; Ok, A.Ö. Reducing Model Complexity in Neural Networks by Using Pyramid Training Approaches. Appl. Sci. 2024, 14, 5898.
Abstract
Throughout the evolution of machine learning, model sizes have steadily increased as researchers add more layers in pursuit of higher accuracy. This escalation in model complexity demands ever greater hardware capability. Today, state-of-the-art machine learning models have become so large that training them effectively requires substantial hardware resources, which may be readily available to large companies but not to students or independent researchers. To make research on machine learning models more accessible, this study introduces a size reduction technique that leverages stages in Pyramid Training and Similarity Comparison. Our results demonstrate that pyramid training can reduce model complexity while maintaining the accuracy of conventional full-sized models, offering a scalable and resource-efficient solution for researchers and practitioners in hardware-constrained environments.
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.