Performance models and energy-optimal scheduling of DNNs on many-core hardware with dynamic power management

B Vogginger, F Kelber, S Balamuthu Sampath, J Partzsch, C Mayr
Proceedings of the 2023 Workshop on Compilers, Deployment, and Tooling for …, 2023 - dl.acm.org
Processing of deep neural networks (DNNs) at the edge may be limited by the power or energy constraints of the embedded hardware system used. It is therefore desirable for the compiler to create efficient executables for given DNN models that meet these specific constraints. Here, we consider low-power many-core hardware with 152 processing elements (PEs), each containing an ARM Cortex-M4F processor, 128 KB of SRAM, and a custom accelerator for DNN inference. Dynamic power management allows each core to switch between a high-speed and a low-power mode within tens of nanoseconds. For an energy-optimal parallelization of DNNs on this hardware, we first develop analytical performance models that predict the time and energy for executing a DNN layer with the custom accelerator. The models are fitted and validated using measurements on a prototype chip. In a second step, we develop concepts for the energy-optimal parallelization of DNNs under latency constraints and evaluate them using the performance models: by dynamically switching between the operating modes, more than 10% of energy can be saved compared to running in high-speed mode only. The presented methodology and concepts are easily transferable to other many-core edge processors.
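To make the optimization idea concrete, the following is a minimal sketch of energy-optimal per-layer mode assignment under a latency constraint. The mode parameters (power, throughput) and the analytical time/energy model are illustrative assumptions, not the fitted models or measured values from the paper; the paper's actual models and parallelization concepts are more detailed.

```python
from itertools import product

# Hypothetical operating modes (illustrative numbers only, NOT measured
# values from the paper): power in mW, throughput in MMAC/s.
MODES = {
    "high_speed": {"power_mw": 30.0, "mmacs": 400.0},
    "low_power":  {"power_mw": 8.0,  "mmacs": 120.0},
}

def layer_time_energy(work_mmac, mode):
    """Simple analytical model: execution time (s) and energy (mJ)
    for one DNN layer with a given workload, run in one mode."""
    t_s = work_mmac / MODES[mode]["mmacs"]
    e_mj = MODES[mode]["power_mw"] * t_s  # mW * s = mJ
    return t_s, e_mj

def best_assignment(layers_mmac, latency_budget_s):
    """Brute-force search over per-layer mode choices: minimize total
    energy subject to the total latency staying within the budget."""
    best = None
    for assign in product(MODES, repeat=len(layers_mmac)):
        t_total = e_total = 0.0
        for work, mode in zip(layers_mmac, assign):
            t, e = layer_time_energy(work, mode)
            t_total += t
            e_total += e
        if t_total <= latency_budget_s and (best is None or e_total < best[0]):
            best = (e_total, t_total, assign)
    return best  # (energy_mJ, time_s, mode per layer), or None if infeasible
```

With a tight latency budget, the search keeps the heaviest layers in high-speed mode and moves the rest to low-power mode, trading latency headroom for energy savings, which is the qualitative behavior the abstract reports.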