Feb 11, 2022: In this work, we propose a general framework, called FILM-QNN, to quantize and accelerate multiple DNN models across different embedded FPGA devices. First, we propose the novel intra-layer, mixed-precision quantization ...
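The snippet below is a minimal, illustrative sketch of what intra-layer, mixed-precision weight quantization can look like, assuming it means assigning a higher bit-width (e.g., 8-bit) to a fixed fraction of the most quantization-sensitive filters within a single convolution layer and a lower bit-width (e.g., 4-bit) to the rest. The bit-widths, the error-based filter-selection heuristic, and the function names (quantize_per_filter, intra_layer_mixed_precision) are assumptions for illustration only, not the paper's actual algorithm or hardware mapping.

```python
import numpy as np

def quantize_per_filter(w, bits):
    """Symmetric uniform quantization of one filter's weights to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return (q * scale).astype(w.dtype)  # dequantized ("fake-quantized") weights

def intra_layer_mixed_precision(weights, high_bits=8, low_bits=4, high_ratio=0.25):
    """
    Illustrative intra-layer mixed precision: keep a fixed fraction of filters
    (those with the largest low-bit quantization error) at the higher bit-width,
    and quantize the remaining filters at the lower bit-width.
    `weights` has shape (out_channels, in_channels, kH, kW).
    """
    out_channels = weights.shape[0]
    # Sensitivity proxy: per-filter error introduced by low-bit quantization.
    errors = np.array([
        np.linalg.norm(weights[c] - quantize_per_filter(weights[c], low_bits))
        for c in range(out_channels)
    ])
    n_high = int(round(high_ratio * out_channels))
    high_idx = set(np.argsort(errors)[out_channels - n_high:].tolist())
    quantized = np.empty_like(weights)
    for c in range(out_channels):
        bits = high_bits if c in high_idx else low_bits
        quantized[c] = quantize_per_filter(weights[c], bits)
    return quantized

# Example: quantize a random 3x3 conv layer with 64 output filters.
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
w_q = intra_layer_mixed_precision(w)
```

Keeping the high-precision fraction fixed per layer (here 25%) is one way such a scheme can stay hardware-friendly, since the ratio of 8-bit to 4-bit filters is known in advance; the specific ratio used here is an assumption.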
FILM-QNN: Efficient FPGA Acceleration of Deep Neural Networks with Intra-Layer, Mixed-Precision Quantization. M. Sun, Z. Li, A. Lu, Y. Li, S. Chang, X. Ma, X ...
Jan 1, 2022: FILM-QNN: Efficient FPGA Acceleration of Deep Neural Networks with Intra-Layer, Mixed-Precision Quantization. Retrieved from https://par.nsf.gov ...
This repo collects papers, docs, and code about model quantization for anyone who wants to do research on it. We are continuously improving the project.
Video: [FPGA 2022] FILM-QNN conference talk (17:17), posted May 20, 2022.