This is a demonstration project exploring the use of NVCaffe to run CTPN. It was created to leverage the latest features in NVCaffe to improve the memory footprint and inference performance of CTPN.
- NVCaffe: Link to fork used in the project
- CTPN: Link to fork used in the project
- Open the notebook `ctpn_with_nvcaffe.ipynb`.
- Click "Open in Colab".
- Connect to a runtime with a GPU and execute the cells.

Have fun!
- Comparison of baseline inference runtime against the NVCaffe TensorRT layer
- Memory consumption with FP16
- CTPN with TensorRT and Triton Inference Server
- Inference optimization with Python multiprocessing and asyncio
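
The last item can be sketched as follows. This is a minimal illustration, not the project's actual code: `run_inference` is a hypothetical stand-in for a CTPN forward pass, and the pattern shown is simply dispatching CPU/GPU-bound jobs to worker processes from an asyncio event loop.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

# Hypothetical stand-in for a CTPN forward pass; the real project
# would invoke NVCaffe here.
def run_inference(image_id: int) -> str:
    # Heavy, blocking work runs in a separate worker process.
    return f"detections for image {image_id}"

async def main() -> list:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Submit several inference jobs concurrently without
        # blocking the event loop.
        futures = [loop.run_in_executor(pool, run_inference, i) for i in range(4)]
        return await asyncio.gather(*futures)

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The event loop stays free to accept new requests while inference runs in the process pool, which is one common way to combine asyncio with multiprocessing for serving workloads.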