Inference Optimization with NVIDIA TensorRT
Author: NCSAatIllinois
Uploaded: 2022-04-18
Views: 16540
Description:
In many applications of deep learning models, we would benefit from reduced latency (time taken for inference). This tutorial will introduce NVIDIA TensorRT, an SDK for high-performance deep learning inference. We will go through all the steps necessary to convert a trained deep learning model to an inference-optimized model on HAL.
Speakers: Nikil Ravi and Pranshu Chaturvedi, UIUC
Webinar Date: April 13, 2022
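The conversion workflow the webinar describes typically has two stages: export the trained model to an interchange format such as ONNX, then build a TensorRT inference engine from it. A minimal sketch using TensorRT's `trtexec` command-line tool, assuming TensorRT is installed and the model has already been exported to a hypothetical file `model.onnx`:

```shell
# Sketch only: build a TensorRT engine from an ONNX model.
# "model.onnx" and "model.plan" are hypothetical filenames;
# requires an NVIDIA GPU with TensorRT installed.
trtexec --onnx=model.onnx \
        --saveEngine=model.plan \
        --fp16   # allow FP16 kernels for lower-latency inference
```

The resulting serialized engine (`model.plan`) can then be loaded by the TensorRT runtime for optimized inference; `--fp16` is optional but is one of the main sources of the latency reduction the tutorial targets.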