Fast Inference: Applying Large Machine Learning Models on Small Devices
Author: Lamarr Institute
Uploaded: 2021-12-20
Views: 226
Description:
With the ongoing integration of Machine Learning (ML) into everyday life, e.g. in the form of the Internet of Things, the resource consumption of Machine Learning models becomes an increasingly important issue. Not only is the training of ML models becoming more costly, but the continuous application of ML models also consumes substantial resources. Hence there is a dire need for more resource-efficient model training and model application. In the first half of this talk, scientist Sebastian Buschjäger highlights how ensemble pruning can improve the accuracy-resource trade-off of Random Forests by removing unnecessary trees from the forest. He then introduces a new technique called leaf-refinement to further improve the performance of small random forests. In the second half of the talk, Sebastian Buschjäger discusses the FastInference tool, which aims to unite these different approaches into a single framework.
This talk was originally given during a virtual visit to the University of Waikato, New Zealand.
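To make the idea of ensemble pruning concrete, here is a minimal sketch (not the speaker's actual method or the FastInference implementation): train a Random Forest, score each tree on held-out data, and keep only the top-k trees, trading a little accuracy for a much smaller model. The top-k selection criterion here is a simplifying assumption; real pruning methods also account for tree diversity.

```python
# Hypothetical sketch of ensemble pruning with scikit-learn; the
# scoring criterion (per-tree validation accuracy) is an assumption,
# not the method presented in the talk.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Train a full forest, then rank its trees individually.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_tr, y_tr)
scores = [accuracy_score(y_val, tree.predict(X_val))
          for tree in forest.estimators_]

# Keep only the k best-scoring trees (the "pruned" ensemble).
k = 10
top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Predict with the pruned ensemble by majority vote over the kept trees.
votes = np.stack([forest.estimators_[i].predict(X_val) for i in top_k])
pruned_pred = (votes.mean(axis=0) > 0.5).astype(int)
print(k, "trees instead of", len(forest.estimators_))
```

The pruned ensemble needs only k tree evaluations per prediction instead of 100, which is the accuracy-resource trade-off the talk examines; leaf-refinement, as described, would then further tune the remaining trees.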