How does Docker run machine learning on AI accelerators (NVIDIA GPUs, AWS Inferentia)
Author: Shashank Prasanna
Uploaded: 2022-09-01
Views: 957
Description:
(This talk was presented at Docker Community All Hands)
The use of specialized processors for specialized tasks dates back to the 70s, when CPUs were paired with coprocessors for floating-point calculation. Fast forward to today: most machine learning (ML) computations run on a combination of CPUs and GPUs, or on specialized AI hardware accelerators such as AWS Inferentia and Intel Habana Gaudi. Docker and container technologies have become indispensable tools when it comes to scaling machine learning, but how do they work when you have more than one type of processor on the same system? Does Docker still guarantee all its benefits when working with different processors?
In this talk, I’ll discuss how Docker containers work on CPU-only systems and then how you can use them in heterogeneous systems with multiple processors. Based on current trends and the introduction of newer AI silicon (GPUs, TPUs, Graphcore, AWS Trainium & Inferentia, Intel Habana Gaudi and more), it’s evident that in the future all machine learning workloads will run on multiple processors, and I’ll discuss the role of containers in that future.
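As a minimal sketch of what this looks like in practice (these commands are illustrative and not taken from the talk; the Neuron image name is a placeholder, and the GPU example assumes the NVIDIA Container Toolkit is installed on the host):

    # CPU-only: no special flags; the container sees only the host's CPUs
    docker run --rm python:3.10 python -c "print('running on CPU')"

    # NVIDIA GPU: --gpus all exposes the host's GPUs and driver to the container
    # (requires the NVIDIA Container Toolkit on the host)
    docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi

    # AWS Inferentia: Neuron devices are passed through as Linux character devices;
    # "my-neuron-image" is a placeholder for an image with the AWS Neuron SDK installed
    docker run --rm --device=/dev/neuron0 my-neuron-image neuron-ls

The common thread is that the image itself stays portable; what changes is how the container runtime grants access to each accelerator's devices and drivers.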