Maximizing AI Infrastructure Efficiency at Scale: Insights from LinkedIn’s GPU Fleet
Author: IgniteGTM
Uploaded: 2025-12-09
Views: 93
Description:
📍 Recorded live at AI INFRA SUMMIT 5, Convene San Francisco
AI systems are becoming smarter and more resource-intensive, making efficiency a critical priority for modern infrastructure. In this talk, Animesh (LinkedIn) dives into the strategies behind running a highly scalable, cost-effective AI platform, sharing practical insights from managing LinkedIn’s GPU fleet and optimizing the AI pipeline from hardware to software.
Highlights include:
Understanding the role of efficiency across GPUs, networking, and compute layers
Advanced scheduling techniques for global, network-aware, and workload-aware optimization
Software-level improvements in frameworks like PyTorch and TensorFlow
Leveraging data strategies to reduce AI compute costs and improve ROI
Balancing large-scale model training with inference for billions of users
📣 Super early bird available — sign up for the next AI INFRA SUMMIT → https://luma.com/aiinfra5