Is the Nvidia Monopoly Over? Fine-tuning our Liquid AI LLMs on AMD MI325X
Author: Mathias Lechner
Uploaded: 2026-02-10
Views: 1451
Description:
Is training on AMD hardware actually viable in 2026? For years, the narrative has been that AMD Instinct GPUs are great for inference but a nightmare for training due to software fragmentation. This is not true anymore.
In this video, I fine-tune our LFM2.5-1.2B-Instruct model from Liquid AI on AMD Instinct MI325X GPUs.
💡 The Hardware Context:
For those unfamiliar, the AMD MI325X is the direct competitor to the NVIDIA H200. It is a current heavy-duty workhorse for AI, with 256GB of HBM3E memory. While it isn't the absolute newest silicon on the market, it represents the tier of hardware that most serious labs are deploying right now.
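As a quick sanity check of that claim, a ROCm build of PyTorch exposes the MI325X through the same torch.cuda API used on NVIDIA GPUs. The sketch below is an illustrative check, not taken from the video; it simply prints the device name and the reported HBM capacity:

```python
import torch

# On ROCm builds of PyTorch, AMD Instinct GPUs show up through the familiar
# torch.cuda namespace, so the usual device queries work unchanged.
print(torch.version.hip)          # set on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())  # True once the ROCm stack is installed

props = torch.cuda.get_device_properties(0)
print(props.name)                                     # e.g. "AMD Instinct MI325X"
print(f"{props.total_memory / 1024**3:.0f} GiB HBM")  # ~256 GiB on the MI325X
```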
🛠️ The Stack:
The most impressive part of this demo isn't the speed; it's the simplicity.
OS: Linux (Ubuntu)
Driver: ROCm 7.1
Libraries: Standard PyTorch + Hugging Face Transformers
Provider: Tensorwave
There are no custom Docker containers, no obscure forks, and no complex kernel hacks. If you know how to fine-tune on NVIDIA, you now know how to fine-tune on AMD.
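To make that concrete, here is a minimal supervised fine-tuning sketch using only stock PyTorch and Hugging Face Transformers. It is an illustration under assumptions, not the exact script from the video: the Hub model ID and the stand-in dataset are placeholders, and the hyperparameters are arbitrary. The point is that nothing in it is AMD-specific; on a ROCm build of PyTorch the same code runs on the MI325X.

```python
# Minimal fine-tuning sketch: stock PyTorch + Transformers, no ROCm-specific code.
# Assumptions: "LiquidAI/LFM2.5-1.2B-Instruct" is a placeholder Hub ID (check the
# actual repo name), and wikitext is a stand-in dataset used purely for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "LiquidAI/LFM2.5-1.2B-Instruct"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

# Any text dataset works here; a small wikitext slice keeps the demo quick.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.filter(lambda x: len(x["text"].strip()) > 0)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="lfm-mi325x-finetune",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    bf16=True,                      # bf16 training is well supported on MI325X
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the GPU is picked up transparently via torch.cuda on ROCm
```

The only hardware awareness in this script is the bf16 flag; everything else is the same code you would run on an H200.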