CPU LLM #4: The DNA of LLMs - How Matrix Multiplication Optimization Delivers 6x Performance Gains

Author: ANTSHIV ROBOTICS

Uploaded: 2025-07-18

Views: 566

Description: 🚀 Ever wondered how large language models (LLMs) can run efficiently on CPUs? It all comes down to optimizing the "DNA" of AI: General Matrix Multiplication (GEMM) kernels!

In this video, we take a deep dive into the world of CPU-optimized LLM runtimes, built from scratch in pure C. We explore how highly optimized GEMM kernels are the fundamental building blocks of modern AI inference and training, driving massive performance gains.

What you'll learn:
The Importance of GEMM: Understand why C = αAB + βC is the workhorse behind neural networks, including linear layers, attention mechanisms, and convolutional layers.
Memory Layout Matters: Discover how smart memory allocation and avoiding costly transposes are crucial for CPU performance.
Four Levels of Optimization: We break down the engineering of distinct GEMM kernels:
Naive Parallel GEMM: Our baseline with basic triple-loop implementation and OpenMP.
Simple AVX-512 Parallel GEMM: Introducing Intel AVX-512 intrinsics for a significant vectorization speedup.
Fine-Grained Blocked GEMM: Combining AVX-512 with cache blocking (64x64 blocks) to improve data locality and cache utilization.

Token-Parallel Orchestration: Our key innovation! This higher-level strategy distributes input tokens across multiple CPU cores, each executing a serial blocked GEMM for maximum CPU utilization and near-perfect scaling.
Real-World Performance: See the significant speedups achieved, with Token-Parallel Orchestration delivering over 6x performance gain compared to the Naive approach for both MLP and QKV GEMM operations.

The Bigger Vision: Learn how this GEMM work is the foundational Phase 1 of building a complete CPU-native AI runtime, with future plans for a full forward pass, backward pass, optimizer kernels, and even mixed-precision training. Our ultimate vision is to democratize AI by making high-performance inference accessible on any CPU.

This project emphasizes a comprehensive benchmarking approach to guide kernel selection and ensure numerical stability.

Codebase Highlights:
The accompanying C codebase demonstrates these optimizations, featuring:
Optimal memory layout with 64-byte alignment and 2MB Huge Pages for zero fragmentation.
Hardware-aware optimization leveraging AVX-512 intrinsics.
An integrated benchmarking framework for transparent and reproducible results.

Watch now to understand the "DNA of AI" and how it's being optimized for the CPU!

You can join our discord channel here:
  / discord  

** Open Source Repositories on GitHub **
The GitHub repository for the drone code:
► https://github.com/antshiv/BLEDroneCo...

The handheld controller code:
► https://github.com/antshiv/BLEHandhel...

The GitHub repository for the thrust stand files:
► https://github.com/antshiv/ThrustStand

*** MCU Development Environment:
► NXP Microcontrollers - MCUXpresso
► Microchip Microcontrollers (including Arduino) - Microchip Studio
► Linux + vi + ARM GCC

Linux Environment:
► VirtualBox + Linux Mint
► Window Manager - Awesome WM

Electronic Tools I use:
► Oscilloscope Siglent SDS1104X-E - https://amzn.to/3nRcziY
► Power source - Yihua YH-605D
► Preheater Hotplate - Youyue946c - https://amzn.to/356DhgS
► Soldering Station - Yihua 937D - https://amzn.to/33VXm9b
► Hot Air gun - Sparkfun 303d
► Logic Analyzer - Saleae - https://amzn.to/3AoQ4qy
► Third hand - PCBite Kit - https://amzn.to/3JCYZbr
► Solder fume Extractor - https://amzn.to/3H2a0kE
► Microscope - https://amzn.to/3vQXz9d

Software Tools I use:
► PCB Design - Altium
► Mechanical part modelling - SolidWorks
► 3D modelling and design prototyping - 3ds Max
► Rendering engine - V-Ray
► Mathematical modelling and model-based design - MATLAB and Simulink

Links:
► Website: https://www.antshiv.com
► Blog: https://shivasnotes.com
► Patreon page:   / antshiv_robotics  

DISCLAIMERS:
We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.

This video was not paid for by outside persons or manufacturers.
No gear was supplied to me for this video.

The content of this video and my opinions were not reviewed or paid for by any outside persons.


Related videos

CPU LLM #5: Optimizing LayerNorm in C with AVX-512

Bukchon Hanok Village. Korea 360°

Bought a 32 GB VRAM MONSTER for 45k. What can a server Tesla V100 do in GAMES?

CPU LLM #1: The Memory Layout That Makes CPU LLMs Faster.

Vibe coding with AI

Most developers don't understand how LLM tokens work.

Why AI generates garbage, and how to make it write decent code

Bare-Metal C | Introduction (Part 1)

CPU LLM #2: The Memory Trick That Makes Multi-Core CPUs Fly for AI

Kubernetes, Explained Simply with a Clear Example

Run Local LLMs on Hardware from $50 to $50,000 - We Test and Compare!

The best documentary on the making of AI

.kkrieger: 96 KB of Engineering Madness

Mini Project: How to program a GPU? | CUDA C/C++

Getting started with HPC and Drones – Building an End-to-End System

DGX Spark & Strix Halo vs. EPYC 7702 & Threadripper 7995WX as a Home Ai Server

Bill Gates FURIOUS: Lenovo replaces Windows with Linux!

CPU LLM #0: The Complete Guide to Training Transformer Models (SFT, RL, PEFT, LLMs)

Deep Dive into LLMs like ChatGPT

From the data center to a gaming PC: the Nvidia Tesla V100 at work and in games.

© 2025 ycliper. All rights reserved.



  • Contacts
  • About us
  • Privacy policy



Contact for copyright holders: [email protected]