GPU. TPU. OPU? — The Next Step in AI Compute
Author: Neurophos
Uploaded: 2024-09-11
Views: 3570
Description:
A melted chip doesn't run – unless you put it on a slope.
The "thermal wall" is a hard limit on GPU speedups. Transistors keep getting smaller, but that does little for heat generation.
Newer AI hardware architectures run cooler thanks to in-memory compute, which reduces the need to shuffle intermediate results between the processor and memory.
The more calculations a processor can fit into one "chunk," the less data shuffling is required. Chunk size is crucial when running today's massive neural networks and GPTs.
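To make the chunk-size intuition concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the textbook traffic model for a tiled matrix multiply; the matrix and tile sizes are illustrative assumptions, not Neurophos figures.

def matmul_traffic_words(n: int, b: int) -> int:
    """Approximate off-chip words moved for an n x n matmul done in b x b tiles."""
    tiles_per_dim = n // b
    loads = 2 * tiles_per_dim**3 * b * b   # read one A-tile and one B-tile per tile product
    stores = n * n                          # write each output element once
    return loads + stores

n = 4096
for b in (64, 256, 1024):
    flops = 2 * n**3                        # multiply-adds for the full matmul
    traffic = matmul_traffic_words(n, b)
    print(f"tile {b:4d}: {traffic / 1e6:8.1f} M words moved, "
          f"{flops / traffic:6.1f} flops per word moved")

Traffic falls roughly as 1/b, so quadrupling the chunk (tile) size cuts off-chip data movement about fourfold. That movement is exactly the shuffling, and hence heat, that in-memory compute attacks.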
THE PROBLEM
Making chunks larger in electrical processors increases inductance and capacitance, which drags clock speeds down. Not ideal.
Making chunks larger with light-based processors avoids clock speed issues, but the optical "transistors" are so large that fitting enough of them on a chip is infeasible.
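For a rough sense of both walls, here is an illustrative Python sketch. Every constant in it is an assumed order-of-magnitude value, not a measurement and not Neurophos data.

from math import pi

R_PER_MM = 1e3       # assumed on-chip wire resistance, ohms per mm
C_PER_MM = 0.2e-12   # assumed on-chip wire capacitance, farads per mm

def clock_cap_hz(chunk_mm: float) -> float:
    """Rough clock ceiling from the RC delay of a wire spanning the chunk."""
    rc = (R_PER_MM * chunk_mm) * (C_PER_MM * chunk_mm)  # delay grows as length^2
    return 1.0 / (2 * pi * rc)

for size_mm in (1, 4, 16):
    print(f"{size_mm:2d} mm electrical chunk -> clock cap ~ "
          f"{clock_cap_hz(size_mm) / 1e6:8.1f} MHz")

# Optical side of the problem: a conventional photonic modulator is on the
# order of 100 um across versus ~100 nm for a transistor, so roughly
# (100e-6 / 100e-9)**2 = 1e6 times fewer devices fit in the same chip area.
print(f"optical vs. electronic device-count gap: ~{(100e-6 / 100e-9)**2:.0e}x")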
*Until now: our physics breakthrough allows us to pack 8,000 times more optical "transistors" into the same chip area.*
What does this mean? A processor that can digest large AI models at the speed of 100 GPUs, in the form factor of 1 GPU, and with the power consumption of 1 GPU.*
Watch how we’re building the future of AI!
#techbreakthrough #photoniccomputing #Neurophos #datacenters #aihardware