
Sinusoidal Embeddings PyTorch Code From Scratch!

Tags: rectified flow, flow matching, diffusion models, stable diffusion, stable diffusion 3, sd3, sdxl, stable diffusion xl, text to image, image generation, generative ai, pytorch, deep learning, machine learning, ai tutorial, from scratch, neural networks, diffusion transformer, DiT, UNet, MMDiT, normalizing flow, flow models, ode, probability, gaussian noise, training loop, pytorch tutorial, computer vision, open source ai, hugging face, comfyui

Author: Justin The Jedi

Uploaded: 2025-10-16

Views: 134

Description:
00:00 Begin: main function
02:15 SinusoidalEmbedding class
03:15 linspace and list of omegas
06:40 Multiply frequencies by time
15:10 Correction: randn → rand
16:00 Review
17:00 Correction: + → *


In this video, I code a sinusoidal embedding PyTorch module from scratch in preparation for the rectified flow model coming next.
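For reference, below is a minimal sketch of the kind of module built in the video, following the chapter steps above (a linspace of omegas, multiplied by time, then sin/cos). The dim and max_period parameters and the variable names are my assumptions, not the video's exact code:

import torch
import torch.nn as nn

class SinusoidalEmbedding(nn.Module):
    """Maps a batch of scalar times t in [0, 1) to dim-dimensional embeddings."""
    def __init__(self, dim: int, max_period: float = 10_000.0):
        super().__init__()
        assert dim % 2 == 0, "embedding dim must be even (half sin, half cos)"
        half = dim // 2
        # Geometrically spaced frequencies from 1 down to 1/max_period.
        omegas = torch.exp(-torch.linspace(0, 1, half) * torch.log(torch.tensor(max_period)))
        self.register_buffer("omegas", omegas)  # shape (half,)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t has shape (B,). Broadcast (B, 1) * (1, half) -> (B, half).
        angles = t[:, None] * self.omegas[None, :]
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (B, dim)

if __name__ == "__main__":
    emb = SinusoidalEmbedding(dim=64)
    t = torch.rand(8)        # uniform times in [0, 1)
    print(emb(t).shape)      # torch.Size([8, 64])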

Sinusoidal Embeddings Theory:
• Sinusoidal Embeddings Clearly Explained!

Note:
At 08:30 I said "Because that's just how it works," and I don't like that. Here is the actual reason the broadcast happens. In PyTorch, suppose you have tensors X and Y with X.shape = (a,) and Y.shape = (1,). Any elementwise operator you apply, like {+, -, *, /}, broadcasts the single value of Y over all 'a' values in X's dimension 0. More generally, as long as len(X.shape) == len(Y.shape), any dimension of size 1 in one tensor is automatically stretched to match the corresponding dimension of the other, provided all remaining dimensions agree in size.
In the video, a tensor of shape (a, 1) is broadcast against one of shape (1, b), yielding shape (a, b). I realize this explanation is a poor substitute for a live one, but I won't be saying "because this is how things magically work" again.
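For anyone who wants to run both cases described above, here is a small illustration (the tensor names are just placeholders):

import torch

# The (a, 1) x (1, b) case from the video: each singleton dimension
# is stretched to match the other tensor's size.
t = torch.rand(4)                    # times, shape (4,)
omegas = torch.linspace(1, 100, 3)   # frequencies, shape (3,)
angles = t[:, None] * omegas[None, :]  # (4, 1) * (1, 3) -> (4, 3)
print(angles.shape)  # torch.Size([4, 3])

# 1-D case: Y's single value is broadcast over all of X's entries.
X = torch.tensor([1.0, 2.0, 3.0])  # shape (3,)
Y = torch.tensor([10.0])           # shape (1,)
print(X * Y)  # tensor([10., 20., 30.])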

Also, torch.rand(B, 1).squeeze(1) should have been torch.rand(B). The "1" I was passing to rand was a size argument adding an extra dimension, not an upper bound; for some reason I thought I needed to pass '1' as the upper bound for the sampling, forgetting that torch.rand is uniform on [0, 1) by default.
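A quick sketch to make the correction concrete (B is a placeholder batch size):

import torch

B = 8

# What I wrote in the video: the extra "1" is a size argument (shape (B, 1)),
# not an upper bound, so the squeeze only undoes a dimension I added myself.
t_roundabout = torch.rand(B, 1).squeeze(1)  # shape (B,)

# What it should have been: torch.rand already samples uniformly from [0, 1).
t_direct = torch.rand(B)                    # shape (B,)

assert t_roundabout.shape == t_direct.shape == (B,)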

📦 Code & Resources

GitHub:
https://github.com/jbthejedi/rectifie...

Follow for More:
X: @jbthejedi
Instagram: justinbarrythejedi
LinkedIn: justin-barry-e
