Sinusoidal Embeddings PyTorch Code From Scratch!
Author: Justin The Jedi
Uploaded: 2025-10-16
Views: 134
Description:
00:00 Begin: main function
02:15 SinusoidalEmbedding class
03:15 linspace and list of omegas
06:40 Multiply frequencies by time
15:10 Correction: randn → rand
16:00 Review
17:00 Correction: + → *
In this video, I code a sinusoidal embedding PyTorch module from scratch in preparation for the rectified flow model in the next video.
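The chapter titles above outline the construction: build a list of omegas with linspace, multiply the frequencies by time, then take sines and cosines. Here is a minimal sketch of that idea using NumPy (a stand-in for the PyTorch module in the video); the frequency range and the log-spaced linspace are my assumptions, not necessarily the exact choices made in the video.

```python
import numpy as np

def sinusoidal_embedding(t, dim=64, omega_min=1.0, omega_max=1000.0):
    """Map scalar times t of shape (B,) to embeddings of shape (B, dim).

    omega_min/omega_max and the log-spaced frequencies are illustrative
    assumptions; the video may pick the omegas differently.
    """
    half = dim // 2
    # list of omegas via linspace (here spaced evenly in log-frequency)
    omegas = np.exp(np.linspace(np.log(omega_min), np.log(omega_max), half))
    # multiply frequencies by time: (B, 1) * (1, half) broadcasts to (B, half)
    angles = t[:, None] * omegas[None, :]
    # concatenate sin and cos features along the embedding dimension
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

emb = sinusoidal_embedding(np.random.rand(8), dim=64)
print(emb.shape)  # (8, 64)
```

The same structure ports directly to a PyTorch `nn.Module` by swapping `np` for `torch` calls.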
Sinusoidal Embeddings Theory:
• Sinusoidal Embeddings Clearly Explained!
Note:
At 08:30 I said "Because that's just how it works," and I don't like that. Here is the actual reason the broadcast happens. In PyTorch, if you have tensors X and Y with X.shape = (a,) and Y.shape = (1,), any elementwise operator like {+, -, *, /} broadcasts the single value of Y across all 'a' values along X's dimension 0. More generally, when two tensors have the same number of dimensions, PyTorch compares them dimension by dimension: wherever one tensor has size 1 and the other has some size 'a', the singleton dimension is automatically broadcast over the size-'a' dimension, and matching sizes pass through unchanged.
In the video, we have shape (a, 1) broadcast over shape (1, b), yielding shape (a, b). I realize this explanation is a poor substitute for a live explanation, but I won't be saying "because this is how things magically work" again.
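The (a, 1) with (1, b) case described above can be demonstrated in a few lines. This uses NumPy, which follows the same broadcasting rules as PyTorch; the shapes (3, 1) and (1, 5) are arbitrary stand-ins for (a, 1) and (1, b).

```python
import numpy as np  # NumPy broadcasting matches PyTorch's semantics

a, b = 3, 5
X = np.arange(a).reshape(a, 1)   # shape (3, 1)
Y = np.arange(b).reshape(1, b)   # shape (1, 5)
Z = X * Y                        # each singleton dim is broadcast: shape (3, 5)
print(Z.shape)  # (3, 5)
# Every entry satisfies Z[i, j] == X[i, 0] * Y[0, j]
```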
Also, torch.rand(B, 1).squeeze(1) should have been torch.rand(B). The "1" I was passing in to rand was adding the extra dimension. For some reason I thought I needed to pass in a '1' as an upper bound for the sampling, forgetting it's uniform on [0, 1) by default.
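To make the correction concrete: in torch.rand, every positional argument is a dimension size, not a bound, so rand(B, 1) produces a (B, 1) tensor that then needs the squeeze. The equivalence is shown here with NumPy's np.random.rand, which takes dimension sizes the same way and also samples uniformly on [0, 1); B = 4 is an arbitrary batch size for illustration.

```python
import numpy as np  # np.random.rand(B, 1) mirrors torch.rand(B, 1): args are sizes

B = 4
t_roundabout = np.random.rand(B, 1).squeeze(1)  # shape (B, 1) -> squeeze -> (B,)
t_direct = np.random.rand(B)                    # the simpler call: shape (B,)
print(t_roundabout.shape, t_direct.shape)  # (4,) (4,)
```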
📦 Code & Resources
GitHub:
https://github.com/jbthejedi/rectifie...
Follow for More:
X: @jbthejedi
Instagram: / justinbarrythejedi
LinkedIn: / justin-barry-e