Applied Deep Learning – Class 42 | Target Contextual Embeddings (Q,KV)
Author: gened
Uploaded: 2026-02-19
Views: 3
Description:
In this session of Applied Deep Learning, we complete our exploration of Self-Attention by introducing and explaining Query, Key, and Value vectors — the core components that make attention work.
This lecture is theory-only, focused on intuition and understanding how self-attention actually computes meaningful representations for sequence data.
📚 In this lecture, we cover:
🔹 What Query, Key, and Value vectors are
Learn how each word in a sentence generates three distinct vectors — Query (Q), Key (K), and Value (V) — that help the model decide what to focus on.
🔹 Example with a sentence
We walk through a sentence example and show how Q, K, and V are assigned to each word.
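The per-word picture above can be sketched in a few lines of NumPy. This is an illustrative toy, not the lecture's notebook: the sentence, the embedding values, and the dimensions (d_model=6, d_k=4) are all made up, and the projection matrices are simply random.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 4-word sentence; each word gets a made-up 6-dim embedding.
words = ["the", "cat", "sat", "down"]
d_model, d_k = 6, 4
X = rng.normal(size=(len(words), d_model))  # one row per word

# Three learned projection matrices (randomly initialized here).
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

# Every word gets its own Query, Key, and Value vector.
Q = X @ W_Q  # what each word is "looking for"
K = X @ W_K  # what each word "offers" for matching
V = X @ W_V  # the content each word contributes

for w, q, k, v in zip(words, Q, K, V):
    print(w, q.shape, k.shape, v.shape)  # each word carries three (4,) vectors
```

Note that all three vectors for a word come from the same embedding row; only the projection matrices differ.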
🔹 Parallel computation using matrices
See how multiple Q, K, V vectors from an entire sentence are used together as matrices to compute attention scores in parallel — not one word at a time.
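As a rough sketch of that matrix view, here is scaled dot-product self-attention for a whole (random, made-up) sentence in one shot. One matrix multiply produces every pairwise score at once, rather than looping word by word.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_Q, W_K, W_V):
    """Scaled dot-product self-attention for an entire sentence at once."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n, n): every word scored against every word
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of Value vectors

rng = np.random.default_rng(1)
n, d_model, d_k = 5, 8, 4
X = rng.normal(size=(n, d_model))
W_Q, W_K, W_V = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, W_Q, W_K, W_V)
print(out.shape)  # (5, 4): one contextual vector per word, computed in parallel
```

The division by the square root of d_k keeps the dot products from growing with the vector size, which keeps the softmax from saturating.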
🔹 Random initialization and learning
Understand that the weight matrices that produce Q, K, and V start with random values, and during training:
✔ Predictions are made
✔ A loss is calculated
✔ Backpropagation updates these weight matrices
This learning process allows the network to refine attention patterns automatically.
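The three training steps above can be sketched as a toy loop. Everything here is invented for illustration: random data, a made-up regression target, and, for brevity, a slow numerical gradient standing in for real backpropagation. The point is only that gradient updates to the randomly initialized projection matrices reduce the loss.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 3, 4
X = rng.normal(size=(n, d))        # toy input sentence
target = rng.normal(size=(n, d))   # made-up regression target

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def loss(params):
    W_Q, W_K, W_V = params
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    pred = softmax(Q @ K.T / np.sqrt(d)) @ V  # 1) make a prediction
    return ((pred - target) ** 2).mean()      # 2) compute a loss

# Random initialization, as in the lecture.
params = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
initial_loss = loss(params)

# 3) update the weights. Real frameworks use backprop; central differences
# are used here purely to keep the sketch self-contained.
lr, eps = 0.1, 1e-5
for step in range(150):
    for W in params:
        grad = np.zeros_like(W)
        for i in np.ndindex(W.shape):
            W[i] += eps; hi = loss(params)
            W[i] -= 2 * eps; lo = loss(params)
            W[i] += eps
            grad[i] = (hi - lo) / (2 * eps)
        W -= lr * grad

final_loss = loss(params)
print(initial_loss, final_loss)  # final loss is lower: attention patterns refined
```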
🔹 Why this matters
This mechanism is what lets self-attention models generate contextualized embeddings, where each word’s representation adapts to the full sentence rather than being fixed.
📂 Notebook Link:
https://github.com/GenEd-Tech/Applied...
👍 Like, Share & Subscribe for more AI, NLP & Deep Learning content
💬 Comment if you want the next session on Multi-Head Attention and full Transformer blocks
#DeepLearning #SelfAttention #QueryKeyValue #ContextualEmbeddings #Transformer #NLP #MachineLearning #AI #AppliedDeepLearning