The Right 300 Tokens Beat 100k Noisy Ones at QCon London 2026
Author: jbaruch
Uploaded: 2026-03-18
Views: 19
Description:
Your agent has 100k tokens of context. It still forgets what you told it two messages ago.
Prompt engineering taught us to craft the perfect instruction. Context engineering asks a different question: what does your model need to see and what should it never see at all? It's the shift from writing prompts to designing context.
In this talk, we'll dissect four antipatterns killing your agents and the architectural fixes that actually work:
The Stuffed Prompt: You crammed everything upfront and hoped for the best. But static context doesn't scale. We'll explore dynamic loading and context refinement: fetching what's needed when it's needed, and staying within your context window without losing signal. (And yes, we'll bust the myth that position doesn't matter: models do lose track of what's buried in the middle.)
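The dynamic-loading idea above can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the names (`build_context`, the keyword check, the word-count token proxy) are all assumptions standing in for real relevance scoring and tokenization.

```python
# Hypothetical sketch: load context on demand instead of stuffing the prompt.
# Relevance here is a naive keyword check; a real agent would score sources properly.

def build_context(task: str, sources: dict[str, str], budget: int = 1000) -> str:
    """Select only the sources the task actually mentions, stay under a
    token budget, and put the task last (models attend better to the
    edges of a long window than to the middle)."""
    selected, used = [], 0
    for name, text in sources.items():
        if name.lower() in task.lower():     # naive relevance check
            cost = len(text.split())         # rough token proxy: word count
            if used + cost <= budget:
                selected.append(text)
                used += cost
    return "\n\n".join(selected + [f"Task: {task}"])

sources = {
    "billing": "Billing API docs ...",
    "auth": "Auth flow docs ...",
}
ctx = build_context("Fix the billing webhook", sources)
print(ctx)  # includes the billing docs, skips the irrelevant auth docs
```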
The Wrong Tool for the Job: You picked one retrieval method and used it everywhere. But RAG isn't always the answer. Neither are tools. Neither is an exact match. We'll break down when embeddings help, when MCP gives you precision, and when a simple lookup beats both.
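A toy router makes the "simple lookup beats both" point concrete. This is an assumed sketch: the knowledge base and the `difflib` fuzzy fallback are illustrative stand-ins for a real embedding index or tool call, not anything from the talk.

```python
# Hypothetical retrieval router: try the cheapest, most precise method first,
# and only fall back to fuzzier (more expensive) retrieval when it misses.
import difflib

KB = {
    "reset password": "Go to Settings > Security > Reset.",
    "delete account": "Email support to delete your account.",
}

def retrieve(query: str) -> str:
    q = query.lower().strip()
    if q in KB:                          # exact match: cheapest and most precise
        return KB[q]
    # fuzzy fallback (stand-in for an embedding search or RAG pipeline)
    close = difflib.get_close_matches(q, KB.keys(), n=1, cutoff=0.5)
    return KB[close[0]] if close else "No match; escalate to a tool call."
```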
The Goldfish Agent: Your agent forgets everything between sessions. Or worse, remembers everything forever. We'll explore short-term and long-term memory, pruning and compaction strategies: what to persist, what to summarize, where to store it, and when to let go.
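One common compaction strategy, keep recent turns verbatim and collapse older ones into a summary, can be sketched as follows. The `compact` helper and the placeholder summary string are assumptions; a real agent would call an LLM summarizer where the placeholder is built.

```python
# Hypothetical memory compaction: recent turns stay verbatim,
# older turns collapse into a single summary entry.
def compact(history: list[str], keep_last: int = 2) -> list[str]:
    """Replace everything older than the last `keep_last` turns with one
    summary line (here a placeholder; in practice an LLM-written digest)."""
    if len(history) <= keep_last:
        return history
    old, recent = history[:-keep_last], history[-keep_last:]
    summary = f"[summary of {len(old)} earlier turns]"
    return [summary] + recent
```

The knobs to tune are `keep_last` (short-term window) and what the summary preserves (long-term memory); "when to let go" is deciding which old turns never make it into the summary at all.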
The Vibes Eval: You shipped because it "felt right." But you can't improve what you don't measure. We'll build eval strategies that prove your context choices work or expose the tokens you're wasting.
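Even the crudest eval harness beats vibes, because it is repeatable: change the context, rerun, compare scores. This sketch is an assumption for illustration; `run_eval`, the substring check, and the toy agent are not from the talk.

```python
# Hypothetical eval harness: score an agent against expected answers
# so context changes can be compared by number, not by feel.
def run_eval(agent, cases: list[tuple[str, str]]) -> float:
    """Fraction of cases where the agent's answer contains the expected
    substring; crude, but measurable and repeatable."""
    passed = sum(1 for question, expected in cases if expected in agent(question))
    return passed / len(cases)

# Toy "agent" standing in for a real model call.
agent = lambda q: "The capital of France is Paris."
score = run_eval(agent, [("capital of France?", "Paris")])
print(score)
```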
Your context window called. It wants its tokens back!
Bonus: We'll use a coding agent to explain these patterns so you'll learn how they work under the hood, but everything also applies to AI agents in general.