Training vs. Memorization: The Jane Austen Thought Experiment
Is AI just a giant database of quotes? Not exactly.
In Lesson 3 of our Prompt Engineering module, we dive into a thought experiment: What happens if we train a language model only on the works of Jane Austen?
This video breaks down the crucial difference between memorization and statistical mapping. We explore why an AI trained on Austen doesn't just regurgitate her books, but instead learns the "probabilistic essence"—the rhythms, vocabulary, and social patterns—of her 19th-century world.
In this 10-minute lesson, we cover:
Statistical Parrots?: Why "predicting the next word" is a simple mechanic that leads to stunningly complex results (see the bigram sketch after this list).
The Training Data Bias: Why your model can't tell you about a "laptop" or "hip-hop" if neither term appears in the training corpus.
Midjourney & Wonder Woman: A real-world look at how training data creates visual bias (e.g., Gal Gadot vs. Lynda Carter).
The Mini-Turing Test: How experts evaluate if a model has truly captured the "soul" of a writer's style.
Closed World vs. The Internet: Setting the stage for our next move—scaling these small patterns to the entire internet to create Large Language Models (LLMs).
Prompt Engineering Insight: Understanding that AI is a "prediction machine" helps us understand why it makes mistakes. As prompt engineers, our job is to navigate these statistical patterns to get the most accurate and useful results possible.
#promptengineering #JaneAusten #AIBias #LargeLanguageModels #GenerativeAI #NortheasternUniversity #TechEthics #MachineLearning #HumanitariansAI #NikBearBrown