DeepSeek Used Claude to Train Censorship. Then Anthropic Cried Theft.
Author: Lukas Hüttis - KI, Krypto & Web3 Tutorials
Uploaded: 2026-02-24
Views: 22
Description:
Anthropic just accused DeepSeek, Moonshot AI, and MiniMax of running industrial-scale distillation attacks on Claude — 24,000 fake accounts, 16 million exchanges. The evidence looks credible. But Anthropic settled a $1.5 billion lawsuit (Bartz v. Anthropic, 2025) for training Claude on pirated books. So who exactly owns trained AI knowledge — and who gets to make the rules?
This video breaks down what model distillation is, what Anthropic found, the censorship-training detail nobody's covering (DeepSeek used Claude to train political censorship into its own model), the political lobbying play behind the timing — and the bigger question the whole industry is avoiding.
📌 KEY TOPICS
• What model distillation is — and why every major lab does it
• Anthropic's evidence: 3 labs, 24K fake accounts, 16M exchanges
• The DeepSeek censorship angle: they used Claude to build Claude's opposite
• Bartz v. Anthropic: the $1.5B copyright settlement over pirated training data
• Anthropic's AI export control lobbying — and why the timing of this report matters
• Who owns trained AI knowledge?
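For context on the first topic above: model distillation usually means training a smaller "student" model to imitate a larger "teacher" by matching its softened output distribution rather than hard labels. A minimal toy sketch (all numbers hypothetical, not from the video; this is the standard KL-divergence distillation objective, not Anthropic's or DeepSeek's actual setup):

```python
# Toy sketch of knowledge distillation: a student is scored on how closely
# its output distribution matches a teacher's temperature-softened outputs.
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions,
    # the classic distillation objective (Hinton et al., 2015).
    p = softmax(teacher_logits, temperature)   # teacher's "soft labels"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.2]   # hypothetical teacher logits for one example
student = [3.5, 1.2, 0.1]   # hypothetical student logits for the same example
print(distillation_loss(teacher, student))    # small positive number: close, not identical
```

In a real training loop this loss would be backpropagated through the student; the "attack" version in the video swaps the teacher's logits for API responses from a frontier model.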
⏱ CHAPTERS
0:00 The Paradox Nobody's Naming
0:38 Anthropic's Accusation — The Full Picture
2:06 What Model Distillation Actually Is
3:29 The Detail Nobody Is Talking About
4:31 The Mirror: Anthropic's Own Copyright Settlement
5:58 The Political Play Behind the Timing
7:21 Who Owns AI Knowledge?
8:28 My Take + Where This Goes
🔗 MENTIONED IN THIS VIDEO
→ Anthropic's Distillation Report:
https://anthropic.com/news/detecting-...
→ My OpenClaw AI Agent Setup Guide:
• OpenClaw Setup Tutorial — Self-Hosted AI A...
→ New to AI agents? Start here:
• The Real Difference Between AI Agents and ...
📚 SOURCES
→ Anthropic — Detecting and Preventing Distillation Attacks:
https://anthropic.com/news/detecting-...
→ Anthropic — Position on the Diffusion Rule / Export Controls:
https://anthropic.com/news/securing-a...
→ NPR — Anthropic settles with authors ($1.5B):
https://www.npr.org/2025/09/05/nx-s1-...
→ Norton Rose — Bartz v. Anthropic case summary:
https://www.insidetechlaw.com/blog/20...
→ TechCrunch — Anthropic accuses Chinese AI labs:
https://techcrunch.com/2026/02/23/ant...
→ Bloomberg — DeepSeek, MiniMax distillation:
https://www.bloomberg.com/news/articl...
📌 ABOUT THIS CHANNEL
I cover AI and Web3 without the hype — tutorials, practical analysis, clear thinking, no speculation dressed up as insight. If you want to understand what's actually happening in AI, not just what people want you to believe, subscribe.
🌐 https://www.lukashuettis.de/?lang=en
📘 German Book "Web3 ohne Bullsh*t": https://www.lukashuettis.de/buch
#AI #DeepSeek #Anthropic #Claude #AIPolicy #ModelDistillation
#ArtificialIntelligence #AIEthics #OpenSourceAI #ExportControls
#ChinaAI #AIRegulation #BartzvAnthropic #DeepSeekClaude