Why “Responsible AI” Might Be the Most Dangerous Lie | Warning Shots #28
Uploaded: 2026-02-01
Views: 6835
Description:
📢 Take Action on AI Risk → http://www.safe.ai/act
💚 Support Our Mission → https://www.every.org/guardrailnow/f/...
What happens when the people building the most powerful AI systems in the world admit the risks, then keep accelerating anyway?
In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to break down Dario Amodei’s latest essay, “The Adolescent Phase of AI,” and why its calm, reassuring tone may be far more dangerous than open alarmism. They unpack how “safe AI” narratives can dull public urgency even as capabilities race ahead and control remains elusive.
The conversation then turns to the Doomsday Clock moving closer to midnight, with AI now explicitly named as an extinction-amplifying risk, and to the unsettling news that AI systems like Grok are beginning to outperform humans at predicting real-world outcomes. From intelligence explosion dynamics and bioweapons risk to unemployment, prediction markets, and the myth of “surgical” AI safety, this episode asks a hard question: What does responsibility even mean when no one is truly in control?
This is a blunt, unsparing conversation about power, incentives, and why the absence of “adults in the room” may be the defining danger of the AI era.
_____
⏰ Timestamps
00:00 Introduction
01:00 Dario Amodei’s essay and the “safe AI” narrative
04:45 Why dismissing “doomers” misses the real risk
07:20 The myth of “surgical” AI interventions
10:00 Anthropic, trust, and the illusion of control
15:45 The Doomsday Clock moves closer to midnight
20:30 AI as a force multiplier for nuclear and biological risk
23:45 AI outperforming humans at prediction markets
27:30 Intelligence explosion and runaway feedback loops
31:50 Why calm reassurance may be the most dangerous signal
33:30 Final thoughts and warning shot
_____
🔎 In this episode, they explore:
• Why “responsible acceleration” may be incoherent
• How AI amplifies nuclear, biological, and geopolitical risk
• Why prediction superiority is a critical AGI warning sign
• The psychological danger of trusted elites projecting confidence
• Why AI safety narratives can suppress public urgency
• What it means to build systems no one can truly stop
_____
🎙️ About Warning Shots
A weekly show from the AI Risk Network, where three longtime AI risk communicators cut through hype, reassurance, and wishful thinking to confront the reality of AI extinction risk—before it’s too late.
📺 Subscribe for weekly conversations on AI risk, power, and the race we may not survive.
#AISafety #AIRisk #DoomsdayClock #AGI #AIAlignment #Anthropic #AIExtinction #ArtificialIntelligence #TechPolicy #WarningShots