Why AI Development Is Not What You Think with Connor Leahy | TGS 184
Author: Nate Hagens
Uploaded: 2025-06-25
Views: 12791
Description:
(Conversation recorded on May 21st, 2025)
Recently, the risks about Artificial Intelligence and the need for ‘alignment’ have been flooding our cultural discourse – with Artificial Super Intelligence acting as both the most promising goal and most pressing threat. But amid the moral debate, there’s been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?
In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls 'algorithmic cancer': AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.
What kinds of policy and regulatory approaches could help slow down AI’s acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology’s impacts on mental health, meaning, and societal well-being?
About Connor Leahy:
Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.
Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.
Show Notes and More:
https://www.thegreatsimplification.co...
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie:
• The Great Simplification | Film on Energy,...
---
Support The Institute for the Study of Energy and Our Future:
https://www.thegreatsimplification.co...
Join our Substack newsletter:
https://natehagens.substack.com/
Join our Discord channel and connect with other listeners:
/ discord
---
00:00 - Introduction
02:25 - Defining AI, AGI, and ASI
10:57 - Worst Case Scenario
16:01 - Energy Demand
23:10 - Hallucinations
27:26 - Oversight
31:20 - Risk to Labor
33:46 - Loss of Humanity
41:12 - Addiction
44:05 - Algorithmic Cancer
57:43 - Extinction
01:04:07 - Good AI
01:10:43 - Concerns of LLMs
01:21:11 - What Can We Do?
01:29:10 - Closing Questions