
Why Superhuman AI Would Kill Us All - Eliezer Yudkowsky

Tags: modern wisdom, podcast, chris williamson, Chris Williamson modern wisdom, modern wisdom podcast, chriswillx, Chris Williamson Modern Wisdom Podcast, Eliezer Yudkowsky, AI safety, artificial intelligence ethics, AGI alignment, existential risk, AI apocalypse, OpenAI, MIRI, machine learning, future of humanity, rationality, LessWrong, superintelligence, AI control problem, AI doomer, singularity, AI takeover, AI safety research, AI extinction theories, effective altruism, AI ethics, ai

Author: Chris Williamson

Uploaded: 2025-10-25

Views: 113,232

Description: Go see Chris live in America - https://chriswilliamson.live

Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute.

Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there’s a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it’s too late?

Expect to learn the problem with building superhuman AI, why AI would have goals we haven't programmed into it, whether there is such a thing as AI benevolence, what the actual goals of a superintelligent AI are and how far away it is, whether LLMs are actually dangerous and capable of becoming a super AI, how good we are at predicting the future of AI, whether extinction is possible with the development of AI, and much more…

-

00:00 Superhuman AI Could Kill Us All
10:25 How AI is Quietly Destroying Marriages
15:22 AI is an Enemy, Not an Ally
26:11 The Terrifying Truth About AI Alignment
31:52 What Does Superintelligence Advancement Look Like?
45:04 Are LLMs the Architect for Superhuman AI?
52:18 How Close are We to the Point of No Return?
01:01:07 Experts Need to be More Concerned
01:15:01 How Can We Stop Superintelligence Killing Us?
01:23:53 The Bleak Future of Superhuman AI
01:31:55 Could Eliezer Be Wrong?

-

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify - https://spoti.fi/2LSimPn or Apple Podcasts - https://apple.co/2MNqIgw

Get my free Reading List of 100 life-changing books here - https://chriswillx.com/books/

Try my productivity energy drink Neutonic here - https://neutonic.com/modernwisdom

-

Get in touch in the comments below or head to...
Instagram: / chriswillx
Twitter: / chriswillx
Email: https://chriswillx.com/contact/
