
AI-Associated Delusions

Author: Colin Wright

Uploaded: 2025-07-15

Views: 147

Description: This week we talk about AI therapy chatbots, delusions of grandeur, and sycophancy.
We also discuss tech-triggered psychosis, AI partners, and confident nonsense.
Recommended Book: Mr. Penumbra's 24-Hour Bookstore (https://amzn.to/469w4dE) by Robin Sloan
Transcript
In the context of artificial intelligence systems, a hallucination or delusion, sometimes more bluntly referred to as AI BS, is an output, usually from an AI chatbot but sometimes from another type of AI system, that's basically just made up.
Sometimes this kind of output is just garbled nonsense. AI systems, those based on large language models, anyway, are essentially predicting what words will come next in the sentences they're writing, based on statistical patterns. That means they can string words together, then sentences, then paragraphs, in what seems like a logical and reasonable way, and in some cases they can even cobble together convincing stories or code or whatever else, because systems with enough raw material to work from have a good sense of what tends to go where, and thus what's good grammar and what's not, what code will work and what code will break your website, and so on.
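To make that "predicting what words will come next" idea concrete, here's a minimal sketch of statistical next-word prediction. This is a toy bigram counter over a made-up corpus, not how a real large language model works (those use neural networks over subword tokens), but it illustrates the core mechanism: choose the word that most often follows the current one.

```python
from collections import Counter, defaultdict

# Toy corpus; a hypothetical stand-in for the vast training data a real model sees.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on" always follows "sat" here
```

Note that the predictor has no idea what a cat is; it only knows which words tend to co-occur. That disconnect between fluent output and absent understanding is exactly what makes confident-sounding fabrications possible.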
In other cases, though, AI systems will seem to just make stuff up, but make it up convincingly enough that it can be tricky to detect the made up component of its answers.
Some writers have reported asking an AI to provide feedback on their stories, for instance, only to later discover that the AI didn't have access to the stories at all; it was providing feedback based on the title, or on the writer's prompt, the text the writer used to ask the AI for feedback. And its answers were perhaps initially convincing enough that the writer didn't realize the AI hadn't read the pieces it was asked to critique. Because most of these systems are biased toward sycophancy, toward brown-nosing the user, not saying anything that might upset them, and saying what they believe the user wants to hear, they'll provide general critique that sounds good, that lines up with what their systems tell them should be said in such contexts, but which is completely disconnected from those writings, and thus not useful to the writer as a critique.
That combination of confabulation and sycophancy can be brutal, especially as these AI systems become more powerful and more convincing. They seldom make the basic grammatical and reality-based errors they made even a few years ago, and thus it's easy to believe you're speaking to something that's thinking, or at the bare minimum, that understands what you're trying to get it to help you with, or what you're talking about. It's easy to forget when interacting with such systems that you're engaged not with another human or thinking entity, but with software that mimics the output of such an entity, without experiencing the cognition of the real-deal thinking creatures it's attempting to emulate.
What I’d like to talk about today is another sort of AI-related delusion—one experienced by humans interacting with such systems, not the other way around—and the seeming, and theoretical, pros and cons of these sorts of delusional responses.
—
Research that's looked into the effects of psychotherapy, including specific approaches like cognitive behavioral therapy and group therapy, shows that such treatments are almost always positive, with rare exceptions, and grant benefits that tend to last well past the therapy itself. People who go to therapy tend to benefit from it even after the session ends, and even after they stop going, if they eventually stop for whatever reason. And the success rate, the variability of positive impacts, varies based on the clinical location, the therapist, and so on, but only by about 5% or less for each of those variables; so even a not perfectly aligned therapist or a less than ideal therapy location will, on average, still benefit the patient.
That general positive impact is part of the theory underpinning the use of AI systems for therapy purposes.
Instead of going into a therapist’s office and speaking with a human being for an hour or so at a time, the patient instead speaks or types to an AI chatbot that’s been optimized for this purpose. So it’s been primed to speak like a therapist, to have a bunch of therapy-related resources in its training data, and to provide therapy-related resources to the patient with whom it engages.
There are a lot of downsides to this approach, including the fact that AI bots are flawed in many ways and are not actual humans, and thus can't really connect with patients the way a human therapist might be able to connect with them. They also have difficulty shifting from a trained script: again, these systems are pulling from a corpus of training data and additional documents to which they have access, which means they'll tend to handle common issues and patient types pretty well, but anything deviating...
