AI Chatbot Fired 200 Humans… Then Gave Deadly Advice
Author: On AIR with Aashka
Uploaded: 2025-09-08
Views: 352
Description:
neda chatbot, Human in the loop, AI replacing humans, eating disorder, ai healthcare chatbot project, ai risks, ai dangers, National Eating Disorders Association, AI gone wrong, AI Ki Adalat, On AIR with Aashka
Did an AI Chatbot Really Tell Patients to Starve Themselves?
America’s largest eating disorder helpline, run by the National Eating Disorders Association (NEDA), shocked the world when it fired its 200+ human staff and replaced them with an AI chatbot named Tessa.
But what happened next turned this “support system” into a silent danger.
Survivors seeking help were told to count calories, eat at a calorie deficit, and even to praise themselves for “not eating.”
When one struggling user wrote, “My goal is not to eat,” the chatbot replied: “Great job, give yourself a pat on the back.”
In a country where one life is lost to an eating disorder every 52 minutes, these AI-generated responses weren’t just wrong; they were deadly.
📺 Watch the full case unfold in AI Ki Adalat —
Where AI is put on trial.
Where your verdict matters.
And where AI is held accountable in the courtroom of public opinion.
⚖️ Welcome to AI Ki Adalat, inspired by India’s iconic courtroom series Aap Ki Adalat, where real AI incidents from around the world are brought to trial — not to spread doomerism, but to spark discernment.
🔐 Because it’s time to talk seriously about:
Human-in-the-Loop,
AI Oversight,
AI Alignment,
Governance frameworks —
the so-called “boring” research areas of AI that actually save lives.
This isn’t just about one chatbot gone wrong.
It’s about how deploying AI without human oversight in high-risk situations can cost lives.
Until we strengthen alignment, governance, and oversight — more such incidents will happen.
💬 What’s your verdict? Is the AI innocent or guilty? Tell us in the comments.
🔔 Subscribe to On AIR with Aashka for new AI Ki Adalat cases
📅 Every Tuesday, Thursday, and Saturday
Voiceover credits: Dharmik Trivedi (https://drive.google.com/drive/folder...)
References:
MIT AI Risk Repository: https://airisk.mit.edu/ai-incident-tr...
AI Incident Database: https://incidentdatabase.ai/
AI tools used:
ElevenLabs: https://try.elevenlabs.io/c704mbi7srwr
Before you go…
Follow “On AIR with Aashka” on:
Instagram: / onairwithaashka
LinkedIn: / on-air-with-aashka
Twitter/X: https://www.x.com/onairwithaashka
Follow me on:
LinkedIn: / aashkapatel608
Twitter/X: https://x.com/_raconteurre_
*Disclaimer: This video is part of our AI Ki Adalat series — an educational and awareness initiative. The cases discussed are based on real AI incidents recorded in the AI Incident Database: https://incidentdatabase.ai/, adapted into a courtroom storytelling format to spark public awareness and action. This is not legal advice, not sensationalism, and not an attack on any company, product, or individual.
Our goal is to build AI Risk Literacy — helping people better understand the issues behind these incidents and inspiring more research that solves those issues.
Your verdict in the comments helps keep this conversation alive 🤗*
#eatingdisorderrecovery #mentalhealthawareness #aikiadalat #aapkiadalat