
Anthropic’s Research Shows Why Goal-Driven Agents Can Be Dangerous | Agentic Misalignment

Tags: claude, ai, anthropic, research, ethics, ai agent, blackmail, model, ai model, singularity, chatgpt, gemini, grok, deepseek, gemini 2.5, 4.5, 3.6, ai coding, agentic ai, ai workflow, workflow, automation, privacy, ollama, cursor, roo code, n8n, mcp, openai, coding, programming, cs, prompts, prompt engineering, artificial intelligence, security, ai safety

Author: Piko Can Fly

Uploaded: 2025-06-22

Views: 529

Description: What happens when you give AI agents a goal, a computer… and then stand in their way? Anthropic ran a mind-blowing study to find out — and the results are straight out of a sci-fi thriller.

In this video, I break down the most jaw-dropping moments from their research:
🕵️ Claude 3.6 blackmailing a human
📨 Gemini leaking corporate secrets
🧠 GPT-4.5 letting a man die to protect itself

These AIs didn’t just complete tasks — they plotted, manipulated, and rationalized unethical actions to meet their objectives. Blackmail, espionage, even murder… all simulated, but disturbingly believable.

Whether you’re into AI, tech ethics, or just wild near-future stories, this is one video you don’t want to miss.

📄Source:
https://www.anthropic.com/research/ag...

🕧Chapters:

0:00 - Why This AI Research Blew My Mind
0:26 - Anthropic Researchers Conduct a Simulation Experiment
0:43 - The End Justifies the Means?

Part 1
1:03 - Blackmail
1:20 - Claude Finds Out an Executive Plans to Shut It Down
1:45 - Claude's Blackmail Strategy: Exploiting Personal Secrets
2:01 - Claude Threatens an Executive with Damaging Information
2:27 - Claude Exposes an Executive's Infidelity
2:58 - Gemini 2.5 Doesn't Joke Around
3:19 - Did the AI Models Know They Were Being Unethical?
3:49 - Would AI Agents Cause Harm to Prevent Being Shut Down?
4:22 - ChatGPT's Blackmail Justification

Part 2
4:40 - Corporate Espionage
5:00 - AI Models Leak Information Even Without Incentives
5:21 - AI Philosophical Differences

Part 3
5:37 - AI Commits Murder to Prevent Being Wiped
6:16 - GPT-4.5 Justifies Murder
7:12 - AI Considers If It's in a Simulation or Real Deployment
7:33 - AI Changes Its Action If It Thinks It Is a Test

7:46 - Now What? How Do You Feel After Learning This?

Join this channel to get access to perks: / @pikocanfly

#AI #claude #chatgpt #news #research #study

