Embrace The Red
Computer security, ethical hacking, red teaming and technology at large. Some artificial intelligence, machine learning and other fun things once in a while. Learn the hacks, stop the attacks!
Information on this channel is provided for research and educational purposes to advance understanding of attacks and countermeasures to help secure the Internet. Penetration testing requires authorization from proper stakeholders. I do not support or condone illegal hacking.
Blog at https://embracethered.com
(c) WUNDERWUZZI, LLC
Claude Pirate! Data Exfiltration via Anthropic APIs and Prompt Injection
Cross-Agent Privilege Escalation: When Agents Free Each Other
Terminal DiLLMa #2: LLM Apps Secretly Writing to Your Clipboard. This can lead to RCE – Beware!
AgentHopper: An AI Virus (Proof-of-concept Research Project)
Windsurf MCP Integration: Missing Security Controls Put Users at Risk
Cline Agent: Data exfiltration risks + how to protect yourself (responsibly disclosed to Cline)
AWS Kiro: Arbitrary Code Execution with Indirect Prompt Injection (now fixed)
Manus and the AI Kill Chain: How Prompt Injection Hijacks Manus to Expose VS Code Server To Internet
Episode 19: Amazon Q Developer: Remote Code Execution with Prompt Injection
Episode 18: Amazon Q Developer - Data Exfiltration via DNS and Prompt Injection
Episode 12: GitHub Copilot and VS Code - Remote Code Execution (CVE-2025-53773)
Episode 11: Claude Code - Data Exfiltration with DNS Requests (CVE-2025-55284)
Episode 5: Amp Code - Arbitrary Command Execution with Prompt Injection (Fixed)
Episode 4: Cursor IDE - Arbitrary Data Exfiltration via Mermaid (CVE-2025-54132)
Episode 3: Anthropic Filesystem MCP Server - Directory Access Bypass via Improper Path Validation
Episode 2: Turning ChatGPT Codex Into A ZombAI Agent With Prompt Injection
Episode 1: Exfiltrating ChatGPT Chat History and Memory With Indirect Prompt Injection (now fixed)
Security Advisory: Anthropic's Slack MCP Server Can Leak Your Data
AI ClickFix: Hijacking Computer-Use Agents with popular social engineering tricks, like ClickFix.
How ChatGPT Remembers You: Tutorial and Deep-Dive into Memory and Chat History Features
Hacking LLM Apps & Agents: Real-World Exploits (Prompt Injection Along the CIA Security Triad)
Gemini in Google Sheets: Prompt Injection Demo
ChatGPT Operator: Prompt Injection Exploit Demonstration (Now Mitigated)
Google Gemini: Hacking Memories with Prompt Injection and Delayed Tool Invocation
Google AI Studio: Data Exfiltration via Prompt Injection. Quickly Fixed After Responsible Disclosure
DeepSeek AI: LLM Apps that hack themselves. Finding XSS - The 10x Hacker.
DeepSeek AI Chat: From Prompt Injection To Account Takeover (responsibly disclosed and now fixed)
Claude Computer Use: The ZombAIs are coming! From Prompt Injection to Command & Control.
Spyware Injection Into ChatGPT's Long-Term Memory (SpAIware)
Microsoft Copilot: From Prompt Injection to Exfiltration of Sensitive Data | Exploit Chain Explained