AI Threat Landscape: Model Poisoning and Prompt Injection
Автор: SecGuy
Загружено: 2026-02-17
Просмотров: 95
Описание:
When you type a password, the computer knows it's data. But when you talk to an AI, your instructions and your data are the exact same thing—just tokens in a stream. That single flaw is the root of every AI attack. In this video, Sec Guy explains the math behind Universal Adversarial Triggers, reveals how Indirect Prompt Injection can turn a resume into a weapon, and shows why Token Smuggling allows malware to slip right past your firewall.
[Exam Ready Route - FREE]
Pass your certification for $0.
✅ Training Videos & Practice Tests
✅ Sec Guy Mobile Lab (On-the-go training powered by AI voice)
✅ Discord Access (Study sessions & Industry networking)
👉 Start Here: https://secguy.org
[Job Ready Route - MEMBERSHIP]
Stop studying and start working. Get the hands-on experience hiring managers are asking for.
🔥 Hands-On Labs: Python, Encryption, Hashing, AI, & CTFs
🔥 Salary Negotiator Workshop
🔥 Experience Builder: Real-world projects to fill your resume
👉 Get Hired: https://secguy.org
[Exam Domain Checklist]
This video covers critical objectives for the following exams:
Security+
[ ] Domain 2.6: Artificial Intelligence (Prompt Injection, Training Data Poisoning)
[ ] Domain 2.2: Vulnerabilities (Supply Chain Attacks - Poisoned Models)
CISSP
[ ] Domain 8: Software Development Security (Input Validation in AI Systems)
[ ] Domain 1: Security and Risk Management (AI Risk Assessment)
CISM
[ ] Domain 2: Information Risk Management (Emerging Tech Risks: AI & ML)
CRISC
[ ] Domain 2: IT Risk Assessment (Adversarial AI & Model Theft)
CCSP
[ ] Domain 4: Cloud Application Security (Securing AI APIs & Rate Limiting)
SecurityX (CompTIA)
[ ] Domain 3.0: Security Operations (Detecting Adversarial ML Attacks)
GIAC GSEC (SANS)
[ ] Emerging Threats: AI & LLM Security
AWS CSS (Certified Security – Specialty)
[ ] Domain 1: Threat Detection (Anomalous API Usage & Cost Attacks)
Pentest+ (CompTIA)
[ ] Domain 3: Attacks and Exploits (Prompt Injection & Jailbreaking LLMs)
CEH (Certified Ethical Hacker)
[ ] Domain 10: Web Server & Application Hacking (AI-Specific Injection Vectors)
SecAI+
[ ] AI Security: Universal Adversarial Triggers (UAT), Indirect Injection, Token Smuggling, Model Inversion
[Timestamps]
0:00 - Intro: Data vs. Instructions (The Core Flaw)
0:48 - Context Mixing: The "System Prompt" Vulnerability
1:28 - Type 1: Persona Modification ("Do Anything Now" / DAN)
1:54 - Type 2: Logical Bypass (Translation & Educational Intent)
2:25 - Type 3: Universal Adversarial Triggers (The Math of "ZXCVB")
3:05 - Indirect Prompt Injection: The Resume Scanner Attack (Zero Click)
3:50 - RAG Poisoning: When the AI Searches a Malicious Site
4:22 - Token Smuggling: Bypassing Firewalls via Payload Splitting
4:54 - Availability Attacks: Wallet Exhaustion & Recursive Loops
5:37 - Defense: Prompt Firewalls & Canary Tokens
5:56 - Homework: Glitch Tokens
6:14 - Outro: Train Hard, Stay Secure.
Повторяем попытку...
Доступные форматы для скачивания:
Скачать видео
-
Информация по загрузке: