OpenAI's Security Blind Spot. How We Stop It | Professor Bo Li
Author: FounderCoHo
Uploaded: 2026-01-14
Views: 10,205
Description:
Professor Bo Li is the co-founder and CEO of Virtue AI and one of the world's leading researchers in trustworthy machine learning and AI security. She is a professor at the University of Illinois Urbana-Champaign and a recipient of MIT Technology Review's Innovators Under 35. Her research has shaped how we understand adversarial attacks, robustness, privacy, and the evaluation of large language models, and her team's work, from physical adversarial stop signs to the award-winning DecodingTrust framework, has become foundational to how academia, industry, and government assess AI safety today. Across academia, policy, and industry, she is defining what it means to build AI systems we can trust.
What happens when AI systems gain memory, tools, and the ability to act before we know how to secure them? In this conversation, Professor Li explains why AI security has reached a critical inflection point as models evolve into autonomous agents. The attack surface expands beyond prompts to actions, workflows, and entire systems. Drawing from years of adversarial ML research and real-world red teaming with organizations like OpenAI and major enterprises, she highlights a key insight: securing individual models isn't enough. AI must be protected at the system level with security designed in from the start. From sandboxing agents to real-time guardrails, Professor Li outlines what it will take to deploy AI safely at scale and how defenders can begin to catch up.
References
Professor Bo Li's LinkedIn – / lxbosky
Professor Bo Li's X – https://x.com/uiuc_aisecure
Jing Conan Wang (Host) – / jingconan
FounderCoHo – / foundercoho
Timestamps
00:00 - Highlights
01:43 - Introduction and interview focus
03:31 - Early research in AI security before it became popular
05:14 - The adversarial stop sign breakthrough
06:18 - Transition from academia to entrepreneurship
08:40 - Finding co-founders and team formation
09:43 - Early customer validation with OpenAI
13:41 - Automated red teaming at scale
20:39 - Red teaming for different AI use cases
31:31 - Enterprise customers and safety concerns
38:18 - Founding story and fundraising
45:53 - RAG security and prompt injection risks
54:21 - Vision and growth for Virtue AI
55:46 - Biggest takeaways from the journey
57:29 - Recommended resources and closing thoughts
59:03 - Outro