The Myth of Explainable AI in High-Stakes Decisions
Author: Chandra Nath
Uploaded: 2026-01-04
Views: 1
Description:
Artificial Intelligence is increasingly discussed as if it were a stable, explainable, and legally containable system.
It is not.
In this video, we examine AI as a dangerous abstraction—a concept that policymakers, lawyers, and regulators often misunderstand when applying legacy legal and ethical frameworks to probabilistic, non-deterministic systems.
Drawing from computer science, military decision-making, and governance experience, this talk explains:
• Why modern AI systems cannot be fully explained or traced in the way law demands
• How demands for perfect explainability can undermine national security and operational effectiveness
• The growing gap between deterministic legal reasoning and probabilistic machine intelligence
• Why treating AI as a conventional “tool” is intellectually and strategically flawed
• What this means for courts, commanders, regulators, and democratic accountability
This is not an anti-AI argument.
It is a warning against oversimplification—and against legislating abstractions instead of realities.
📌 Audience:
Policy makers • Legal professionals • Military officers • AI researchers • Governance reformers • Technologists
📌 Context:
Professional Military Education (PME) • AI Governance • National Security • Law & Technology
📌 Tags:
Artificial Intelligence, AI Governance, Explainable AI, Military Ethics, AI and Law, National Security, Command Judgment, Technology Policy, AI Risk