When AI Decisions Go Wrong at Scale—And How to Prevent It With Ran Aroussi

Author: Scrum Master Toolbox Podcast

Uploaded: 2026-02-16

Views: 29

Description: BONUS: When AI Decisions Go Wrong at Scale—And How to Prevent It. We've spent years asking what AI can do. But the next frontier isn't more capability—it's something far less glamorous and far more dangerous if we get it wrong. In this episode, Ran Aroussi shares why observability, transparency, and governance may be the difference between AI that empowers humans and AI that quietly drifts out of alignment.

The Gap Between Demos and Deployable Systems

"I've watched well-designed agents make perfectly reasonable decisions based on their training, but in a context where the decision was catastrophically wrong. And there was really no way of knowing what had happened until the damage was already done."

Ran's journey from building algorithmic trading systems to creating MUXI, an open framework for production-ready AI agents, revealed a fundamental truth: the skills needed to build impressive AI demos are completely different from those needed to deploy reliable systems at scale. Coming from the EdTech space where he handled billions of ad impressions daily and over a million concurrent users, Ran brings a perspective shaped by real-world production demands.

The moment of realization came when he saw that the non-deterministic nature of AI meant that traditional software engineering approaches simply don't apply. While traditional bugs are reproducible, AI systems can produce different results from identical inputs—and that changes everything about how we need to approach deployment.

Why Leaders Misunderstand Production AI

"When you chat with ChatGPT, you go there and it pretty much works all the time for you. But when you deploy a system in production, you have users with unimaginably different use cases, different problems, and different ways of phrasing themselves."

The biggest misconception leaders have is assuming that because AI works well in their personal testing, it will work equally well at scale. When you test AI with your own biases and limited imagination for scenarios, you're essentially seeing a curated experience.

Real users bring infinite variation: non-native English speakers constructing sentences differently, unexpected use cases, and edge cases no one anticipated. The input space for AI systems is practically infinite because it's language-based, making comprehensive testing impossible.

Multi-Layered Protection for Production AI

"You have to put deterministic filters between the AI and what goes back to the user."

Ran outlines a comprehensive approach to protecting AI systems in production:



Model version locking: Just as you wouldn't randomly upgrade Python versions without testing, lock your AI model versions to ensure consistent behavior

Guardrails in prompts: Set clear boundaries about what the AI should never do or share

Deterministic filters: Language firewalls that catch personal information, harmful content, or unexpected outputs before they reach users

Comprehensive logging: Detailed traces of every decision, tool call, and data flow for debugging and pattern detection



The key insight is that these layers must work together—no single approach provides sufficient protection for production systems.
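
These layers are easier to picture in code. Below is a minimal sketch, in Python, of how a locked model version, a deterministic output filter, and structured logging could sit between the model and the user. The `call_model` stub, the regex patterns, and the model identifier are all illustrative assumptions, not code from the episode or from MUXI.

```python
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Model version locking: upgraded only deliberately, after testing,
# just as you would treat a language-runtime upgrade.
MODEL_VERSION = "example-model-2024-01-01"  # hypothetical identifier

# Deterministic "language firewall": patterns that must never reach the
# user, regardless of what the model produced.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
]


def call_model(prompt: str, model: str) -> str:
    """Hypothetical stand-in for your LLM client call."""
    return f"[{model} output for: {prompt}]"


def passes_filters(text: str) -> bool:
    return not any(p.search(text) for p in BLOCKED_PATTERNS)


def handle_request(user_id: str, prompt: str) -> str:
    raw = call_model(prompt, model=MODEL_VERSION)
    allowed = passes_filters(raw)
    # Comprehensive logging: a structured trace of every decision,
    # kept for debugging and pattern detection.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": MODEL_VERSION,
        "filtered": not allowed,
    }))
    return raw if allowed else "Sorry, I can't share that response."


if __name__ == "__main__":
    print(handle_request("u-42", "What do I have to do today?"))
```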

Observability in Agentic Workflows

"With agentic AI, you have decision-making, task decomposition, tools that it decided to call, and what data to pass to them. So there are a lot of things that you should at least be able to trace back."

Observability for agentic systems is fundamentally different from traditional LLM observability. When a user asks "What do I have to do today?", the system must determine who is asking, which tools are relevant to their role, what their preferences are, and how to format the response.

Each user triggers a completely different dynamic workflow. Ran emphasizes the need for multi-layered access to observability data: engineers need full debugging access with appropriate security clearances, while managers need topic-level views without personal information. The goal is building a knowledge graph of interactions that allows pattern detection and continuous improvement.
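
As an illustration of that idea, here is a minimal sketch (my own construction, not MUXI's actual API) of recording each agent decision and tool call once, then exposing both a full engineering view and a redacted, topic-level view of the same trace.

```python
from dataclasses import asdict, dataclass, field
from typing import Any


@dataclass
class TraceEvent:
    step: str                  # e.g. "task_decomposition" or "tool_call"
    topic: str                 # coarse label, safe to show without personal data
    tool: str | None = None    # which tool the agent decided to call, if any
    payload: dict[str, Any] = field(default_factory=dict)  # data passed to the tool


@dataclass
class AgentTrace:
    user_id: str
    events: list[TraceEvent] = field(default_factory=list)

    def record(self, event: TraceEvent) -> None:
        self.events.append(event)

    def engineer_view(self) -> list[dict]:
        """Full detail for debugging, gated by appropriate access controls."""
        return [asdict(e) for e in self.events]

    def manager_view(self) -> list[str]:
        """Topic-level summary with no payloads and no personal information."""
        return [f"{e.step}: {e.topic}" for e in self.events]


trace = AgentTrace(user_id="u-42")
trace.record(TraceEvent(step="task_decomposition", topic="daily agenda"))
trace.record(TraceEvent(step="tool_call", topic="calendar lookup",
                        tool="calendar.list_events", payload={"date": "today"}))
print(trace.manager_view())  # ['task_decomposition: daily agenda', 'tool_call: calendar lookup']
```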

Governance as Human-AI Partnership

"Governance isn't about control—it's about keeping people in the loop so AI amplifies, not replaces, human judgment."

The most powerful reframing in this conversation is viewing governance not as red tape but as a partnership model. Some actions—like answering support tickets—can be fully automated with occasional human review. Others—like approving million-dollar financial transfers—require human confirmation before execution. The key is designing systems where AI can do the preparation work while humans retain decision authority at critical checkpoints. This mirrors how we build trust with human colleagues: through repeated successful interactions...
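
A minimal sketch of what such a checkpoint could look like follows; the `ProposedAction` type and the `ask_human` callback are hypothetical stand-ins for whatever approval channel a real system would use.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], str]
    high_risk: bool  # e.g. financial transfers or other irreversible changes


def run_with_governance(action: ProposedAction,
                        ask_human: Callable[[str], bool]) -> str:
    if action.high_risk:
        # The human retains decision authority at the critical checkpoint.
        if not ask_human(f"Approve: {action.description}?"):
            return "rejected by human reviewer"
    # Low-risk actions run automatically; in practice they would still be
    # logged and sampled for occasional human review.
    return action.execute()


# Example usage with a stubbed-in approver that always says yes.
ticket_reply = ProposedAction("send support reply", lambda: "reply sent", high_risk=False)
transfer = ProposedAction("transfer $1,000,000", lambda: "transfer executed", high_risk=True)
print(run_with_governance(ticket_reply, ask_human=lambda q: True))
print(run_with_governance(transfer, ask_human=lambda q: True))
```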
