The Strawberry Question That Exposed a Big AI Problem
Author: Dominic Ligot
Uploaded: 2026-01-10
Views: 157
Description:
In 2024, the internet obsessed over a small mistake. An AI was asked how many R’s were in “strawberry.” It got it wrong. The error went viral, and many people used it as proof that AI couldn’t be trusted.
But the real lesson wasn’t about spelling. It was about expectations.
AI doesn’t work like a search engine with a brain. It doesn’t look up facts. It predicts language based on patterns. Most of the time, that works. Sometimes, especially with short or unusual questions, it fails.
I tried another test. I asked how many T’s were in “Rappler.” There are none. Yet several AI systems confidently claimed otherwise. Some even explained their answers. All of them were wrong.
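For contrast, both letter-counting questions are trivial when answered by actual computation rather than language prediction. A minimal sketch in Python (the helper name `count_letter` is my own, not from the article):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
print(count_letter("Rappler", "t"))     # 0
```

The gap exists partly because language models typically process text as multi-character tokens rather than individual letters, so character-level questions fall outside the patterns they predict well.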
This is what we call hallucination. When there isn’t enough context, AI fills the gap by guessing. And because it’s designed to sound helpful, the guess often feels convincing.
That’s the real risk. Not that AI makes mistakes, but that it makes them confidently.
AI is not a source of truth. It's a tool for working with information: summarizing, simplifying, and brainstorming. It performs poorly when treated as an authority.
The strawberry problem isn’t a joke. It’s a reminder that intelligence isn’t one skill, and confidence isn’t accuracy. AI is already part of everyday life. So are its errors. Learning to live with both is the real challenge.