A study of 52 developers just challenged everything I thought I knew about AI productivity.
Author: Tamara Lechner
Uploaded: 2026-02-09
Views: 22
Description:
Researchers gave half the group access to a GPT-4o coding assistant. The other half got documentation only.
The result?
AI users scored 17% lower on debugging, code reading, and conceptual understanding.
And here's the part that surprised me: AI didn't even make them faster. The time "saved" on code generation was spent composing prompts and second-guessing outputs.
But the most concerning finding was about debugging, the skill you need most to verify AI output. The AI group encountered one error on average; the control group encountered three. Those errors were growth opportunities the AI group never got.
The researchers call it the "supervision gap": as AI shifts our role from creator to supervisor, we may never develop the skills we need to supervise effectively.
This maps directly to the AI for Human Flourishing framework's Growth & Development dimension. AI that accelerates output while degrading capability isn't a productivity tool—it's a career risk.
The study found six patterns of AI use. Only one—"Conceptual Inquiry," asking why rather than how—preserved learning outcomes.
How is your organization designing AI use to protect skill development? I'm curious what policies or practices are working.