Apples, Oranges, and the Epistemic Fault Lines Between Humans and AI | Favela & Silberstein | #51
Author: Madhur Mangalam, PhD
Uploaded: 2026-02-03
Views: 159
Description:
Today on BeyondPhrenology, I sit down with Luis Favela and Michael Silberstein for a blunt audit of a new confusion spreading through cognitive science: the lazy habit of treating humans and AI as comparable systems just because they can produce similar outputs. In "Apples, Oranges, and the Epistemic Fault Lines Between Humans and AI," we dig into what gets erased when performance becomes the only metric that matters—history, embodiment, development, need, vulnerability, and the organismic constraints that make human cognition a way of being alive, not a pattern-matching trick in a box.
We interrogate the metaphors doing the damage: "intelligence," "representation," "understanding," "hallucination," "agency." We ask what those words actually commit us to—and what they conceal when imported wholesale from human psychology into machine learning. The result is a conversation about category errors dressed up as insight, and why the current AI discourse keeps flattening explanation into analogy.
This isn't an "AI is amazing" episode or an "AI is doomed" episode. It's a demand for conceptual hygiene: to stop pretending apples and oranges are the same fruit because they're both round, and to rebuild our questions around mechanisms, constraints, and forms of life. Call it an intellectual autopsy—aimed, again, at resurrection.
Also on Spotify: https://open.spotify.com/episode/3nrx...
TIMELINE
00:00:00 — Human cognition and LLMs: Why the comparison fails
00:13:50 — Confusing the map for the territory
00:19:05 — Clarifying the distinction between neuroscience and cognitive neuroscience
00:23:20 — Explanation is not reduction: metaphors, mechanisms, and the measurement gap
00:28:05 — Clarifying the core challenge facing cognitive neuroscience
00:37:55 — When philosophy overwhelms neuroscience: How excess theory can do harm
00:41:25 — Mega-labs and corporations flooding the literature with bad theories
00:49:45 — LLMs as assistive technologies vs. comparators for human cognition
00:52:30 — The strange anthropomorphization of AI systems
00:58:20 — Why we should reward scientists who admit they were wrong late in their careers
01:05:00 — The hubris behind claims of artificial general intelligence (AGI)
01:11:00 — Seeing LLMs for what they are
01:14:35 — Agency, information-processing, and other misleading comparisons (aka bullshit)
01:24:50 — Organismic intelligence vs. "brain-in-a-vat" GenAI
01:32:55 — The proliferation of bad theories of consciousness and the question of when robots "become human"
01:52:30 — Infinite flexibility with ideas of consciousness
01:55:20 — Moltbook and other interesting stuff
Luis Favela: https://hpsc.indiana.edu/about/facult...
Michael Silberstein: https://facultysites.etown.edu/silber...
Luis's and Michael's work:
The Ecological Brain: https://www.taylorfrancis.com/books/m...
Emergence in Context: https://global.oup.com/academic/produ...