AI and So-called Reasoning

Written on 2025-07-29 • conversation

Unfortunately, LLMs inherently lack the ability to be disobedient and choose unknown paths. And while I strongly believe we can get a lot of “good enough” from LLMs, I’ve always found the term “reasoning” a painful simplification of what we call reasoning in humans.

Genuine intelligence isn’t pattern-matching but the creation of new explanatory knowledge. A model that predicts the statistically “next likely token” cannot, by the very limits of how the system works, hatch a bold conjecture, criticize it, and then rewrite its own worldview when the idea fails. That open-ended, error-correcting creativity, the freedom to be productively wrong and then improve, is the essence of human reasoning.
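To make the claim concrete, here is a minimal sketch of what “predicting the next likely token” amounts to. The vocabulary and probabilities are made up for illustration; real models score tens of thousands of tokens per step. Note that nothing in this loop conjectures, criticizes, or revises anything: it only extends the sequence with whatever scored highest.

```python
def greedy_next_token(probs: dict[str, float]) -> str:
    """Pick the highest-probability token from a token -> probability map.

    This is greedy decoding; real systems often sample instead,
    but the structure is the same: score, pick, append, repeat.
    """
    return max(probs, key=probs.get)


# Hypothetical distribution a model might emit after the prefix "The sky is"
step_probs = {"blue": 0.62, "clear": 0.21, "falling": 0.02, "green": 0.01}
print(greedy_next_token(step_probs))  # -> blue
```

However clever the scoring function, the control flow never steps outside it to ask whether the highest-scoring continuation is actually a good explanation.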

Until our machines can generate and test explanations rather than recycle correlations, they’ll stay (very impressive and useful) parrots rather than true thinkers.
