Artificial Intelligence
Why Natural General Intelligence (Still) Reigns Supreme
Today's AI is still entirely dependent on human achievement.
Posted November 21, 2025 | Reviewed by Davia Sills
Key points
- Despite the hype, the dream of artificial general intelligence has not yet been achieved.
- AI can't replicate three critical features of the natural general intelligence shown by humans and non-human animals.
- AI still has some distance to go when it comes to logic, associative learning, and value sensitivity.
Despite the impressive achievements of current generative AI systems, the dream of Artificial General Intelligence remains far away, notwithstanding the hype offered by various tech CEOs.[1] The reasons are easy to state, if hard to quantify. Human intelligence requires three primary features, none of which have been fully cracked: logic, associative learning, and value sensitivity. I’ll explain each in turn.
Logic
Logic was once thought to be the apotheosis of human reasoning and the key to human intelligence.[2] Getting machines to reproduce logical inference was a massive breakthrough in the mid-1950s, with Newell, Simon, and Shaw’s Logic Theorist (1956)[3] and General Problem Solver (1957)[4], which were able to perform logical inferences and even prove some advanced mathematical theorems from the Principia Mathematica. The success reportedly prompted Simon to say to his students, “Over Christmas, Al Newell and I invented a thinking machine.”[5]
Millions upon millions of dollars were subsequently spent in anticipation of “solving” the problem of intelligence. But it didn’t work, not really. Logic-based AI—as useful as it remains today—proved brittle in the face of incomplete or contradictory information; perceptual inputs proved difficult (even impossible!) to capture in logical formulae; and as every schoolkid knows, logic is too hard to realistically be the whole or even the root of human cognition.[6]
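The brittleness described above is easy to see even in a toy rule system. Here is a minimal sketch of forward-chaining logical inference, in the spirit of (but far simpler than) systems like Logic Theorist; the facts and rules are invented for illustration:

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: if all premises hold, add the conclusion."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# A classically brittle rule: it fires for penguins, too, and the system
# has no way to handle the exception unless someone hand-writes another rule.
rules = [
    ({"bird", "alive"}, "can_fly"),
    ({"can_fly"}, "can_reach_roof"),
]

print(sorted(forward_chain({"bird", "alive"}, rules)))
# ['alive', 'bird', 'can_fly', 'can_reach_roof']
```

Everything the system "knows" has to be spelled out as a rule in advance, which is exactly why incomplete or exception-ridden information defeats this style of AI.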
Associative Learning
This brings us to associative learning. Recognizing the co-occurrence of properties (smoke, fire) or the predictive power of events (that bell ringing means food is coming) is a massively important part of all animal—including human—learning. And as B.F. Skinner knew back in the 1930s, it all comes down to being sensitive to environmental contingency, or, to put it differently, to the statistical structure of environmental events.[7]
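Skinner's notion of contingency has a standard formal cousin in the Rescorla-Wagner learning rule (my example, not one cited in this article). A few lines of Python show how an association strength can track the statistics of bell-and-food pairings; the numbers are illustrative:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Track the learned association strength between bell and food.

    v moves toward lam on food trials and toward 0 on no-food trials;
    alpha is the learning rate.
    """
    v = 0.0
    history = []
    for food_present in trials:
        outcome = lam if food_present else 0.0
        v += alpha * (outcome - v)  # update toward the observed outcome
        history.append(v)
    return history

# Bell reliably followed by food: association strength climbs toward 1.
history = rescorla_wagner([True] * 20)
print(history[-1] > 0.99)  # True
```

The model captures the statistical sensitivity Skinner identified; what it does not capture is how an animal picks out "bell" and "food" as the relevant events in the first place.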
The powerful statistical analysis of very large datasets is arguably where AI has made its greatest advances. From image and video classification systems to ChatGPT, machine learning systems have made impressive strides. So impressive, in fact, that we have started to hear folks say—or warn us about—the very thing that Dr. Simon claimed back in 1956: We may have invented thinking machines! We haven’t.
Let’s stick just to ChatGPT and its large language model (LLM) brethren, as these are constantly in the news for replacing programmers and teachers and student learning—or, at least, effortful student reading and writing.[8] As impressive as their capabilities are, they rely on a bit of a hack: They achieve their power by uploading—stealing, really—and re-analyzing the linguistic output of human beings, which is a massive, pre-digested, and hugely simplified account of the world around us.
When someone says, “That’s a green apple,” or “Democracy is the best (or worst) form of government,” they’ve already done the incredible and mysterious task of breaking up the world into concrete and abstract things—apples and democracies—naming them and describing some of their properties. They’ve performed a massive task of simplification (what computer scientists call dimensionality reduction) before any computer even takes a crack at it.
Indeed, there are robust statistical regularities governing the sentences we utter and inscribe, and taking advantage of these regularities has proven quite powerful. But there’s not a computer on the planet that can do what your dog easily and naturally can: notice when you reach for a leash, or even just move toward the door in a particular way or at a particular time, and know that a W-A-L-K is about to happen. The problem of associative learning—in the general form it appears in the actual world—has not come close to being cracked by AI.
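The kind of regularity language models exploit can be caricatured with a toy bigram model that predicts the next word from co-occurrence counts (the corpus is invented for illustration, and real LLMs are vastly more sophisticated):

```python
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the dog caught the ball".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "dog" -- the most common word after "the"
```

Note what the model is handed for free: a world already carved into words by human speakers. The statistics are the computer's; the carving is ours.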
Value Sensitivity
And why does Rover get so excited about that walk? Because he loves walks. Walks—and sticks and smells and exercise—matter to Rover. For Rover, and for all animals, including humans, the world has value, is positively suffused with it. The ability to perceive value in the world is central to how we navigate it, to what we do next. Moreover, this is a pervasive feature of life (think Maslow’s Hierarchy of Needs[9]).
Why am I discussing value in the context of intelligence? Because part of being smart is figuring out what matters. No computer does this. (And yes, reinforcement learning engineers[10], I hear you. You program your machines with value functions to help optimize learning outcomes. But let’s be honest: that just means you recognize how crucial the idea of value is to learning and intelligence. It doesn’t mean we understand it yet.)
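To see what that parenthetical concedes, here is a minimal temporal-difference sketch of a reinforcement-learning value function (the toy states, rewards, and parameters are invented). The machine learns values only after an engineer has specified what counts as reward:

```python
def td0(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0) on a toy two-state chain: start -> goal."""
    V = {"start": 0.0, "goal": 0.0}
    for _ in range(episodes):
        # One fixed episode: the agent moves from "start" to "goal"
        # and receives a hand-specified reward of 1.0 on arrival.
        s, s_next, reward = "start", "goal", 1.0
        V[s] += alpha * (reward + gamma * V[s_next] - V[s])
    return V

V = td0(200)
print(round(V["start"], 2))  # 1.0 -- the value assigned to "start"
```

The update rule is genuine learning, but the reward signal itself is stipulated by a human; nothing in the system discovers that reaching the goal matters.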
So, indeed, ChatGPT is impressive. As were the Logic Theorist and the General Problem Solver. But at least for the time being, these machines impress because they can rely on human intelligence, not because they can replace it.
References
1. Stokel-Walker, C. (2024, September 17). OpenAI says the latest ChatGPT can ‘think’ – and I have thoughts. The Guardian. https://www.theguardian.com/technology/2024/sep/17/techcsape-openai-chatgpt-thoughts?CMP=share_btn_url
2. Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
3. Newell, A., & Simon, H. (1956). The logic theory machine: A complex information processing system. IRE Transactions on Information Theory, 2(3), 61-79.
4. Newell, A., Shaw, J. C., & Simon, H. A. (1959). Report on a general problem solving program. In IFIP Congress (Vol. 256, No. 1). Note that while the publication appeared in 1959, Newell reports that the program was written in the summer of 1957: Simon, H. A. (2013). The scientist as problem solver. In Complex information processing (pp. 375-398). Psychology Press.
5. Quoted in McCorduck, P. (2004). Machines who think (2nd ed.). Natick, MA: A. K. Peters, p. 138.
6. Anderson, M. L., & Perlis, D. R. (2002). Symbol systems. In Encyclopedia of Cognitive Science. Macmillan.
7. Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Appleton-Century.
8. Kosmyna, N., Hauptmann, E., Yuan, Y., Situ, J., Liao, X.-H., Beresnitzky, A., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://doi.org/10.48550/arXiv.2506.08872
9. Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370-396.
10. Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237-285.
