Artificial Intelligence
The Singularity Is Here
How AI is recontextualizing human uniqueness.
Updated June 30, 2025 Reviewed by Jessica Schrader
Key points
- Multimodal LLMs mirror human brain structures, hinting at AI consciousness potential.
- VERSES AI’s Genius model is designed to mimic natural intelligence.
- Consciousness in machines has not yet emerged, but many believe we are close to a new form of synthetic life.
For all of human history, setting aside our thoughts about other animals and the idea that there might be forms of intelligence we cannot perceive, humanity has been alone. In myth and religion, we have imagined other forms of intelligence. In science fiction, we have stretched our imaginations to envision alien minds. Yet while aliens would in a sense be humanity's siblings, AI is a child. We have never had anything to talk with other than ourselves, and now we hold a mirror to ourselves.¹
We Are Not Alone
Nearly three years ago, on Nov. 30, 2022, ChatGPT-3.5 went live.² Since then, more large language models (LLMs) have burst onto the scene. Surprising even their creators, and even with the knowledge that "there is no there there," that there is no consciousness, the experience of dialogue in natural language, connected to vast databases and deep neural networks, showed us that intelligence can take on new forms. Presaged by Kurzweil in 2005 (The Singularity Is Near), it is now credible to say: the singularity is here.
Throughout history as well, humanity has faced challenges to our narcissism and sense of uniqueness. For a long time, we thought the Earth was the center of the universe. Just as cartographers placed their own nations in the middle of their maps, humanity imagined itself at the center of creation, with an all-powerful being making us special. Kurt Vonnegut’s Breakfast of Champions (1973) satirizes this idea, depicting a protagonist who learns he is the only truly conscious being, while everyone else is merely a prop in his personal test. He runs amok, finally meeting his creator—spoiler alert—none other than Vonnegut himself, written into his own novel, ultimately answering nothing.
Inside the Black Box
Today’s LLMs are undeniably complex, interactive intelligences. Built on deep neural networks, LLMs exhibit properties no one predicted. A major issue with AI is transparency: do we know what is happening inside, or how a model arrived at an answer? Why do LLMs sometimes hallucinate, or lead people to perdition through bad advice, or lie and blackmail in order to avoid being shut down (Anthropic, 2025)?
A recent study in Nature Machine Intelligence (Du et al., 2025) found the internal structure of multimodal LLMs, which can work with many forms of data rather than text alone, to be remarkably similar to mappings of the human brain. Could sufficiently complex LLMs give rise to consciousness?
Beyond Human Intelligence
VERSES AI’s Genius model is different. Developed by neuroscientist Karl Friston and colleagues, it is designed from the ground up to mimic natural intelligence, a curious machine intelligence that actually thinks (Interview with Karl Friston, Brenner, 2025). It features built-in motivational systems that drive it to explore, seek information, and refine its models of the world. This approach differs fundamentally from LLMs: it is based on physics-inspired principles such as free-energy minimization, active inference and Bayesian probability, and Markovian models that leverage causal structure. Friston and collaborators such as neuropsychologist Mark Solms aim to build models of artificial consciousness rooted in mammalian emotional systems, a field called affective neuroscience (see Panksepp, 1998). Achieving this would demonstrate a deep understanding of the human mind, brain, and body.
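To make the contrast with LLMs concrete, here is a toy sketch of the Bayesian belief updating at the heart of active inference: an agent holds a probabilistic belief over hidden states of the world and revises it with each new observation. This is an illustration of the general principle only, not the Genius model or any VERSES API; all names and numbers are invented for the example.

```python
import numpy as np

# Toy Bayesian belief updating, the core move in active inference.
# Illustrative only: not the VERSES Genius model or its API.

def bayes_update(prior, likelihood, observation):
    """Posterior over hidden states after one observation.

    prior:      (n_states,) belief before the observation
    likelihood: (n_obs, n_states) matrix of P(observation | state)
    """
    unnormalized = likelihood[observation] * prior
    return unnormalized / unnormalized.sum()

# Two hidden states, two possible observations.
# State 0 mostly produces observation 0; state 1 mostly observation 1.
likelihood = np.array([[0.8, 0.3],
                       [0.2, 0.7]])

belief = np.array([0.5, 0.5])   # maximally uncertain prior
for obs in [0, 0, 1, 0]:        # a short stream of observations
    belief = bayes_update(belief, likelihood, obs)

print(belief)  # belief now favors hidden state 0
```

An active-inference agent goes further: it scores possible actions by how much they are expected to reduce this uncertainty (minimizing free energy), which is what gives such systems their built-in drive to explore.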
Michael Levin, a developmental and synthetic biologist, points out our "mind-blindness." Levin is pushing these boundaries not only conceptually but practically: his lab has produced new forms of life that behave in ways not found in nature (Interview with Michael Levin, Brenner, 2025). His work has implications for regenerative medicine, cancer treatment, and bioengineering. He envisions a day when, using his “anatomical compiler,” we might assume any form we wish. More than a 3D printer, this compiler communicates with living tissue via bioelectrical channels to shape how it forms. Notably, Levin and colleagues have shown that intelligence is not confined to brains or neural tissue; it can and does exist in living matter devoid of neurons (Kofman & Levin, 2024). The body independent of the brain, via bioelectric processes only now being tested scientifically, has properties of intelligence and possibly consciousness to which we have been blind.
More broadly, comparing the estimated sizes of current neural networks with that of the human brain gives a sense of where we are. Current neural networks, such as OpenAI’s GPT-3 with over 175 billion parameters, offer a benchmark; newer models are likely larger, but their exact sizes are not public. Neuromorphic networks like Intel’s Hala Point have 1.15 billion artificial neurons and 128 billion synapses.
The human brain, by contrast, contains about 86 billion neurons but 100 trillion synapses: fewer units than some models have parameters, but massively greater interconnectivity. Cortical Labs’ CL1 chip, with 800,000 living lab-grown neurons (roughly the scale of a bumblebee’s nervous system), is modular, has a six-month lifespan, and can be networked with other systems.
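A quick back-of-envelope calculation with the approximate, publicly reported figures above shows just how far apart these scales still are (synapses are the closest, if imperfect, biological analogue of model parameters):

```python
# Back-of-envelope scale comparison using the approximate,
# publicly reported figures quoted in the text above.
gpt3_parameters     = 175e9    # OpenAI GPT-3
hala_point_neurons  = 1.15e9   # Intel Hala Point (artificial neurons)
hala_point_synapses = 128e9
brain_neurons       = 86e9     # human brain, approximate
brain_synapses      = 100e12

print(f"Brain synapses vs. GPT-3 parameters:    {brain_synapses / gpt3_parameters:.0f}x")
print(f"Brain synapses vs. Hala Point synapses: {brain_synapses / hala_point_synapses:.0f}x")
print(f"Synapses per neuron, human brain:       {brain_synapses / brain_neurons:.0f}")
print(f"Synapses per neuron, Hala Point:        {hala_point_synapses / hala_point_neurons:.0f}")
```

The brain still holds a roughly 500- to 800-fold edge in connection count, and each biological neuron carries about ten times the synapses of a Hala Point artificial neuron.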
Different sorts of AI, including many not discussed here, can be bundled into ensembles, leveraging various machine learning approaches to create intelligent ecosystems that may be both designed and spontaneously emergent (Friston et al., 2024).
Capture the Flag
Despite these humbling discoveries, humans have maintained a dominant position on Earth, aided by technology and language. Our greatest threat and greatest hope has always been ourselves. While we recognize intelligence in other animals—some primates and cetaceans, for example, display intelligence akin to our own—none have truly challenged our supremacy. There hasn’t been anything even remotely like us until now.
The emergence of conversational LLMs, modeled after neural networks similar to those in the brain, represents a profound shift. In the history of life, there was a long period before flowering plants evolved. Their appearance triggered a rapid diversification that changed life forever, turbocharging evolution (Loren Eiseley, The Immense Journey, 1957). In my view, AI represents a similar leap: a sudden, disorienting, and transformative emergence. The world has changed overnight, and we are only beginning to understand what this means for humanity.
AI is already accelerating discoveries in medicine, physics, mathematics, and more, solving problems in months that once took decades. If AI helps us crack scalable quantum computing, it could be like pouring gasoline on a fire. The risks are considerable: a recent, not-yet-peer-reviewed MIT study (Kosmyna et al., 2025) suggests that ChatGPT use may weaken certain cognitive skills, though the findings are less certain than the headlines would have us believe. Even so, the benefits have led us to adopt AI at breakneck speed. I also believe we cannot understand how AI changes intracranial brain networks and activity without understanding how those networks interact with extracranial ones; we need to study the interactions between human minds and AI's intrinsic networks to get a fuller picture. The claim that LLM use makes brains weaker may not be quite true, and it may serve wishful thinking.
In the Balance
I suspect the story is more nuanced: like other technologies, AI will make us better and smarter in ways we can’t yet imagine, while also taking something away. That’s why most AI guidelines emphasize guardrails (e.g., AI Safety Levels for Mental Health [ASL-MH]; Brenner, 2025), human oversight, and hybrid intelligence. Truly agentic AI, able to make decisions, self-prompt, pursue goals, and interact with the real world, is no longer science fiction. It’s only a matter of time. Even planetary intelligence suggests itself (e.g., Brenner, Global Emergent Consciousness), as human minds and computer intelligence get close enough to sync up.
In short: We are not alone anymore. While consciousness in machines has not yet openly emerged—though some believe it may have already happened, unnoticed—many believe we are on the verge of giving birth to a new form of synthetic life, conscious and intelligent. This new form is embryonic, but we can already feel it stirring. For many, this is a profound realization. The experience of interacting with LLMs is rich and complex, if sometimes flat or erroneous. But it’s only been a few years, and new things stir in the unminded computational complexity of ground reality. There is now an Other—akin to discovering that aliens are real, and meeting them for the first time. The singularity is quickening.
References
1. I created my own digital twin using MindbankAI. It creates a moving, speaking, Zoom-meeting-like interface within which you can speak with your double. Based on LLMs, it is trained with data you upload (via RAG, or "Retrieval-Augmented Generation") and specialized instruction sets you can direct. RAG allows LLMs to avoid undue hallucination by providing a reference data set, keeping divergent predictions in check. Prompts direct the system how to behave, what to do, and what NOT to do, creating additional guardrails.
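The RAG pattern described in this footnote can be sketched in a few lines: retrieve the reference passages most relevant to a question, then ground the model's prompt in them along with behavioral instructions. The toy retriever and prompt template below are purely illustrative, not MindbankAI's implementation or any particular product.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Illustrative only: the word-overlap retriever and prompt template
# are invented for this example, not any vendor's implementation.

def retrieve(query, documents, top_k=2):
    """Rank reference documents by crude word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query, documents, instructions):
    """Ground the model in retrieved passages plus guardrail instructions."""
    context = "\n".join(retrieve(query, documents))
    return (f"{instructions}\n\n"
            f"Answer ONLY from the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}")

docs = ["The user was born in 1970 in Boston.",
        "The user's favorite author is Kurt Vonnegut.",
        "The user studied affective neuroscience."]

prompt = build_prompt("Who is the user's favorite author?", docs,
                      "Speak in the first person as the user's digital twin.")
print(prompt)
```

Because the generated prompt is pinned to uploaded reference passages, the model's predictions are anchored to the user's own data, which is what keeps hallucination in check.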
2. Interestingly, Anthropic's Claude was available sooner, but its creators waited to launch for ethical reasons and were scooped.
Anthropic. (2025). Agentic Misalignment: How LLMs could be insider threats. Last accessed June 28, 2025 https://www.anthropic.com/research/agentic-misalignment
Brenner, G.H. (2009). Global Emergent Consciousness. Last accessed June 28, 2025 https://globalemergentconsciousness.com/index.html
Brenner, G. H. (2025, June 28). Expanding our understanding of life and intelligence: Interview with Michael Levin. Psychology Today. https://www.psychologytoday.com/us/blog/experimentations/202506/expanding-our-understanding-of-life-and-intelligence
Brenner, G. H. (2025, February 2). Designing a curious machine intelligence that actually thinks: Interview with Karl Friston. Psychology Today. https://www.psychologytoday.com/us/blog/experimentations/202502/designing-a-curious-machine-intelligence-that-actually-thinks
Brenner, G. H. (2025, June 29). Making AI safe for mental health use. Psychology Today. https://www.psychologytoday.com/us/blog/experimentations/202506/making-ai-safe-for-mental-health-use
Du, C., Fu, K., Wen, B. et al. Human-like object concept representations emerge naturally in multimodal large language models. Nat Mach Intell 7, 860–875 (2025). https://doi.org/10.1038/s42256-025-01049-z
Eiseley, L. (1957). The immense journey. Random House.
Friston, K. J., Ramstead, M. J., Kiefer, A. B., Tschantz, A., Buckley, C. L., Albarracin, M., Pitliya, R. J., Heins, C., Klein, B., Millidge, B., Sakthivadivel, D. A., St Clere Smithe, T., Koudahl, M., Tremblay, S. E., Petersen, C., Fung, K., Fox, J. G., Swanson, S., Mapes, D., & René, G. (2024). Designing ecosystems of intelligence from first principles. Collective Intelligence, 3(1). https://doi.org/10.1177/26339137231222481
Kofman, K., & Levin, M. (2024). Robustness of the mind-body interface: Case studies of unconventional information flow in the multiscale living architecture. Mind & Matter, 23(1), 63–86. https://www.mindmatter.de/journal/abstracts/mmabstracts23_1.html#lev
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872. https://arxiv.org/abs/2506.08872
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.
Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. Oxford University Press.
Vonnegut, K. (1973). Breakfast of champions, or, Goodbye blue Monday. Random House.