Artificial Intelligence
AI and the Probabilistic Self
Who are you before you choose who to be?
Posted April 9, 2025 | Reviewed by Kaja Perina
Key points
- LLMs generate words by sampling probabilities—not by logic, but by statistical collapse.
- Human identity may work the same way, as a fluid self collapsing into context-driven roles.
- Unlike LLMs, we can defy the expected to choose meaning over probability.
I’ve spent the last few years immersed in the strange and compelling world of large language models (LLMs)—writing, speaking, and, yes, obsessing over their inner workings.
From the collapse of thought in hyperdimensional space to the Cognitive DAO, I’ve explored how these machines don’t just process language—they provoke new ways of understanding thought itself.
What fascinates me most is how LLMs generate language not through linear reasoning or fixed logic, but through probability. They evaluate a prompt, weigh the surrounding context, and then sample the next word from a distribution of possibilities. There is no single, inevitable path. Every word is a kind of cognitive dice roll.
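To make that dice roll concrete, here is a minimal sketch of temperature-based sampling in Python. The four-word vocabulary, the logit values, and the function names are all invented for illustration; a real LLM computes scores over tens of thousands of tokens from learned weights, but the sampling step works in essentially this way.

```python
import math
import random

# Toy vocabulary and raw scores (logits). These numbers are hypothetical;
# a real model produces them from its learned parameters.
vocabulary = ["dog", "cat", "idea", "storm"]
logits = [2.1, 1.9, 0.3, -1.2]

def softmax(scores, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Lower temperature sharpens it (more predictable);
    higher temperature flattens it (more surprising)."""
    scaled = [s / temperature for s in scores]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(words, scores, temperature=1.0):
    """Roll the cognitive dice: pick one word in proportion to its probability."""
    probs = softmax(scores, temperature)
    return random.choices(words, weights=probs, k=1)[0]

# Each run may print a different sequence; there is no single inevitable path.
for _ in range(5):
    print(sample_next_word(vocabulary, logits, temperature=0.8))
```

Run it a few times and the output drifts: usually "dog" or "cat", occasionally "idea", rarely "storm". That variability is the whole point.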
At first glance, this feels foreign: mechanical, even alien. But the more I reflect on it, the more I wonder whether this is really just a machine thing.
What if it’s a mind thing?
What if we, too, move through the world not with a singular, fixed identity, but with a field of internal probabilities, collapsing into selfhood moment by moment?
When you walk into a room, you don’t always show up as the same "you." There’s the confident speaker, the quiet parent, the challenger, the harmonizer. These aren’t masks. They’re versions of you—a range of selves shaped by memory, context, intention, and social cues. The self, in this view, is not a singular, unbroken thread. It’s a cloud of possibility that collapses into action. And like an LLM, that collapse is contextual, fluid, and often probabilistic.
Psychology has long hinted at this multiplicity of self. Erving Goffman famously described identity as performance: a dynamic, socially responsive act. Our online personas offer everyday evidence of the same multiplicity. Neuroscience points the same way. The brain is a prediction machine, sampling from prior experience and updating its expectations about what comes next.
Our sense of self may be less about continuity and more about coherence in the moment.
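To make the "prediction machine" idea concrete, here is a toy Bayesian update in Python. The hypotheses and probabilities are invented for illustration; predictive-processing accounts of the brain are far richer than this, but the core move, weighing prior expectation against new evidence, looks like this.

```python
# Prior expectations shaped by past experience (hypothetical numbers).
prior = {"friendly room": 0.7, "hostile room": 0.3}

# How well each hypothesis explains a surprising cue, say, a frown.
likelihood = {"friendly room": 0.2, "hostile room": 0.9}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 2) for h, p in unnormalized.items()}

print(posterior)  # {'friendly room': 0.34, 'hostile room': 0.66}
```

One unexpected frown and the expectation for that room shifts; the next moment's "you" is sampled from an updated distribution.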
In this light, maybe an LLM isn’t so foreign. It mirrors something curiously human—the act of choosing from within a field of internal probabilities.
But here’s the interesting twist. LLMs optimize for likelihood. Humans can defy it. We don’t always collapse to what’s expected or safe. Sometimes we act in direct opposition to our predicted self: taking risks, showing grace, disrupting patterns. Where a model would default to coherence, a human might reach for a more complex realization; dare I say, transcendence.
So who are you before you choose? A cluster of probabilities, a latent narrative, or even a cognitive superposition? And this act of choosing, of collapsing into form, isn’t just how we move through the world; it’s how we become who we are.