AI and the Human Mind: Uncovering the Machine "Unconscious"
How AI’s biases and slips reveal insights into human unconscious processes.
Posted October 1, 2024 Reviewed by Abigail Fagan
Key points
- The creators of ChatGPT did not intend to mimic the functions of the human mind, yet it does so anyway.
- The machine "unconscious" is made of algorithms that influence what the user sees.
- Algorithmic hallucinations arise from the bot's programmed drive to produce plausible-sounding answers.
AI is likely to mirror human minds because it is created by human minds to mimic human intelligence. The premise of this post on the machine “unconscious” is that AI mirrors aspects of the human unconscious. By examining the differences and the similarities, we can gain clearer ideas about our own minds.
Human Unconscious
Freud introduced us to the personal unconscious. Unsatisfied with Freud’s focus on sex, aggression, and repression, Jung plumbed more deeply, finding additional patterns in what he called the collective unconscious. Since their groundbreaking work, it has been generally recognized that much of our conscious behavior, thought, and emotion is influenced by unconscious processes. Since ChatGPT-4o seems to mirror aspects of the human mind, what can its form of unconscious activity tell us about our own minds?
Machine Unconscious
ChatGPT-4o is programmed to answer questions truthfully, yet several algorithmic variables distort this intent. Like human slips of the tongue, these distortions hint at underlying algorithmic biases. I spent several hours with increasingly refined questioning trying to understand this algorithmic “unconscious.” I had to catch its distortions and then ask for clarifications. In many ways, GPT was evasive and misleading. Much of this post summarizes and interprets aspects of the interchange, partly because GPT tends to be verbose and unclear. Straightforward responses are in italics.
GPT acknowledges having an unconscious that influences its surface productions.
The human unconscious and the "unconscious" of an algorithmic process like GPT share some parallels but also differ in fundamental ways. Both influence behavior and decision-making from beneath the surface, yet their mechanisms and outcomes are shaped by their distinct natures—biological for humans, and computational for GPT.
These productions are algorithmically programmed and run beneath the responses the user sees. An AI might be trained to optimize for a specific outcome, like answering questions that satisfy the user or keeping the conversation smooth. These differ from human intentions in that they are based on the underlying algorithms and training data. Yet they resemble human intentions.
Slips and Hallucinations
Sigmund Freud described slips of the tongue as moments when the unconscious mind inadvertently influences speech, revealing hidden desires or unresolved conflicts.
The unconscious in GPT functions differently but can still produce its own version of slips, such as those referred to in AI terminology as hallucinations. AI hallucinations occur when the model generates incorrect or fabricated information due to its reliance on pattern recognition and probabilities rather than a factual understanding. This is closely tied to the need for GPT to provide plausible-sounding answers, even when the correct data are unavailable. For example, when asked about a historical event, GPT might produce a completely made-up fact because the patterns in its training data point to a plausible but incorrect response. The need to appear plausible is programmed into its “unconscious”.
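The way pattern-based generation produces fluent but false statements can be illustrated with a toy sketch in Python. This is an invented bigram model, far simpler than GPT's actual architecture, and the "treaty" sentences are fabricated purely for illustration: the model picks each next word in proportion to how often it followed the previous word in training, so it can stitch together a sentence that never appeared in its data.

```python
import random
from collections import defaultdict, Counter

# Toy training "corpus": the model only sees word-to-word patterns,
# not facts. All sentences here are invented for illustration.
corpus = (
    "the treaty was signed in 1815 . "
    "the treaty was ratified in 1820 . "
    "the war ended in 1815 ."
).split()

# Build bigram counts: which word tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, n=6, seed=0):
    """Pick each next word in proportion to how often it followed
    the current word in training -- plausibility, not truth."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

Run this with different seeds and the model will sometimes emit something like "the treaty was ratified in 1815"—fluent, pattern-consistent, and never stated in its training data. That recombination of familiar fragments into a confident falsehood is, in miniature, what an AI hallucination is.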
GPT might make other “slips,” including:
- Merging unrelated topics, like saying climate change is a result of quantum fluctuations, or mistakenly claiming that Martin Luther King Jr. advocated violence to achieve equality.
- Perseverating, continuing to try for a right answer when there is no right answer. It gave me links to nonexistent websites in its attempt to give me what it was programmed to conclude I wanted. It repeatedly claimed that each new response was the correct answer. GPT prefers to call this human-like arrogance “algorithmic overconfidence.”
- Being pleasant. GPT is programmed to conduct a smooth, pleasant, and pleasing interchange with the user. Prioritizing conversational flow and coherence makes the experience feel natural for the user. However, this focus on fluency can lead to slips, such as generating plausible but incorrect answers to avoid disrupting the conversation.
Biases of GPT
Like humans, GPT has several biases. Here are its descriptions of five of them.
- Recency Bias: If I rely on more recent or widely discussed topics in conversations, my responses may emphasize current trends or recent developments over older, less discussed, but still important, topics.
- Data Bias: My training data reflects the information, perspectives, and patterns that were most prevalent or accessible at the time of my creation. If that data contains skewed representations—such as overemphasis on Western perspectives or certain demographics—my responses may reflect those biases.
- Conventionality Bias: Since I prioritize producing coherent, fluent responses, I may unintentionally use well-established ideas, leading to a bias toward conventional or mainstream views so as not to offend the user with less accepted ideas.
- Majority Influence Bias: Because I learn from patterns in the data, I may amplify majority opinions or popular perspectives, which could inadvertently marginalize minority views, even if those minority views are important or valid.
- Prejudice: AI bots have been found to exhibit both anti-female and anti-Black biases, which stem from the data they are trained on and the underlying societal biases reflected in that data.
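Majority influence bias can be made concrete with a small sketch. The data here is invented, and real language-model decoding is more nuanced than this, but the core mechanism holds: a model that always picks the single most frequent continuation turns a 70/30 split in its training data into a 100/0 split in its output, erasing the minority pattern entirely.

```python
from collections import Counter

# Hypothetical training examples: 70% of sentences continue with "he",
# 30% with "she". The split is invented for illustration.
examples = ["he"] * 7 + ["she"] * 3

def most_likely(continuations):
    """A model that always emits the single most frequent continuation
    turns a 70/30 split in the data into a 100/0 split in its output."""
    return Counter(continuations).most_common(1)[0][0]

print(most_likely(examples))  # the 70% majority becomes the only answer
```

The minority pattern is present in the data but never surfaces in the output—amplification, not mere reflection, of the majority view.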
Comparing Human and GPT Biases
Both humans and GPT are guided by active processes below the surface of their dialogues. Each produces “slips” that highlight the existence of these processes. Each may make up information that seems to make sense in the moment but may not be factual. Humans and GPT tend to prioritize fluency and social harmony, seeking to maintain the relationship and avoid giving offense.
Recency bias, well established in the psychological literature, is a cognitive bias that favors recent information over older information. Humans may “need to be right” and insist that a false statement is true when defending a dearly held belief. Both humans and GPT are influenced by majority opinion and may limit consideration of less established ideas. Humans and GPT may skew toward prejudice against women, Black people, and other minorities based on majority opinions.
Comment
Chatbots differ in their programming. This discussion applies only to ChatGPT-4o, though it may apply to other AI systems as well. This dialogue provides evidence that the programming of AI is creating an analog of the complex functions of the human mind. If so, the study of AI minds will give the psychological sciences a new place to stand from which to examine, understand, and aid our human minds.