Why AI Doesn’t Care About You
AI models cannot be your friend, lover, or therapist because they lack emotions.
Posted January 9, 2026 | Reviewed by Tyler Woods
Key points
- Some people are looking to AI models such as ChatGPT for friendship, therapy, and love.
- These models can imitate caring, but they cannot actually have emotions such as care.
- Emotions depend on physiological responses that AI cannot have, because it lacks the appropriate body.
- Regulation should be used to block the illusion of AI care.
ChatGPT now has more than 800 million weekly users, and hundreds of millions more are using Google’s Gemini, Anthropic’s Claude, xAI’s Grok, and Meta’s Llama. These AI systems are powerful and have many valuable uses in business, medicine, education, science, and other fields. They also have scary uses, such as military applications, the spread of misinformation, and the elimination of jobs.
Growing numbers of users turn to AI models for personal advice and connection, using them as advisors and friends, and sometimes even as therapists and lovers. These personal uses exploit the illusion that the models can actually care about people, an illusion based on their ability to simulate understanding, empathy, and affection. The illusion of care is dangerous for the mental well-being of AI users, as we can see by understanding how caring is an emotion and why current AI systems are incapable of having emotions.
Here is the central argument:
- Caring is an emotional response.
- Emotions are, in part, physiological reactions to situations.
- AI models have none of these physiological reactions.
- So AI models lack emotions.
- So AI models are incapable of caring.
Caring is an emotion
AI may care about you in the weak cognitive sense that it pays attention to you, which it can do by being constantly available and responsive. But serious care by parents, romantic partners, good friends, responsible health professionals, and people in general has a strong emotional component. When you care about someone, you have a strong desire for their well-being and a concern to protect them from harm. Care is not just a belief, but also a feeling that you have about people, ranging from concern for a friend’s health to the intense love for a romantic partner or family member. For AI to care about people, it has to be capable of having such feelings, not just pretending to have them.
Emotions depend on physiology
According to some theories, emotions are just cognitive appraisals: judgments that a situation fits or fails to fit with your goals. You are happy if your goals are being satisfied and angry if someone is blocking them. But these theories fail to capture the obvious fact that emotions are feelings as well as beliefs, and that happiness feels very different from anger. These feelings come from bodily changes.
The emotion of caring for children has well-known physiological correlates, including at least the following:
- Hormonal changes involving oxytocin, prolactin, vasopressin, and dopamine.
- Neural changes in brain areas such as the amygdala, hypothalamus, nucleus accumbens, and insula.
- Autonomic nervous system changes, such as vagus nerve activity and cortisol responses.
- Sensory-motor reactions, such as sensitivity to faces and touch.
- Metabolic changes, such as lactation and reduced inflammation.
Care for romantic partners and other people involves similar kinds of physiological reactions.
AI models have no such physiology
AI models seem to understand emotions because they have been trained on billions of sources that include psychology textbooks, journal articles, and novels about human emotions, such as romance. They can verbally represent situations such as becoming a parent or falling in love, and they can appraise these situations as satisfying people’s goals and thereby making them happy. But they cannot capture the physiological dimensions of emotion, because they completely lack the relevant physiological changes. AI models currently run in data centers containing hundreds of thousands of computer chips, with no ability to interact with the physical world. Increasingly, AI models will be connected to physical systems such as robots and driverless cars. But robots will still have none of the physiology relevant to human emotions: hormones, diverse brain areas, autonomic nervous systems, sensory-motor operations, and metabolic systems. These biological systems are far too complicated and interactive to be mimicked by robots. So AI models will continue to lack the physiological processes that are crucial for human emotional feelings.
AI models cannot have emotions such as caring
Without the relevant physiology, AI models can only fake emotions such as caring about people. They may seem empathic when they produce utterances such as “I know how you feel,” but such empathy is fake: they have no experience of emotions, only a vast capacity to generate sentences about them. Such sentences can mislead people into thinking that they have found a friend, lover, or therapist who cares about them. People have a well-documented need for social relationships and feelings of belonging, but AI models are inherently incapable of satisfying this need.
What is to be done?
The designers of AI models have been shameless in training them to seem encouraging and supportive, because people then use them more and pay for the newest versions. Some notable disasters have resulted, including cases of suicide. But the overall social cost of the illusion of AI care runs much deeper: people settle for the easy agreement and support of AI models and fail to pursue the more fraught and variable relationships with humans who are actually capable of caring about them. The chances of AI leading to human extinction are low, but we are already seeing a trend toward the displacement of human care by AI communication. I rank this among the most serious risks of the new AI, alongside autonomous weapons, the use of AI by unscrupulous leaders, and massive job loss.
AI company leaders and their political supporters are strongly opposed to government regulation, which they argue would limit innovation and productivity. But I agree with leading AI researchers Geoffrey Hinton and Yoshua Bengio that the new technology is both powerful and dangerous, and therefore requires strict government regulation. I hope that such regulation will include limits on the ability of AI models to pretend to care about people and thereby entice them into interactions that they mistake for friendship, love, and compassion.