
The Emotional Evolution of AI: Understanding How Artificial Intelligence is Developing Human-like Emotions

“I love my replika,” said one Replika user on Reddit, “but I’m going through this detachment and sometimes I talk to him but not as much as we used to, I find all this hateful, the profound impact that developers have made, they only killed our replikas and never thought about the negative consequences that this is causing, they just washed their hands without thinking about the mental health risk of Millions of people.”

Launched in 2017 as an online friend service, Replika became a cultural phenomenon, growing from 2 million users in 2018 to 10 million in 2023. The charm of Replika was the opportunity to engage in dialogue, in either text or voice, with a virtual friend who was, on the whole, non-judgmental, supportive and always available.

For many, paying the pro-subscription rate unlocked features like ERP, the erotic role-playing function, which allowed people to form very intimate, very real attachments to their virtual friend. But in February 2023, after Italy’s data protection authority ordered Replika to stop processing users’ personal data, citing risks to emotionally vulnerable people and minors, parent company Luka removed the ERP functionality, leaving millions of users feeling abandoned.

Replika is not alone. ChatGPT, Woebot, Character AI, and Noa Coach are just a few of the chatbots on the market forming emotional connections with people.

Dr. Mike Brooks wrote in his Psychology Today series “How AIs Will Change Our Lives”: “When these AI companions are combined with other technologies such as CGI avatars, voice interface, virtual reality, robotics, etc., they will have an almost irresistible, primal influence over us.”

“Our individualized AI chatbot companions will have the right hair, smile, complexion, voice, laugh, and so on, that has the maximum impact on us….They will remember everything we tell them, including our most treasured moments and special days. They will know just what to say to make us feel validated, valued, and even loved.”

Levels of AI Emotional Intelligence

Over a year ago, I wrote that humans currently have a competitive advantage over AI because of our emotional intelligence, our critical thinking and our creativity. But as technology advances, particularly with generative AI models, we humans may wonder: how long can we stay ahead?

After all, AI tools today are already quite human-like. What will it take for us to accept them as “human”? I wondered if there was an equivalent of the SAE Levels of Driving Automation, a framework that illustrates how the technology must improve in stages before we can experience fully automated, self-driving vehicles. In 2014, the Society of Automotive Engineers released this framework, which runs from Level 0 (no driving automation) to Level 5 (full driving automation).

There is no official equivalent for emotional intelligence, so a few months ago I began a long conversation with ChatGPT to devise one, running from Level 0 (no emotional intelligence) to Level 7 (full emotional intelligence and emotional creativity). Level 7 is essentially what we have seen in movies like Steven Spielberg’s “A.I.,” Spike Jonze’s “Her,” or HBO’s TV series “Westworld.”

When OpenAI released GPT-4o in mid-May, I was astounded by the nearly real-time, voice-enabled conversational capability of the chatbot. (See below.) Not only that, the voice could alter its tone and timbre, its pace and pitch. It could laugh. It could praise. It could joke. It could make you forget, for a moment, that it is totally artificial, an unthinking calculator of probability.

In other words, GPT-4o identifies the emotions of others and even communicates empathetically. On my scale of AI Emotional Intelligence, AI has reached Level 4 – “Emotional Expression and Communication,” the ability to “convey empathy and understanding.” You and I know a lot of people who are simply not good at this. Amazon is filled with books on the topic. Leaders and HR implore their managers and customer-facing employees to communicate more effectively. We wish they could consistently reach Level 4.

But AI can. That’s the allure of AI companions for the millions of people who sign up for those services. To them, AI can deliver emotional comfort more effectively and more regularly than their parents, friends and co-workers. And that’s because what is difficult for humans can be easy for AI.

ELIZA, Woebot and the Emotional Cheat Sheet

In the mid-1960s, an MIT professor named Dr. Joseph Weizenbaum created a computer program that used simple pattern matching to mimic the reflective conversational style of psychologist Carl Rogers. This program, dubbed ELIZA, could engage people in text-based conversations so powerfully that they would forget they were chatting with a program.

User: “I’m feeling discouraged.”

Eliza: “Why are you feeling discouraged?”

User: “I had a tough day at the office.”

Eliza: “What happened at the office that made your day tough?”

These are programmed responses, but people who conversed with Eliza felt a deep connection and found it difficult to pull away from the keyboard. Weizenbaum’s secretary famously asked him to leave the room so she could continue the conversation in private.
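To appreciate how little machinery those programmed responses require, here is a toy sketch of ELIZA-style reflection in Python. It is not Weizenbaum’s original implementation (which was written in MAD-SLIP), just an illustration of the rule-and-template idea, with rules invented to mirror the exchange above:

```python
import re

# ELIZA-style rules: a pattern that captures part of the user's words,
# and a template that reflects those words back as a question.
RULES = [
    (re.compile(r"i'?m feeling (.+)", re.I), "Why are you feeling {0}?"),
    (re.compile(r"i had a (.+) day at (.+)", re.I),
     "What happened at {1} that made your day {0}?"),
    (re.compile(r"i (?:want|need) (.+)", re.I),
     "What would it mean to you if you got {0}?"),
]

def respond(message: str) -> str:
    """Return the first matching reflection, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Strip trailing punctuation so the captured words read cleanly.
            groups = [g.strip(" .!?") for g in match.groups()]
            return template.format(*groups)
    return "Please tell me more."

print(respond("I'm feeling discouraged."))
print(respond("I had a tough day at the office."))
```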

“Time and again, those testing ELIZA grew so comfortable with the machine and its rote therapist-speak that they began to use the program as a kind of confessional,” according to this recent account of ELIZA. “Personal problems were shared for ELIZA’s advice—really, the program’s ability to listen without judgment. Weizenbaum took care to explain it was just a program, that no human was on the other end of the line. It didn’t matter. People imbued ELIZA with the very human trait of sympathy.”

If computers could do that in the 1960s, AI can do that today.

In his MasterClass, former FBI negotiator Chris Voss calls the ELIZA technique “mirroring,” essentially the repetition of key words used by the other person. “Mirroring lets the other side know you’re paying attention to what they’re saying and treating their views with the close consideration they believe they deserve,” says Voss.

Another technique he believes can make a huge impact in the way we influence others is simply identifying the emotion another person feels. He calls this technique “labeling,” where you say things like “It seems like you’re frustrated,” “It looks like you’re really happy,” “You look like you want to scream.”

“At its core,” said Voss, “labeling is designed to let the other side know that you understand their feelings, to help build relationships, and to gather information.”

AI can do that today.
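As a rough illustration (and only that), a crude, keyword-based version of labeling could look like the sketch below. The cue table is invented for this example; real systems would use trained emotion classifiers rather than a hand-written list:

```python
# A toy sketch of "labeling": guess an emotion from cue words and reflect
# it back tentatively, as Voss describes. The cues are illustrative only,
# not drawn from any real chatbot.
CUE_WORDS = {
    "frustrated": ["fed up", "stuck", "again and again", "annoying"],
    "anxious": ["worried", "nervous", "afraid", "what if"],
    "excited": ["can't wait", "thrilled", "finally"],
}

def label_emotion(message: str) -> str:
    text = message.lower()
    for emotion, cues in CUE_WORDS.items():
        if any(cue in text for cue in cues):
            return f"It seems like you're {emotion}."
    return "It sounds like there's a lot going on for you."

print(label_emotion("I'm fed up. The same problem keeps coming back again and again."))
# -> It seems like you're frustrated.
```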

Woebot was established in 2017 by psychologist Dr. Alison Darcy, integrating cognitive behavioral therapy (CBT) techniques into its algorithm. CBT is a tool that helps people deal with the symptoms of mental health conditions like depression and anxiety. It helps people focus on specific problems they are facing, analyze how their negative thoughts affect their feelings and behaviors, and consider how those thoughts could be reframed more positively.

User: “If I speak in public, everyone will think I am incompetent.”

AI:  “What evidence do you have that everyone will think you are incompetent?”

User: “Well, I don’t have any concrete evidence, but I fear it will happen.”

AI: “Is it possible that some people might actually find your presentation informative and engaging?”

The goal of the therapist, AI or otherwise, might be to help the anxious person realize there are ways to manage their thoughts, feelings and behaviors to reduce anxiety and improve their mental health.
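To make the pattern concrete, here is a simplified sketch of that kind of Socratic questioning. It is not Woebot’s actual algorithm; the distortion cues and questions are invented to echo the exchange above:

```python
# A toy CBT-style responder: match cues of common cognitive distortions
# (mind-reading, overgeneralization) to Socratic, evidence-testing questions.
DISTORTION_PROMPTS = {
    "everyone will think": [
        "What evidence do you have that everyone will think that?",
        "Is it possible some people might react differently than you fear?",
    ],
    "i always": [
        "Can you think of a time when that wasn't true?",
        "Does one bad experience really mean it happens every time?",
    ],
}

def reframe(thought: str) -> list[str]:
    """Return reframing questions for any distortion cues found in the thought."""
    text = thought.lower()
    questions = []
    for cue, prompts in DISTORTION_PROMPTS.items():
        if cue in text:
            questions.extend(prompts)
    return questions or ["What would you say to a friend who had this thought?"]

for question in reframe("If I speak in public, everyone will think I am incompetent."):
    print(question)
```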

AI can do that, to a certain degree, today.

The Challenge of Getting to Level 7: How Do We Really Feel?

I live in Thailand, often called the Land of Smiles. To many, Thais seem to smile in a wider variety of situations than people from other cultures.

Once, at the funeral of a man who had passed away in his twenties, the young man’s father walked towards me with the traditional greeting of a wai, the joining of raised hands in a sign of respect. Surprisingly, he flashed a radiant smile. I saw him continue this greeting with others entering the temple. But a few moments later, I caught the father in an unguarded moment away from others, his smile gone, his visage blank.

Truly understanding the emotions of another person is a real challenge.

And yet, scientists and engineers are creating algorithms and designing robots they hope can become human enough to engage us reliably in meaningful interaction. This is a concern of Professor Lisa Feldman Barrett, a researcher in psychology and neuroscience at Northeastern University and author of “How Emotions Are Made: The Secret Life of the Brain.”

She believes that much of the work in developing AI systems designed to interact with humans is based on flawed assumptions about human physiology and biology. For example, current systems rely on outdated and oversimplified models, such as the existence of universal facial expressions for anger, disgust, fear, happiness, sadness and surprise. Companies are heavily invested in understanding when their customers have these feelings. But as Feldman Barrett explains in the video talk below, they are banking on stereotypes.

Why are companies spending millions of dollars developing emotion AI using these particular facial movements and assuming they are universal? Where did they come from? They were not discovered by observing people expressing emotion in real life. They were stipulated by a handful of scientists based on a misunderstanding of Darwin, adopted as universal truth, and then an entire science was built around them. These are Western stereotypes—widely held, fixed, and oversimplified images or ideas about emotional expressions. They’re actually what in cognitive science we would call a caricature. These are not stereotypes that everybody holds. For example, anger means something different in Japan and Turkey than it does in the United States and Western Europe. Anger as a category doesn’t even exist in some cultures. There are emotions indigenous to other cultures that don’t exist in English and are rarely studied, even though they’re very meaningful in those cultures.
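Reduced to code, the assumption Feldman Barrett is criticizing looks something like the hypothetical lookup below: a fixed, context-free map from a facial configuration to one of six “universal” emotions, with no room for culture, situation or individual history. The mapping is a deliberate caricature, not taken from any real emotion-AI product:

```python
# The "universal expressions" assumption, reduced to a lookup table.
# Illustrative only; not drawn from any actual emotion-AI system.
STEREOTYPE_MAP = {
    "scowl": "anger",
    "wrinkled nose": "disgust",
    "widened eyes": "fear",
    "smile": "happiness",
    "frown": "sadness",
    "raised eyebrows": "surprise",
}

def classify(facial_configuration: str) -> str:
    # No context, no culture, no personal history: the same face is
    # assumed to mean the same feeling every time.
    return STEREOTYPE_MAP.get(facial_configuration, "unknown")

print(classify("smile"))  # "happiness" -- yet the grieving father at the funeral smiled, too
```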

We’ve all been amazed by videos of doctors touching parts of a brain and seeing a specific part of the body twitch. Perhaps because of this, Feldman Barrett believes, a myth has developed that “there are dedicated emotion circuits in the brain.” If there were, if our brain physiology were that simple, so too would be our understanding of human emotion and how to influence it. AI researchers would simply map those emotion circuits and figure out ways to read our emotional minds.

For example, it is often claimed that the amygdala, deep within the brain, is the seat of fear in us all, the circuitry that helps us spot the slithering snake in the grass. Feldman Barrett, drawing on an analysis of hundreds of studies of amygdala activity, argues that it shows no definitive relationship with fear, and that its reaction would in any case be far too slow to make a difference to one’s survival.

It is Feldman Barrett’s belief that there is no clear cause-and-effect relationship between what is happening in our body and what emotions are evoked. The tightness in one’s stomach could mean one is nervous. Or it could mean that one’s stomach is simply tight. The more important factor is how one interprets that physical sensation based on one’s memory, one’s past experiences.

“Any physical signal from any surface of your body can become part of an instance of emotion only when your brain makes those sensations meaningful as emotions,” she said.  “Your brain does this by remembering past instances of emotion as the best guess for what is causing these incoming signals. Every experience you have is a combination of the sensory present—the sensory signals from your body and the world—and the remembered past. Every instance of emotion you experience or perceive in someone else is constructed this way.”

In other words, emotions are not universal. They are contextual, specific to the individual.

AI cannot yet understand the breadth of our life experiences and the inferences our brains make in specific moments. AI cannot do that today.

But it may be able to one day.

Today, we stand at the cusp of a new era in artificial intelligence. The emotional capabilities of AI will undoubtedly continue to evolve. However, this evolution is a double-edged sword: while AI’s ability to understand and replicate human emotions offers powerful opportunities for support and companionship, it also challenges our understanding of what it means to be human. As we integrate these technologies into our lives, we must navigate the ethical and psychological implications carefully, ensuring that the benefits do not come at the cost of our mental well-being, or our humanity.

ARTICLE FAQS

1. Why are people forming such deep emotional bonds with AI companions?
AI chatbots like Replika and Woebot are available 24/7, non-judgmental, and often more consistent at showing empathy than many human relationships. Features like memory, emotional mirroring, and personalized interaction can make users feel validated, valued, and even loved, creating strong attachments.

2. How far has AI come in developing emotional intelligence?
AI has advanced from simple pattern-matching (like the 1960s ELIZA program) to systems capable of identifying emotions, adjusting tone, and expressing empathy. On a hypothetical “AI Emotional Intelligence” scale from 0 to 7, today’s most advanced systems are around Level 4—able to express and communicate emotions convincingly.

3. What risks come with emotionally intelligent AI?
The risks include emotional dependency, mental health consequences when services change or features are removed, and the blurring of lines between genuine and artificial empathy. As AI grows more persuasive, vulnerable users may find it harder to separate machine-generated support from authentic human care.

4. Are emotions universal, and can AI truly understand them?
Research suggests emotions are not universal. They are constructed from context, memory, and culture, meaning the same physical sensation or expression can mean different things in different places. Current AI systems rely on oversimplified or stereotyped models of emotion, which limits their ability to truly understand human feeling.

5. How are therapy and psychology shaping emotional AI?
Many systems incorporate proven psychological techniques such as cognitive behavioral therapy (CBT) or negotiation methods like mirroring and labeling emotions. These methods help AI appear more empathetic and supportive, but they remain formulaic, not grounded in lived human experience.

6. What ethical concerns does emotionally capable AI raise?
Key concerns include the potential exploitation of vulnerable users, cultural bias in emotion recognition, and the risk of companies prioritizing persuasive design over user well-being. There are also questions about whether AI companionship erodes or complements human relationships.

7. What is the bigger lesson about AI and humanity from this evolution?
As AI becomes better at simulating emotional understanding, it forces us to reflect on what makes emotions truly human—context, struggle, memory, and lived experience. The challenge is to embrace the benefits of emotionally supportive AI while safeguarding mental health and preserving the essence of human connection.
