AI systems achieve an 82% score on emotional intelligence assessments, surpassing average human performance of 56%

Artificial Intelligence just did something most people thought it couldn’t: outperform humans in emotional intelligence. Yes, you read that right — a new study from the University of Geneva, in collaboration with researchers from the University of Bern, has shown that large language models (LLMs) like GPT-4 are not only intelligent in a logical sense but also emotionally savvy, more so than the average human.

The implications? Massive. Especially for how we teach, hire, lead, and relate in an increasingly AI-integrated world.

🧪 The Experiment: Can AI Read the Room?

The researchers used the Geneva Emotional Competence Test (GECo), a standardised emotional intelligence assessment built to evaluate the ability to:

  • Recognise emotions in different scenarios
  • Understand emotional consequences
  • Regulate emotional responses
  • Manage interpersonal dynamics

They posed emotionally charged real-life scenarios to both humans and AI systems. These included situations such as managing workplace stress, resolving conflicts, coping with disappointment, and navigating empathy in relationships.

Here’s the shocker: while the average human scored 56% on the test, GPT-4 clocked in at 82%. Even GPT-3.5, an earlier version, managed to achieve 70%, comfortably outperforming most human participants.

🧠 Why It Matters: Not Just Smarter—More Human?

If you’ve ever assumed that emotional intelligence is a profoundly human trait AI could never replicate, this study might rattle that belief.

“People underestimate how deeply emotional understanding is tied to language,” said Dr. Sascha Frühholz, lead researcher from the University of Geneva.

“These models are trained on vast linguistic datasets, which include not just information, but emotional context—from novels, therapy blogs, interviews, social media and more.”

These AI systems aren’t sentient (yet), but they’ve learned to simulate empathy, patience, encouragement, and clarity in emotionally charged contexts — enough to outperform flesh-and-blood humans.

💼 Real-World Applications (Yes, Even for Africa)

Now, this is where it gets juicy for the African context. Let’s not make this another Western research nugget that doesn’t apply on the continent.

Imagine:

  • AI-powered school counsellors in remote areas without enough mental health professionals
  • Emotionally intelligent chatbots for e-health services in rural hospitals
  • Coaching bots helping unemployed youth prepare for emotionally tricky job interviews
  • AI conflict mediators in post-election scenarios where tensions run high

In places where resources are scarce but mobile phones are ubiquitous, emotionally aware AI could help close critical human service gaps.

Of course, there are risks — privacy, cultural nuance, and misuse by manipulative organisations — but the upside is massive if handled responsibly.

⚠️ Critics Are Still Sceptical

Not everyone’s popping champagne. Some psychologists argue that scoring well on tests doesn’t necessarily equate to genuine empathy.

“AI can simulate emotional responses, but it doesn’t feel anything,” said Dr. Nana Mensah, a Ghanaian cognitive scientist. “There’s a difference between acting empathetic and being empathetic.”

That’s true. But ask yourself this: in a therapy session, does it matter who delivers the emotional support, as long as it helps someone heal or grow?

🔮 The Future: More Feeling, Not Less

This breakthrough doesn’t mean AI will replace therapists or become your next best friend. But it does mean our tools are becoming more emotionally aware. And in a world full of disconnection, loneliness, and misunderstanding, maybe that’s precisely what we need.

And here’s the kicker: GPT-4 was also able to generate its own emotional intelligence tests, and they met psychometric standards. The machine isn’t just taking the tests; it’s starting to write the rulebook.


✍️ FanalMag’s Take

If AI can outscore us in knowing what to say, when to say it, and how to say it effectively, maybe the future of AI isn’t about replacing humans, but about helping us become better humans.

This is one of those moments in history when you realise: the future is here. And it’s not cold or robotic. It’s surprisingly… empathetic.
