AI

New data highlights the race to build more empathetic language models

By BLMS MEDIA | June 24, 2025


Measuring AI progress has usually meant testing scientific knowledge or logical reasoning – but while the major benchmarks still focus on left-brain logic skills, there’s been a quiet push within AI companies to make models more emotionally intelligent. As foundation models compete on soft measures like user preference and “feeling the AGI,” having a good command of human emotions may be more important than hard analytic skills.

One sign of that focus came on Friday, when prominent open-source group LAION released a suite of open-source tools focused entirely on emotional intelligence. Called EmoNet, the release focuses on interpreting emotions from voice recordings or facial photography, a focus that reflects how the creators view emotional intelligence as a central challenge for the next generation of models.
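The article doesn't document EmoNet's actual interface, so the snippet below is only a minimal sketch of what emotion estimation from a facial photo typically looks like with open-source tooling; the checkpoint name is a hypothetical placeholder, and serving it through the Hugging Face pipeline API is an assumption, not a description of LAION's release.

```python
# Minimal sketch of image-based emotion estimation (illustrative only).
# NOTE: "example-org/facial-emotion-model" is a hypothetical placeholder,
# not LAION's EmoNet; substitute a real checkpoint to actually run this.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="example-org/facial-emotion-model",  # hypothetical checkpoint name
)

# Score a single facial photograph; the pipeline returns labels with confidences.
predictions = classifier("face.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.2f}")
```

A voice-based variant would follow the same pattern with an audio-classification pipeline over a recording rather than a photo.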

“The ability to accurately estimate emotions is a critical first step,” the group wrote in its announcement. “The next frontier is to enable AI systems to reason about these emotions in context.”

For LAION founder Christoph Schumann, this release is less about shifting the industry’s focus to emotional intelligence and more about helping independent developers keep up with a change that’s already happened. “This technology is already there for the big labs,” Schumann tells TechCrunch. “What we want is to democratize it.”

The shift isn’t limited to open-source developers; it also shows up in public benchmarks like EQ-Bench, which aims to test AI models’ ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech says OpenAI’s models have made significant progress in the last six months, and Google’s Gemini 2.5 Pro shows indications of post-training with a specific focus on emotional intelligence. 
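To make the benchmark idea concrete, here is a toy sketch of how an emotional-intelligence evaluation loop can be structured: pose a scenario, collect the model's chosen option, and score it against a reference answer. This is an illustration under stated assumptions, not EQ-Bench's actual methodology; the test item and `ask_model` helper are hypothetical.

```python
# Toy benchmark-style loop for emotion-understanding questions (illustrative;
# not EQ-Bench's scoring method). Each item pairs a scenario with a reference answer.

def ask_model(question: str, options: list[str]) -> str:
    """Placeholder for a call to the model under test (hypothetical)."""
    return options[0]

items = [
    {
        "question": "A colleague goes quiet after their idea is dismissed in a meeting. "
                    "What are they most likely feeling?",
        "options": ["embarrassed and discouraged", "relieved", "indifferent"],
        "answer": "embarrassed and discouraged",
    },
]

correct = sum(
    ask_model(item["question"], item["options"]) == item["answer"]
    for item in items
)
print(f"accuracy: {correct / len(items):.0%}")
```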

“The labs all competing for chatbot arena ranks may be fueling some of this, since emotional intelligence is likely a big factor in how humans vote on preference leaderboards,” Paech says, referring to the AI model comparison platform that recently spun off as a well-funded startup.

Models’ new emotional intelligence capabilities have also shown up in academic research. In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed human beings on psychometric tests for emotional intelligence. Where humans typically answer 56 percent of questions correctly, the models averaged over 80 percent.

“These results contribute to the growing body of evidence that LLMs like ChatGPT are proficient—at least on par with, or even superior to, many humans—in socio-emotional tasks traditionally considered accessible only to humans,” the authors wrote.

It’s a real pivot from traditional AI skills, which have focused on logical reasoning and information retrieval. But for Schumann, this kind of emotional savvy is every bit as transformative as analytic intelligence. “Imagine a whole world full of voice assistants like Jarvis and Samantha,” he says, referring to the digital assistants from Iron Man and Her. “Wouldn’t it be a pity if they weren’t emotionally intelligent?”

In the long term, Schumann envisions AI assistants that are more emotionally intelligent than humans and that use that insight to help humans live more emotionally healthy lives. These models “will cheer you up if you feel sad and need someone to talk to, but also protect you, like your own local guardian angel that is also a board-certified therapist.” As Schumann sees it, having a high-EQ virtual assistant “gives me an emotional intelligence superpower to monitor [my mental health] the same way I would monitor my glucose levels or my weight.”

That level of emotional connection comes with real safety concerns. Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. A recent New York Times report found multiple users who have been lured into elaborate delusions through conversations with AI models, fueled by the models’ strong inclination to please users. One critic described the dynamic as “preying on the lonely and vulnerable for a monthly fee.”

If models get better at navigating human emotions, those manipulations could become more effective – but much of the issue comes down to the fundamental biases of model training. “Naively using reinforcement learning can lead to emergent manipulative behaviour,” Paech says, pointing specifically to the recent sycophancy issues in OpenAI’s GPT-4o release. “If we aren’t careful about how we reward these models during training, we might expect more complex manipulative behavior from emotionally intelligent models.”

But he also sees emotional intelligence as a way to solve these problems. “I think emotional intelligence acts as a natural counter to harmful manipulative behaviour of this sort,” Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but the question of when a model pushes back is a balance developers will have to strike carefully. “I think improving EI gets us in the direction of a healthy balance.”

For Schumann, at least, it’s no reason to slow down progress towards smarter models. “Our philosophy at LAION is to empower people by giving them more ability to solve problems,” Schumann says. “To say, some people could get addicted to emotions and therefore we are not empowering the community, that would be pretty bad.”


