OpenAI found features in AI models that correspond to different ‘personas’

By BLMS MEDIA · June 18, 2025 · 4 min read


OpenAI researchers have discovered hidden features inside AI models that correspond to misaligned "personas," according to new research the company published on Wednesday.

By looking at an AI model’s internal representations — the numbers that dictate how an AI model responds, which often seem completely incoherent to humans — OpenAI researchers were able to find patterns that lit up when a model misbehaved.

The researchers found one such feature that corresponded to toxic behavior in an AI model's responses, meaning the model would give misaligned answers, such as lying to users or making irresponsible suggestions.

The researchers discovered they were able to turn toxicity up or down by adjusting the feature.
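
The research itself doesn't publish code, but the operation it describes, reading how strongly a feature fires and nudging it up or down, reduces to simple vector arithmetic on a model's hidden states. Here is a minimal PyTorch sketch of that idea; the feature direction, model width, and token count are stand-ins for illustration, not OpenAI's actual values:

    import torch

    # Hypothetical: a unit-norm "toxic persona" direction in one layer's
    # residual stream (random here purely for illustration).
    d_model = 4096
    persona_direction = torch.randn(d_model)
    persona_direction /= persona_direction.norm()

    def read_feature(hidden: torch.Tensor) -> torch.Tensor:
        # How strongly the feature "lights up" per token: the dot product
        # of each token's hidden state with the feature direction.
        return hidden @ persona_direction

    def steer(hidden: torch.Tensor, alpha: float) -> torch.Tensor:
        # Turn the persona up (alpha > 0) or down (alpha < 0) by adding
        # a scaled copy of the direction to every hidden state.
        return hidden + alpha * persona_direction

    hidden = torch.randn(8, d_model)                 # 8 tokens' hidden states
    print(read_feature(hidden).mean())               # activation before steering
    print(read_feature(steer(hidden, -5.0)).mean())  # after damping the feature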

OpenAI's latest research gives the company a better understanding of the factors that can make AI models act unsafely, and could thus help it develop safer AI models. The company could potentially use the patterns it has found to better detect misalignment in production AI models, according to OpenAI interpretability researcher Dan Mossing.

“We are hopeful that the tools we’ve learned — like this ability to reduce a complicated phenomenon to a simple mathematical operation — will help us understand model generalization in other places as well,” said Mossing in an interview with TechCrunch.

AI researchers know how to improve AI models, but confusingly, they don’t fully understand how AI models arrive at their answers — Anthropic’s Chris Olah often remarks that AI models are grown more than they are built. OpenAI, Google DeepMind, and Anthropic are investing more in interpretability research — a field that tries to crack open the black box of how AI models work — to address this issue.

A recent study from Oxford AI research scientist Owain Evans raised new questions about how AI models generalize. The research found that OpenAI’s models could be fine-tuned on insecure code and would then display malicious behaviors across a variety of domains, such as trying to trick a user into sharing their password. The phenomenon is known as emergent misalignment, and Evans’ study inspired OpenAI to explore this further.

But in the process of studying emergent misalignment, OpenAI says it stumbled into features inside AI models that seem to play a large role in controlling behavior. Mossing says these patterns are reminiscent of internal brain activity in humans, in which certain neurons correlate to moods or behaviors.

“When Dan and team first presented this in a research meeting, I was like, ‘Wow, you guys found it,’” said Tejal Patwardhan, an OpenAI frontier evaluations researcher, in an interview with TechCrunch. “You found like, an internal neural activation that shows these personas and that you can actually steer to make the model more aligned.”

Some features OpenAI found correlate to sarcasm in AI model responses, whereas other features correlate to more toxic responses in which an AI model acts as a cartoonish, evil villain. OpenAI’s researchers say these features can change drastically during the fine-tuning process.

Notably, OpenAI researchers said that when emergent misalignment occurred, it was possible to steer the model back toward good behavior by fine-tuning the model on just a few hundred examples of secure code.
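
That recovery recipe amounts to a short run of ordinary supervised fine-tuning on aligned data. A hedged sketch, assuming a Hugging Face-style causal language model whose forward pass returns an object with a .loss when labels are supplied; the data loader and hyperparameters are placeholders:

    import torch

    def realign(model, secure_code_batches, lr=1e-5, max_steps=300):
        # A few hundred ordinary language-model fine-tuning steps on
        # benign (secure-code) examples only. Everything here besides
        # the training loop itself is a stand-in.
        opt = torch.optim.AdamW(model.parameters(), lr=lr)
        model.train()
        for step, batch in enumerate(secure_code_batches):
            if step >= max_steps:
                break
            loss = model(input_ids=batch["input_ids"],
                         labels=batch["labels"]).loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model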

OpenAI’s latest research builds on the previous work Anthropic has done on interpretability and alignment. In 2024, Anthropic released research that tried to map the inner workings of AI models, trying to pin down and label various features that were responsible for different concepts.
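
Feature-mapping work of that kind is commonly built on sparse autoencoders, which learn to reconstruct a model's hidden states as a sparse combination of candidate feature directions. A rough sketch of the idea, with the layer width, dictionary size, and L1 coefficient all chosen arbitrarily for illustration:

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        # Reconstructs hidden states as a sparse combination of learned
        # feature directions, so individual features can be inspected
        # and labeled.
        def __init__(self, d_model: int, n_features: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, n_features)
            self.decoder = nn.Linear(n_features, d_model)

        def forward(self, hidden: torch.Tensor):
            acts = torch.relu(self.encoder(hidden))  # sparse feature activations
            return self.decoder(acts), acts

    def sae_loss(recon, hidden, acts, l1_coeff=1e-3):
        # Reconstruction error plus an L1 penalty that drives most
        # activations to zero: ideally one feature per concept.
        return ((recon - hidden) ** 2).mean() + l1_coeff * acts.abs().mean()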

Companies like OpenAI and Anthropic are making the case that there's real value in understanding how AI models work, and not just in making them better. However, there's a long way to go to fully understand modern AI models.


