BLMS Media | Breaking News, Politics, Markets & World Updates
Anthropic CEO wants to open the black box of AI models by 2027

By BLMS MEDIA | April 24, 2025 | 4 min read


Anthropic CEO Dario Amodei published an essay Thursday highlighting how little researchers understand about the inner workings of the world’s leading AI models. To address that, Amodei set an ambitious goal for Anthropic to reliably detect most AI model problems by 2027.

Amodei acknowledges the challenge ahead. In “The Urgency of Interpretability,” the CEO says Anthropic has made early breakthroughs in tracing how models arrive at their answers — but emphasizes that far more research is needed to decode these systems as they grow more powerful.

“I am very concerned about deploying such systems without a better handle on interpretability,” Amodei wrote in the essay. “These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.”

Anthropic is one of the pioneering companies in mechanistic interpretability, a field that aims to open the black box of AI models and understand why they make the decisions they do. Despite the rapid performance improvements of the tech industry’s AI models, we still have relatively little idea how these systems arrive at decisions.

For example, OpenAI recently launched new reasoning AI models, o3 and o4-mini, that perform better on some tasks but also hallucinate more than the company's earlier models. OpenAI doesn't know why that's happening.

“When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate,” Amodei wrote in the essay.

In the essay, Amodei notes that Anthropic co-founder Chris Olah says that AI models are “grown more than they are built.” In other words, AI researchers have found ways to improve AI model intelligence, but they don’t quite know why.

In the essay, Amodei says it could be dangerous to reach AGI — or as he calls it, “a country of geniuses in a data center” — without understanding how these models work. In a previous essay, Amodei claimed the tech industry could reach such a milestone by 2026 or 2027, but believes we’re much further out from fully understanding these AI models.

In the long term, Amodei says Anthropic would like to, essentially, conduct “brain scans” or “MRIs” of state-of-the-art AI models. These checkups would help identify a wide range of issues in AI models, including their tendencies to lie or seek power, and other weaknesses, he says. This could take five to 10 years to achieve, but such measures will be necessary to test and deploy Anthropic’s future AI models, he added.

Anthropic has made a few research breakthroughs that have allowed it to better understand how its AI models work. For example, the company recently found ways to trace an AI model’s thinking pathways through what it calls circuits. Anthropic identified one circuit that helps AI models understand which U.S. cities are located in which U.S. states. The company has only found a few of these circuits so far but estimates there are millions within AI models.
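The circuit idea can be illustrated with a deliberately tiny, hand-built sketch. Nothing below is Anthropic's actual code, model, or method; it is a hypothetical one-layer toy in which a single weight block routes a city to its state, and zero-ablating that block (a common crude test in interpretability work) shows what the "circuit" was responsible for:

```python
# Toy illustration of circuit ablation (hypothetical, not Anthropic's code).
# A hand-built weight block acts as a "circuit" mapping a city to its state;
# zeroing it out and watching the output change is one way researchers test
# what a circuit does inside a model.

CITY_TO_ID = {"sacramento": 0, "austin": 1}
STATES = ["California", "Texas"]

# The "circuit": weights routing each city embedding to a state logit.
W_CIRCUIT = [[1.0, 0.0],   # sacramento -> California
             [0.0, 1.0]]   # austin     -> Texas

def forward(city: str, ablate_circuit: bool = False) -> str:
    """Run the toy model; optionally zero-ablate the city->state circuit."""
    x = [0.0, 0.0]
    x[CITY_TO_ID[city]] = 1.0  # one-hot "embedding" of the city
    w = [[0.0, 0.0], [0.0, 0.0]] if ablate_circuit else W_CIRCUIT
    logits = [sum(x[i] * w[i][j] for i in range(2)) for j in range(2)]
    # With the circuit ablated, the logits collapse to zero and the model
    # has effectively "lost" its city-to-state knowledge.
    if max(logits) == 0.0:
        return "unknown"
    return STATES[logits.index(max(logits))]

print(forward("sacramento"))                       # California
print(forward("sacramento", ablate_circuit=True))  # unknown
```

Real models contain millions of such pathways tangled together, which is why finding and validating even one circuit, as Anthropic did for city-state lookups, is considered a research result.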

Anthropic has been investing in interpretability research itself and recently made its first investment in a startup working on interpretability. While interpretability is largely seen as a field of safety research today, Amodei notes that, eventually, explaining how AI models arrive at their answers could present a commercial advantage.

In the essay, Amodei called on OpenAI and Google DeepMind to increase their research efforts in the field. Beyond the friendly nudge, Anthropic’s CEO asked governments to impose “light-touch” regulations to encourage interpretability research, such as requirements for companies to disclose their safety and security practices. Amodei also says the U.S. should put export controls on chips headed to China, in order to limit the likelihood of an out-of-control, global AI race.

Anthropic has always stood out from OpenAI and Google for its focus on safety. While other tech companies pushed back on California’s controversial AI safety bill, SB 1047, Anthropic issued modest support and recommendations for the bill, which would have set safety reporting standards for frontier AI model developers.

In this case, Anthropic seems to be pushing for an industry-wide effort to better understand AI models, not just increasing their capabilities.


