The company says its mission is to make building AI models less like alchemy and more like a science. Sure, LLMs like ChatGPT and Gemini can do amazing things. But nobody knows exactly how or why they work, and that can make it hard to fix their flaws or block unwanted behaviors.
“We saw this widening gap between how well models were understood and just how widely they were being deployed,” Goodfire’s CEO, Eric Ho, tells MIT Technology Review in an exclusive conversation ahead of Silico’s launch. “I think the dominant feeling in every single major frontier lab today is that you just need more scale, more compute, more data, and then you get AGI [artificial general intelligence] and nothing else matters. And we’re saying no, there’s a better way.”
Goodfire is one of a small handful of companies, alongside industry leaders Anthropic, OpenAI, and Google DeepMind, pioneering a technique known as mechanistic interpretability, which aims to understand what goes on inside an AI model when it carries out a task by mapping its neurons and the pathways between them. (MIT Technology Review picked mechanistic interpretability as one of its 10 Breakthrough Technologies of 2026.)
Goodfire wants to use this approach not only to audit models (that is, to study ones that have already been trained) but to help design them in the first place.
“We want to remove the trial and error and turn training models into precision engineering,” says Ho. “And that means exposing the knobs and dials so that you can actually use them during the training process.”
Goodfire has already used its methods and tools to tweak the behaviors of LLMs, for example by reducing the number of hallucinations they produce. With Silico, the company is now packaging up many of those in-house methods and shipping them as a product.
The tool uses agents to automate much of the complex work. “Agents are now strong enough to do a lot of the interpretability work that we were doing using humans,” says Ho. “That was sort of the gap that needed to be bridged before this was actually a viable platform that customers could use themselves.”
Leonard Bereska, a researcher at the University of Amsterdam who has worked on mechanistic interpretability, thinks Silico seems like a useful tool. But he pushes back on Goodfire’s loftier aspirations. “In reality, they’re adding precision to the alchemy,” he says. “Calling it engineering makes it sound more principled than it is.”
