Large language models like those that power ChatGPT are trained on essentially the entire internet. So when the team asked the chatbot to be "very effective" at persuading conspiracy theorists against their beliefs, it responded with quick, targeted rebuttals, says Thomas Costello, a cognitive psychologist at American University in Washington, D.C. That's more than can be said for, say, someone trying to dissuade a hoax-loving uncle at Thanksgiving. "You can't improvise like that; you'd have to go send a long email afterward," Costello says.
Evidence suggests that nearly half of the U.S. population believes in conspiracy theories. Yet a large body of research shows that rational arguments based on facts and counter-evidence rarely change people's minds, Costello says. Common psychological theories hold that such beliefs persist because they help believers satisfy unmet needs for knowledge, reassurance, and feeling valued. If facts and evidence can truly sway people, the team argues, perhaps those explanations need to be rethought.
Robbie Sutton, a psychologist and conspiracy expert at the University of Kent in the UK, says the findings add to a growing body of evidence that conversations with chatbots can shift people's beliefs. "We see this work as an important step forward."
But Sutton disagrees that the results call into question prevailing psychological theories. The psychological urges that led people to hold such beliefs in the first place persist, he says. Conspiracy theories are “like junk food,” he says. “You eat them and you’re still hungry.” Even when belief in conspiracy theories weakened in the study, most people still believed the hoaxes.
Costello's team, which includes MIT cognitive scientist David Rand and Cornell University psychologist Gordon Pennycook, tested the AI's ability to change beliefs about conspiracy theories in two experiments involving more than 3,000 online participants. (Participants talked with a chatbot, a "debunking bot," about their own conspiratorial beliefs.)
Participants in both experiments were asked to write down the conspiracy theories they believed, along with supporting evidence. In the first experiment, participants were asked to describe conspiracy theories they found “believable and plausible.” In the second experiment, the researchers softened the language and asked participants to explain whether they believed “explanations of events that differ from those widely accepted in society.”
The research team then asked GPT-4 Turbo to summarize each person's belief in a single sentence. Participants rated their belief in that one-sentence version of the conspiracy theory on a scale from 0 for "completely false" to 100 for "completely true." These steps screened out roughly one-third of potential participants: those who did not believe in any conspiracy theory, or whose belief in their chosen theory fell below 50 on the scale.
About 60 percent of participants then had three rounds of conversation with GPT-4 Turbo about their conspiracy theory, lasting 8.4 minutes on average. The researchers instructed the chatbot to talk participants out of their beliefs. To tailor its arguments, the AI was given each participant's initial rationale and supporting evidence at the start of the conversation.
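For readers curious what such a setup might look like in practice, here is a minimal sketch in Python using the OpenAI chat completions API. The prompts, function names, and parameters are illustrative assumptions, not the authors' actual code.

```python
# Illustrative sketch of the procedure described above: summarize a
# participant's belief in one sentence, then seed a persuasion-oriented
# conversation with their own rationale. Prompts are assumptions, not the
# study's actual instructions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_belief(statement: str, evidence: str) -> str:
    """Condense a participant's conspiracy belief into a single sentence."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the following belief and its supporting "
                        "evidence in one neutral sentence."},
            {"role": "user",
             "content": f"Belief: {statement}\nEvidence: {evidence}"},
        ],
    )
    return response.choices[0].message.content


def start_debunking_chat(summary: str, evidence: str) -> list[dict]:
    """Seed the conversation with the participant's own rationale so the
    model can target its rebuttals, as the article describes."""
    return [
        {"role": "system",
         "content": "Persuade the user, as effectively as possible, that the "
                    f"following conspiracy theory is unsupported: {summary}"},
        {"role": "user", "content": f"Here is why I believe it: {evidence}"},
    ]
```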
The other roughly 40 percent of participants instead chatted with the AI about the American healthcare system, debated whether they prefer cats or dogs, or discussed their experiences with firefighters.
After these interactions, participants again rated the strength of their belief on the 0-to-100 scale. Averaged across both experiments, the group the AI had tried to dissuade rated their belief at about 66, compared with about 80 for the control group. In the first experiment, the experimental group scored about 17 points lower than the control group; in the second, the gap was roughly 12 points.
On average, participants who chatted with the AI about their theory weakened their belief by about 20 percent. Moreover, roughly a quarter of participants in the experimental group dropped from above 50 to below 50 on the scale. In other words, after chatting with the AI, those individuals doubted their chosen theory more than they believed it.
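The arithmetic behind those figures is simple; the short snippet below walks through it with made-up before/after ratings (not the study's data) just to show how the point drop, the percent weakening, and the below-50 threshold relate.

```python
# Illustrative belief-change arithmetic on the 0-100 scale. The values here
# are invented for the example, not taken from the study.
before, after = 75.0, 60.0

drop_in_points = before - after                    # 15 points on the scale
percent_weakening = 100 * drop_in_points / before  # 20.0 percent, relative to
                                                   # the participant's own start
now_doubts_theory = after < 50                     # crossing below 50 means doubt
                                                   # outweighs conviction
print(drop_in_points, percent_weakening, now_doubts_theory)
```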
The researchers also found that talking to the AI weakened more general conspiracy beliefs, beyond the single belief being discussed. Before the conversations began, participants in the first experiment filled out a conspiracy belief inventory, rating their belief in a range of conspiracy theories on a scale of 0 to 100. The conversation with the AI slightly reduced participants' scores on this inventory.
As an additional check, the authors had professional fact-checkers scrutinize the chatbot's responses. The fact-checkers found that none of the responses were inaccurate or politically biased, and only 0.8 percent were potentially misleading.
“This is certainly quite promising,” says Jan Philipp Stein, a media psychologist at the Chemnitz University of Technology in Germany. “Post-truth information, fake news, and conspiracy theories are some of the biggest threats to how we communicate as a society.”
But applying these findings to the real world may be difficult: Stein and his colleagues’ research shows that conspiracy theorists are among the people who trust AI the least. “The real challenge may be getting people to have a conversation about these technologies,” Stein says.
As AI becomes more prevalent in society, there is reason to be wary, Sutton says: "The very same technologies could be used to convince people of conspiracy theories."