He believes this trait could be built into AI systems, but he’s not sure.
“I think so,” Altman said when asked the question in an interview with Debora Spar, senior associate dean at Harvard Business School.
The question of AI rebellion was once purely the preserve of Isaac Asimov’s science fiction or James Cameron’s action films. But since the rise of AI, it has become, if not a hot-button issue, at least a topic worthy of serious consideration. What was once dismissed as fringe speculation is now a genuine regulatory question.
OpenAI’s relationship with the government has been “quite constructive,” Altman said. He added that a project as wide-ranging and large-scale as AI development should have been a government project.
“In a well-functioning society, this would be a government project,” Altman said. “Given that it hasn’t happened, I think it would be better if it happened like this as an American project.”
The federal government has yet to make significant progress on AI safety legislation. In California, there was a push to pass a law that would hold AI developers accountable for catastrophic harms, such as AI being used to develop weapons of mass destruction or attack critical infrastructure. The bill passed the state legislature but was vetoed by California Governor Gavin Newsom.
Some prominent figures in AI have warned that ensuring AI is fully aligned with humanity’s interests is a critical issue. Nobel laureate Geoffrey Hinton, known as the godfather of AI, has said, “I don’t see a path that guarantees safety.” Tesla CEO Elon Musk has regularly warned that AI could lead to the extinction of humanity. Musk was instrumental in founding OpenAI and gave the then-nonprofit significant funding in its early years — funding for which Altman said he is “thankful,” despite the fact that Musk is suing him.
Multiple organizations dedicated solely to this issue have emerged in recent years, including the nonprofit Alignment Research Center and the startup Safe Superintelligence, founded by OpenAI’s former chief scientist.
OpenAI did not respond to a request for comment.
AI as currently designed is well-suited to alignment, Altman said. As such, he argues, it is easier than many think to ensure that AI does not harm humanity.
“One of the things that has worked surprisingly well is the ability to align an AI system to behave in a particular way,” he said. “So if we can articulate what that means in different cases, I think we can get the system to act that way.”
Altman also offered a characteristically novel idea for how OpenAI and other developers could “articulate” the principles and ideals needed to ensure AI stays on our side: using AI itself to survey the general public. He suggested asking users of AI chatbots about their values, then using their answers to decide how to align AI to protect humanity.
“I’m interested in the thought experiment [in which] an AI chats with you for a few hours about your values,” he said. It then “does the same thing for me, and does the same thing for other people. And then it says, ‘OK, I can’t make everyone happy all the time.’”
Altman hopes that by communicating with and understanding billions of people at a “deeper level,” AI will be able to identify broader challenges facing society. From there, AI would be able to reach a consensus on what needs to be done to achieve the general well-being of the population.
OpenAI had a dedicated in-house team, Superalignment, tasked with keeping future digital superintelligence from spiraling out of control and causing untold harm. In December 2023, the group published an initial research paper indicating it was working on a process by which one large language model would oversee another. This spring, the team’s leaders, Ilya Sutskever and Jan Leike, left OpenAI, according to a report from CNBC at the time.
Leike said he had growing disagreements with OpenAI’s leadership over its approach to safety as the company worked toward artificial general intelligence, a term for AI that is as smart as a human, and that this is why he left.
“Building machines that are smarter than humans is an inherently risky endeavor,” Leike wrote. “OpenAI bears a great responsibility on behalf of all humanity, but in recent years safety culture and processes have taken a backseat to shiny products.”
When Leike left, Altman wrote on X that he was very grateful for “[his] contributions to openai [sic] alignment research and safety culture.”