The prime minister has announced a joint agreement between “like-minded countries and AI companies” to test the safety of AI models before they are released.
Speaking at a press conference for the AI Safety Summit, Prime Minister Rishi Sunak said the agreement would build on the G7’s Hiroshima Process and the Global Partnership on AI.
Under the plan, new AI models will be tested by the AI Safety Institute, the successor to the Frontier AI Taskforce led by Ian Hogarth. It is unclear whether the agreement is voluntary or binding.
Sunak said safety would be ensured “with the public sector capability to test the most advanced frontier models”.
The prime minister said that the AI companies that attended the summit, including ChatGPT developer OpenAI and Elon Musk’s xAI, had granted the UK “privileged access” to their technology.
“Our Safety Institute will work to build our evaluation process in time to assess the next generation of models before they’re deployed next year,” said Sunak.
Earlier today, the Labour Party said that, if elected, it would introduce a binding requirement for AI companies to submit new models for independent safety testing before release.
The AI Safety Summit, which took place on 1 and 2 November, was the first in what will be a series of international meetings discussing how to tackle the dangers of AI.
The invite list notably included China, whose attendance drew scrutiny amid geopolitical tensions and allegations of spying in Westminster.
“Some said we shouldn’t even invite China, others said that we could never get an agreement with them. Both were wrong,” Sunak said.
The prime minister conceded it “wasn’t an easy decision” to invite China and said he couldn’t predict with certainty whether the country would stick to agreements made at the summit.
However, he described the presence of China and its signing of the Bletchley Declaration as a success.
On Wednesday, the US announced an AI safety institute of its own. It came days after President Joe Biden signed an executive order requiring AI developers to share safety results with the US government.