Over a dozen global tech companies, including OpenAI, Amazon, Microsoft and Google DeepMind, have committed to setting out AI safety frameworks to mitigate or prevent harm from the technology.
The companies have agreed that in extreme circumstances where these risks cannot be adequately mitigated, they will “not develop or deploy a model or system at all”.
The voluntary commitment, which also includes Chinese firm Zhipu.ai and UAE’s Technology Innovation Institute, was made on the opening day of the AI Seoul Summit, the follow-up to the Bletchley AI Safety Summit, jointly hosted by the UK and the Republic of Korea.
The participating firms have agreed to publish safety frameworks explaining how they will measure the risks of their AI models. The frameworks will set out thresholds at which severe risks would be “deemed intolerable”, along with the steps the companies will take to prevent risks from reaching that stage.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” said Prime Minister Rishi Sunak.
“It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”
The agreement follows the Bletchley Declaration, in which the 28 nations participating in the first AI Safety Summit agreed to collaborate on ensuring protections against AI harm.
“The true potential of AI will only be unleashed if we’re able to grip the risks. It is on all of us to make sure AI is developed safely and today’s agreement means we now have bolstered commitments from AI companies and better representation across the globe,” said Technology Secretary Michelle Donelan.
“With more powerful AI models coming online, and more safety testing set to happen around the world, we are leading the charge to manage AI risks so we can seize its transformative potential for economic growth.”
The full list of 16 firms includes:
- Amazon
- Anthropic
- Cohere
- Google / Google DeepMind
- G42
- IBM
- Inflection AI
- Meta
- Microsoft
- Mistral AI
- Naver
- OpenAI
- Samsung Electronics
- Technology Innovation Institute
- xAI
- Zhipu.ai
The commitment parallels an agreement from the Bletchley Park summit last November for “like-minded countries and AI companies” to test the safety of AI models before they are released, which was also non-binding.
Politico reported last month that Google DeepMind is the only major AI lab that has allowed the UK’s AI Safety Institute to perform pre-deployment safety tests.