
AI Safety Summit: Four takeaways from Bletchley Park

Image credit: Kirsty O'Connor / No 10 Downing Street

After plenty of hype from the government, the first international AI Safety Summit has drawn to a close. The prime minister and the technology secretary promised the world a landmark event that would see the international community come together to jointly tackle what some have called the greatest threat to human existence.

The event was global in its ambition, but in physical terms, it was a relatively small affair, with around 100 guests convening at Bletchley Park, once the headquarters of British codebreakers in the Second World War.

For those who couldn’t be there, here are UKTN’s biggest takeaways from the AI Safety Summit.

The Bletchley Declaration is a small first step 

The headline development from the two-day summit was 28 nations signing the Bletchley Declaration, which establishes a shared understanding of the risks associated with frontier AI.

The declaration is light on tangible actions and is more a symbolic first step than a roadmap. It does not legally bind the signatories to take any action, but securing the signatures of both the US and China is a notable achievement nonetheless.

A non-binding agreement from AI industry leaders to submit new models for independent safety tests was one of the few other concrete developments from the AI summit.

There is also hope for further progress at the follow-up AI safety events in South Korea and France announced on Wednesday, underscoring that this summit was more preamble than problem solver.

The AI Safety Summit was nonetheless a milestone moment for what will certainly be one of the most significant technological innovations in modern history. If nothing else, it achieved the surprisingly difficult task of getting governments from six continents and tech billionaires, mostly from the US, to agree on something, however general that agreement might be.

China’s attendance is significant 

In the world of international diplomacy, it can be hard enough to get a small group of like-minded nations to agree on anything. The AI Safety Summit went a step further by bringing together countries with tense relationships and a history of non-cooperation.

The significance of China coming together with the US and its Western allies to agree to cooperate on the future of AI regulation should not be underestimated. Prior to the summit, there was fierce debate over whether China should even be invited, particularly as stories of Chinese espionage in the UK government had broken only months earlier.

It was ultimately decided that China, as one of the most technologically advanced nations, was too significant to the future of AI to be ignored, a point made across British party lines by the likes of Labour’s Darren Jones and the AI-focused Tory peer Viscount Camrose.

The question then became whether China would accept the invitation. It ultimately did, which can be read as a meaningful gesture from both the West and China that some things are bigger than political feuds. It should, however, be acknowledged that while China signed the Bletchley Declaration and actively participated in the event, it is not clear how closely the nations actually agree on the exact approach to regulating AI.

It lacked transparency

The Bletchley Declaration mentions ‘transparency’ four times, but the summit itself was far from open. The majority of the talks were held behind closed doors, and UKTN, along with a brigade of journalists from around the world, was confined to a media centre in a different building from the main event.

Other than a handful of brief interview opportunities and off-the-record Q&As, there was little chance to scrutinise the discussions held by some of the world’s most powerful tech companies and governments about a technology that will affect everyone.

The situation was so tightly controlled that one prominent journalist likened it to an event run by the Chinese state: the press was told that leaving the designated media room without explicit permission and a government chaperone risked a challenge from security or the police.

The US will likely lead the conversation 

Some see the AI Safety Summit as a legacy-defining event for Rishi Sunak. The prime minister has said his goal is to make the UK the “geographical home” of AI safety, with the Bletchley Park summit taking centre stage in that ambition.

Yet the summit often found itself overshadowed by the US, home to the world’s largest tech sector and most powerful AI companies. Two days before the summit, the US flexed its regulatory muscles: President Joe Biden unveiled an executive order setting safety requirements for the AI industry. During the summit itself, the US commerce secretary announced an American AI Safety Institute to match the one in the UK.

Analysis by UKTN shows that half of the industry representatives attending the summit were from the US, including the likes of Meta and Google. That is more than double the number of industry representatives from the UK.

Perhaps the greatest upstaging was Sunak interviewing tech billionaire Elon Musk, owner of X and CEO of Tesla, on Thursday. The love-in ranged from references to James Cameron films to Musk’s claim that AI will eventually mean nobody needs to work.

The interview added a bizarre touch of celebrity circus to what was otherwise a significant overture to regulating the most important technology of the decade.
