The UK will publish a series of tests to determine when and how to legislate AI technology, according to reports.
The government is set to clarify its criteria on what developments would be required for it to push for new laws governing AI, according to a report from the Financial Times.
The tech department has previously stated it is in no rush to legislate on AI in the “short term”, instead giving the industry time to grow and innovate unburdened.
Sources close to the situation told the FT that key tests have been determined, the results of which could trigger legal intervention to ensure the UK can keep pace with the risks of AI.
These tests include determining whether the AI Safety Institute, a state-backed group of experts, fails to identify new AI risks as they happen.
Among the biggest risks already identified is the use of AI to spread misinformation, which is of particular concern due to high-profile elections planned this year in the UK, US and beyond.
A spokesperson for DSIT told UKTN: “We set out our pro-innovation approach to regulating AI in our white paper last year and are working closely with regulators to make sure we have the necessary guardrails in place – many of whom have started to proactively take action in line with our proposed framework.
“As the Technology Secretary said in December, we want to make sure we get this right and we will publish our response to the consultation shortly – in the meantime, we will not speculate on what may or may not be included.”
The UK’s approach to AI regulation so far has been fairly light. The government hosted world leaders and AI companies at the AI Safety Summit, at which the Bletchley Declaration was signed, committing leaders from 28 nations to cooperatively mitigate the risks of AI.
However, the government has so far favoured this light-touch approach to AI legislation, while the EU has already agreed its AI Act.
Read more: Is the UK falling behind on AI regulation?