UK’s AI Ambitions Hit by US Delays and Trump’s Protectionist Approach
UK’s AI Safety Institute Faces Challenges
The UK’s ambitions to take a central role in policing artificial intelligence globally have been hit by its struggles to launch an outpost in the US and by an incoming Trump administration that is threatening to take a "starkly" different approach to AI regulation.
Delayed Expansion Plans
The British government is looking to bolster its AI Safety Institute (AISI), established last year with a £50mn budget and 100 staff, as it seeks to solidify its position as the world’s best-resourced body investigating the risks around AI. Leading tech companies including OpenAI and Google have allowed the AISI to test and review their latest AI models. However, plans to open a San Francisco office in May were delayed by elections in both the US and UK, as well as by difficulty recruiting for the Silicon Valley outpost, according to people with knowledge of the matter.
Focusing on National Security
People close to the UK government believe that, in an effort to maintain its influence, it will increasingly position the AISI as an organisation focused on national security, with direct links to the intelligence agency GCHQ. Amid a tense period in relations between the UK’s left-leaning Labour government and the incoming US administration, some believe the AISI’s security work could serve as a powerful diplomatic tool.
Trump’s Protectionist Approach
This growing emphasis on security reflects shifting priorities in the US, home to the world’s leading AI companies. President-elect Donald Trump has vowed to cancel President Joe Biden’s executive order on artificial intelligence, which established a US AI Safety Institute. Trump is also appointing venture capitalist David Sacks as his AI and crypto tsar; tech investors are known to be concerned about the over-regulation of AI start-ups.
UK’s Plan to Put AISI on Statutory Footing
The UK government also plans to put the AISI on a statutory footing. Leading companies, including OpenAI, Anthropic and Meta, have volunteered to grant the AISI access to new models for safety evaluations before they are released to businesses and consumers. Under the proposed UK legislation, those voluntary commitments would become mandatory.
Challenges and Conflicts
Despite these commitments, there have been points of conflict with AI companies. The AISI has complained that it was not given enough time to test models before they were released, as tech companies raced one another to launch their latest offerings to the public. Even so, Google, OpenAI and Anthropic were among those that welcomed its work.
Conclusion
The UK’s AI Safety Institute faces significant challenges as it seeks to establish itself as a global leader in AI oversight. Its delayed US expansion and the incoming Trump administration’s protectionist approach threaten to undermine the UK’s ambitions. However, the government’s plan to put the AISI on a statutory footing, together with its sharpened focus on national security, could help the institute maintain its influence in the global AI landscape.
FAQs
Q: What is the UK’s AI Safety Institute?
A: The UK’s AI Safety Institute (AISI) is a government-funded body established to investigate the risks around artificial intelligence.
Q: What is the purpose of the AISI?
A: The AISI aims to identify and mitigate the risks associated with AI, ensuring that the technology is developed and used safely and responsibly.
Q: Who has access to the AISI’s research?
A: The AISI’s research is publicly available. Leading companies, including OpenAI, Anthropic and Meta, have also volunteered to grant the institute access to new models for safety evaluations before they are released to businesses and consumers.
Q: What is the UK government’s plan for the AISI?
A: The UK government plans to put the AISI on a statutory footing, making the voluntary commitments of leading companies mandatory.

