When it comes to AI, the UK has long been at the forefront of innovation, from the invention of the Turing machine to modern breakthroughs in neural networks.
Reflecting this strong innovation culture is the nation’s AI strategy, which currently leaves it to government departments and regulators to manage AI risks within their own sectors, rather than pushing for AI-specific laws.
How is the UK’s strategy going so far? In this article, I’ll explore the key legal developments in the UK AI market and how they compare with those of its neighbour across the Channel, the European Union (EU).
As of March 2024, the UK doesn’t have a binding national law that specifically regulates the development, deployment and use of AI.
Similar to the US, the UK relies on a fragmented patchwork of existing laws.
But guess what – that’s all part of the government’s plan.
Under its national AI strategy, the UK has expressed ambitions to become a “global AI superpower” within the next decade.
And they have a good shot at it: the UK currently has the third highest number of AI companies in the world (after the US and China) and is the second most popular global destination for AI research.
So when it comes to regulation, the British favour a light-touch, pro-innovation approach. In fact, in its national AI white paper released in March 2023, the government said it would not rush to introduce new AI-specific legislation yet.
This position was reaffirmed by Prime Minister Rishi Sunak at the Global AI Safety Summit hosted by the UK in November 2023 (another strategic move by the UK to catch the global spotlight).
But the UK government is not completely hands-off either. In March 2023, it established a base set of ethical principles to guide the use and development of AI systems across all sectors.
These ethical principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The idea is that the central government will rely on regulators to interpret and apply these principles within their respective sectors. This allows each regulator to deal with the risks and issues of AI in a way that works for its own domain.
To date, here are some recent developments from various regulators in the UK:
By contrast, the EU’s incoming AI Act will specifically regulate AI systems across all sectors based on their risk level, under the banner of a single law.
The EU and UK thus sit at two ends of a spectrum: the former has gone for a centralised ‘one law fits all’ approach, while the latter prefers a lighter, decentralised framework.
While it might be too early to say whose approach is better, the British approach does seem less rigid, allowing businesses to innovate more quickly while also giving the government flexibility to respond to harms as they arise.
Given the UK’s competitive ambitions in the AI space, it’s likely that the UK will maintain its differences with the EU and keep its pro-innovation stance.
In early February 2024, the UK government released an updated white paper reaffirming its approach, which suggests things are working well for now.
While such a system is flexible, the government should not let it lead to a messy patchwork of siloed developments across sectors.
That’s what happened in the US, where federal and state agencies did their own thing until things got too messy and President Biden had to step in with an executive order to restore coordination among them.
Another limitation of the UK approach is that it relies on the regulators to do their job effectively. This may be hard to guarantee.
For example, the working group convened by the Intellectual Property Office recently failed to agree on a voluntary code of practice for the use of copyrighted material in AI training. If this deadlock drags on, should the central government step in to take over (and if so, when)?
The UK will need to find the right balance in policymaking responsibilities between the central government and regulators. And this balance will need to adapt and keep pace with advances in AI technology.
But so far, the government seems to be on the right track.
In November 2023, the government established the AI Safety Institute, a central hub tasked with testing the safety of emerging types of AI against the above ethical principles. Under the updated white paper, the government also recently announced:
These initiatives are not tied to one sector, and should help improve central coordination and consistency among regulator-led activities.
A key characteristic of the British legal system is the ‘common law’: a system in which courts are bound by past decisions on similar issues (precedents) when deciding cases.
Some high-profile AI-related cases have been brought before the UK courts.
Currently, the most prominent one is the lawsuit filed by Getty Images against Stability AI.
It alleges that Stability AI used Getty Images’ stock photos without a licence to train its models. The outcome of this lawsuit will likely have significant implications for intellectual property law around AI.
Could the courts play a role here in shaping the UK’s AI regulation?
Realistically, not a significant one. The role of the courts is limited to interpreting existing laws (e.g. clarifying ambiguities in legislative text) within the context of a litigated case.
Courts cannot create new legislation, which makes the court system reactive rather than proactive. Plus, the practical issues of time and cost deter litigation in the first place.
So it’s likely that central government policy and regulator-led activities will carry the weight of AI regulation in the UK.
Time will tell whether the UK’s ‘pro-innovation’ framework will truly boost innovation and propel the nation into AI superpower status.
Editor’s Note: This article is not intended to provide legal advice or to be a comprehensive guide or reference.