Where AI Regulation Stands in the EU, According to a Tech Lawyer

Raymond Sun

When it comes to AI regulation, the European Union (EU) is an early mover.

Over the past three years, the EU has been working on the Artificial Intelligence Act (or EU AI Act), which is close to becoming law. It will be the first law in the world to broadly regulate AI systems across all sectors based on their risk level.

But will this act support or break the AI industry? In this article, I’ll explain the basics of the EU AI Act, and why I think it’s an imperfect but sensible standard to keep AI systems safe and responsible.

How do EU laws work?

The EU is not a country, but a political and economic union of (currently) 27 member states across the European continent, established in 1993.

Each member state remains independent, but the EU as a whole can create ‘regulations’ that directly bind member states, or ‘directives’ that each member state must enact into its own national law.

Member states then work together to develop, negotiate and approve EU laws.

Introducing the EU AI Act

Since 2021, EU lawmakers have been working on the EU AI Act, which will be a ‘one-stop-shop’ law to regulate AI.

The EU AI Act will also fill the gaps left by other EU laws that only regulate specific aspects of AI on a case-by-case basis, such as the General Data Protection Regulation (GDPR) on privacy and the Digital Services Act on online safety.

On Feb. 2, 2024, EU lawmakers finalized the draft text of the Act, which will now proceed to formal approval in the EU Parliament (and will likely come into force this year).

The leaked final text of the EU AI Act can be found here.

How will the EU AI Act work?

The EU AI Act will capture all types of “AI systems” across the board (not just in a particular sector) and regulate them based on their risk level.

  • “Prohibited” systems refer to AI used for manipulating behavior, exploiting vulnerable people, and social scoring (i.e. classifying people based on their social behavior or personal characteristics). Certain exceptions will apply for law enforcement purposes.
  • “High risk” systems include those that aren’t banned, but pose a significant threat to health, safety or fundamental rights. For example, AI systems used in critical sectors like health, critical infrastructure, education, employment and law enforcement. These will need to follow strict transparency, data governance, risk management, registration and reporting obligations.
  • “Limited risk” systems are those used in critical sectors, but aren’t so risky because they only perform narrow procedural or preparatory tasks, or support human assessments. Such systems only need to be documented and be transparent to users (e.g. users should be made aware they are interacting with an AI system or that content was AI-generated).
  • “General purpose AI models” (GPAI models) cover large language models like GPT-4 and Gemini, while “General purpose AI systems” (GPAI systems) are multi-purpose apps built on those models (e.g. ChatGPT). Both GPAI systems and models must come with technical documentation and detailed summaries about their training data.
  • GPAI models with “systemic risk” must comply with further rules and codes of practice around testing, governance and security. What counts as “systemic risk” will be defined by the EU Commission in due course.

Systems that don’t fall into the above categories will be unregulated. This should be the case for most AI systems, like AI-enabled video games or spam filters.

Open source models will also be exempt from the EU AI Act unless they are integrated into a prohibited or high risk system, or are GPAI models with systemic risk.

It's also important to note that the Act will apply both within and outside the EU (i.e. non-EU businesses that provide AI systems in the EU will need to comply with the Act).
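To make the tiered structure concrete, here’s a minimal sketch in Python (purely illustrative, not part of the Act) of how an organization might triage its own systems against these categories. The category names mirror the Act, but the fields, keyword lists and decision order below are my own simplifications rather than the legal tests.

```python
from dataclasses import dataclass, field

# Illustrative only: these sets paraphrase the Act's categories; the real
# legal tests are far more detailed and context-dependent.
PROHIBITED_USES = {"behavioral manipulation", "exploiting vulnerable people", "social scoring"}
HIGH_RISK_SECTORS = {"health", "critical infrastructure", "education", "employment", "law enforcement"}

@dataclass
class AISystem:
    name: str
    uses: set = field(default_factory=set)   # intended purposes
    sector: str = ""                          # where it is deployed
    narrow_task_only: bool = False            # only procedural/preparatory support
    gpai_model: bool = False                  # general purpose model (e.g. an LLM)
    systemic_risk: bool = False               # threshold to be defined by the EU Commission

def triage(system: AISystem) -> str:
    """Rough, non-legal approximation of the Act's risk tiers."""
    if system.uses & PROHIBITED_USES:
        return "prohibited"
    if system.sector in HIGH_RISK_SECTORS and not system.narrow_task_only:
        return "high risk (transparency, data governance, risk management, registration)"
    if system.sector in HIGH_RISK_SECTORS:
        return "limited risk (documentation + user transparency)"
    if system.gpai_model:
        return "GPAI model" + (" with systemic risk (extra testing/governance)" if system.systemic_risk else "")
    return "unregulated (e.g. spam filters, AI in video games)"

print(triage(AISystem(name="CV screening tool", sector="employment")))
print(triage(AISystem(name="chat assistant built on an LLM", gpai_model=True)))
```

In reality the categories can overlap (a GPAI model can also sit inside a high risk system), so treat the decision order here as just one way to read the structure.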

Has the EU made the right move?

Overall, the EU AI Act represents a sensible start toward protecting society from harmful and dangerous AI applications that wouldn’t otherwise be fully monitored under other EU laws.

But three key areas of the Act could be improved.

1. Loose Definition for “AI system”

According to the draft, “AI system” will be defined in the law as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The proposed definition of “AI system” seems loose.

The phrases “machine-based system” and “influence physical or virtual environments” could arguably apply to all machines and software, not just machine learning (which is what we really mean when we say “AI”).

The only keyword that separates simple software from AI is “infers,” yet it’s not exactly clear what that means.

This ambiguity becomes tricky when dealing with software that doesn’t use machine learning methods but still produces high-risk decisions or outputs.

For example, from 2016 to 2019, the Australian government deployed an automated system called Robodebt.

It was designed to calculate outstanding welfare debts by comparing different sources of income data. It technically didn’t use any machine learning, but its algorithm was flawed and miscalculated debts, leading to over 500,000 wrongly issued debt notices.

The errors of Robodebt caused unnecessary stress to many welfare recipients, and were linked to some tragic incidents of suicide.

But based on the Act’s definition, Robodebt didn’t “infer from the input it receives” as required for AI systems. So, would a system like it be caught under the EU AI Act? It should be, but the murky language might let it fly under the radar.
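To see the problem concretely, here’s a minimal, hypothetical sketch of the kind of fixed-rule income-averaging logic that sat at the heart of Robodebt. Only the averaging idea reflects the real scheme; the formula, threshold and taper rate below are invented for illustration. Every step is ordinary arithmetic written in advance by humans, nothing is learned from data, and nothing is obviously “inferred,” yet the output can still wrongly put a debt notice in someone’s letterbox.

```python
FORTNIGHTS_PER_YEAR = 26

def estimated_debt(annual_income_reported_to_tax_office: float,
                   fortnightly_income_reported_to_welfare_agency: list[float],
                   fortnightly_benefit_paid: list[float],
                   income_free_threshold: float = 150.0,
                   taper_rate: float = 0.5) -> float:
    """Hypothetical fixed-rule debt calculation (illustrative, not Robodebt's real formula).

    The flaw: annual income is averaged evenly across fortnights, so someone who
    earned a lot for part of the year and nothing while on benefits is treated
    as if they earned steadily all year, and a "debt" appears out of thin air.
    """
    averaged = annual_income_reported_to_tax_office / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for reported, benefit in zip(fortnightly_income_reported_to_welfare_agency,
                                 fortnightly_benefit_paid):
        assumed_excess = max(0.0, averaged - income_free_threshold)
        reported_excess = max(0.0, reported - income_free_threshold)
        # Benefit "should have been" reduced by the taper on the averaged income.
        overpayment = min(benefit, taper_rate * (assumed_excess - reported_excess))
        debt += max(0.0, overpayment)
    return round(debt, 2)

# Someone who earned $39,000 in six months, then spent six months unemployed on benefits:
print(estimated_debt(39_000, [0.0] * 13, [650.0] * 13))
```

With these made-up numbers, a person who genuinely earned nothing while on benefits is handed a debt of $8,450 purely because the rule averages their earlier income across the whole year.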

2. Gaps in the GPAI Models Rules

The issue with GPAI models is that the same model can be used for safe or dangerous applications. They’re also trained on massive amounts of data, which raises privacy and copyright concerns.

Yet GPAI models (like GPT-4) power so many of our AI apps, and form the bedrock of the AI industry. So you can’t regulate them too hard either.

For now, EU lawmakers have started with modest transparency and testing requirements around GPAI models. But it’s the bare minimum.

The Act still leaves gaps around cybersecurity, content moderation and energy consumption, which are other issues typically associated with GPAI models.

That said, models only provide capability.

Even the most powerful model with high “systemic risk” needs an application to have real-world impact. Conversely, a model that poses no “systemic risk” can still be applied in an unsafe or dangerous way.

So as long as the rules around “prohibited” and “high risk” systems do their job, the current GPAI model rules are probably fine for now, and can be gradually bolstered over time.

3. Accountability Still Not Clear

Who’s responsible when AI goes wrong?

The EU AI Act identifies two key roles: “provider” and “deployer.” There are also other roles like “importer” and “distributor,” but let’s focus on the first two.

The “provider” is the one who develops the AI system and places it on the market. By default, the “provider” will be responsible for ensuring their AI system complies with the Act.

The “deployer” is the user of the AI system. Generally, the “deployer” won’t be responsible for the AI system unless they make a “significant modification” to it (in which case the deployer will be considered the new provider).

But what counts as a “significant modification”? This is unclear. It’s especially tricky when it comes to fine-tuning.

For example, when a deployer trains a model on their own data, does that count as a “significant modification”? If so, then the deployer would become liable for that fine-tuned model.

This question gets more complicated as the model gets passed down the value chain, and fine-tuned at each step. This is something that the EU lawmakers should clarify.
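As a thought experiment, here’s a minimal sketch of that value chain in code. The “significant modification” test used here, an arbitrary threshold on how much additional training each party contributes, is entirely made up; the Act doesn’t define the test this way, which is precisely the gap.

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    name: str
    built_by: str                 # who performed this training/fine-tuning step
    provider_of_record: str       # who carries the Act's "provider" obligations
    fine_tuning_fraction: float   # share of training effort contributed at this step

# Hypothetical bright-line test -- the Act gives no such number.
SIGNIFICANT_MODIFICATION_THRESHOLD = 0.10

def fine_tune(base: ModelVersion, by: str, fraction: float, name: str) -> ModelVersion:
    """If the fine-tune crosses the (made-up) threshold, provider responsibility shifts."""
    significant = fraction >= SIGNIFICANT_MODIFICATION_THRESHOLD
    return ModelVersion(
        name=name,
        built_by=by,
        provider_of_record=by if significant else base.provider_of_record,
        fine_tuning_fraction=fraction,
    )

base = ModelVersion("base-llm", built_by="ModelLab", provider_of_record="ModelLab",
                    fine_tuning_fraction=1.0)
legal = fine_tune(base, by="LegalTechCo", fraction=0.15, name="legal-llm")       # significant
branded = fine_tune(legal, by="LawFirmX", fraction=0.01, name="firm-assistant")  # not significant

for m in (base, legal, branded):
    print(f"{m.name}: provider of record = {m.provider_of_record}")
```

Under this invented threshold, LegalTechCo’s fine-tune shifts provider responsibility to it, while the law firm’s light-touch tweak doesn’t; where the real line sits is exactly what EU lawmakers still need to spell out.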

It's important to note that the EU has also proposed a separate directive (also in draft) called the ‘AI Liability Directive’, which will set rules around court proceedings relating to the EU AI Act.

Notably, the directive will establish a “presumption of causality.” In other words, when a provider breaches an obligation under the EU AI Act, the court can presume that the breach caused the loss or damage suffered by the victim, as long as the victim can show that the breach was “reasonably likely” to be linked to that loss or damage.

This is a relatively low bar for the victim to pass, making it a pro-consumer law.

Overall, the EU AI Act represents a sensible move to tackle key AI risks across the board, though some key questions remain open.

No doubt the law will be continuously tested, reviewed and revised.

Editor’s Note: This article is not intended to provide legal advice or to be a comprehensive guide or reference.
