The Artificial Intelligence Act in the EU – what’s likely to change

Artificial Intelligence solutions have been evolving rapidly over the last few years, so rapidly that the law cannot keep up. Many aspects need to be taken into consideration: from the question of intellectual property rights for works created by AI, through liability for acts resulting from AI activity, to personal data being processed by AI, and much more.

Currently, there’s no dedicated regulation of AI at either the domestic or international level. Every country makes its own choices when it comes to dealing with problems caused by AI products or services (mostly breaches or intellectual property disputes), basing those choices on pre-existing, outdated and poorly matched laws.

Just to name a few examples:

  • Amazon used (and later scrapped) an AI-based recruiting tool that automatically rejected resumes from female engineers. Since women were a minority among Amazon’s existing coders, the tool in effect concluded that women can’t code. Had a human recruiter applied the same criteria, it would be a clear example of discrimination.
  • Uber is accused of using an AI-based tool that fails to recognize the faces of BAME drivers and, when it fails, automatically blocks their accounts, leaving them unable to earn money.
  • A woman was killed by a self-driving car being tested by Uber. While the crash was the fault of the automated system, the safety driver was charged with negligent homicide, as the investigation found they were streaming a TV show during the test.

These are just off-the-top-of-the-head examples of how AI can end up in a position where a human being would be put on trial. A decision with no “human” behind it is a relatively new situation, and one that needs to be addressed. That’s the basic idea behind the Artificial Intelligence Act as proposed by the EU.

The proposal for the Artificial Intelligence Act 

In April 2021, the European Commission (EC) issued the first formal legislative proposal for an Artificial Intelligence Act (AIA). It’s the first initiative worldwide to provide a legal framework for AI, and it’s genuinely groundbreaking: nothing like it has ever been attempted before.

Since it’s only a draft and still has to pass through the European Parliament, which may well take a few years, a lot might change before the final regulation appears. Yet the proposal itself has already sparked a debate about the place of AI regulation in the legal system, as well as about responsibility for the effects of AI-controlled systems.

What is AI according to the regulation?

The definition of AI in the proposal is very broad. It covers software built with machine learning techniques, expert and logic (knowledge-based) systems, and Bayesian or statistical approaches, as long as the software’s outputs “influence the environments they interact with.”

But let’s focus on some of the main rules set by the AIA that are likely to remain, even if in a slightly altered form.

The AIA addresses the risks of the various uses of AI systems and aims to promote innovation in the field of AI while keeping it clear who’s responsible for possible consequences if things go wrong (or when things simply aren’t compliant with the regulation). 

The first and most important thing about the proposal is its broad scope: the regulation covers any AI system that appears on the EU market. Its provider or distributor doesn’t have to be EU-based; the product or service simply needs to be accessible in the EU.

Once binding, the regulation will apply to all AI systems accessible in the EU, even those that will have been on the market for years by then. This will put a great deal of pressure on providers: they’ll have to adjust their products or services or bear the consequences of non-compliance. The situation is comparable to personal data and the GDPR, where any company providing services in the EU has to meet the requirements or face the risk of fines.

The risk-based system 

The AIA introduces a risk-based system, completely changing the approach towards liability for actions performed by AI. Right now, in most countries, legal consequences apply only after something bad happens (e.g. when a self-driving car kills someone or a courier loses their job due to bias hidden in the system). The AIA, by contrast, imposes obligations before an AI system ever reaches the market.

What’s more, the proposal sets up a structure of escalating legal and technical obligations for the provider, depending on whether the AI product or service is classified as minimal, limited or high-risk, while a number of AI uses are banned completely. All levels are described below with examples:

Prohibited Cases

When it comes to prohibited uses of AI, the AIA sets out a list that leaves little room for interpretation. Banned are AI systems that:

  1. use subliminal techniques to manipulate a person’s behavior in a manner that may cause psychological or physical harm;
  2. exploit vulnerabilities of any group of people due to their age or physical or mental disability in a manner that may cause psychological or physical harm;
  3. enable governments to use general-purpose “social credit scoring;”
  4. provide real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except in certain time-limited public safety scenarios.

High-Risk Systems

A solution is classified as high-risk when the system makes decisions affecting people’s lives, whether physically or economically. To develop or use a high-risk AI system (e.g. medical uses of AI or credit scoring), an organization must fulfill a range of requirements before the system can go live and be placed on the market.

This includes, but is not limited to, using high-quality data for training and testing AI systems and building a system that is transparent, one the user can understand and interpret (along with user-friendly documentation).

Limited-Risk Systems

Limited-risk usage of AI takes place when AI systems interact with people. These systems face obligations similar to those set by the GDPR: the user should be notified so that they know they’re dealing with an AI system and not a real person (e.g. chatbots). This information obligation does not apply, however, when it’s obvious from the circumstances and the context of use.

Minimal-Risk Systems

Minimal or zero-risk AI systems are not covered by the AIA; such systems may continue to exist and develop without fulfilling any additional requirements. This applies to systems that have neither direct contact with nor direct influence on human lives, such as spam filters, speech-to-text converters, automatic translators and similar tech.
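To make the tiered structure concrete, here’s a minimal sketch in Python of how a compliance team might model the four levels described above. The enum and the one-line obligation summaries are our own illustrative shorthand, not terminology from the draft:

    from enum import Enum

    class RiskTier(Enum):
        # Illustrative names for the four levels in the 2021 draft.
        PROHIBITED = "prohibited"  # banned outright (e.g. social scoring)
        HIGH = "high"              # life-affecting decisions (e.g. credit scoring)
        LIMITED = "limited"        # interacts with people (e.g. chatbots)
        MINIMAL = "minimal"        # no direct influence (e.g. spam filters)

    # Hypothetical one-line summary of the headline obligation per tier.
    OBLIGATIONS = {
        RiskTier.PROHIBITED: "may not be placed on the EU market at all",
        RiskTier.HIGH: "pre-market requirements: data quality, transparency, documentation",
        RiskTier.LIMITED: "must disclose to users that they are dealing with an AI system",
        RiskTier.MINIMAL: "no additional requirements under the AIA",
    }

    # Example: a chatbot makes no life-affecting decisions but does
    # interact with people, so it would likely land in the limited tier.
    print(OBLIGATIONS[RiskTier.LIMITED])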

The consequences of not fulfilling the obligations set by the AIA

The penalties are high. Engaging in prohibited uses or violating the data governance obligations is punishable with a fine of up to €30M or 6 percent of worldwide annual turnover (whichever is higher). For high-risk AI applications, the top penalty is set at €20M or 4 percent of turnover. Supplying incorrect, incomplete, or misleading information to national competent bodies carries a fine of up to €10M or 2 percent of turnover.
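As a rough illustration of the “whichever is higher” rule, here’s a minimal sketch; the function and tier labels are ours, while the amounts are those quoted above from the 2021 draft:

    def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
        """Maximum fine: a fixed cap or a share of worldwide annual
        turnover, whichever is higher."""
        return max(cap_eur, turnover_eur * pct)

    # Fine ceilings from the 2021 draft: (EUR cap, share of turnover).
    TIERS = {
        "prohibited uses / data governance": (30_000_000, 0.06),
        "high-risk obligations":             (20_000_000, 0.04),
        "misleading information":            (10_000_000, 0.02),
    }

    # Example: a company with €2B in worldwide annual turnover.
    for tier, (cap, pct) in TIERS.items():
        print(f"{tier}: up to €{max_fine(2_000_000_000, cap, pct):,.0f}")
    # For the top tier, 6% of €2B is €120M, which exceeds the €30M cap.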

Who will feel the impact

Every organization that takes part in creating and distributing AI systems will feel the impact of the new law. Again, this regulation is comparable to the GDPR in terms of the changes imposed on the market, their scale, and the fines a company may face. With high-risk systems, users will see the difference as well.

The AIA creates new regulatory obligations for AI tools used in financial services, education, employment and human resources, law enforcement, industrial AI, medical devices, the automotive industry, machinery, toys and many more. When it comes to high-risk uses of AI as defined in the AIA, it’s impossible not to think about practices like behavioral advertising based on activity tracking, as used by Google or Facebook. It’s therefore safe to assume the proposal will undergo long-lasting negotiations, a collision between the EU’s ideas on safety and the commercial interests of tech giants. Given the time and effort the GDPR took before it finally entered into force, we’re in for a lengthy battle.