Artificial Intelligence through a lawyer’s lens

  • Scope: Artificial Intelligence, Business

What are the legal challenges that AI brings, and how should they be addressed?

There is much excitement and hype around artificial intelligence these days, as companies of all sizes invest more and more money into this field. AI development, however, does not happen in a vacuum. On the contrary, it raises social, ethical and legal dilemmas which need to be addressed to ensure that everyone can safely reap the benefits that AI brings. Let’s take a closer look at some of them.

Why should we care?

When thinking about AI and the law, you may doubt whether AI needs to be regulated at all. You might be worried that regulation will make it harder for you to market your product, or that you will end up being unable to do your job. Such thoughts are not unfounded.

Without a doubt, misplaced regulation or overregulation may stifle innovation or even completely derail the benefits that AI brings. However, as Google and Alphabet CEO Sundar Pichai stated a few days ago: “companies […] cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent […] to make sure that technology is harnessed for good and available to everyone.”

Why the call for regulation? 

Whatever the reasons behind this public call, it is certainly true that AI is a double-edged sword. Even setting Elon Musk’s dark visions aside, we simply cannot turn a blind eye to the potential impact that AI can have on our societies. With its high volumes of processed data and speedy decision-making, AI can multiply whatever biases we, as humans, demonstrate in our ways of thinking. The result? Discrimination against individuals on a scale that we have not seen before.

These concerns have a solid justification. We have already heard about individuals being discriminated against by algorithms because of their ethnicity, their gender or their sexuality. These so-called algorithmic biases can have profound effects on our lives and societies, from ill-founded dismissals, through illegitimate police searches, to a deterioration in the availability of healthcare for entire ethnic groups.

Can the black box be deciphered? 

Such effects can be further multiplied by the black box phenomenon, which refers to the low level of explainability and transparency of some algorithm-based decisions. As AI-based solutions become present in, e.g., the financial and healthcare sectors, we have to ensure that those who are affected by algorithm-based decisions understand the reasoning behind them and know what to do when they want to challenge them.
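To make the contrast concrete, here is a deliberately simplified, hypothetical sketch (not any real scoring system): a transparent linear “credit score” whose decision can be broken down into per-feature contributions, which is exactly the kind of explanation a black-box model cannot readily offer to the person affected. All weights, features and the threshold below are invented for illustration.

```python
# Hypothetical, transparent scoring model: every feature's contribution
# to the final decision can be shown to the affected person.
WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.4}
THRESHOLD = 1.0  # minimum total score for approval (illustrative)

def score_with_explanation(applicant: dict) -> tuple[bool, dict]:
    """Return the decision together with each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

approved, why = score_with_explanation(
    {"income": 3.0, "years_employed": 2.0, "existing_debt": 1.0}
)
# contributions: income 1.5, years_employed 0.6, existing_debt -0.4 (total 1.7)
print(approved)  # True
print(why)
```

A person refused under such a model can see which factor drove the outcome and contest it; with an opaque deep-learning model, producing an equivalent account of the reasoning is precisely the open problem the “black box” label describes.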

Who should be liable? 

Finally, what happens when AI makes a mistake? Who should be held liable in such cases? Should a compulsory insurance scheme be introduced for all who use AI-based algorithms? Or is the current liability regime perhaps perfectly fit to handle these challenges? Even though these questions may sound futuristic now, with the speed of AI development outpacing Moore’s law, we must know the answers sooner rather than later.

What does harnessing for good mean?

It will probably not be surprising to say that much of the debate around the regulation of AI revolves around one key issue: trust. As stated by the EU Commission’s High Level Expert Group on AI (AI HLEG): “trustworthiness is the prerequisite for people and societies to develop, deploy and use AI systems.” If AI algorithms and their designers fail at being trustworthy, these unwanted consequences will materialize and make it impossible to realize the benefits that AI brings. As further noted by the AI HLEG, a holistic and systemic approach may be needed to ensure that.

Moreover, these regulations should be accepted globally to avoid the so-called “regulatory shopping” – locating businesses in places where fewer regulations exist, and thus, avoiding the restrictions on how a product or business should be run. The good news is that the issue of trust and ethics seems to be at the forefront not only in the EU debate but also in the US, Australia and Japan, just to name a few.   

How to define AI?

From a lawyer’s standpoint, any discussion about the regulation of AI must start with defining what it really is and what it is not. This is necessary to know which rules should be applied and when. Unfortunately, there is no single, uniform definition of AI even in the technical sciences, so a legal one is even less likely to emerge.

Without a doubt, artificial intelligence is an umbrella term which refers to a broad set of technologies, algorithms and technical solutions. What is more, people who speak of AI very often actually mean advanced robotics. This adds another level of complexity to our equation, as it is apparent that the term “AI” refers not only to software-based but also hardware-based solutions.

This complexity is also confirmed by one of the definitions crafted by the European Commission, in which AI was described as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).” I think it does not take a law degree to conclude that with such a vast catalogue of solutions it is virtually impossible to develop a simple definition and one set of rules relating to AI.

What does AI regulation concern then?

Given the current state of AI technology, it seems that legal regulations related to AI can be categorized into four groups:

  • general regulations specific to the use of AI technology (e.g. automated decision-making or facial recognition);
  • regulation specific to the use of AI in a given sector or industry (e.g. health, finance, automotive);
  • rules on the accountability for the consequences of the use of AI (e.g. criminal, civil and administrative liability) and ownership of AI creations;
  • ethics codes for AI developers.

What’s next?

In the upcoming weeks we will be taking a closer look at these groups of regulations and addressing ethical considerations concerning the design of AI algorithms, the question of who should be liable for AI’s actions and, finally, the issue of ownership of AI’s scientific and artistic creations.
