
The EU’s Approach towards Artificial Intelligence and its Search for a Regulatory Framework

Artificial Intelligence (Photo: Markus Winkler/Unsplash)

According to experts, self-driving cars, intelligent industrial robots and other independently acting machines will no longer be the stuff of science fiction but a commonplace reality within a few decades. Unsurprisingly, therefore, the European Union has had to take a firm stand on the issue of Artificial Intelligence (AI).

With the release of the White Paper on Artificial Intelligence on 19 February 2020, the European Commission laid the foundation for a European AI approach built on "excellence" and "trust".

This was preceded by the “Declaration of cooperation on AI” signed by 25 European countries in April 2018, the final “Ethics Guidelines for Trustworthy AI” prepared by the High-Level Expert Group on Artificial Intelligence in April 2019 and the “Report on liability for AI and other emerging technologies” prepared by the Expert Group on Liability and New Technologies in November 2019.

In October 2020, the Presidency of the European Council released conclusions on the Charter of Fundamental Rights in the context of artificial intelligence and digital change. The conclusions stress a fundamental-rights-based approach to AI and provide guidance on citizens’ rights and justice, freedoms, equality, dignity and solidarity, thus addressing issues that are also at the heart of the European Commission’s White Paper.

What’s at stake

The White Paper advocates a definition of AI that is flexible enough to accommodate ongoing technical progress, yet precise enough to guarantee the necessary legal certainty. For the purposes of the White Paper, the working definition follows the concept elaborated by the High-Level Expert Group on Artificial Intelligence, which describes AI as a collection of technologies combining data, algorithms and computing power.

The White Paper argues that combining a high-quality infrastructure and regulatory framework with industrial and technological excellence offers the EU the opportunity to “become a global leader in innovation in the data economy and its applications” and to develop “an AI ecosystem that brings the benefits of the technology to the whole of European society and economy”. Among the risks, the European Commission points out that citizens might “fear being left powerless in defending their rights and safety when facing the information asymmetries of algorithmic decision-making, and companies are concerned by legal uncertainty”.

The White Paper, therefore, accounts both for the benefits that the uptake of AI entails, such as increased efficiency in health care, education, safety, farming and climate protection, and for the potential risks that the use of AI may involve, such as intrusions into our privacy, opaque decision-making, gender-based or other kinds of discrimination and the fraudulent use of AI. On the basis of these considerations, the European Commission argues for an ethical and people-centred approach towards AI that, in turn, is predicated on so-called ecosystems of excellence and trust.

Ecosystems of excellence and trust

The White Paper rests on two objectives for a common European AI approach: the creation of an ecosystem of excellence and of an ecosystem of trust.

The ecosystem of excellence is meant to accelerate and amplify AI efforts through a multilevel policy framework, meaning coordinated actions at regional, national and European level as well as increased collaboration between the public and the private sector. Through such actions, the Commission intends to build synergies and networks of competence between European AI research centres via a European “lighthouse centre of research and innovation for AI”, in order to pool innovation and expertise in the fields of health, industry, environment, finance, transport, agri-food value chains, earth observation and space.

The ecosystem of trust aims to realise the explicitly human-centric approach by developing a regulatory framework for AI that provides companies and public organisations with legal certainty regarding AI-based innovations and that gives citizens the confidence to use AI applications such as remote biometric identification. The use of such facial recognition technology involves certain risks to fundamental rights, for example to people’s dignity if it does not respect their privacy and personal data protection. Therefore, any use of remote biometric AI identification systems will need to take place on the basis of national or EU law, be weighed up in terms of risk and proportionality and contain adequate safeguards. Consequently, in order to generate an ecosystem of trust, the regulatory framework will have to emerge as the key element that enforces compliance with EU rules, particularly with regard to the protection of fundamental and consumer rights.

In search of an AI responsive regulatory framework…

As the Commission argues for an ethical and people-centred approach towards AI, the question arises as to whether the current EU legal framework is sufficient to deal with the problems expected to emerge from AI, such as liability-related issues. More precisely, AI technologies embedded in products and services may pose new safety risks to consumers. Just think of self-driving cars and the risks involved, such as accidents that cause injury and material damage. The lack of clear requirements and characteristics of AI technologies may further produce gaps in safety provisions, including legal uncertainty for customers as well as for the businesses that market AI-based products in the EU. Hence, in the case of autonomous cars, difficulties may emerge in proving a causal link between a defect in the product and the damage that occurs as a result, since the Product Liability Directive holds the manufacturer liable for damage caused by a defective product.

A current discussion in ethics and law turns on the crucial question of meaningful versus actual control. This concept limits criminal responsibility by first asking whether a person was actually in control of the machine before holding them criminally responsible.

The prevailing opinion among legal scholars overwhelmingly rejects any kind of paradigm shift in criminal law. As Nicolas Woltmann from the research centre for robot law in Würzburg states, discussions about moving away from the principle that only human beings can behave culpably have begun, albeit cautiously, even though many jurists consider the present legal situation sufficient, given the current state of technical development.

Here we are in the middle of a long-standing controversy. In paragraph 59(f) of its Resolution on Civil Law Rules on Robotics (2017), the European Parliament suggested that the European Commission should consider creating “a specific legal status for robots in the long run”, arguing for ascribing autonomous robots the status of electronic persons responsible for their actions. In contrast to this proposal, the High-Level Expert Group on Artificial Intelligence of the European Commission states that establishing legal personality for robots is inappropriate from both an ethical and a legal perspective.

Nonetheless, it is indisputable that a grey area remains as to how case law will develop in the future; especially at the individual level, it will certainly be problematic if criminal responsibility cannot be clarified after a serious accident. Undoubtedly, a dilemma.

… in keeping with existing product safety and liability legislation

Doubtlessly, the already extensive body of EU product safety and liability legislation, complemented by national legislation, remains fully applicable to AI-related issues. The current EU legislative framework includes the Race Equality Directive, the Directive on Equal Treatment in Employment and Occupation, the Directives on Equal Treatment between Men and Women in Relation to Employment and Access to Goods and Services, a number of consumer protection rules, as well as rules on personal data protection. Nonetheless, it remains indispensable that the Commission assess the existing legislative framework in relation to AI in order to make any adjustments needed to specific legal instruments.

Current Foci and The Way Forward

By April 2019, the Commission had already established a High-Level Expert Group, which finalised a report in June 2020 presenting guidelines for trustworthy AI and their practical use for companies, based on findings gathered by surveying over 350 organisations.

These surveys revealed that requirements concerning transparency, traceability and human oversight are not covered by current legislation in numerous economic sectors, which is why the European Commission aims to create a European governance structure on AI. This structure will act as a forum for the regular exchange of information and best practice between national competent, regulatory and sectoral authorities, thereby creating an ecosystem of trust. Additionally, the Commission is considering the establishment of a committee of experts in order to gain further expertise and input on AI.

The European Commission also aspires to realise various actions at multiple levels for the purpose of building an ecosystem of excellence. Among these actions are the development and updating of digital innovation programs, involving research centres, universities and digital innovation hubs of the member states, and major investments in AI projects through amplified public-private partnerships in AI across various public administration sectors.

Elena Glockzin

Elena Glockzin holds a bachelor’s degree in Political and Social Studies from the University of Würzburg, Germany, and is currently enrolled in the master’s program in Public Policies and Administration at the Free University of Bolzano-Bozen. She loves being surrounded by an international environment and, since her Erasmus experience in Bologna, she has been even more curious about meeting new, open-minded people, listening to their stories and learning about their mindsets. As a mountain and food lover, the native Bavarian feels right at home in South Tyrol.

Citation

Glockzin, E. The EU’s Approach towards Artificial Intelligence and its Search for a Regulatory Framework. https://doi.org/10.57708/B32172715
