Opinion 27 October 2020

What should be taken into account if Artificial Intelligence is to be regulated?

In this article, Juan Murillo, Senior Manager of Data Strategy at BBVA, and Jesús Lozano, Manager of Digital Regulation at BBVA, analyze the potential implications of Artificial Intelligence regulation and set out the considerations that should be weighed so that future rules support, rather than hinder, the development of this discipline.

Artificial Intelligence is a term coined in the 1950s that is usually understood as referring to a single technology, when in reality it encompasses a broad range of techniques and methodologies whose theoretical foundations were laid over 70 years ago. The field has already gone through several stages. During the first, symbolic AI applications dominated: a top-down approach that aspires to parameterize all the alternatives to a problem in order to find the right solution by following a tree of logical rules. The initial achievements of this approach fell short of the high expectations it had raised; investment dried up, and AI went through two consecutive "winters" during the 1970s and 1980s.

Today, however, we are in the midst of a paradigm shift. In contrast to symbolic AI, connectionist AI – a bottom-up approach that learns or discovers patterns in data without following pre-set rules – is gaining traction (even though its conceptual foundations were also laid decades ago) thanks to the surprising results it is yielding in a broad range of unrelated fields, including image recognition, speech and written language processing, and recommendation systems. AI has been with us for a long time, but it is only now, thanks to the data explosion and the surge in computing power at ever-lower cost, that solutions relying on it are really taking off, heralding what many call the fourth industrial revolution.

Technological leaps, however, always bring their share of pros and cons. History offers dozens of examples, from dynamite to automobiles and aviation, through nuclear energy and, more recently, the green revolution in agronomy. Their applications raise ethical dilemmas: they drive progress, but they also expose the world to threats that did not previously exist, or magnify others that already did.

This type of controversy spilled beyond expert forums and into wider society long ago, driven by the growing attention the topic receives from the media and even from literary and cinematic fiction. As a result, the debate has begun to resonate with legislators and has reached national parliaments, many of which are pondering whether legislation is needed to mitigate the identified risks, and how to draft it without slowing down or outright obstructing the development of this technology.

In the EU, regulators began treating AI as a pivotal topic in 2017, following a European Parliament resolution recommending that the European Commission develop civil law rules on robotics, which even proposed creating an "electronic personality" applicable to advanced robots in the future. In 2018, the Commission published its first communication on Artificial Intelligence and a coordinated action plan with the Member States. Since then, several European institutions have carried out assessments of specific aspects of this technology, including the Ethics Guidelines for Trustworthy AI, published in April 2019 by a group of experts convened by the European Commission.

It has been this year, however, when the European Commission began to propose concrete actions with its white paper on Artificial Intelligence, submitted for consultation between March and June. In addition to proposing a large number of measures aimed at promoting the development of this technology in the EU and achieving a position of international leadership, the white paper envisaged the possibility of a regulation addressing specific AI applications considered to pose a greater hazard to society. The first draft of this regulation is scheduled for publication in the first quarter of 2021. Its content is not easy to anticipate, since Member States hold differing views on how much regulatory intervention this field requires, as evidenced by the recent note from 14 Member States urging the European Commission to refrain from passing excessively strict rules.

Taking all of the above into account, we believe the following considerations should guide any intervention.

Regulation should be technology-neutral and focus on applications

Regulating Artificial Intelligence involves three dilemmas:

  • If closely tied to a technology, the regulation could quickly become obsolete, given the fast pace at which technology evolves. Ideally, laws would establish "technologically agnostic" principles that remain valid regardless of the development status of any given technology.
  • A regulation closely tied to technology can hinder technical progress: it could even favor unsophisticated systems over more advanced ones. New legal requirements could encourage choosing simpler systems - for example, rule-based and non-automated ones - over more advanced alternatives - such as those based on machine learning and automation - even though the latter can deliver better results in terms of accuracy or process efficiency.
  • Regulating a specific technology requires clearly outlining the regulation's purpose; in other words, perfectly defining and limiting its scope to prevent disparate interpretations from arising later on. In the case of AI, the definition proposed by the European Commission's expert taskforce is so broad that it would cover virtually any system capable of collecting data, processing it, and acting on that information. Even elements as basic as an automatic door or a thermostat controlling room temperature would fit this definition, as the sketch after this list illustrates.
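
To make the breadth of that definition concrete, here is a minimal illustrative sketch (our own, not drawn from any regulatory text) of a thermostat in Python: it collects data, processes it, and acts on the result, so it would technically satisfy a "collect, process, act" definition of AI, even though nobody would call it intelligent.

```python
# A plain thermostat: it senses, decides, and acts, so it arguably
# "collects data, processes it, and acts based on this information" --
# yet nobody would consider it Artificial Intelligence.
class Thermostat:
    def __init__(self, target_temp: float, tolerance: float = 0.5):
        self.target_temp = target_temp
        self.tolerance = tolerance

    def decide(self, measured_temp: float) -> str:
        """Process the sensed data and choose an action."""
        if measured_temp < self.target_temp - self.tolerance:
            return "HEAT_ON"
        if measured_temp > self.target_temp + self.tolerance:
            return "HEAT_OFF"
        return "HOLD"

thermostat = Thermostat(target_temp=21.0)
print(thermostat.decide(19.2))  # -> HEAT_ON
```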

Therefore, if the goal is to mitigate risks, the focus should be kept on applications and their effects, not on methods. It is the "what" and the "why" that should be controlled, not so much the "how".

A tool as cross-cutting as artificial intelligence can serve very different purposes: from goals with trivial consequences in the field of entertainment - such as the development of video games - to applications with great impact on people's lives - such as some of the use cases being developed in healthcare, justice or transport. Even within these sectors, specific applications must be ranked by their potential impact, applying consistent and quantitative criteria.
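
By way of illustration only, one simple way to make such a ranking consistent and quantitative is to score each application on the severity of potential harm and on its reach, and order by the product. The applications and scores below are hypothetical:

```python
# Purely illustrative risk ranking: score = severity of potential harm x reach
# (how many people are affected). Applications and scores are hypothetical.
applications = {
    "video game NPC behavior": {"severity": 1, "reach": 2},
    "loan approval scoring":   {"severity": 4, "reach": 4},
    "medical triage support":  {"severity": 5, "reach": 3},
}

ranked = sorted(applications.items(),
                key=lambda item: item[1]["severity"] * item[1]["reach"],
                reverse=True)

for name, factors in ranked:
    print(f"{factors['severity'] * factors['reach']:>3}  {name}")
```

Whatever criteria are ultimately chosen, the point is that they be applied uniformly, so that comparable applications receive comparable scrutiny.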

Policymakers should focus their efforts on identifying areas where current regulation is either insufficient or non-existent.

This exercise should start from two questions: what are the ill effects we want to avoid? And are they not already addressed by current laws?

We want to prevent Artificial Intelligence from being used to discriminate. However, current regulations already address this issue, listing a series of personal characteristics that cannot be used to limit a citizen's access to a service, regardless of whether it is a human or an algorithm that grants that access. It is true, though, that technology can magnify the upsides and downsides of any digitized process and widen their reach in terms of how many people are affected. Those of us who work in this field know that, when using mass data, we need to make sure that neither the data nor the results of its use are biased, so as not to perpetuate or amplify unwanted effects.
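
As a purely illustrative sketch of the kind of check this implies, one can compare outcome rates across groups in the data or in a model's decisions. The column names are invented, and the 0.8 threshold is only a common rule of thumb, not a legal standard:

```python
import pandas as pd

# Hypothetical data: one row per decision, with group attribute and outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: lowest approval rate over highest.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"ratio={ratio:.2f}")
if ratio < 0.8:  # common rule of thumb for flagging potential bias
    print("Potential disparate impact: review the data and the model.")
```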

We want to avoid deploying systems that can have a significant impact on people's lives if we can’t explain how they work internally. But these issues are already addressed by current regulations on data protection and consumer rights.
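
For intuition, here is a minimal sketch of what such an explanation can look like in a simple case: a hypothetical logistic-regression scorer whose coefficients are known, so each feature's contribution to an individual decision can be surfaced. Feature names and values are invented:

```python
import numpy as np

# Hypothetical model: a logistic regression with known (assumed) weights.
feature_names = ["income", "debt_ratio", "years_employed"]
coefficients = np.array([0.8, -1.5, 0.4])
intercept = -0.2

def explain(applicant: np.ndarray) -> None:
    """Print each feature's signed contribution to the decision score."""
    contributions = coefficients * applicant
    score = intercept + contributions.sum()
    probability = 1 / (1 + np.exp(-score))
    for name, value in zip(feature_names, contributions):
        print(f"{name:>15}: {value:+.2f}")
    print(f"approval probability: {probability:.2f}")

explain(np.array([1.2, 0.6, 0.5]))  # standardized inputs (assumed)
```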

We want products and services to be safe. But product safety regulations do not establish clear rules for new developments such as the automatic adaptation of a product's or service's features after it is released on the market, something that can happen rather frequently with products and services that rely on reinforcement learning techniques. In this case, the regulations should be updated so that these types of situations - whether they involve AI or not - are addressed and resolved.

We want to maintain a high level of human autonomy; in other words, we want to keep control over automated processes. We also need to mitigate problems that may arise as machines interact with each other, and perhaps the best way to tackle this is through sectoral regulations in transport, finance, industry, and so on. An example is algorithmic trading in stock markets, where machines interact with one another and close investment operations. The proper design of these systems, the traceability of operations, the establishment of alerts in the event of system instability, and the possibility of switching to manual control to avoid a flash crash are already addressed by MiFID II.
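
As a minimal sketch of what one such safeguard can look like, the following circuit breaker raises an instability alert and hands an algorithm over to manual control when prices move too fast. Names and thresholds are illustrative assumptions, not the MiFID II specification:

```python
from collections import deque

class CircuitBreaker:
    """Halt automated trading when prices move too fast (illustrative only)."""

    def __init__(self, window: int = 5, max_move: float = 0.05):
        self.prices = deque(maxlen=window)   # rolling window of recent prices
        self.max_move = max_move             # maximum tolerated relative move
        self.manual_control = False

    def on_price(self, price: float) -> None:
        self.prices.append(price)
        if len(self.prices) == self.prices.maxlen:
            move = abs(self.prices[-1] - self.prices[0]) / self.prices[0]
            if move > self.max_move:
                # Instability alert: stop the algorithm, hand over to humans.
                self.manual_control = True
                print(f"ALERT: {move:.1%} move in window, switching to manual")

breaker = CircuitBreaker()
for price in [100.0, 100.5, 99.8, 97.0, 93.0]:  # a rapid drop
    breaker.on_price(price)
```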

We don't want activities such as remote biometric identification to be used to limit individuals' freedoms. But none of the existing regulations fully addresses this use case, which may become much more frequent with AI adoption. For this specific application, therefore, it would be reasonable to develop a dedicated regulation addressing the risks of this activity, regardless of the technology used to carry it out.

Competent authorities should clarify their expectations and provide guidelines for compliance with current legislation

Due to the groundbreaking nature of some AI techniques and the lack of experience in applying them to certain use cases, one obstacle to the development of AI in various fields is the difficulty of proving compliance with existing legal obligations, such as non-discrimination or explainability, something that is much easier to do with simpler techniques.

In these cases, it would be of great help if competent authorities developed guidelines and standards clarifying how to meet existing regulatory requirements when using certain technological developments, or if they supported the private sector by collaborating on tools and standards that simplify compliance, as some authorities have already begun to do.

"Implementing AI ignoring the risks that it may pose would not be ethical, but neither it would be to deprive society of the opportunities that this innovation can bring"

Conclusions

As we have argued, the risks lie not in the technology itself, but in how it is used. Establishing additional regulatory requirements on AI - especially for applications that barely affect people's lives - would raise these technologies' implementation costs, discourage their adoption across the European Union, and limit the competitiveness of EU Member States compared with the leaders in this field, the United States and China.

To avoid this, any regulatory intervention should:

  • Be technology-neutral and focus on applications;
  • Focus on areas where current regulation is insufficient or non-existent; and
  • Prioritize the development of guidelines and standards - in collaboration with the private sector, as far as possible - as well as the clarification of authorities' expectations, in order to facilitate compliance with current legislation.

However, if European authorities choose to develop specific regulations on artificial intelligence, the focus should be on the most critical use cases not covered by sector-specific regulations, with classification criteria based on their potential downsides.

Indeed, implementing AI while ignoring the risks it may pose would not be ethical, but neither would it be ethical to deprive society of the opportunities that this innovation can bring.

This article was originally published on Finextra.com.