"Will the European Artificial Intelligence Act encourage the development of this technology? "

On the morning of Wednesday, March 13th, the European Parliament gave its final approval to the European Artificial Intelligence Act. According to some EU authorities, this makes the bloc the first international jurisdiction with comprehensive legislation on the use of artificial intelligence (AI).

The adjective ‘comprehensive’ is significant: during the three-year negotiation period (the act was initially proposed by the European Commission in April 2021), both the U.S. and China took a much less ambitious approach, publishing legislation on specific aspects of this technology or on recent developments such as generative AI, so in vogue since the launch of ChatGPT at the end of 2022.

None of those legislative interventions, however, establishes in a single text the prohibition of certain uses of artificial intelligence considered contrary to European values; additional requirements for general-purpose AI systems and for systems whose incorrect implementation could harm the health, safety or fundamental rights of citizens; and transparency obligations for systems that interact with citizens or that artificially generate images, sound or text.

Beyond the impact of each of these specific requirements and obligations, the big question now that the act has been approved is whether it will encourage the development and adoption of ‘trustworthy artificial intelligence’ in the EU, as European authorities contend, or whether, on the contrary, these requirements could further separate the EU from the countries leading this technology, the U.S. and China.

One key aspect of the regulation's impact will be the definition of an AI system established in the law. Although it is based on the OECD's recommendations on artificial intelligence, which were recently updated and represent a significant improvement over other versions considered during the negotiations, it does not explicitly exclude the automated rule-based systems that have long been used in some economic activities, even though these systems are generally rather simple, lack autonomy and are explainable.

On the other hand, some systems identified as high risk may not be subject to equivalent requirements outside the EU, which could incentivize their development in other locations. Even so, the requirements established for high-risk AI systems fundamentally reflect best practices for the development of computer systems, such as identifying and mitigating risks or guaranteeing an appropriate level of cybersecurity. In fact, general-purpose systems are already regulated elsewhere: in the U.S. by the Executive Order signed by President Biden on October 30, 2023, and in China by its law on generative AI.

In any case, the final impact of the act will greatly depend on three aspects that still need to be determined: its regulatory development, its supervisory framework and coordination among authorities.

In terms of regulatory development, the AI Act will become applicable between six and 24 months after it enters into force. During this period, the European Commission's AI Office will publish secondary legislation on specific aspects of the law, such as the interpretation of the definition of AI, banned practices and the practical implementation of the AI Act.

Regarding the supervisory framework, the AI Act assigns the AI Office responsibility for supervising general-purpose AI systems, while the remaining AI systems are supervised by competent authorities in the member states, such as the Spanish Agency for the Supervision of Artificial Intelligence (AESIA). In the case of financial institutions, the default rule designates financial authorities as the market surveillance authorities.

Because supervision of the AI Act falls to so many authorities, criteria could diverge among them and application could be uneven across countries, or even across sectors. The law therefore creates a supranational governance structure that aims to coordinate the action of national authorities and harmonize criteria. Its main component is the aforementioned AI Office, whose work will be supported by the European Artificial Intelligence Board, in which national supervisors participate, a scientific panel of independent experts and an advisory forum made up of AI providers, developers and users.

In conclusion, although the recently approved AI Act represents a regulatory milestone that could offer citizens reasonable protection against the risks of proliferating AI systems, it is still too soon to assess its impact on the development of this technology in the EU, as that will greatly depend on the work carried out in the coming months by the AI Office and on the supervisory role of each competent authority.

In particular, it is essential for supervisors to approach implementation with flexibility and proportionality: since this is a constantly evolving technology and the most ambitious regulation of it to date, unforeseen events will occur that require a rapid response.

Similarly, in order to promote the competitiveness of the EU, European authorities should encourage international coordination of regulatory interventions in this field.