Next steps for the EU’s AI Act: towards enabling regulation


The EU has embarked on an ambitious journey to become the first major global power to regulate the use of artificial intelligence (AI). Two years ago, the European Commission proposed an AI Act, which was greeted with enthusiasm by those who feel that regulation is needed to ensure the safe use of AI, and with reluctance by those who fear that regulation will kill all AI innovation.

With the Council of the EU and the European Parliament having adopted their positions on the Commission proposal, the negotiations on the AI Act have now moved on to the inter-institutional phase, the so-called trilogues, where the final formulation of the Act will be agreed. Let us outline our main expectations and recommendations for the final stretch of the negotiations.

First things first: what is AI?

One of the most controversial issues in the negotiations on the AI Act has been the very definition of AI. The original Commission proposal defined AI through a list of broad technological areas to be included in the scope of the Act. The Parliament has suggested an even broader (and vaguer) definition whereby almost any system “operating with a level of autonomy” could be interpreted as AI.

The Council’s proposal is more focused: rather than relying on a long list of technology families (some of which may eventually become obsolete), it limits the definition to complex systems using machine learning and/or logic- and knowledge-based approaches. This definition is broad enough not to require constant updating, yet narrow enough to capture only the areas of AI that are most likely to pose risks.

This would also make it clear that the Member States retain the right to have national legislation regarding computer systems that do not pose such risks and should therefore not be affected by the AI Act (e.g. Finland’s legislation concerning automated decision-making in public administration).

Don’t kill research and innovation!

The AI Act will create a number of new obligations for the developers and users of AI systems, and the suggestions by the Council and the Parliament to include the so-called general-purpose AI in the scope of the Act would extend these obligations even further, e.g. to developers of open-source solutions.

To ensure that the obligations created by the AI Act do not kill all AI innovation in Europe, it is crucial to adopt the proposal that the Council has made to exclude R&D and non-professional purposes from the scope of the Act. Any regulation must concern only the final products and services, not the underlying research.

For Europe to benefit from its R&D investments, the regulation must fulfil two prerequisites. First, the boundary between allowed and banned applications should be clear and understandable. Second, the certification procedure for AI-based products and services should be predictable, rapid, and affordable, so that the regulation does not hinder companies, especially SMEs, from benefiting from AI-based innovations.

The way forward: technology-neutral regulation and support for R&D

The difficulties in defining AI clearly demonstrate how hard it is to regulate specific technologies. In the future, we recommend keeping regulation as technology-neutral as possible, while paying particular attention to not creating legal barriers to research and innovation. Coherence between the various pieces of legislation related to the data economy is also crucial.

An enabling regulatory framework must be paired with other measures to support research and innovation, such as investments in competence development and research infrastructures. In the case of AI, particular attention must be paid to competences and infrastructures related to the two main building blocks of AI innovation: data and computing.


About the authors

Heikki Ailisto is a research professor at VTT where he coordinates applied AI research. He is also with the Finnish Center for AI flagship (FCAI), where he is a member of the Steering Group and leader of the Industry and Society Program.

Aleksi Kallio is manager of the AI & data analytics group at CSC. The group focuses on large-scale computing challenges related to AI technologies, provides expert support for AI research, and supports public and private organisations in adopting AI solutions.

Petri Myllymäki is a professor of artificial intelligence and machine learning at the University of Helsinki. He is also the Vice-Director of the Finnish Center for AI flagship (FCAI).