European Approach to Artificial Intelligence

The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

How we approach Artificial Intelligence (AI) will define the world we live in. To help build a resilient Europe for the Digital Decade, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected.

The European AI Strategy aims to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy. This objective translates into the European approach to excellence and trust through concrete rules and actions.

In April 2021, the Commission presented its AI package, including:

  1. a Communication on fostering a European approach to AI
  2. a review of the Coordinated Plan on Artificial Intelligence (with EU Member States)
  3. its proposal for a regulation laying down harmonised rules on AI (the AI Act)

In January 2024, the Commission launched the AI innovation package to support Artificial Intelligence startups and SMEs. The package includes several measures to help European startups and SMEs develop trustworthy AI that respects EU values and rules.

One key element of this package is the Communication on boosting startups and innovation in trustworthy artificial intelligence, which sets out a strategic investment framework in trustworthy AI so that the Union can capitalise on its assets, in particular its world-leading supercomputing infrastructure, and foster an innovative European AI ecosystem.

The main landmark initiative of the Communication is “GenAI4EU”, which aims to stimulate the uptake of generative AI across the Union’s key strategic industrial ecosystems. It will encourage the development of large open innovation ecosystems that foster collaboration between AI startups and deployers of AI in industry and the public sector.

A European approach to excellence in AI

Fostering excellence in AI will strengthen Europe’s potential to compete globally.

The EU will achieve this by:

  1. enabling the development and uptake of AI in the EU
  2. becoming the place where AI thrives from the lab to the market
  3. ensuring that AI works for people and is a force for good in society
  4. building strategic leadership in high-impact sectors

The Commission and Member States agreed to boost excellence in AI by joining forces on policy and investments. The 2021 review of the Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring AI strategy into action.

Maximising resources and coordinating investments are critical components of AI excellence. The Horizon Europe and Digital Europe programmes will together invest €1 billion per year in AI. The Commission will also mobilise additional investment from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the Digital Decade.

The Recovery and Resilience Facility makes €134 billion available for digital investments. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high-quality data is essential for building high-performing, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Data Act and the Data Governance Act provide the right infrastructure for building such systems.

A European approach to trust in AI

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  1. a European legal framework for AI that upholds fundamental rights and addresses the safety risks specific to AI systems;
  2. a civil liability framework – adapting liability rules to the digital age and AI;
  3. a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).

European proposal for a legal framework on AI

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in cases that existing national and EU legislation does not cover. The legal framework for AI, the AI Act, takes a clear, easy-to-understand approach based on four levels of risk: unacceptable risk, high risk, specific transparency risk, and minimal risk. It also introduces dedicated rules for general-purpose AI models.
