
The EU AI Act: Take Steps Now to Start Governing Your AI

On August 1, 2024, the EU AI Act enters into force, with key obligations applying from 2 February 2025 and from 2 August of 2025, 2026, and 2027.

August 1, 2024
Author(s)
Evi Fuelle
Lucía Gamboa

The EU Artificial Intelligence (AI) Act was published in the Official Journal of the European Union on July 12, 2024, as “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.”

The Act enters into force on August 1, 2024, with key obligations applying from 2 February 2025 and from 2 August of 2025, 2026, and 2027. The European Commission’s newly created EU AI Office is currently developing codes of practice for general-purpose AI (GPAI) models, as well as guidelines that further clarify parts of the Act, and it will oversee implementation and enforcement of the AI Act in coordination with the EU Member States. Penalties for noncompliance range from €7.5 million (or 1 percent of global annual turnover) to €35 million (or 7 percent of global annual turnover), depending on the infringement and the size of the company.
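For the most serious infringements, the Act sets the cap at whichever figure is higher: the fixed amount or the percentage of worldwide annual turnover. Here is a minimal sketch of that calculation in Python; the function name and the turnover figure in the example are ours, for illustration only:

```python
def penalty_cap_eur(worldwide_annual_turnover_eur: float,
                    fixed_cap_eur: float,
                    turnover_pct: float) -> float:
    """Upper bound on a fine: the higher of the fixed cap and the turnover share."""
    return max(fixed_cap_eur, turnover_pct * worldwide_annual_turnover_eur)

# Prohibited-practice tier: up to EUR 35M or 7% of worldwide annual turnover,
# whichever is higher. For a (hypothetical) company with EUR 2B turnover,
# the percentage dominates and the cap is EUR 140M.
print(penalty_cap_eur(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```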

Over the next several months, enterprises should begin working toward their compliance obligations, including identifying and classifying the AI systems they use and develop, and putting in place a comprehensive, enterprise-wide AI governance framework.

So where to start?

The EU AI Act (AIA) applies to a wide range of stakeholders. Enterprise obligations will vary depending on market use (where and how your AI system will be used), your entity designation (AI system provider, deployer, distributor, or importer), and risk classification (what your AI system will be used for). It’s important to understand these key definitions.

First, enterprises will need to determine whether or not they are working with an “AI system” as defined by the EU AI Act: 

An artificial intelligence system (AI system) is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” (Source: Article 3(1)).

Next, enterprises will need to determine their “entity type” as any of the following:

  • Provider
  • Deployer
  • Distributor
  • Importer
  • Product Manufacturer
  • Authorized Representative 

Based on their entity type, as well as the risk classification of their enterprise use cases (unacceptable, high, limited/transparency, or minimal risk, with separate rules for General-Purpose AI models), enterprises will be required to fulfill a variety of obligations. Obligations will vary depending on whether you are a provider or a deployer, the type of model, and the level of risk. The sketch below shows one way an enterprise might record this classification.
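As a starting point for an AI inventory, here is a minimal sketch in Python of a record combining entity type and risk tier. The names (EntityType, RiskTier, AISystemRecord) and the example system are ours, purely for illustration; they are not terms or artifacts defined by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class EntityType(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    DISTRIBUTOR = "distributor"
    IMPORTER = "importer"
    PRODUCT_MANUFACTURER = "product_manufacturer"
    AUTHORIZED_REPRESENTATIVE = "authorized_representative"

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # heaviest obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no new obligations

@dataclass
class AISystemRecord:
    """One row in an enterprise AI inventory (illustrative, not a legal artifact)."""
    name: str
    entity_type: EntityType
    risk_tier: RiskTier
    is_gpai: bool = False           # GPAI is tracked separately from the risk tiers
    intended_purpose: str = ""

# Example: a deployer using a (hypothetical) high-risk system
record = AISystemRecord(
    name="resume-screening-tool",
    entity_type=EntityType.DEPLOYER,
    risk_tier=RiskTier.HIGH,
    intended_purpose="candidate shortlisting",
)
```

Keeping GPAI as a separate flag, rather than a fifth tier, mirrors the Act’s structure, where GPAI model obligations sit alongside the risk classification rather than inside it.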

The main obligations required for high-risk AI systems include: 

  • establishing a risk management system;
  • enabling “human-in-the-loop” or human oversight; 
  • producing transparency reporting; and
  • enabling pre- and post-market monitoring of AI systems (a simple tracking sketch follows below).
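To keep those four obligation areas visible in one place, here is a minimal checklist sketch assuming only the categories named in the list above; the class and field names are ours, not the Act’s:

```python
from dataclasses import dataclass

@dataclass
class HighRiskObligations:
    """Tracks the four high-risk obligation areas listed above (illustrative only)."""
    risk_management_system: bool = False
    human_oversight: bool = False
    transparency_reporting: bool = False
    pre_and_post_market_monitoring: bool = False

    def outstanding(self) -> list[str]:
        """Return the obligation areas not yet addressed."""
        return [name for name, done in vars(self).items() if not done]

status = HighRiskObligations(risk_management_system=True)
print(status.outstanding())
# ['human_oversight', 'transparency_reporting', 'pre_and_post_market_monitoring']
```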

To learn more about each of the EU AI Act definitions, and to better understand your obligations under the Act, check out Credo AI’s “Enterprise Guide.”

At Credo AI, we are committed to helping organizations navigate the EU AI Act. By fostering transparency, accountability, and human oversight, we can unlock the full potential of AI while safeguarding fundamental rights and promoting societal well-being.

Get started with our team today!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.