Get ready for the EU AI Act with Credo AI

With our GRC for AI platform, you can prepare to implement elements of an AI risk management system and fulfill transparency obligations pursuant to the EU AI Act by:

Identifying your high-risk AI use cases;
Adopting appropriate and targeted risk management measures to mitigate identified risks for your AI use cases; 
Completing technical documentation requirements; and 
Incorporating automated tools with human oversight to prevent or minimize risks upfront, enabling users to understand, interpret, and confidently use these tools.

What is the EU AI Act?

The EU AI Act is an EU-wide legal framework (a Regulation) that sets out clear transparency and reporting obligations for any company placing an AI system on the EU market, or for any company whose system outputs are used within the EU (regardless of where systems are developed or deployed). It was originally proposed by the European Commission on 21 April 2021 and was politically agreed upon by all three EU institutions on 8 December 2023. The European Parliament's plenary vote on the proposed Artificial Intelligence Act is expected to take place in mid-March 2024 (according to Parliament's draft agenda).

Following the final vote, the EU AI Act will enter into force after publication in the Official Journal of the European Union (expected in Spring 2024).

Fines are expected for:

1. Violations of prohibited AI practices, resulting in fines of up to 7% of total worldwide annual turnover for the preceding financial year or €35M (whichever is higher);

2. Most other violations, resulting in fines of up to 3% of total worldwide annual turnover for the preceding financial year or €15M (whichever is higher); and

3. Supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities in response to a request, resulting in fines of up to 1.5% of total worldwide annual turnover for the preceding financial year or €7.5M (whichever is higher).
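
For illustration, here is a minimal Python sketch of how the "whichever is higher" ceilings work; the tier names, function, and turnover figures are hypothetical, while the percentages and fixed amounts mirror the tiers above.

```python
# Minimal sketch of the EU AI Act's "whichever is higher" fine ceilings.
# The percentages and fixed amounts mirror the tiers above; the tier
# names, function, and example turnovers are hypothetical illustrations.

FINE_TIERS = {
    "prohibited_ai":    (0.07,  35_000_000),  # up to 7% or €35M
    "other_violations": (0.03,  15_000_000),  # up to 3% or €15M
    "misleading_info":  (0.015,  7_500_000),  # up to 1.5% or €7.5M
}

def max_fine_eur(violation: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the percentage or fixed amount."""
    pct, fixed = FINE_TIERS[violation]
    return max(pct * annual_turnover_eur, fixed)

# Example: a company with €2B total worldwide annual turnover.
print(max_fine_eur("prohibited_ai", 2_000_000_000))  # 140000000.0 (7% > €35M)
print(max_fine_eur("misleading_info", 200_000_000))  # 7500000.0 (fixed floor applies)
```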

What are businesses responsible for doing?

As an organization building or using AI systems that are placed on the EU market or whose system outputs are used within the EU, you will be responsible for ensuring compliance with the EU AI Act.  

Enterprise obligations will depend on the level of risk an AI system poses to people’s safety, security, or fundamental rights along the AI value chain.

The most significant transparency and reporting requirements will be for AI systems classified as “high-risk,” as well as for general-purpose AI system providers determined to be high-impact or posing “systemic risks.”

Depending on the risk classification of your systems, your responsibilities could include:

Registration: Registration of all use cases in the EU database before placing the AI solution on the market or putting it into service. 

Classification: Identification of all high-risk AI use cases. 

Risk Management: Adoption of appropriate and targeted risk management measures to mitigate identified risks.

Data Governance: Confirmation of the use of high-quality training data, adherence to appropriate data governance practices, and assurance that datasets are relevant and unbiased.

Technical Documentation: Keeping records containing the information necessary to assess the AI system's compliance with the relevant requirements and to facilitate post-market monitoring (i.e., the general characteristics, capabilities, and limitations of the system; the algorithms, data, and the training, testing, and validation processes used; and documentation on the relevant risk management system), drawn up in a clear and comprehensive form. The technical documentation should be kept up to date throughout the lifetime of the AI system (note: high-risk AI systems should technically allow for the automatic recording of events (logs) over the lifetime of the system; a minimal logging sketch follows this list).

Human Oversight: Incorporate human-machine interface tools to prevent or minimize risks upfront, enabling users to understand, interpret, and confidently use these tools. 

Accuracy, Robustness, and Security: Ensure consistent accuracy, robustness, and cybersecurity measures throughout the AI system's lifecycle.

Quality Management: Providers of high-risk AI systems must have a quality management system in place, documented in a systematic and orderly manner in the form of written policies, procedures, and instructions.

EU Declaration of Conformity: Draft the declaration of conformity for each high-risk AI system, asserting compliance (kept up to date for 10 years, with copies submitted to national authorities and updates made as necessary).

CE Marking: Ensure that the CE marking is affixed in a visible, legible, and indelible manner or digitally accessible for digital systems, thereby indicating compliance with the general principles and applicable European Union laws.

Incident Reporting: Providers of high-risk AI systems placed on the European Union market must report any “serious incident” to the market surveillance authorities of the EU Member States where that incident occurred (immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident). 
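
As one illustration of the automatic event-recording expectation noted under Technical Documentation above, the following minimal Python sketch shows structured, timestamped event logging for a hypothetical high-risk AI system; the event schema, system identifier, and helper function are illustrative assumptions, not prescribed by the Act.

```python
# Minimal sketch of automatic event recording (logs) for a hypothetical
# high-risk AI system. The event schema and identifiers are illustrative
# assumptions, not a format prescribed by the EU AI Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log",
                    level=logging.INFO, format="%(message)s")

def log_event(event_type: str, **details) -> None:
    """Append a timestamped, structured event record to the audit log."""
    record = {"timestamp": datetime.now(timezone.utc).isoformat(),
              "event": event_type, **details}
    logging.info(json.dumps(record))

# Example: record each inference and each human intervention so the
# system's period of use can be reconstructed during post-market monitoring.
log_event("inference", system_id="resume-screener-v3",
          model_version="3.1.0", output="shortlist")
log_event("human_override", system_id="resume-screener-v3",
          reviewer="hiring-manager", decision="reject_recommendation")
```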

Find out how the EU AI Act will impact your organization.

How Credo AI can support you in preparing for the EU AI Act

Register your AI Systems:

Register your organization’s AI use cases in our AI Registry to get full visibility into and control over where and how your organization is using AI. Track internally developed and third-party AI systems and, once the AI Act is finalized, quickly identify which risk category they fall into.

Implement AI Risk Management:

The Credo AI Governance Platform supports you in implementing governance workflows to manage and mitigate AI risks, a critical requirement for high-risk AI systems under the expected EU AI Act.

Maintain Technical Documentation:

When the final text of the EU AI Act is published, Credo AI will provide Policy Packs that operationalize the requirements of the regulation, making it easy for you to track your AI systems' compliance against the Act's relevant requirements.

FAQs

What is the origin of this legislation (where did the “EU AI Act” come from)?

The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

Since April 2018, the three governance bodies of the European Union - the European Commission, Parliament, and Council - have considered how to comprehensively regulate Artificial Intelligence in the European Union’s Single Market.

In June 2018, the European Commission appointed fifty-two experts (from academia, business, and civil society) to its “High-Level Expert Group on Artificial Intelligence” (HLEG), designed to support the implementation of the EU Communication on Artificial Intelligence (published in April 2018). The HLEG focused on outlining a human-centric approach to AI and designed a list of seven key requirements that AI systems should meet in order to be trustworthy (in its Ethics Guidelines for Trustworthy AI):

  1. Human agency and oversight;
  2. Technical Robustness and safety;
  3. Privacy and data governance;
  4. Transparency;
  5. Diversity, non-discrimination and fairness;
  6. Societal and environmental well-being; and,
  7. Accountability.

The mandate of the AI HLEG ended in July 2020 with the presentation of two more deliverables.

Then, in April 2021, the European Commission presented its “AI package,” which included the “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence,” proposed on 21 April 2021 and more commonly referred to as the European Union (EU) “AI Act.”

Why is the EU AI Act relevant for non-European companies?

Broadly speaking, any high-risk AI system developed by an EU provider, wherever in the world it is deployed, falls within the purview of the EU AI Act, as do systems developed outside of the EU and placed on the EU market.

The EU AI Act’s extraterritoriality - meaning its application outside of the European Union’s borders - is expansive. The EU AI Act applies to high-risk AI systems that are developed and used outside of the EU if the output of those systems is intended for use in the EU.

Many high-risk AI providers and deployers based outside the EU, including those in the United States, will find their system outputs being used within the EU, and such entities will therefore fall under the purview of the EU AI Act.

Which AI systems are classified as high-risk?

The EU AI Act outlines the categories of high-risk AI (which can be updated through future implementing and delegated acts of the European Commission) as follows:

  1. Remote biometric identification systems; 
  2. Critical infrastructure; 
  3. Education and vocational training; 
  4. Employment, workers management and access to self-employment; 
  5. Access to and enjoyment of essential private services and essential public services and benefits;
  6. Law enforcement;
  7. Migration, asylum and border control management; and,
  8. Administration of justice and democratic processes. 

How is General Purpose AI (GPAI) defined and categorized?

GPAI models are AI models trained on a large amount of data using self-supervision at scale, capable of competently performing a wide range of distinct tasks, and able to be integrated into a variety of downstream systems or applications.

GPAI models trained using computing power of at least 10^25 FLOPs, or GPAI models designated by the AI Office as posing “systemic risk,” face additional requirements under the EU AI Act. Special obligations apply to all GPAI systems; however, those with systemic risk must also perform model evaluations, assess and mitigate systemic risks, and document and report any “serious incidents” to the European Commission.
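
To make the compute threshold concrete, here is a minimal Python sketch of a systemic-risk check; the function name and example figures are hypothetical, while the 10^25 FLOPs threshold and the AI Office designation trigger come from the description above.

```python
# Minimal sketch of the two systemic-risk triggers described above.
# The threshold is the Act's 10^25 FLOPs figure; the function name and
# example compute figures are hypothetical illustrations.

SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold in the EU AI Act

def poses_systemic_risk(training_flops: float,
                        designated_by_ai_office: bool = False) -> bool:
    """True if a GPAI model meets either systemic-risk trigger."""
    return training_flops >= SYSTEMIC_RISK_FLOPS or designated_by_ai_office

print(poses_systemic_risk(3.2e25))      # True: crosses the compute threshold
print(poses_systemic_risk(8e24))        # False: below threshold, not designated
print(poses_systemic_risk(8e24, True))  # True: designated by the AI Office
```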

What specific documentation and processes need to be developed or implemented for high-risk AI systems?
  • Perform Conformity Assessment
  • Register AI systems in the EU database
  • Have in place a Quality Management System which includes a Risk Management System
  • Carry out a Fundamental Rights Impact Assessment
  • Affix a CE marking

Adopt AI with confidence today

The Responsible AI Governance Platform enables AI, data, or business teams to track, prioritize, and control AI projects to ensure AI remains profitable, compliant, and safe.