Get ready for the EU AI Act with Credo AI
With our GRC for AI platform, you can prepare to implement elements of an AI risk management system and fulfill your EU AI Act obligations.
What is the EU AI Act?
The EU AI Act is an EU-wide regulation that sets out risk management, transparency, and reporting obligations for any company placing an AI system on the EU market, or for companies whose system outputs are used within the EU (regardless of where those systems are developed or deployed). The European Parliament approved the Artificial Intelligence Act on March 13, 2024, and the final version was published in the Official Journal of the European Union on July 12, 2024.
The EU AI Act entered into force on August 1, 2024, marking the start of its phased enforcement timelines.
Penalties for non-compliance:
1. Non-compliance with the Act's prohibited AI practices: up to 7% of global revenue or €35M*
2. Supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities: up to 1% of global revenue or €7.5M*
3. Most other violations of the Act's obligations: up to 3% of global revenue or €15M*
*Whichever is higher. For SMEs and startups, the lower of the two will apply.
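To see how the "whichever is higher" rule (and the "whichever is lower" rule for SMEs and startups) plays out, here is a minimal illustrative sketch in Python. The max_fine helper and the revenue figure are invented for the example; only the percentage and fixed-amount tiers come from the list above.

```python
# Illustrative calculation of the fine ceiling for one EU AI Act penalty tier.
# Large enterprises face the HIGHER of the percentage-based and fixed amounts;
# SMEs and startups face the LOWER of the two.

def max_fine(global_revenue_eur: float, pct: float, fixed_cap_eur: float, is_sme: bool = False) -> float:
    """Return the fine ceiling for a single violation tier."""
    pct_based = global_revenue_eur * pct
    return min(pct_based, fixed_cap_eur) if is_sme else max(pct_based, fixed_cap_eur)

# Hypothetical company with EUR 2 billion in global annual revenue, 3% / EUR 15M tier:
print(max_fine(2e9, 0.03, 15e6))                # 60,000,000 for a large enterprise
print(max_fine(2e9, 0.03, 15e6, is_sme=True))   # 15,000,000 ceiling for an SME
```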
What are businesses responsible for doing?
As an organization building or using AI systems that are placed on the EU market or whose system outputs are used within the EU, you will be responsible for ensuring compliance with the EU AI Act.
Enterprise obligations depend on the level of risk an AI system poses to people's safety, security, or fundamental rights along the AI value chain.
The most significant transparency and reporting requirements will be for AI systems classified as “high-risk,” as well as general-purpose AI system providers determined to be high-impact or posing “systemic risks.”
Depending on the risk classification of your systems, your responsibilities could include:
Registration: Registration of all use cases in the EU database before placing the AI solution on the market or putting it into service.
Classification: Identification of all high-risk AI use cases.
Risk Management: Adoption of appropriate and targeted risk management measures to mitigate identified risks.
Data Governance: Confirmation of the use of high-quality training data, adherence to appropriate data governance practices, and assurance that datasets are relevant and unbiased.
Technical Documentation: Keeping records containing the information necessary to assess the AI system's compliance with the relevant requirements and to facilitate post-market monitoring (i.e., the general characteristics, capabilities, and limitations of the system; the algorithms, data, and training, testing, and validation processes used; and documentation of the relevant risk management system), drawn up in a clear and comprehensive form. The technical documentation should be kept up to date throughout the lifetime of the AI system (note: high-risk AI systems must technically allow for the automatic recording of events (logs) over the duration of the system's lifetime).
Human Oversight: Incorporate human-machine interface tools to prevent or minimize risks upfront, enabling users to understand, interpret, and confidently use the system.
Accuracy, Robustness, and Security: Ensure consistent accuracy, robustness, and cybersecurity measures throughout the AI system's lifecycle.
Quality Management: Providers of high-risk AI systems must have a quality management system in place documented in a systematic and orderly manner in the form of written policies, procedures and instructions.
EU Declaration of Conformity: Draft the declaration of conformity for each high-risk AI system, asserting compliance (retained for 10 years, with copies submitted to national authorities, and updated as necessary).
CE Marking: Ensure that the CE marking is affixed in a visible, legible, and indelible manner or digitally accessible for digital systems, thereby indicating compliance with the general principles and applicable European Union laws.
Incident Reporting: Providers of high-risk AI systems placed on the European Union market must report any “serious incident” to the market surveillance authorities of the EU Member States where that incident occurred (immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident).
Find out how the EU AI Act will impact your organization.
How Credo AI can support you in preparing for the EU AI Act
Register your AI Systems:
Register your organization's AI use cases in our AI Registry to get full visibility into, and control over, where and how your organization is using AI; track internally developed and third-party AI systems, and quickly identify which risk category they fall into under the Act.
Implement AI Risk Management:
The Credo AI Governance Platform supports you in implementing governance workflows to manage and mitigate AI risks, a critical requirement for high-risk AI systems under the EU AI Act.
Maintain Technical Documentation:
Credo AI provides Policy Packs that operationalize the requirements of the regulation, making it easy for you to track your AI systems' compliance with the Act's relevant requirements.
FAQs
The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.
Since April 2018, the three governance bodies of the European Union - the European Commission, Parliament, and Council - have considered how to comprehensively regulate Artificial Intelligence in the European Union’s Single Market.
In June 2018, the European Commission appointed fifty-two experts (from academia, business, and civil society) to its "High-Level Expert Group on Artificial Intelligence" (HLEG), designed to support the implementation of the EU Communication on Artificial Intelligence (published in April 2018). The HLEG focused on outlining a human-centric approach to AI and defined, in its Ethics Guidelines for Trustworthy AI, a list of seven key requirements that AI systems should meet in order to be trustworthy:
- Human agency and oversight;
- Technical Robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination and fairness;
- Societal and environmental well-being; and,
- Accountability.
The mandate of the AI HLEG ended in July 2020 with the presentation of two more deliverables:
- The final Assessment List for Trustworthy AI (ALTAI): A practical tool that translates the Ethics Guidelines into an accessible and dynamic self-assessment checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements. This new list is available as a prototype web-based tool and in PDF format; and,
- Sectoral Considerations on the Policy and Investment Recommendations: these recommendations have served as resources for policymaking initiatives taken by the Commission and the EU Member States.
Then, in April 2021, the European Commission presented its “AI package,” which included:
- its Communication on fostering a European approach to AI;
- a review of the Coordinated Plan on Artificial Intelligence (with EU Member States);
- its proposal for a regulation laying down harmonised rules on AI (the AI Act) and the relevant impact assessment.
The European Union (EU) "AI Act" is the commonly used name for the "Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence," proposed by the European Commission on 21 April 2021.
Broadly speaking, any high-risk AI system developed by an EU provider, wherever in the world it is deployed, as well as any system developed outside of the EU and placed on the EU market, falls within the purview of the EU AI Act.
The EU AI Act's extraterritoriality - meaning its application beyond the European Union's borders - is expansive. The EU AI Act applies to high-risk AI systems that are developed and used outside of the EU if the output of those systems is intended for use in the EU.
Many high-risk AI providers and deployers based outside the EU, including those in the United States, will find their system outputs being used within the EU, and such entities will therefore fall under the purview of the EU AI Act.
The EU AI Act outlines categories of high-risk AI (which can be updated through future implementing and delegated acts of the European Commission) as follows:
- Remote biometric identification systems;
- Critical infrastructure;
- Education and vocational training;
- Employment, workers management and access to self-employment;
- Access to and enjoyment of essential private services and essential public services and benefits;
- Law enforcement;
- Migration, asylum and border control management; and,
- Administration of justice and democratic processes.
General-purpose AI (GPAI) models are AI models trained on large amounts of data using self-supervision at scale. They are capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream systems or applications.
GPAI models trained using a cumulative computing power of more than 10^25 FLOPs, or GPAI models designated by the AI Office as posing "systemic risk," face additional requirements under the EU AI Act. Baseline obligations apply to all GPAI models; however, those with systemic risk must also perform model evaluations, assess and mitigate systemic risks, and document and report any "serious incidents" to the European Commission.
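For a rough sense of what the 10^25 FLOP threshold means in practice, the sketch below estimates cumulative training compute using the commonly cited "6 × parameters × training tokens" approximation. The approximation, the estimate_training_flops helper, and the example model size are illustrative assumptions, not part of the Act's text or any official guidance.

```python
# Back-of-the-envelope estimate of cumulative training compute for a GPAI model,
# using the widely used heuristic: FLOPs ~= 6 * parameters * training tokens.
# Only the 10^25 threshold comes from the EU AI Act; everything else is illustrative.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate via the 6ND heuristic."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk under the Act:", flops > SYSTEMIC_RISK_FLOP_THRESHOLD)
```

Compute is only a presumption trigger; the AI Office can also designate a model as posing systemic risk on other grounds, so an estimate like this is indicative at best.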
For high-risk AI systems, obligations along the AI value chain include:
- Perform Conformity Assessment
- Register AI systems in the EU database
- Have in place a Quality Management System, which includes a Risk Management System
- Carry out a Fundamental Rights Impact Assessment
- Affix a CE marking