One of the most important and difficult challenges faced by an AI-empowered organization is simply answering the question: What AI tools is my organization using? These include systems being developed and tested internally, third-party tools that teams have purchased, and vendor tools under procurement evaluation.
Since launching our AI Registry, we have seen our customers use the feature to tackle this challenge head-on. As a central repository for information about your organization’s AI systems, the AI Registry enables visibility, coordination, and responsible decision-making across your teams.
Furthermore, our AI Registry addresses another urgent need: readiness for future compliance requirements, specifically the upcoming EU AI Act.
Background: The EU AI Act
Anticipated to pass later this year, the EU AI Act establishes a risk-based regulatory framework governing the development and deployment of AI systems. The proposal classifies AI systems into four risk categories, with specific requirements proportional to the level of risk posed:
- Unacceptable Risk: These include AI systems considered a clear threat to the safety, livelihoods, and rights of people (such as systems used for social scoring or real-time remote biometric identification). These applications are banned outright.
- High Risk: These include AI systems explicitly listed in Annex III of the EU AI Act, including systems related to access to essential services (like credit, healthcare, and insurance), education, and employment, among others. These applications must conform to an extensive set of requirements (in some cases requiring verification by an external auditor) before being placed on the market.
- Limited Risk: These include AI systems that interact directly with humans, such as chatbots, and are subject to specific transparency obligations to notify users that they are engaging with an AI system.
- Minimal Risk: These include AI systems not covered by the other risk categories (such as spam filters or AI-enabled video games). These systems are subject to voluntary transparency disclosures and codes of conduct.
Moreover, the EU AI Act allows for further changes to which use cases fall into which categories as the technology, and our understanding of its impacts, develops.
Ensuring compliance will require ongoing quality and risk management by providers of AI systems. Misclassifying your use case and failing to meet your obligations can result in penalties of up to EUR 40 million or 7 percent of the company’s annual global revenue, whichever is higher. This is nearly twice the maximum penalty for noncompliance with the EU’s General Data Protection Regulation (GDPR).
With this context in mind, AI-empowered organizations will need to know not just which use cases make up their AI inventory, but also each use case’s risk level and the obligations that follow from it. Our AI Registry can be a vital tool in helping your organization comply with the EU AI Act.
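To make the tiering concrete, the sketch below models the four categories as a simple classification in Python. The class name, example use cases, and mapping are purely illustrative assumptions; they are not drawn from the Act’s legal text or from our product.

```python
from enum import Enum

class EUAIActRiskTier(Enum):
    """Illustrative model of the draft EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # Annex III areas (e.g. credit, education, employment)
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # voluntary codes of conduct (e.g. spam filters)

# Hypothetical examples for illustration only; real classification
# depends on the final legal text and the specifics of each system.
EXAMPLE_USE_CASES = {
    "social scoring system": EUAIActRiskTier.UNACCEPTABLE,
    "credit scoring model": EUAIActRiskTier.HIGH,
    "customer support chatbot": EUAIActRiskTier.LIMITED,
    "email spam filter": EUAIActRiskTier.MINIMAL,
}
```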
How to Prepare for EU AI Act Compliance with Our AI Registry
Step 1: Register Your Use Cases
As discussed, the first step is identifying and registering the AI systems your organization is using. Our AI Registry is designed to make this process simple: it creates a unique record for each AI system (or “use case”) associated with your organization and prompts you to enter relevant information, including the type of model, its purpose, where it will be deployed, and more. This information is critical to the next step.
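As a rough illustration, here is a minimal sketch of the kind of information such a record might capture. The field names are hypothetical and do not reflect the AI Registry’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseRecord:
    """Hypothetical record for one AI system ("use case") in an inventory."""
    name: str                    # e.g. "Resume screening assistant"
    model_type: str              # e.g. "fine-tuned language model"
    purpose: str                 # the business problem the system addresses
    deployment_region: str       # where the system will be placed on the market
    owner_team: str              # who is accountable for the system
    third_party_vendor: str | None = None  # set if purchased rather than built in-house
    tags: list[str] = field(default_factory=list)

# Example entry in an inventory:
resume_screener = UseCaseRecord(
    name="Resume screening assistant",
    model_type="fine-tuned language model",
    purpose="Rank job applicants for recruiter review",
    deployment_region="EU",
    owner_team="Talent Acquisition",
)
```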
Step 2: Identify Your Use Case Risk Category
Understanding your use case’s risk category under the draft EU AI Act can be a complex task, with definitions subject to change as the act is finalized and further refined. Based on the information provided about your use case, our AI Registry intelligently recommends a Risk Category informed by the most recent text of the EU AI Act and related regulations.
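For illustration only, a drastically simplified recommendation rule might look like the sketch below, reusing the illustrative EUAIActRiskTier enum from the earlier example: it flags prohibited uses first, then checks the stated purpose against an Annex III-style list, and otherwise falls back to the transparency and minimal tiers. This toy heuristic is not the Registry’s actual logic and is no substitute for legal review.

```python
# Toy heuristic for illustration; real classification requires legal analysis.
PROHIBITED_KEYWORDS = {"social scoring", "real-time remote biometric identification"}
ANNEX_III_LIKE_AREAS = {"credit", "healthcare", "insurance", "education", "employment"}

def recommend_risk_tier(purpose: str, interacts_with_humans: bool) -> EUAIActRiskTier:
    """Suggest a starting risk tier from a free-text purpose description."""
    text = purpose.lower()
    if any(keyword in text for keyword in PROHIBITED_KEYWORDS):
        return EUAIActRiskTier.UNACCEPTABLE
    if any(area in text for area in ANNEX_III_LIKE_AREAS):
        return EUAIActRiskTier.HIGH
    if interacts_with_humans:
        return EUAIActRiskTier.LIMITED
    return EUAIActRiskTier.MINIMAL

# e.g. recommend_risk_tier("Rank applicants for employment decisions", False) -> HIGH
```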
Step 3: Apply and Complete Relevant Policy Packs
Once you have selected your use case’s risk category, our Platform provides information about the relevant obligations that apply to your use case under the EU AI Act. These come in the form of recommended Policy Packs, which serve as dynamic checklists of the requirements and evaluations mandated by the EU AI Act. The Platform contains tailored Policy Packs for the risk categories outlined in the EU AI Act, as well as for other laws, regulations, standards, and industry best practices.
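As a rough mental model, again with hypothetical names rather than our Platform’s actual data model, a Policy Pack can be thought of as a checklist keyed to a risk tier, where each obligation is tracked to completion:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One obligation in a policy pack, e.g. maintaining technical documentation."""
    description: str
    completed: bool = False

@dataclass
class PolicyPack:
    """Hypothetical checklist of obligations associated with a risk tier."""
    name: str
    risk_tier: EUAIActRiskTier  # from the earlier illustrative enum
    requirements: list[Requirement] = field(default_factory=list)

    def completion_rate(self) -> float:
        """Fraction of requirements marked complete (0.0 when the pack is empty)."""
        if not self.requirements:
            return 0.0
        return sum(r.completed for r in self.requirements) / len(self.requirements)
```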
Conclusion
AI’s rapid and unprecedented growth is driving lawmakers to act quickly to regulate these systems, with severe penalties for noncompliance. With the EU AI Act setting the tone for the AI policy landscape, an organization’s ability to adequately prepare for its requirements will shape its ability to develop and deploy AI in a manner that is safe, trustworthy, and compliant.
While the journey toward effective governance is critical, it doesn’t need to be taken alone. Our AI Registry is actively helping businesses and organizations take the first step toward compliance with policies like the EU AI Act, allowing them to fully realize the benefits of AI while mitigating its risks.
Start your AI Governance journey; reach out today to learn more about the AI Registry!
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.