Credo AI is thrilled to launch the world's largest and most comprehensive AI Risk and Controls Library. Building on academic research, Credo AI's domain expertise, and industry frameworks from leaders like MITRE and NIST, our team has built a library of AI-specific Risk Scenarios and Controls designed to anticipate and mitigate negative incidents, allowing you to develop and deploy AI systems with peace of mind.
With Credo AI's expanded library, you can now quickly identify all of the relevant risks that need to be addressed for a specific AI tool or application, and the controls to mitigate those risks. Combined with our existing features to streamline GRC for AI, your AI governance process can be faster than ever, enabling your company to become a trustworthy AI-powered leader in your industry.
GenAI-Specific Risk Scenarios and Controls, Informed by NIST
On April 29, 2024, NIST released a trailblazing new standard for GenAI governance: the draft AI RMF Generative AI Profile.
The GenAI-specific profile was developed over the past year, drawing on input from NIST's generative AI public working group of more than 2,500 members, Credo AI among them.
An extension of NIST's work on the AI Risk Management Framework, the draft AI RMF Generative AI Profile is designed to help organizations identify the unique risks posed by generative AI and proposes generative AI risk management actions that best align with their goals and priorities.
Based on the risks and actions identified in the profile, we've enhanced the Credo AI platform with over 400 new GenAI-specific controls, expanding the Credo AI Risk and Controls Library to nearly 700 AI Risk Scenarios and corresponding controls.
Why AI Risk Scenarios and Controls Matter
Chances are, you’ve felt the expanding mandate for AI use cases at your company, with GenAI being embedded in every department and function. As AI use cases expand, so too must the corresponding index of controls. In the Credo AI platform, these are automatically recommended at the use-case level.
At Credo AI, we ensure our platform governs AI at the use case level—not just data sets and models. This aligns with emerging global standards like ISO 42001, which emphasizes both organization-level and use case-level governance.
The greater the index of controls to choose from, the more rapidly and accurately Credo AI Assist can recommend relevant risk scenarios and controls, enabling secure, streamlined AI governance. This empowers your AI teams to focus on building amazing products and automates the (necessary) busywork of AI governance, risk management, and compliance.
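To make the idea of use-case-level recommendation concrete, here is a minimal, purely illustrative sketch in Python. This is not Credo AI's actual API or matching logic; the types (`RiskScenario`, `Control`, `UseCase`) and the tag-based matching in `recommend` are simplifying assumptions, meant only to show how a larger library of risk scenarios and controls gives a recommender more to match against a given use case's context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    control_id: str
    description: str

@dataclass(frozen=True)
class RiskScenario:
    scenario_id: str
    description: str
    tags: frozenset        # contexts in which this risk applies
    controls: tuple        # controls that mitigate this risk

@dataclass
class UseCase:
    name: str
    tags: set              # attributes of the use case, e.g. {"genai", "pii"}

def recommend(use_case: UseCase, library: list) -> list:
    """Return every risk scenario whose context tags overlap the use case's tags."""
    return [s for s in library if s.tags & use_case.tags]

# A toy two-entry library; a production library would hold hundreds of scenarios.
LIBRARY = [
    RiskScenario(
        "RS-001",
        "LLM output leaks personally identifiable information",
        frozenset({"genai", "pii"}),
        (Control("C-101", "Filter model output for PII before display"),),
    ),
    RiskScenario(
        "RS-002",
        "Chatbot gives inaccurate policy guidance to customers",
        frozenset({"customer-facing"}),
        (Control("C-202", "Ground responses in approved policy documents with citations"),),
    ),
]

# Both scenarios match this use case, because its tags overlap both entries.
chatbot = UseCase("Insurance claims chatbot", {"genai", "customer-facing", "pii"})
for scenario in recommend(chatbot, LIBRARY):
    print(scenario.scenario_id, "->", [c.control_id for c in scenario.controls])
```

The takeaway from the sketch: recommendation quality is bounded by library coverage. A use case tagged with a context the library doesn't cover gets nothing back, which is why expanding the index of risk scenarios and controls directly improves what can be recommended.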
As AI Use Cases Expand, So Too Must Controls
Use-case-level AI governance can also be referred to as "contextual." The context in which AI is deployed is paramount to mitigating its risk, and it can greatly affect what "good AI" looks like when the goal is minimizing harm to your company, your customers, and the public.
Just imagine a chatbot for an insurance company, and the disparate risks that chatbot would encounter depending on the policy, claim, or customer it was interacting with. Although it may become increasingly easy from a technology perspective to roll out an LLM across use cases simultaneously, mitigating—and critically, understanding—context-specific risks is essential to winning as an AI-powered company.
Like cars on a freeway, low-risk AI use cases should zoom full-speed ahead. But high-risk AI is like the trucks—without a weigh station, you could be zooming toward a high-speed disaster.
Register for Our Webinar on NIST, GenAI, and AI Risk Management
To learn more about this feature, and managing the risks of generative AI, register for our upcoming webinar on June 27th. Or, request a demo of the Credo AI platform.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.