Resources

Explore curated content on AI governance and responsible AI, including the latest regulations, standards, and risks in AI and Generative AI.

Navigating The GenOps Market: Tools That Promote Responsible Practices And De-Risk Generative AI
Generative AI
Blog


With the rapid expansion of Artificial Intelligence's capabilities, genAI is the hottest topic in the tech industry. From Google Docs to Excel to Firefly, companies around the world are integrating generative AI into their products to streamline processes and boost productivity. As more organizations adopt this technology, it's clear that generative AI is the future of business.

While the promise of genAI is immense, the potential risks attached to this technology cannot be overlooked. From the rise of harmful content to concerns around cybersecurity, there are numerous potential downsides to its development and use, and it is critical for businesses to address these issues to ensure that genAI is implemented in a safe and responsible manner.

If you want to start using genAI but are worried about the potential risks of these tools, don't worry: you're not alone, and there's good news. There are numerous low-barrier tools available on the market that can help address the responsibility dimensions of genAI, such as fairness, transparency, and explainability. These "genAI Ops" tools, à la MLOps or DevOps, can help you mitigate risk and improve your business's AI ROI without requiring substantial technical innovation.

At Credo AI, we're committed to highlighting current low-barrier genAI Ops tools so that organizations all over the world can start taking action today to unlock the full potential of genAI responsibly. If you're interested in a more comprehensive solution and guidance, register today for our GenAI Trust Toolkit Early Access program! Without further ado, let's talk genAI Ops!

March 30, 2023
How NIST Pioneered GenAI Controls—and How to Operationalize Them
AI Governance 101
webinar


Chances are, you've felt the expanding mandate for AI usage at your company, with GenAI being embedded in every department and function. But unapproved usage, or "shadow AI," is skyrocketing: according to a Salesforce study, over 50% of employees use unapproved generative AI tools at work.

On April 29, 2024, the National Institute of Standards and Technology (NIST) released the initial public draft of its AI Risk Management Framework Profile for Generative AI, which defines a group of risks that are novel to or exacerbated by the use of Generative AI (GenAI), and provides a set of actions to help organizations manage these risks at the use-case level to power scalable, safe GenAI adoption.

The trailblazing draft AI RMF GenAI Profile was developed over the past twelve months, drawing on input from NIST's generative AI public working group of more than 2,500 members, of which Credo AI is one, as well as on NIST's prior work on its overarching AI Risk Management Framework.

Credo AI is excited to present this webinar explaining these newly defined GenAI risks and controls, as well as how to approach comprehensive AI governance and risk management for enterprises of all sizes that want to safely deploy GenAI tools.

Watch this webinar to learn:
• An overview of newly published GenAI governance documents, with a deep dive into NIST AI 600-1
• How to apply GenAI controls to high-risk AI scenarios: high-risk AI industries and use-case examples
• Contextual AI governance: why you should apply controls, and manage AI risk, at the use-case level

June 27, 2024
