In recent years, the development and use of artificial intelligence (AI) systems have skyrocketed, leading to an urgent need for accountability and transparency in the industry. To shed light on this topic, Ehrik Aldana, Tech Policy Product Manager at Credo AI, was invited to give a lightning talk ⚡️ at MozFest 2023—a conference for, by, and about people who love the internet, showcasing world-changing ideas and technology through exhibitions, talks, and interactive sessions.
In his talk, Ehrik explores the many dimensions of responsible AI and discusses how governance frameworks, transparency reporting, and open-source software serve as key tools for achieving it. With a focus on building tools to ensure AI systems are compliant, fair, transparent, and auditable, Ehrik delves into the ways different stakeholders can evaluate AI systems and put responsible AI into practice.
Curious to learn more? Read the transcript below to discover the importance of responsible AI, or watch the talk.
Speaker: Ehrik Aldana, Tech Policy Product Manager at Credo AI
Transcript:
## [00:00] Introduction
Hey everyone! I’m so excited to be here with you at MozFest 2023 to give a lightning talk on a topic I (and hopefully you all) care deeply about: Responsible AI – Artificial Intelligence.
My name is Ehrik Aldana, and I’m on the product and policy teams at Credo AI.
> Credo AI is an AI Governance company. We build a responsible AI platform that helps organizations define requirements for their AI systems and then create transparency reports that evaluate those systems with respect to those requirements.
And to do this, we use an open-source assessment framework we developed called Credo AI Lens.
## [01:06] Responsible AI has many dimensions.
So I’ve used this term – Responsible AI – a couple of times now, and in this talk, I’d love to cover:
- What is Responsible AI?
- How do I know I’m building or using it? What tools can I use to measure this?
- And why is evaluating our systems transparently so critical to building responsible and trustworthy AI systems?
At Credo AI, we define Responsible AI as Artificial Intelligence that is in the service of humanity. This means going beyond having an AI system that is performant or accurate – and also considering that there are many dimensions related to a system’s behavior and outcomes that we must evaluate to manage any unintended risks.
On the screen, you’ll see some of the tenets that we think about when considering responsible AI.
- Is the system fair – or does it disproportionately benefit or harm one group over another? (A sketch of this kind of check follows this list.)
- Is it transparent – to what extent can we understand the system’s outputs, so that we can effectively audit it and ensure accountability?
- Is it privacy-preserving?
- Is it secure against adversarial attacks?
- And what are the system’s larger impacts on environmental or economic sustainability?
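To make the fairness question concrete, here is a minimal sketch of the kind of disaggregated check described above, using the open-source fairlearn library. The toy model, synthetic data, and binary group attribute are illustrative assumptions, not Credo AI's implementation:

```python
# A minimal sketch of a disaggregated fairness check with fairlearn.
# The model, data, and group attribute are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Synthetic data with a hypothetical binary "group" attribute.
X, y = make_classification(n_samples=1000, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
y_pred = model.predict(X_te)

# Accuracy broken out by group: a large gap suggests the system
# benefits one group over another.
frame = MetricFrame(metrics=accuracy_score, y_true=y_te,
                    y_pred=y_pred, sensitive_features=g_te)
print(frame.by_group)

# Demographic parity difference: the gap in selection rates between groups.
print(demographic_parity_difference(y_te, y_pred, sensitive_features=g_te))
```

A per-group metric table like this is one simple, reportable signal of whether a system's benefits and harms are evenly distributed.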
## [02:00] Building the case for Responsible AI + Regulation
If you’re a developer or deployer of AI systems, these are hopefully questions you are already asking.
For years now, we’ve seen the many high-profile and truly harmful risks and unintended outcomes of AI systems in a variety of contexts:
1. From automated recruiting tools and credit-lending decisions that are biased against women,
2. To uneven performance of facial recognition systems across skin tones, leading to wrongful arrests,
3. To private information or copyrighted material being reproduced by generative language models.
These types of outcomes are harmful to society – and expose you, your organization, and the people you serve with your AI systems to unnecessary risk.
So how do we ensure that our systems are accounting for these risks?
On one level – I’m going to use the R-word here – regulation is coming, with the intention of providing guidance on what counts as good or acceptable for AI systems.
All around the world, governments are both clarifying how existing regulations apply to AI systems and creating new guidelines and requirements for using AI systems in contexts like hiring, insurance, and credit decisions.
## [03:17] Tools for Transparency
So yes – there is emerging regulatory pressure to ensure Responsible AI development and deployment.
But at the same time, laws and regulations take a long time to develop.
Gold standards and best practices for technology only emerge after we’ve had ample time to test and fully understand the technical limitations and behaviors of the systems we use – and frankly, with AI, we’re not there yet.
But there’s absolutely work that you can be doing to help us – as an industry, as a society – get there. And this is where tools for transparency come in.
Today, there is an increasingly large ecosystem of tools to comprehensively evaluate AI systems, moving beyond measuring performance to also evaluating robustness, fairness, and other aspects of the system’s behavior and impacts that we’ve discussed.
These tools can take a variety of different forms.
One form is technical tools that can assess the models and datasets that make up your AI system. Many of these are open source – such as IBM and the Linux Foundation’s Adversarial Robustness Toolbox, which enables developers and researchers to evaluate and defend machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
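As a rough illustration of this kind of technical assessment, here is a minimal sketch that uses the Adversarial Robustness Toolbox (ART) to craft evasion examples against a toy classifier and measure how much accuracy degrades. The model and data are assumptions for illustration, not a definitive robustness audit:

```python
# Minimal sketch: probing a model's robustness to an evasion attack
# with the Adversarial Robustness Toolbox (ART). The toy model and
# data are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
classifier = SklearnClassifier(model=model)

# Craft adversarial inputs with the Fast Gradient Method (an evasion attack).
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X_te)

# Compare accuracy on clean vs. adversarially perturbed inputs.
print(f"accuracy on clean inputs:       {model.score(X_te, y_te):.3f}")
print(f"accuracy on adversarial inputs: {model.score(X_adv, y_te):.3f}")
```

The gap between clean and adversarial accuracy is one simple robustness signal that a transparency report could include.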
Another form of these tools for transparency is oriented around governance and processes, such as the AI Risk Management Framework released earlier this year by the National Institute of Standards and Technology or NIST in the United States, which provides developers and deployers of AI systems a set of guidelines to follow to define, measure, and manage risks that emerge from their AI systems.
## [05:14] About Credo AI
At Credo AI, we combine both of these types of tools to simplify Responsible AI governance.
On the technical tool side, we’ve developed Credo AI Lens. As I previously mentioned, Lens is an open-source, comprehensive assessment framework for AI systems. Lens acts as a one-stop shop for technical assessment, taking many of the open-source tools we just looked at, as well as additional evaluators we’ve built, and standardizing model and data assessment.
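For a feel of what an assessment run looks like, here is a sketch following the pattern in the Lens project's public quickstart. The artifact and evaluator names (ClassificationModel, TabularData, Performance, ModelFairness) come from that documentation and may differ across Lens versions, so treat this as an assumption rather than a definitive API reference:

```python
# A sketch of an assessment run with Credo AI Lens, based on the
# project's documented quickstart pattern. Class and evaluator names
# are assumptions that may vary across Lens versions.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from credoai.lens import Lens
from credoai.artifacts import ClassificationModel, TabularData
from credoai.evaluators import ModelFairness, Performance

X, y = make_classification(n_samples=1000, random_state=0)
group = pd.Series(np.random.default_rng(0).integers(0, 2, size=len(y)),
                  name="group")
sk_model = LogisticRegression().fit(X, y)

# Wrap the trained model and evaluation data in Lens artifacts.
credo_model = ClassificationModel(name="demo_model", model_like=sk_model)
credo_data = TabularData(name="demo_data", X=X, y=y,
                         sensitive_features=group)

# Build an assessment pipeline, add evaluators, and run them.
lens = Lens(model=credo_model, assessment_data=credo_data)
lens.add(Performance(metrics=["accuracy_score"]))
lens.add(ModelFairness(metrics=["precision_score"]))
lens.run()
print(lens.get_results())
```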
On the process tool side, we’ve developed what we call Policy Packs, which take guidelines, principles, and regulations and turn them into checklists of controls. You can access these on our platform and provide evidence to track how compliant your AI systems are with these policies.
And perhaps most importantly, on our platform, you can generate transparency reports that articulate the processes you follow, the assessment plans you use, and how your system performs against this evaluation.
And I say this is perhaps the most important aspect of this process because Responsible AI – our understanding of it and our collective ability to attain it – will only grow in the light.
Transparency around how to best govern and assess our systems, as well as transparency around how our systems are actually behaving, is vital to promoting an industry and culture that takes AI’s risks, and the strategies to mitigate them, seriously.
## What can you do today to use AI responsibly?
So with that, I want to leave you with a few things you can do to promote responsible AI.
- If you’re a builder of AI systems: today we’ve gone over a few tools that can help you develop Responsible AI. Use them, and share how you’re using them through things like transparency reports, to positively impact industry culture in this regard.
- Develop new ways and tools to assess AI systems – ones that cover not just the technical aspects of an AI system but also its societal impacts. This means we need expertise from a variety of communities, contexts, and disciplines to assess the behaviors and impacts of AI systems.
- And finally, if anything I said today resonated with you or you’d like to learn more about the tools discussed in this talk, please go to our website at www.credo.ai or follow me on Twitter at @elaldana.
Thanks very much, and enjoy the rest of MozFest!
Don't miss out on the opportunity to see how Credo AI's governance software can benefit your organization. Request a demo now by emailing us at demo@credo.ai.