In 2023, the world woke up to the need for oversight and governance as generative AI and artificial intelligence (AI) more broadly became driving forces for companies. On the one hand, organizations want to embrace AI with unwavering confidence to move faster, innovate more, and stay ahead of the competition. On the other, a litany of risks, including but not limited to hallucination and algorithmic bias, looms over organizations with ungoverned AI.
Companies want to adopt AI to move fast. But, without governance, they risk moving fast in the wrong direction. Enter Responsible AI—artificial intelligence that is compliant, secure, safe, auditable, fair, and human-centered.
Since our founding, Credo AI has strategically shaped the ecosystem to ensure that Responsible AI is no longer an ambition but a reality, and over the past few years, we have observed a major shift in the mindset of companies adopting AI.
This year, we witnessed and supported organizations worldwide in implementing successful AI governance strategies and observed how regulations and standards were swiftly developed to safeguard humanity while securing innovation.
With so many advancements in the ecosystem—from policy to enterprise—it was clear that it was time to bring together the AI leaders putting Responsible AI into practice at their respective companies. So we did.
In November 2023, we hosted our first in-person Responsible AI Leadership Summit in New York City. With the theme “Make It Real,” our Summit highlighted the urgency and significance of translating AI governance into tangible actions in light of recent global events such as the White House AI Commitments, the G7 Summit, and the EU AI Act, on which political agreement has now been reached. The conversations were laser-focused on how to implement successful AI governance strategies and programs that let organizations adopt AI with confidence and ensure ongoing compliance with company and industry requirements.
From left to right: Gary Marcus, Scientist, Best-selling Author, and Serial Entrepreneur, Founder of Robust.AI and Geometric.AI, acquired by Uber; and Navrina Singh, Founder and CEO of Credo AI. Watch the fireside chat now.
With more than 150 attendees and 20+ distinguished AI experts discussing how to move Responsible AI from talk to action, we distilled five key insights from our Summit.
Let's dive right in!
1. “Don’t let perfect be the enemy of good.” - John Larson, EVP and AI Practice Lead, Booz Allen Hamilton.
From left to right: David Jeans, Senior Writer at Forbes; John Larson, EVP and AI Practice Lead at Booz Allen Hamilton; Carl Hahn, Chief Compliance Officer and Vice President at Northrop Grumman; and Diane Staheli, Responsible AI at the Chief Digital and AI Office, Department of Defense. Watch the panel now.
While the pursuit of perfection is admirable, it often hampers the practical implementation of responsible AI and governance. Insisting on a flawless plan or strategy can lead to analysis paralysis (mentioned by more than three panelists!) and significant delays in the development of vital guardrails.
Prioritizing 'good' over 'perfect' fosters a proactive and pragmatic strategy, ensuring that enterprises stay responsive and adaptable in their pursuit of responsible AI practices while moving fast. This iterative approach allows continuous improvement and adaptation to evolving technology and new regulations, standards, and best practices.
For those getting started: get educated (here’s a guide), convene stakeholders, understand where AI is being used within your organization, and create a clear game plan for the what, how, and why of AI and AI governance in your organization. Again, not every point has to be perfect for you to start your journey today; what matters is starting!
2. “The learning curve for companies is not technical. It’s strategic and cultural. It’s about your standards as a business.” — Meghan Keaney Anderson, Marketing VP, Jasper.
From left to right: Navrina Singh, CEO and Founder of Credo AI; Meghan Keaney Anderson, VP of Marketing at Jasper; Cass Matthews, Assistant General Counsel at Microsoft’s Office of Responsible AI; and John Dickerson, Chief Scientist at Arthur. Watch the panel now.
Embracing artificial intelligence goes beyond just mastering the technical aspects (though that's crucial). It's about seriously considering the impact of your AI and ML systems and defining the right guardrails, standards, and principles that your organization will follow.
These standards act as your guiding compass in AI initiatives. They set the bar against which every decision, from handling data to ensuring transparency in algorithms, is measured. When you align these standards thoughtfully with AI integration, they become the bedrock upon which your company can construct a strong framework for responsible and governed AI, ensuring compliance and effectiveness in the ever-changing AI landscape.
3. “There is a moment in time where we can align incentives and where we can create a race to the top rather than a race to the bottom.” - Shamina Singh, Founder and President of the Center for Inclusive Growth, Mastercard.
From left to right: Navrina Singh, CEO and Founder of Credo AI; Serena Oduro, Senior AI Policy Analyst at Data & Society Research Institute; and Shamina Singh, Founder and President of the Center for Inclusive Growth at Mastercard. Watch the panel now.
Instead of succumbing to a race to the bottom, where short-term gains often overshadow long-term sustainability and ethical considerations of AI, we have the power to cultivate a "race to the top" mentality.
This concept signifies a collective aspiration toward excellence, innovation, and social responsibility, which in turn leads to increased trust, fewer production issues, and better sales!
By aligning motives and values, we can elevate our standards, foster innovation, and prioritize responsible practices, leaving behind the detrimental notion of a "race to the bottom."
4. “Procurement should include ongoing monitoring in the age of GenAI.” — Ozlem Celik-Tinmaz, Analytics Principal Director at Accenture.
From left to right: Giovanni Leoni, Head of Business Strategy & Development at Credo AI; Ilana Goblin Blumenfeld, Director of Emerging Technologies and Responsible AI Lead at PwC; and Ozlem Celik-Tinmaz, Analytics Principal Director at Accenture. Watch "Crafting Responsible AI: Syncing Tools, Culture & Incentives for Success" now.
In this era of rapidly evolving artificial intelligence, the traditional approach to procurement is not enough. The dynamic nature of GenAI technologies demands a continuous assessment of vendor performance and the alignment of AI solutions with organizational objectives.
This forward-looking perspective on procurement acknowledges that the journey doesn't end with the acquisition of AI systems; it extends into vigilant monitoring to ensure that AI solutions remain responsible, secure, and effective over time.
5. “Understanding your [AI] use case is important because context matters.” - Andrew Reiskind, Chief Data Officer at Mastercard.
From left to right: Sharon Goldman, Senior Writer at VentureBeat; Vijoy Pandey, SVP of Outshift at Cisco; Christina Montgomery, Chief Privacy and Trust Officer and AI Ethics Board Chair at IBM; and Andrew Reiskind, Chief Data Officer at Mastercard.
In order to earn the trust of their customers, organizations must have a comprehensive understanding of every AI or generative AI use case they are deploying. This understanding is crucial to assess the appropriate level of risk in a use case, identify the applicable regulations and standards, and establish the necessary management and governance procedures. This will ultimately lead to the development of safe and effective AI applications and the establishment of trust with customers. To learn more about the definition and examples of AI use cases, click here.
Bonus: It takes a village.
The journey toward Responsible AI is transformative and attainable, but it relies on the collective backing of an ecosystem spanning policymakers, enterprises, and governments. Collaboration and the exchange of best practices are integral to the success of this endeavor.
Our Summit is a true testament to the imperative nature of these collaborative efforts, and we at Credo AI want to express our gratitude to every single one of you who participated in our Responsible AI Leadership Summit.
From Credo AI, we thank you for joining us, and we look forward to seeing you again next year!
- 📹 For those who couldn't attend the summit, we invite you to watch our panel recordings. Click here to watch all panels now on-demand!
- 💌 For those who want to keep up to date with the latest and greatest advancements in the industry, subscribe to our monthly newsletter!
- ☎️ For those ready to take the next step in their AI governance journey, talk to our expert team, and we will help you get where you need to go!
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.