
10/10 Vision: Seeing the Future of AI & its Governance Clearly

10 Panels, 10 Insights, and the Blueprint for AI Governance Innovation from the 2024 Credo AI Responsible AI Leadership Summit

October 15, 2024
Author(s)
Navrina Singh

Every leap in human progress is marked by the moment when vision meets responsibility, and ambition is built on trust. But business leaders often recognize these moments too late and are forced to recall or retrofit products to rebuild consumer trust and catch up with groundbreaking shifts like data science, social media, and data privacy.

The 2024 Credo AI Responsible AI Leadership Summit in San Francisco marked that pivotal moment, as businesses confront a continuous wave of AI innovation. In 2024 alone, we've witnessed AI evolve at an astonishing pace, shattering old paradigms and transforming business use cases. Faster, more powerful, and increasingly multi-modal, AI is reshaping industries with each micro-shift in its capabilities.

We are now in a race to the top where AI innovation and governance must align to chart the future of responsible AI. The critical question is no longer whether AI will shape our world, but how we, as business leaders, will shape AI.

Conversations at the Summit spanned from human values to frontier technologies, from accountability to safety. At the heart of these discussions was a unifying principle: trust can’t be an added feature—it’s the bedrock upon which AI must be built. 

A key theme emerged: trust is a powerful competitive advantage, and governance is the enabler. Strong AI governance builds the trust needed to fuel innovation, ensuring both safety and accelerated growth. The companies that win the AI race to the top will be those that embed governance to drive AI forward responsibly. Here are 10 powerful insights from the 10 exceptional panels at the summit, each offering a pathway for leaders to navigate the future of AI governance and innovation.

1) Trust Is the Engine of AI Innovation

Panel: ROI of Trusted AI

"Trust is no longer an accessory to AI—it’s the engine driving adoption."

From left to right: Shirin Ghaffary (Bloomberg), Paula Goldman (Salesforce), Jay Subrahmonia (Accenture), Dev Stahlkopf (Cisco)

On this panel, leaders from Accenture, Salesforce, and Cisco revealed a clear trend: companies that prioritize trust in their AI systems are seeing faster adoption and more stable long-term growth. Enterprises are already embedding trust as a core part of their AI strategy, recognizing that without governance, transparency, and accountability, innovation can only go so far. With clear guardrails, builders have certainty in what they are creating. This trust drives deeper customer loyalty and reduces risks that could disrupt AI commercialization. The era of trust as a competitive advantage is already here. The key takeaway for me is that organizations are no longer competing only on AI capabilities; they are now competing on AI credibility.

2) Boards Are Shifting from Firefighters to Builders, Placing AI Strategy and Governance at the Core

Panel: Racing to Responsible AI: Board Leadership and Oversight

"AI-prepared boards are not just stewards—they're the architects of a company's future in the age of AI."

Governance is no longer an afterthought for board members—it's at the forefront of strategic discussions. Board members are asking executives tough questions about AI governance, aiming to align governance frameworks with business objectives. Scott Frederick of Sands Capital summed it up: “The AI-prepared boards are shifting from oversight to insights. They don't just ask what could go wrong - they envision what must go right.” In the coming years, boards won’t be reacting to AI risks—they’ll be building proactive strategies and embedding AI governance at the core of their work. By prioritizing AI literacy and governance readiness, these boards will steer companies not just to navigate AI responsibly but to leverage it for innovation and growth.

From left to right: Mark Sullivan (Fast Company), Michelle Lee (Obsidian), Nicole Wong (Mozilla Foundation), Scott Frederick (Sands Capital)

This panel also discussed how investors are already watching closely, evaluating companies' AI risk and compliance in due diligence. AI governance is becoming a critical factor for securing funding. Boards with AI-savvy leadership will have the edge, positioning their companies for sustainable success in this new AI-driven world. As Michelle Lee pointedly noted, "We're no longer asking if we need AI governance, but how fast we can embed it into our DNA."

3) AI Governance Will Become the Backbone of Every AI-Driven Enterprise

Panel: Integrated Enterprise AI Governance


"AI governance isn’t a task for a single team—it’s woven into every process."

From left to right: George Hammond (Financial Times), Christina Montgomery (IBM), Navrina Singh (Credo AI)

AI governance can no longer be siloed or an afterthought. At the summit, it was clear that enterprises are embedding AI governance into every facet of their operations. AI governance is becoming a strategic priority for enterprises, not just a compliance measure. It must be deeply integrated into the AI tech stack, people, and processes to ensure that businesses are prepared for both risks and opportunities. Panelists emphasized the importance of changing the culture around governance technologies through education, incentives, and executive buy-in. The stunning backdrop of the Golden Gate Bridge at the summit served as a metaphor for how integrated governance acts as a competitive bridge—connecting technology with business impact. The panel and attendees reinforced that this multi-stakeholder approach is essential to building trust, driving innovation, and managing risk.

4) Generative AI Governance: A Blueprint for Enterprise Preparedness

Panel: Harnessing Generative AI: Balancing Innovation & Responsibility


“We are all running with scissors right now in this world of Generative AI." 

From left to right: Ed Ludlow (Bloomberg), Roger Roberts (McKinsey & Company), Greg Ulrich (Mastercard), Carla Eid (PepsiCo)

Generative AI is a powerful tool, but without proper governance, it risks driving enterprises into chaos rather than creativity. Leaders from McKinsey, Mastercard, and PepsiCo emphasized that governance is key to harnessing generative AI's potential while mitigating risks. Enterprises are particularly concerned with GenAI risks pertaining to information leakage, inaccuracy, and hallucinations, and these risks become especially urgent when the technology is customer-facing. Greg Ulrich’s quote above captures the moment: enterprises are navigating uncharted territory, which underscores the urgency of establishing guardrails to prevent chaos. Without governance, the rapid innovation we’re witnessing can easily spiral out of control. Structured frameworks for generative AI not only balance creativity with responsibility but also prepare enterprises for future AI capabilities, such as agentic AI. Governance, as the backbone of these efforts, will ensure AI drives responsible innovation and positions businesses to manage upcoming advancements while delivering real value.

5) Global Standards: Demonstrating Trust in Your AI Systems

Panel: Global Harmonization of AI Standards & Commitments

"Adopting AI standards isn't just about compliance for an organization—it's about gaining a competitive edge by ensuring trust, accelerating innovation, and unlocking global scalability."

This panel focused on the essential role standards play as the "lynchpin of certainty" for enterprises. Experts from ISO/IEC global standards development, the open-source AI community, and the U.S. National Institute of Standards and Technology (NIST) discussed how standards help business leaders determine “what good looks like” for AI systems.

From left to right: Evi Fuelle (Credo AI), Wael Diab (ISO/IEC JTC 1/SC 42 Artificial Intelligence), Elham Tabassi (NIST), Rebecca Weiss (MLCommons)

The panel emphasized that AI standards are essential for ensuring consistency, quality, and safety while enabling market access. These standards offer a reliable way to:

  • Establish common definitions for AI governance, such as risk management and human oversight.
  • Streamline compliance with AI regulatory requirements.
  • Enable interoperability across various regulatory environments.

Director Lucilla Sioli (EU AI Office)

As highlighted by Lucilla Sioli, Director of the EU AI Office, the passing of the EU AI Act has set a new global baseline for AI governance. Harmonized standards, such as ISO/IEC 42001, will be crucial in implementing the Act’s obligations. Enterprises that adopt frameworks like ISO/IEC 42001, the NIST AI Risk Management Framework, and emerging benchmarks like the MLCommons v0.5 standard will find it easier to scale responsibly across borders, transforming regulatory compliance into a competitive advantage. The top takeaway is the task now before us: bridging the gap between regional regulations and creating truly global standards that provide certainty while allowing AI technology to keep evolving.

6) Unified Governance Scales AI Value

Panel: Better Together: AI Ops + Governance


"Governance isn’t a compliance check—it’s a core business strategy."

From left to right: Kartikay Mehrotra (Bloomberg), Karthik Bharathy (Amazon Web Services), Heather Gentile (IBM), Susannah Shattuck (Credo AI)

As AI systems become more complex, businesses face the challenge of fragmented AI ops tools and disjointed workflows. As leaders from Amazon and IBM noted, while ops tools may provide metrics, they lack the judgment necessary to determine whether those metrics align with business objectives and responsible AI practices. This session emphasized the urgent need for new AI governance tooling—one that acts as a single pane of glass, giving businesses consistent oversight across their entire AI infrastructure. This unified governance layer integrates seamlessly with both legacy systems and cutting-edge AI tools, ensuring responsible AI aligned with business goals and policies. The key takeaway is that the ability to centralize AI governance and enforce trusted standards will allow organizations to manage risk, maintain accountability, and confidently scale their AI operations.

7) U.S. States Leading the Charge on Adaptive Policymaking

Panel: US State-Level AI Policy

"While nations debate, states innovate. Adaptive policy is where innovation and governance meet."​​

From left to right: Tatiana Rice (Future of Privacy Forum), Senator James Maroney, Senator Robert Rodriguez, Senator Scott Wiener

Across the U.S., states are becoming AI governance laboratories, proposing and passing state-level legislation on AI governance. Prominent AI policymakers Senators Maroney (Connecticut), Rodriguez (Colorado), and Wiener (California) showcased how state-level AI policies are driving both governance and innovation. They emphasized the need for multi-stakeholder engagement, urging AI experts and businesses to collaborate in shaping policy. The panelists called out the contradiction of companies promoting responsible AI while resisting regulation, reinforcing that responsible governance is essential. Adaptive policymaking is enabling companies to build safer, more transparent, and scalable AI systems. The key takeaway from the conversation is that regulation developed with the input of key stakeholders, when embraced, provides a strategic advantage, allowing innovation to thrive responsibly.

8) High-Stakes Domains Will Adopt 'Zero Tolerance' AI Governance

Panel: AI in Government

"In high stakes sectors like government, governance isn’t negotiable—it’s mission-critical."

From left to right: Rachael Myrow (NPR), Carl Hahn (Naysa), Graham Gilmer (Booz Allen), Rama G. Elluru (Special Competitive Studies Project)

In high-stakes government sectors such as defense, public safety, and healthcare, AI governance has become essential for ensuring mission success. Panelists from Northrop Grumman, Booz Allen Hamilton, and the Special Competitive Studies Project emphasized that the stakes are too high for anything less than flawless, accountable, and transparent AI systems. Graham Gilmer of Booz Allen Hamilton’s AI practice underscored this: "The amount of effort in the last 11 months at federal agencies since the AI executive order is staggering." Strict governance frameworks are now being implemented to guarantee that AI systems not only meet rigorous technical and ethical standards but also support national security and public trust. For high-stakes AI, governance starts at the contract stage, where companies must specify early on exactly how they are using AI. The main takeaway is that the strict governance of high-stakes AI is setting a new benchmark for other industries as the need for trust and safety reaches new heights.

9) Governance Will Define the Future of AI Power

Panel: Securing the AI Frontier

"Open or closed source, ungoverned AI is a liability. Well-governed AI is an asset."

From left to right: Kate Rooney (CNBC), Jonathan Porat (State of California), Daniel Kluttz (Microsoft), Saurabh Baji (Cohere), Ella Irwin (Meta)

The governance of both open- and closed-source AI models is emerging as the linchpin that will determine whether AI becomes a force for widespread societal benefit or a tool for unchecked influence. On this panel, AI leaders from Microsoft, Meta, Cohere, and the State of California discussed the opportunities and risks of open source. Open-source models, with their potential for democratizing AI, require stringent governance to prevent misuse, while closed-source models, often guarded by corporate interests, need transparent accountability to ensure they serve the public good. The key takeaway is that the balance between innovation and ethical governance in these models will shape the future of AI—and only those systems governed with clarity, fairness, and alignment to societal values will build trust and ensure sustainability.

10) Human-Centered Governance Will Define the Future of AI

Fireside: Navigating the AI Frontier with Fei-Fei Li

“In order to ensure AI is infused with human-centered values, we first need to want it.”

As AI evolves, its future will be defined not by its technological power, but by our commitment to human-centered governance. In the fireside chat with Fei-Fei Li, we explored the essential role of human values in shaping AI’s trajectory. Fei-Fei reminded us that “there is nothing artificial about AI—it is made by people, used by people, and it must be governed by people.” This insight drives the prediction that AI’s greatest value, especially in agentic systems, will be realized only when guided by governance frameworks that align with human values and societal needs. The real potential of AI lies in its ability to uplift humanity, but this can only happen if we deliberately choose to infuse it with our values and build the necessary guardrails to ensure it serves the greater good.

The Future is Governed

For too long, we've been stuck in what I call "governance theater"—where responsible AI is more performance than practice. That era is over. At the Credo AI Race to the Top Responsible AI Leadership Summit, industry giants like Accenture, Amazon, Booz Allen, Cisco, Cohere, IBM, Mastercard, McKinsey, Meta, Microsoft, MLCommons, Mozilla, Northrop Grumman, PepsiCo, and Salesforce, alongside thought leaders from ISO and NIST and U.S. state senators, didn't just discuss compliance—they explored the integral role of governance in fostering innovation, ensuring survival, and securing a competitive edge in an AI-first world.

The pivotal question posed at the 2024 Credo AI Responsible AI Leadership Summit was not whether AI will revolutionize your industry or society, but whether you are prepared to guide and capitalize on its transformative power, safely and responsibly.

In this race to the top, will you sit on the sidelines or write the playbook? The future of AI is unfolding before us, and it will be governed. Make sure you are not just part of it but actively shaping it.

Re-live the 2024 Responsible AI Leadership Summit →

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.