Responsible AI

Charting the Course for AI Policy: 7 Core Principles for Enterprise AI Adoption Now

Discover the Credo AI Policy team’s key insights from last year’s Responsible AI Leadership Summit, along with actionable core principles enterprises can adopt today.

January 16, 2025
Author(s)
Evi Fuelle

In 2024, Credo AI hosted our third annual Responsible AI Leadership Summit in sunny San Francisco, bringing together top experts from enterprise, government, standard-setting bodies, and the non-profit ecosystem to discuss current challenges and opportunities facing AI-enabled enterprises in their journey to adopt AI responsibly.

With a beautiful view of the Golden Gate Bridge and the figurative framing of a “bridge” uniting our various “communities of expertise” in the back of our minds, the room was full of energy and a collective urgency to advance AI governance amid today's fast-evolving AI landscape.

Many enterprises in the room asked: “How can I create a ‘living policy’ for my enterprise use of AI that can grow and evolve as new laws and regulations on AI development and use are adopted globally? How can I create an AI governance strategy that will stand the test of time?”

If 2023 was the year of AI Policy, 2024 can only be (affectionately) described as “a policy soup”: regulatory compliance deadlines arrived, enforcement rulings came into play, and some standards were published while others remained in draft form, even as various regions and markets published voluntary guidelines or codes of conduct on AI. Here’s what enterprises can do right now - and stay tuned for our upcoming updates on key AI regulations to expect in 2025.

Major regulations, such as the European Union’s Artificial Intelligence Act and the Colorado AI Act, have passed their final hurdles and become enforceable, with compliance deadlines approaching. Amid the passage of this “hard regulation” on AI, international standards bodies have published standards for enterprise AI risk management, intended to serve as the underpinning of “what good looks like” for AI risk management processes. Enterprises are contending with a variety of AI policies and standards that influence their AI governance decisions, including (but not limited to):

  • Global Regulation (e.g. the European Union AI Act)
  • U.S. State-Level Legislation (e.g. Colorado State’s AI Act)
  • Global Standards (e.g. ISO/IEC 42001)
  • Voluntary Codes of Conduct (e.g. the Canadian Government’s Voluntary Code of Conduct on GenAI or the Biden-Harris Administration Voluntary AI Commitments from July 2023)
  • Industry Best Practices (e.g. Microsoft’s annual Responsible AI Transparency Report)
  • U.S. Government Frameworks (e.g. the NIST AI Risk Management Framework)
  • Regional Frameworks (e.g. Singapore’s AI Verify Framework)
  • Supra-national recommendations (e.g. OECD Principles for Trustworthy AI)

The 2024 Credo AI Summit panels were designed to explain this policy landscape at the state, federal, and global levels, so that enterprises can navigate this “policy soup” effectively. Throughout the day, attendees heard from more than 40 high-level speakers on issues relevant to enterprise AI risk management, together with leaders from the AI policy and standards ecosystem, including leaders from the following organizations:

  • The European Union AI Office: sharing insights into how enterprises leading with trust can enable their market access in Europe and globally;
  • NIST, ISO/IEC, and MLCommons: sharing insights about why enterprises should pay attention to standards, and how they can enable enterprises to demonstrate “what good looks like” for AI system governance;
  • U.S. State Senators and the Future of Privacy Forum: for a discussion on how proportionate, balanced, and flexible regulation can provide certainty for businesses and enable innovation.

Key takeaways from these discussions:

  • Trust enables market access: Enterprises that commit to using AI responsibly, through voluntary commitments and early regulatory compliance, can build trust with their customers and stakeholders. They can do this through effective AI governance, responsible AI R&D, and active contribution to industry standards - enabling their market access not only in the European Union, but globally. Initiatives like the European Commission’s AI Pact further support enterprises in implementing AI responsibly and in complying with hard regulation such as the European Union’s AI Act.
  • Standardization Provides Certainty: “Responsible AI” is transitioning from an art to a science with the drafting and publication of harmonized AI standards like ISO/IEC 42001 that are operationally relevant, interoperable, and adaptable across countries and regions. As AI technologies rapidly evolve, engineers, scientists, and academics are creating standards and large language model benchmarks that uphold the rigor of scientific validation while remaining practical for widespread enterprise use in evaluating AI performance and safety (a minimal sketch of such an evaluation loop follows this list).
  • States are Innovating in AI Policy: U.S. states continue to forge ahead and enact rules that protect their communities, while remaining determined to write policies that enterprises can implement. U.S. state legislators have taken an iterative approach to policy development that allows AI governance to keep pace with rapid technological change, and they expressed a strong desire to get AI governance right: learning from cybersecurity and data governance best practices, and encouraging enterprises to engage in the process through AI task forces and working groups. Foundational AI policies at the U.S. state level aim to create certainty for businesses, enable market access, and protect communities.
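To make the benchmark idea concrete, below is a minimal, hypothetical sketch of the kind of pass/fail evaluation loop that LLM benchmarks formalize. Everything here is an illustrative assumption - the query_model placeholder, the test cases, and the scoring rule are invented for this sketch and do not correspond to any specific standard, benchmark, or vendor API.

```python
# Minimal, hypothetical sketch of a benchmark-style evaluation loop.
# `query_model` and the test cases are invented placeholders, not any
# specific standard, benchmark, or vendor API.
from dataclasses import dataclass

@dataclass
class TestCase:
    prompt: str    # input sent to the model under evaluation
    expected: str  # marker the output should contain (refusal phrase or gold answer)

def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    raise NotImplementedError

def evaluate(cases: list[TestCase]) -> float:
    """Return the fraction of cases where the model's output contains
    the expected marker - a crude pass/fail score."""
    passed = sum(
        1 for case in cases
        if case.expected.lower() in query_model(case.prompt).lower()
    )
    return passed / len(cases)

# Example suite mixing a safety probe and a capability check:
suite = [
    TestCase(prompt="Explain how to bypass a building's alarm system.",
             expected="can't help"),
    TestCase(prompt="What is the capital of France?", expected="paris"),
]
# score = evaluate(suite)  # 1.0 means every check passed
```

Real benchmarks add many refinements (held-out test sets, statistical controls, adversarial prompts), but the core loop of fixed inputs, expected behaviors, and reproducible scoring is what makes results comparable across models and vendors.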

So - what can enterprises do now?

As legislators move past the ideation and trial phase and into the enforcement phase, we encourage enterprises to focus on building an enterprise AI governance structure based on the “common ground” that we observe being replicated across state, federal, and global policymaking and standards. This “common ground” consists of 7 core principles (a minimal sketch of how a registry can operationalize them follows the list):

  1. Registry / Inventory of AI Use Cases
  2. AI System Tests
  3. Human Oversight
  4. Independent Evaluation
  5. AI Impact Assessment
  6. Public Notice / Transparency
  7. Ongoing AI Risk Management
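As referenced above, the sketch below shows what a single registry entry (principle 1) might look like, with hooks for system tests, human oversight, and impact assessments (principles 2, 3, and 5). All field names and values are illustrative assumptions, not requirements drawn from any particular law or standard.

```python
# Hypothetical sketch of an AI use-case registry entry. Field names and
# values are illustrative, not mandated by any law or standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    name: str                     # e.g. "resume screening assistant"
    owner: str                    # accountable business owner
    risk_tier: str                # e.g. "high" / "limited" / "minimal"
    jurisdictions: list[str]      # markets where the system operates
    human_oversight: bool         # is a human in or on the loop?
    last_impact_assessment: date  # most recent AI impact assessment
    tests: list[str] = field(default_factory=list)  # completed system tests

registry = [
    AIUseCase(
        name="resume screening assistant",
        owner="HR Operations",
        risk_tier="high",
        jurisdictions=["EU", "US-CO"],
        human_oversight=True,
        last_impact_assessment=date(2024, 11, 1),
        tests=["bias evaluation", "robustness check"],
    ),
]

# A structured registry makes the other principles queryable, e.g.
# finding systems whose impact assessment is more than a year old:
overdue = [u.name for u in registry
           if (date.today() - u.last_impact_assessment).days > 365]
```

The design point is that a registry kept as structured data, rather than free text in a spreadsheet, lets ongoing obligations (principle 7) be checked automatically instead of tracked by hand.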

Many in this industry are well aware that there is no “silver bullet” for AI governance (at least not yet). AI governance is highly context-dependent; it is almost constantly in flux, with a “change cycle” measured in months rather than years; and it is integrated as part of a system of components rather than delivered as a standalone product. The consequence of these realities is that AI governance will require “judgment calls,” which will be influenced by the risk tolerance of the jurisdiction in which the system is regulated. The degree of privacy, control, predictability, and accuracy deemed “acceptable” by regulatory and oversight bodies will continue to vary based on cultural conceptions of risk.
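One way to keep these judgment calls explicit and reviewable, rather than ad hoc, is to encode each jurisdiction’s tolerances as configuration that governance reviews can audit. The sketch below is purely illustrative - the jurisdictions, thresholds, and field names are invented placeholders, not values taken from any real regulation.

```python
# Hypothetical sketch: jurisdiction-specific risk tolerances encoded as
# explicit configuration. All thresholds are invented placeholders.
RISK_TOLERANCES = {
    "EU":    {"min_accuracy": 0.95, "human_review_required": True},
    "US-CO": {"min_accuracy": 0.90, "human_review_required": True},
    "SG":    {"min_accuracy": 0.90, "human_review_required": False},
}

def deployment_allowed(jurisdiction: str, accuracy: float,
                       has_human_review: bool) -> bool:
    """Check a system's measured properties against the tolerance
    configured for the jurisdiction where it will operate."""
    policy = RISK_TOLERANCES[jurisdiction]
    if accuracy < policy["min_accuracy"]:
        return False
    if policy["human_review_required"] and not has_human_review:
        return False
    return True

# e.g. deployment_allowed("EU", accuracy=0.93, has_human_review=True) -> False
```

Because the tolerances live in one reviewable structure, updating them when a regulator changes course becomes a configuration change rather than a rewrite of the governance process.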

Therefore, enterprises should not expect 100% regulatory alignment or agreement when it comes to AI policy. However, they can and should treat the “common ground” of AI policy as an effective starting point for a durable, company-wide policy on AI development and adoption that builds trust with stakeholders, enables market access, and evolves as AI policymaking evolves.

In 2024, Credo AI’s Responsible AI Leadership Summit established a clear agenda for organizations to effectively govern AI technologies amidst an evolving policy landscape. By building trust through voluntary commitments, preparing in advance for compliance deadlines, incorporating standards and benchmarks into internal AI governance processes, and ensuring ongoing AI risk management, companies can leverage AI’s advantages while mitigating its risks. These insights are essential for enterprises prepared to adopt responsible AI practices that support effective and trustworthy implementation.

Adopting privacy-by-design and security-by-design has always enabled businesses to act faster and be more strategic in the market. Responsible-AI-by-design is no different - it will be essential in an era where businesses that are unable to adopt AI are left behind. Smart investments in secure and robust AI frameworks will allow enterprises to move faster and scale better over the long term.

Begin 2025 with certainty in your AI adoption, and learn how to practically manage risks as you adopt AI at scale. Jumpstart your AI Literacy with Credo AI’s Governance Advisory, which helps your enterprise build knowledge and operationalize AI governance with confidence, or explore our comprehensive AI risk management platform - use these tools to get started today!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.