Credobility:
The only AI Governance Platform deeply integrated into the global policy and standards ecosystem.
Credo AI is a trusted partner for global policymakers, regulators, and standard setters. Our team includes staff with prior experience at the European Commission, technology trade associations, the U.S. Department of Commerce, and some of the largest enterprises in Europe, where we maintain on-the-ground staff. Credo AI’s Policy team engages with key stakeholders in the U.S. Congress, as well as mayors, governors, and state-level legislators.
Credo AI’s CEO and Founder, Navrina Singh, sits on the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden and the National AI Initiative Office. She is also a Young Global Leader with the World Economic Forum and an OECD AI expert serving on the OECD’s Expert Group on AI Risk and Accountability. Navrina previously served as an executive board member of the Mozilla Foundation and Mozilla AI, supporting its trustworthy AI charter.
A Glimpse into our Global Impact and Ecosystem:
The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Credo AI was present at the signing of the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on October 30, 2023.
UK Centre for Data Ethics and Innovation (CDEI) Algorithmic Impact Assessments Workshop
Credo AI was the only enterprise selected to present at the UK Government Centre for Data Ethics and Innovation workshop “Exploring Tools for Trustworthy AI: Impact Assessments,” hosted by the CDEI and the Ada Lovelace Institute in Whitehall, London.
In this workshop, Credo AI showcased our Responsible AI Governance Platform and our research into Algorithmic Impact Assessment prototypes for generative AI and human resources to an audience of global regulators and enterprises, including the UK Information Commissioner’s Office, the Ada Lovelace Institute, The Alan Turing Institute, the British Standards Institution (BSI), DeepMind, Mastercard, Northrop Grumman, the NHS AI Lab, and more.
The workshop gave regulators, legislators, standard-setting bodies, and affected enterprises an interactive opportunity to exchange best practices for algorithmic transparency reporting.
OECD AI Risk and Accountability Expert Working Group
Credo AI’s CEO Navrina Singh spoke at the OECD.AI AI Risk and Accountability Expert Working Group meeting at OECD Headquarters in Paris, France, as part of critical discussions on the impact of generative AI on AI policy worldwide, including the NIST AI Risk Management Framework, international standards on AI, and the European Union AI Act.
Through the OECD.AI Network of Experts workstream on AI risk, the OECD is engaging with partner organizations, including the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), the National Institute of Standards and Technology (NIST), the European standardization bodies CEN and CENELEC, the European Commission (EC), the Council of Europe (CoE), UNESCO, the EU-U.S. Trade and Technology Council (TTC), the Responsible AI Institute (RAII), and the World Economic Forum (WEF), to identify common guideposts for assessing AI risk and impact. The goal is to promote global consistency and thereby help organizations implement effective, accountable, and trustworthy AI systems.
EU AI Act Insights
Credo AI’s Head of Business Development, Giovanni Leoni, and Global Policy Director Evi Fuelle engaged with MEP Dragos Tudorache, co-rapporteur of the EU AI Act in the European Parliament, and his Chief of Staff, Dan Nechita, to discuss how businesses can best prepare for the European Union Artificial Intelligence Act and the future of general-purpose AI.
National Artificial Intelligence Initiative Panel hosted by SeedAI
Credo AI’s Global Policy Director Evi Fuelle represented Credo AI's views and research on Responsible AI Governance on a panel alongside Elham Tabassi (Chief of Staff, Information Technology Laboratory, NIST), Janet Haven (Executive Director, Data & Society Research Institute), Nicol Turner Lee (Senior Fellow, Center for Technology Innovation, The Brookings Institution), and Christine Curtis (Partnership on AI) at a public engagement event hosted by SeedAI on the National AI Research Resource (NAIRR) Task Force's final report.
Trade and Technology Council (TTC) Discussion at Embassy of France in the United States
Credo AI’s Global Policy Director, Evi Fuelle, joined a roundtable discussion hosted at the Embassy of France in the United States with fellow panelists Elham Tabassi, Peter Fatelnig, Michel Servoz, Jordan Crenshaw, and Cameron Kerry to discuss multi-stakeholder approaches to transatlantic cooperation on AI, including the EU-U.S. Trade and Technology Council (TTC) Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management. The panelists discussed processes for identifying key priorities for international AI standard setting through existing dialogues such as the TTC.
NIST AI RMF 1.0 Launch at U.S. Department of Commerce
After months of providing feedback as part of NIST’s robust industry engagement in developing the AI Risk Management Framework (AI RMF), Credo AI was the only startup invited to speak on a panel at the U.S. Department of Commerce’s official launch of the NIST AI RMF, in a conversation moderated by Elham Tabassi, Chief of Staff of the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST).
Policy Intelligence: Translating Policy and Standards to Code
Drawing on our experience and discussions with global policymakers and standard setters, Credo AI has developed deep “Policy Intelligence.” We integrate this expertise and the most up-to-date insights into our Responsible AI Governance Platform, combining a strong technical grasp of AI risks with extensive policy and regulatory knowledge.
Our Policy Intelligence feeds into our Policy Packs: technical requirements, developed in collaboration with our research team, that translate high-level policy concepts into checklists of actionable steps to ensure your AI systems are responsible, safe, and compliant.
Credo AI is trusted by those who build trust
It doesn’t end there
We share the knowledge we gather through expert content in our Resource Center, where you can learn more about topics such as the EU AI Act, the NIST AI Risk Management Framework, and how to embark on your AI governance journey.
Adopt AI with confidence today
The Responsible AI Governance Platform enables AI, data, and business teams to track, prioritize, and control AI projects to ensure AI remains profitable, compliant, and safe.