Policy Submissions

Credo AI's Comments on NIST’s Plan for Global Engagement on AI Standards

Credo AI submitted comments on NIST RFC AI 100-5: A Plan for Global Engagement on AI Standards, highlighting ways to strengthen activities before, during, and after the creation of a formal standard.

June 10, 2024
Author(s)
Lucía Gamboa

As the conversation moves from principles to practice, the development of timely and practical standards to govern AI’s development and deployment is crucial. At Credo AI, we believe that refining these standards is essential for building a future where AI is not only powerful but also trustworthy. In our submission to the National Institute of Standards and Technology (NIST) on its Plan for Global Engagement on AI Standards, we outline our recommendations for prioritizing key topics, activities, and actions in AI standardization efforts, drawing on insights from our experience and industry expertise.

To bolster the standard creation process, NIST’s strategy should consider:

  • Before the creation of a standard: explore opportunities for enabling compliance with global regulations through mutual recognition agreements.
  • During the standard creation process: ensure that standards are actionable for organizations of all sizes, considering the diverse needs and inputs of stakeholders.
  • After a standard has been created: encourage mechanisms for voluntary and anonymized sharing of standards adoption to better evaluate AI standards' effectiveness and impact.

Credo AI’s submission also provides specific recommendations on which topics to prioritize for standardization work, including additional topics to address, and on which activities and actions to prioritize.

Topic Prioritization and Additions

One of the foundational aspects of AI standardization is terminology and taxonomy. At Credo AI, we advocate for further refinement in this area, particularly concerning terms like "algorithmic discrimination," "risk assessment," and "AI impact assessment." These terms are critical for organizations to understand and implement measures to ensure the fairness, safety, and accountability of their AI systems. By defining these terms clearly, organizations can develop and justify their approaches to different technical dimensions and applications of AI.

Moreover, transparency among AI actors is essential for fostering trust in AI systems. We propose standardizing disclosure mechanisms for model and data cards to facilitate seamless sharing of governance artifacts throughout the AI value chain. This ensures that downstream developers have the necessary information to perform context-specific evaluations, especially when substantial modifications are made to AI models.
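As a rough illustration of what a standardized, machine-readable disclosure could look like, the sketch below models a minimal model card as a Python dataclass. The field names (intended_use, known_limitations, evaluation_results, and so on) are hypothetical assumptions drawn from common model card practice, not a schema defined by NIST or Credo AI; a formal standard would specify the required fields and their semantics.

```python
# Hypothetical sketch of a machine-readable model card, for illustration only.
# Field names and structure are assumptions, not a NIST- or Credo AI-defined schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """A minimal governance artifact an upstream developer might share downstream."""
    model_name: str
    version: str
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_results: dict[str, float] = field(default_factory=dict)
    substantial_modifications: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the card so it can travel with the model along the AI value chain."""
        return json.dumps(asdict(self), indent=2)


# Example: an upstream developer publishes a card; a downstream deployer
# reads it before running context-specific evaluations.
card = ModelCard(
    model_name="example-classifier",
    version="1.2.0",
    intended_use="Ranking support tickets by urgency; not for decisions about individuals.",
    known_limitations=["Evaluated only on English-language text."],
    evaluation_results={"accuracy": 0.91, "false_positive_rate": 0.04},
)
print(card.to_json())
```

Even a shared minimal structure like this makes it easier to pass governance information down the value chain consistently; a standardized schema would go further by defining required fields, controlled vocabularies, and versioning rules.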

Additionally, we recommend introducing a standard for auditing AI systems’ risk management. Audits play a crucial role in ensuring meaningful accountability and assessing conformity with risk management standards. Standardized audit guidelines would enable consistent and reproducible practices, enhancing trust and transparency in AI systems.

Activity and Action Prioritization

Government agencies play a pivotal role in adopting and implementing AI standards effectively. At Credo AI, we have observed that government agencies sometimes have difficulty understanding when and how to adopt NIST’s RMF and/or the GenAI Profile. We therefore recommend that NIST prioritize developing resources or mechanisms to improve understanding and adoption of NIST standards by government agencies.

With respect to stakeholder engagement, meaningful participation requires openness and accessibility for all stakeholders, including smaller organizations. We advocate for workshops and stakeholder engagement exercises that increase AI standards literacy among smaller businesses. Collaboration with supranational organizations like the United Nations can also enhance global cooperation and alignment in AI standardization efforts.

Conclusion

Standards play a critical role in the development of safe, secure, and trustworthy AI. By prioritizing key topics, activities, and actions, we can ensure that AI standards remain relevant and effective in an ever-changing technological landscape. Credo AI is committed to partnering with NIST to advance AI governance and standardization work, ultimately driving progress toward a trusted AI ecosystem.

Read our full submission here.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.