Generative AI

Unveiling Transparency: First Step towards Responsible AI Disclosures

Rooted in our belief in transparency, we have taken the first steps toward establishing a standardized framework for evaluating the risk and trustworthiness of generative AI vendor tools, and we’re sharing evaluations of common generative AI tools based on that framework.

June 28, 2023
Author(s)
Susannah Shattuck
Contributor(s)
Eli Sherman, Ph.D.

Trust is the foundation that enables the widespread adoption of generative AI in the enterprise. At Credo AI, we are dedicated to constructing the bedrock of trust for AI, providing essential tools that allow vendors to demonstrate the safety, security, fairness, compliance, and human-centered nature of their AI systems. Simultaneously, we empower organizations embarking on their AI journey to assess the trustworthiness of vendor tools.

In recent months, the urgency surrounding generative AI adoption has surged, prompting significant demand for AI governance. Businesses now recognize the imperative of evaluating the risks associated with tools such as ChatGPT, GitHub Copilot, and the underlying models that drive these applications.

“At Credo AI we firmly believe that transparency is the cornerstone of safe AI use,” says Navrina Singh, our founder and CEO. “Embracing Responsible AI Disclosure as the norm across companies ensures collective AI success, promoting the development, procurement, and responsible use of systems that benefit both businesses and society at large.”

Rooted in this belief in transparency, we have taken the first steps toward establishing a standardized framework for evaluating the risk and trustworthiness of generative AI vendor tools, and we’re sharing evaluations of common generative AI tools based on that framework. These profiles equip organizations with a high-level understanding of the risks that vendors have proactively mitigated, as well as those that remain unresolved and necessitate further governance at the tool’s point of use. Take a look at our Generative AI Vendor Tool Risk Profiles here.

We remain committed to collaborating with AI vendors and the wider AI ecosystem, forging a path toward public Responsible AI postures, akin to the data privacy, cybersecurity, and climate disclosures the industry has embraced to build trust with its stakeholders. Together, we are building a future where transparency reigns, trust thrives, and responsible AI becomes the norm.

We welcome feedback on this first version of a standardized framework for evaluating and reporting on generative AI vendor tool risk. We will be convening stakeholders from industry, the public sector, and academia in a series of closed- and open-door discussions to further develop and refine Responsible AI postures so that they are useful to all members of the ecosystem. If you are interested in participating, please reach out to us.

Finally, if you are a vendor selling AI tools and would like to create your own Vendor Tool Risk Profile to start building trust with your customers and the broader market, we’d like to help.

Get in Touch!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.