In a moment when AI governance is coming to the forefront across the U.S. and globally, California's September 6, 2023 Executive Order and the subsequent passage of Assembly Bill 302, which sets inventory requirements for automated decision systems, represent a significant step toward advancing responsible AI through government use and public sector procurement. The California EO on AI is not just a set of instructions for state agencies; it is a blueprint for how governments and enterprises alike can navigate the evolving landscape of AI and ensure that AI tools are adopted responsibly.
Understanding California’s Executive Order N-12-23
The California Executive Order (EO) draws on principles from the White House Blueprint for an AI Bill of Rights and NIST's AI Risk Management Framework 1.0 (AI RMF), with the aim of reforming public sector procurement so that agencies consider the uses of, risks of, and training needed for the AI they purchase. Specifically, California's Department of Technology (CDT), the Office of Data and Innovation (ODI), and other key state agencies are tasked with examining how generative AI can best serve the state's needs while safeguarding against its risks. This work includes a mandate to issue guidelines, as soon as January 2024, on the procurement and uses of generative AI and the training it requires. The EO also mandates that every agency and department conduct an inventory of all its current high-risk uses of generative AI and submit it to the CDT, which will administer the inventory.
What about AB-302? How does it relate to the EO?
In October 2023, one month after the Executive Order was signed, AB-302 was approved. The bill added Section 11546.45.5 to the California Government Code, which became effective on January 1, 2024. Section 11546.45.5 expands the scope of the inventory requirement from high-risk uses of generative AI to high-risk automated decision systems.
“Automated decision system” means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.
California agencies will first need to understand whether their systems and use cases are considered “high-risk.” Section 11546.45.5 defines a “high-risk automated decision system” as:
“an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.”
This high-risk analysis will be critical, as it determines which systems are subject to the comprehensive inventory requirements. Among other things, the inventory must describe the measures in place to mitigate the system's risks, including cybersecurity risks and the risk of inaccurate, unfairly discriminatory, or biased decisions by the automated decision system.
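To make the screening concrete, below is a minimal, purely illustrative Python sketch of how an agency might record an inventory entry and flag a system as high-risk under the statutory definition. All class, field, and system names here are hypothetical, and this is not an official compliance tool; the only grounded inputs are the impact domains enumerated in Section 11546.45.5.

```python
from dataclasses import dataclass, field

# Impact domains enumerated in Section 11546.45.5's definition of a
# "high-risk automated decision system".
HIGH_RISK_DOMAINS = {
    "housing", "education", "employment",
    "credit", "health_care", "criminal_justice",
}

@dataclass
class ADSInventoryEntry:
    """Illustrative (hypothetical) inventory record for an automated decision system."""
    system_name: str
    purpose: str
    # Domains the system's outputs materially affect, e.g. {"credit"}.
    impact_domains: set[str] = field(default_factory=set)
    # Whether the system assists or replaces human discretionary decisions.
    informs_human_decision: bool = False
    # Mitigation measures the inventory must describe (cybersecurity,
    # accuracy, and bias/discrimination risk controls).
    mitigation_measures: list[str] = field(default_factory=list)

    def is_high_risk(self) -> bool:
        # High-risk: assists or replaces human discretionary decisions AND
        # materially impacts at least one of the statutorily listed domains.
        return self.informs_human_decision and bool(
            self.impact_domains & HIGH_RISK_DOMAINS
        )

# Example: a hypothetical tenant-screening score used by caseworkers.
entry = ADSInventoryEntry(
    system_name="TenantScreen v2",
    purpose="Scores rental applications to assist housing caseworkers",
    impact_domains={"housing"},
    informs_human_decision=True,
    mitigation_measures=["annual bias audit", "human review of denials"],
)
print(entry.is_high_risk())  # True -> subject to inventory requirements
```

A system flagged by a screen like this would then be subject to the full inventory requirements, including documentation of its mitigation measures.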
Deadlines and Deliverables
Both the California EO and Section 11546.45.5 set forth a series of deadlines and requirements that state agencies must complete, including:
- By January 2024: general guidelines for public sector procurement, including considerations for high-risk scenarios.
- By July 2024: guidelines for state agencies and departments to analyze the impact of GenAI on vulnerable communities and for training state government workers in the use of GenAI tools.
- By September 1, 2024: comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, any state agency.
- By January 2025: updates to the State's project approval, procurement, and contract terms, along with establishing criteria to evaluate the impact of GenAI on the state government workforce.
- By January 1, 2025: the first annual report of the comprehensive inventory.
Implications for Government Agencies and Enterprises
For government agencies, the California EO mandates a proactive approach: state agencies must procure, develop, and use AI responsibly. By setting clear deadlines for the development of guidelines and inventory requirements for each state agency, it ensures a state-wide approach to California's use of AI in procurement and in the deployment of public services. This action by the State will have ripple effects on the AI industry at large: the standards that enterprises contracting with state agencies must implement will become the standards all enterprises are expected to hold themselves to in order to remain trusted AI providers and marketplace competitors.
How can Credo AI help?
Credo AI's platform is designed to help organizations stay in control of AI risk. Credo AI enables both agencies and organizations to inventory all AI use cases, measure and manage risk for those use cases (including impacts on vulnerable communities), and apply risk-based controls through custom policy packs. Our platform also enables enterprises and agencies to evaluate third-party tools for risk and compliance and to generate transparency documentation, such as risk and compliance reports.
The Credo AI Platform enables enterprises selling AI products to federal and state government agencies to seamlessly surface government requirements at the point of ML development, generate evidence from existing tools and workflows, and automatically create governance artifacts. Vendors can also upload evidence through a secure portal, where federal and state government agencies can review it and evaluate whether requirements are met.
California's EO is more than a policy directive; it is a statement of intent to procure and use trustworthy AI, leading the way in enacting mandatory transparency requirements for entities using AI at the state level. Organizations that start tracking all AI and ML use cases with an AI Registry now can set themselves up for success as procurement requirements become the norm across the U.S. For both government agencies and enterprises, the message is clear: responsible AI governance needs to be at the center of AI development, use, and procurement.
By leveraging Credo AI's products, organizations can effectively navigate the procurement landscape and ensure that their use of AI aligns with evolving AI governance requirements.
- 💌 Subscribe to our monthly newsletter to keep up to date with Credo AI updates and advancements in the RAI industry.
- ☎️ Talk to our expert team to learn more about our product and how we can support you with AI governance, risk management, and compliance.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.