In recent weeks, there has been a significant increase in the number of AI-related state bills introduced across the United States.
This reflects growing pressure to address AI and automated decision-making systems used in government and the private sector – and the potential risks they present.
States have taken different approaches to fill the current gaps in regulation, including the development of task forces and the allocation of funding for research.
Additionally, a number of bills have proposed measures aimed at increasing transparency around AI systems, including requirements for algorithmic impact assessments and registries/inventories of AI systems used.
These transparency measures are growing in popularity as a regulatory tool to ensure that AI systems are trustworthy and safe, affecting developers and deployers of AI products in both the private and public sectors.
AI Transparency Bills at a Glance
Impact Assessments
Much as the European Union’s General Data Protection Regulation (GDPR) mandates data protection impact assessments (DPIAs) to address the risks associated with data collection and processing, regulators are proposing algorithmic or AI impact assessments to mitigate potential biases, discrimination, and other adverse consequences of AI and algorithmic systems.
For example, California AB 331 requires developers and deployers of an automated decision tool to complete and document an impact assessment that includes, at a minimum, the following elements:
1. A statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts.
2. A description of the automated decision tool’s outputs and how they are used to make, or be a controlling factor in making, a consequential decision.
3. A summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision.
4. An analysis of potential adverse impacts on the basis of sex, race, or ethnicity arising from the use of the automated decision tool.
5. A description of the measures taken by the developer to mitigate any known risk of algorithmic discrimination arising from the use of the automated decision tool.
6. A description of how the automated decision tool can be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision.
These impact assessments must be provided to the California Civil Rights Department, with fines of up to $10,000 for failing to produce them.
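For teams preparing to operationalize requirements like these, the minimum elements above map naturally onto a structured internal record. The following is a minimal sketch in Python, assuming a hypothetical ImpactAssessment class whose name and fields are our own illustration rather than anything AB 331 prescribes:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImpactAssessment:
    """Hypothetical record mirroring AB 331's minimum elements.
    Field names are illustrative, not statutory."""
    purpose: str                    # (1) purpose, intended benefits, uses, deployment contexts
    outputs_and_use: str            # (2) outputs and how they factor into consequential decisions
    data_collected: List[str]       # (3) types of data collected from natural persons
    adverse_impact_analysis: str    # (4) analysis of potential adverse impacts (sex, race, ethnicity)
    mitigation_measures: List[str]  # (5) measures taken to mitigate known discrimination risks
    human_oversight: str            # (6) how a natural person can use or monitor the tool

def is_complete(assessment: ImpactAssessment) -> bool:
    """Simple completeness check before the assessment is documented or filed."""
    return all([
        assessment.purpose,
        assessment.outputs_and_use,
        assessment.data_collected,
        assessment.adverse_impact_analysis,
        assessment.mitigation_measures,
        assessment.human_oversight,
    ])
```

A structure like this makes it straightforward to confirm that no required element is missing before an assessment is produced to a regulator.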
Additionally, New York’s proposed Digital Fairness Act would require any state or nonprofit entity using an automated decision system to conduct and publicly release an impact assessment that includes:
- A detailed description of the automated decision system, its design, its training, its data, and its purpose;
- An assessment of the relative benefits and costs of the automated decision system in light of its purpose, taking into account relevant factors, including data minimization practices, the duration for which personal information and the results of the automated decision system are stored, what information about the automated decision system is available to the public, and the recipients of the results of the automated decision system;
- An assessment of the risk of harm posed by the automated decision system and the risk that such automated decision system may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting individuals; and
- The measures the state agency will employ to minimize the risks, including technological and physical safeguards.
As AI technologies continue to develop and become more integrated into various sectors, it is expected that more governments and regulatory bodies will introduce requirements for algorithmic impact assessments or similar transparency reports.
AI Inventories
Following in the footsteps of the White House’s Executive Order (EO) 13960, states such as Connecticut, Pennsylvania, Texas, and Washington have proposed requiring state agencies to inventory their AI use cases – including systems procured from private companies.
Vermont already passed such a law last year, requiring an inventory of automated decision systems used or procured by the state. For each automated decision system, the inventory must include, among other things:
- The automated decision system’s name and vendor;
- A description of the automated decision system’s general capabilities, including: (A) reasonably foreseeable capabilities outside the scope of the agency’s proposed use; and (B) whether the automated decision system is or may be used to make independent decisions, and the impact of those decisions on Vermont residents;
- The type or types of data inputs that the technology uses; how that data is generated, collected, and processed; and the type or types of data the automated decision system is reasonably likely to generate;
- Whether the automated decision system has been tested for bias by an independent third party, has a known bias, or is untested for bias; and
- A description of the purpose and proposed use of the automated decision system, including: (A) what decision or decisions it will be used to make or support; (B) whether it is an automated final decision system or an automated support decision system; and (C) its intended benefits, including any data or research relevant to those outcomes.
While the Vermont law is focused only on automated decision systems used by the state, Pennsylvania has proposed HB 49, which would create a similar registry of “businesses operating artificial intelligence systems” in the state detailing basic information about the business and “the intent of the software being utilized.”
With the emergence of these registries, it is increasingly important for businesses to maintain an accurate and comprehensive inventory of their AI and automated decision-making systems.
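As a rough sketch of what that might look like in practice, the Python example below models a single inventory entry along the lines of Vermont’s required elements and exports it as machine-readable JSON; the InventoryEntry class, its fields, and the example values are all hypothetical, not drawn from any statute or actual filing:

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class InventoryEntry:
    """Hypothetical inventory record loosely modeled on Vermont's
    required elements; field names are illustrative, not statutory."""
    name: str                    # system name
    vendor: str                  # system vendor
    capabilities: str            # general capabilities, incl. foreseeable out-of-scope uses
    independent_decisions: bool  # is it used, or usable, for independent decision-making?
    data_inputs: List[str]       # types of data inputs and how they are generated/processed
    bias_testing: str            # e.g., "third-party tested", "known bias", or "untested"
    purpose: str                 # decisions it will make or support, and intended benefits

# Illustrative entry; every value here is invented for the example.
inventory = [
    InventoryEntry(
        name="Benefits Eligibility Screener",
        vendor="Acme Analytics",
        capabilities="Scores applications to prioritize manual review",
        independent_decisions=False,
        data_inputs=["application forms", "income records"],
        bias_testing="untested",
        purpose="Support caseworkers in prioritizing applications",
    ),
]

# Export the inventory as JSON, e.g., for a public registry or internal audit.
print(json.dumps([asdict(entry) for entry in inventory], indent=2))
```

Keeping the inventory in a structured format like this makes it easier to update as systems change and to answer registry or disclosure requests as they arise.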
Are Your AI Systems Compliant?
Emerging transparency requirements will affect how governments and businesses develop and deploy AI and automated decision-making systems. Preparing for them now is key to using AI responsibly and ensuring compliance. Learn more about how Credo AI can support your organization by scheduling a demo today at demo@credo.ai.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.