Connecting AI advances to AI governance, one paper at a time.
Paper: Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action
Welcome back to Paper Trail, Credo AI’s series where we break down complex papers impacting AI governance into understandable bits. In our second post, we explore a law article that discusses the legality of race-aware algorithms in the context of algorithmic fairness.
Disclaimer: this post summarizes the opinions expressed by law professor Dr. Pauline Kim in her paper on Race-Aware Algorithms. It does not represent legal advice from Credo AI.
Summary
This paper argues against using the affirmative action doctrine as the default legal framework for defending race-conscious algorithms in fairness and bias mitigation, and seeks to clarify which legally permissible steps designers of predictive algorithms can take to reduce disparate impacts on historically subordinated groups.
Key Takeaways
- While recent scholarship has called for “algorithmic affirmative action” as a well-intentioned defense of using race-aware algorithms in bias mitigation, invoking the affirmative action doctrine in such contexts can sometimes do more harm than good.
- The decision to defend race-conscious algorithms under affirmative action implies that an algorithm is discriminatory, and may subject it to stricter legal scrutiny. Yet many applications of race-conscious algorithms are non-discriminatory in nature.
- Understanding which algorithmic de-biasing strategies are non-discriminatory, and thus, legally permissible without special justification, is increasingly important in light of the recent Supreme Court decision.
- Dealing with dataset limitations and reformulating optimization problems to be more equitable are generally legally permissible and do not require an affirmative action defense. However, certain strategies to increase demographic parity may trigger increased legal scrutiny.
Paper Summary
Often, strategies for mitigating algorithmic bias require system designers to be aware of protected characteristics like race. Recent scholarship calling for “algorithmic affirmative action” has sought to defend the use of race-aware algorithms and de-biasing strategies to increase fairness under the doctrine of affirmative action. However, the decision to defend race-conscious algorithms under affirmative action implicitly assumes discrimination, which in turn triggers heightened legal scrutiny.
What is often overlooked is that many forms of race-consciousness (e.g. the collection of racial data in the Census, or public health efforts during COVID-19 to address racial disparities in vaccine access) do not violate anti-discrimination law, and are thus permissible without invoking affirmative action. In a discussion of algorithmic de-biasing strategies, and whether they constitute disparate treatment requiring special justification, Dr. Kim reaches the following conclusions:
Dealing with dataset limitations is generally legally permissible. Biased training data can distort a model’s outputs in ways that disadvantage marginalized groups. Attempts to address dataset limitations (e.g. collecting additional data from certain groups, removing features with unreliable data, rejecting a dataset outright) may be race-conscious, but should not raise legal concerns.
Example: “Suppose [AI system] designers discovered that supervisor evaluations included in the training data consistently downgraded Black employees relative to others even though they demonstrated the same level of productivity. The decision to remove that feature when training the algorithm is race-conscious but does not discriminate against white employees.”
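To make this concrete, below is a minimal Python sketch (using pandas and scikit-learn) of what this kind of fix can look like in practice. The dataset, column names, and model are hypothetical illustrations of ours, not drawn from the paper; the point is that the race-conscious step is the decision to drop the unreliable feature, while the resulting model is trained on no racial data at all.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical promotion data. Suppose the "supervisor_rating" column was found
# to systematically undervalue one group's work; the fix is simply not to train
# on it. That decision is race-conscious, but the model itself sees no racial
# data and treats no individual differently because of race.
df = pd.DataFrame({
    "tenure_years": [2, 5, 3, 7, 1, 4],
    "tasks_completed": [120, 300, 180, 410, 90, 260],
    "supervisor_rating": [3, 4, 2, 5, 2, 3],  # flagged as unreliably scored
    "promoted": [0, 1, 0, 1, 0, 1],
})

# Drop the biased feature (and the target) before training.
features = df.drop(columns=["promoted", "supervisor_rating"])
model = LogisticRegression().fit(features, df["promoted"])
```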
Problem reformulation to increase equity is generally legally permissible. Predictive systems are often built by translating abstract objectives (e.g. hiring the best candidate) into algorithmic problems with easily measurable target variables, as in automated resume screening. Designers who find that their system is biased may change how the problem is operationalized, optimizing for different target variables that maintain system accuracy while increasing equity. Although choosing new target variables may require race-awareness, this strategy is generally legally permissible because it does not require making decisions about individuals.
Example: In a well-known case, a health care algorithm was built to predict which patients were at high risk in order to allocate additional medical resources to them. The problem was operationalized using medical expenditures as a proxy for health risk; this turned out to be a flawed choice. For economic, structural, and cultural reasons, healthcare consumption does not correlate evenly with health risk across demographic groups. Reformulating the model to predict chronic health conditions instead reduced the racial disparity.
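Here is a hedged sketch of what this kind of problem reformulation can look like in code. The records, feature names, and the two target columns below are hypothetical stand-ins for the case described above; the key point is that only the target variable changes, while the features and model stay the same and no decision is made about any individual on the basis of race.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical patient records. The original formulation predicts high future
# spending ("expenditure_high"); the reformulated one predicts a direct measure
# of health need ("chronic_conditions_high"). Only the target variable changes.
df = pd.DataFrame({
    "age": [65, 54, 71, 48, 80, 59],
    "prior_visits": [4, 1, 7, 2, 9, 3],
    "expenditure_high": [1, 0, 1, 0, 1, 0],         # cost as a proxy for need (flawed)
    "chronic_conditions_high": [1, 1, 1, 0, 1, 1],  # reformulated target
})

X = df[["age", "prior_visits"]]

cost_model = LogisticRegression().fit(X, df["expenditure_high"])         # original formulation
need_model = LogisticRegression().fit(X, df["chronic_conditions_high"])  # reformulated objective
```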
Some strategies to increase demographic parity may trigger increased legal scrutiny. Attempts to ensure that demographic groups receive positive outcomes in proportion to their actual representation sometimes rely on strategies that will draw increased legal scrutiny.
Example: Ranking people according to a predicted target and then choosing a fixed percentage of the top scorers from each racial group to ensure representative distribution may increase demographic parity but violate anti-discrimination law. In hiring, such a strategy could be interpreted as "race-norming," which Title VII specifically prohibits.
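For illustration only, the sketch below shows the mechanics of such a within-group quota (all identifiers, groups, and scores are made up). Per the paper’s analysis, this is an example of a strategy that may draw heightened scrutiny, not a recommended practice.

```python
import pandas as pd

# Hypothetical scored candidate pool.
candidates = pd.DataFrame({
    "candidate_id": range(8),
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "score": [0.91, 0.85, 0.62, 0.40, 0.78, 0.71, 0.55, 0.30],
})

# Select the top 50% of scorers *within each group* (2 of each 4-person group).
# This equalizes selection rates across groups, but it ranks people within
# racial categories, the kind of practice that could be read as "race-norming."
selected = (
    candidates.sort_values("score", ascending=False)
    .groupby("group", group_keys=False)
    .head(2)
)
print(selected.sort_values("score", ascending=False))
```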
Select tactics like Disparate Learning Processes (DLPs) and using race at prediction time fall into an area of greater legal uncertainty. DLPs are defined as strategies “that use racial information during training, but do not allow the model to access race when making predictions.” Some make the intuitive assumption that because DLPs do not use race at prediction time, they are more likely to be legally compliant than strategies that do use race at prediction time. However, as the author argues, this is not necessarily the case: “some DLPs may constitute disparate treatment, while some uses of race during prediction may be completely lawful.”
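As one concrete, purely illustrative example of a DLP, the sketch below uses group membership only to compute training-time sample weights via a simple reweighing scheme; the paper does not prescribe this particular method, and the data and column names are hypothetical. The fitted model never receives race as an input, so predictions are made without it, yet whether such a process counts as disparate treatment remains, as the author notes, legally uncertain.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring data. Group membership is used ONLY to compute
# training-time sample weights; the fitted model never sees the "group"
# column, so race is absent at prediction time.
df = pd.DataFrame({
    "skill_score": [0.9, 0.7, 0.4, 0.8, 0.6, 0.3, 0.75, 0.5],
    "experience_years": [6, 4, 2, 5, 3, 1, 4, 2],
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Reweighing: weight each row by P(group) * P(label) / P(group, label), so that
# the label looks statistically independent of group during training.
p_group = df["group"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["group", "hired"]).size() / len(df)

weights = df.apply(
    lambda row: p_group[row["group"]] * p_label[row["hired"]]
    / p_joint[(row["group"], row["hired"])],
    axis=1,
)

X = df[["skill_score", "experience_years"]]  # race is not a model feature
model = LogisticRegression().fit(X, df["hired"], sample_weight=weights)

# At prediction time, the model only needs race-free features.
print(model.predict_proba(X.head(2)))
```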
Why does this matter?
In late June 2023, the United States Supreme Court ruled that affirmative action in university admissions was unlawful. Although the ruling was limited to higher education, some suspect that the decision may have a chilling effect on pro-diversity efforts elsewhere (e.g. corporate DEI initiatives, anti-bias legislation). The Court’s ruling makes it increasingly important to distinguish non-discriminatory algorithmic de-biasing strategies from methods that more closely resemble affirmative action in education.
Although this paper was written in 2022, the author anticipated what a ruling against affirmative action might mean for the legality of race-aware algorithms, writing that “when race is considered in the model-building process in order to de-bias algorithms…the ways in which race is taken into account and the effects of doing so are [often] quite distinct from treating race as a ‘plus’ factor in a college application file.” As a result, “there will likely remain room for race-conscious efforts to remove bias from algorithms.” However, as discussed above, the use of certain strategies, particularly those in which racial classification is applied mechanistically or to systematically benefit one group of individuals over others, requires extra caution.
How is it relevant to AI Governance?
AI governance involves identifying risks and applying the relevant risk-mitigating controls based on deployment context. This paper is particularly concerned with the field of AI fairness and the associated ethical risks. Although there are many factors to take into consideration when addressing bias risks (e.g. solution efficacy), legal nuances are critically important and often overlooked.
Ultimately, it will come down to policymakers and the courts to further define the legality of using race in algorithmic contexts. However, in the meantime, we encourage deployers of AI systems to remain cognizant of the legal implications of the de-biasing strategies they use.
At Credo AI, we are committed to working with companies to build AI that is fair, transparent, and accountable. As you navigate the challenges of building safe and compliant technical systems, we are here to serve as a partner in your AI governance process.
About Credo AI
Credo AI is on a mission to empower enterprises to responsibly build, adopt, procure and use AI at scale. Credo AI’s cutting-edge AI governance platform automates AI oversight and risk management while enabling compliance with emerging global regulations and standards like the EU AI Act, NIST, and ISO. Credo AI keeps humans in control of AI, for better business and society. Founded in 2020, Credo AI has been recognized as a CBInsights AI 100, Technology Pioneer by the World Economic Forum, Fast Company’s Next Big Thing in Tech, and a top Intelligent App 40 by Madrona, Goldman Sachs, Microsoft and Pitchbook.
Ready to unleash the power of AI through governance? Talk to us today!
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.