TY - JOUR
T1 - Less Discriminatory Algorithms
AU - Black, Emily
AU - Koepke, John Logan
AU - Kim, Pauline T.
AU - Barocas, Solon
AU - Hsu, Mingwei
N1 - Publisher Copyright:
© 2024 Georgetown Law Journal Association. All rights reserved.
PY - 2024/10
Y1 - 2024/10
AB - In discussions about algorithms and discrimination, it is often assumed that machine learning techniques will identify a unique solution to any given prediction problem, such that any attempt to develop less discriminatory models will inevitably entail a tradeoff with accuracy. Contrary to this conventional wisdom, however, computer science has established that multiple models with equivalent performance exist for a given prediction problem. This phenomenon, termed model multiplicity, suggests that when an algorithmic system displays a disparate impact, there almost always exists a less discriminatory algorithm (LDA) that performs equally well. But without dedicated exploration, developers are unlikely to discover potential LDAs. These observations have profound ramifications for the legal and policy response to discriminatory algorithms. Because the overarching purpose of our civil rights laws is to remove arbitrary barriers to full participation by marginalized groups in the nation’s economic life, the law should place a duty to search for LDAs on entities that develop and deploy predictive models in domains covered by civil rights laws, like housing, employment, and credit. The law should recognize this duty in at least two specific ways. First, under disparate impact doctrine, a defendant’s burden of justifying a model with discriminatory effects should include showing that it made a reasonable search for LDAs before implementing the model. Second, new regulatory frameworks for the governance of algorithms should include a requirement that entities search for and implement LDAs as part of the model building process.
UR - https://www.scopus.com/pages/publications/105005984041
M3 - Article
AN - SCOPUS:105005984041
SN - 0016-8092
VL - 113
SP - 53
EP - 120
JO - Georgetown Law Journal
JF - Georgetown Law Journal
IS - 1
ER -