May 26, 2024
Cindy Lui, Windsor Law Student, JD 2025, & LTEC Lab Research Assistant
LTEC Lab members and Professor Okidegbe take a picture together. Lots of smiles!
On March 22, 2024, Windsor Law LTEC Lab, jointly with the Transnational Law and Racial Justice Network (TLRJN), welcomed Professor Ngozi Okidegbe, who presented on “Discredited Data: The Epistemic Origins of Algorithmic Discrimination”. The seminar brought together faculty, community members, and students from different disciplines to learn about the use of pretrial algorithms and the inequities they produce within the bail context.
Professor Okidegbe is the Moorman-Simon Interdisciplinary Professor of Law and an assistant professor of computing and data sciences at Boston University. Her work focuses on law and technology, evidence, criminal procedure, and racial justice. Her research examines how the use of predictive technologies in criminal justice impacts racially marginalized communities. In addition to her impressive academic research, Professor Okidegbe previously practiced labour law at a boutique firm in Toronto and clerked for both the Constitutional Court of South Africa and the Court of Appeal for Ontario.
What is Algorithmic Discrimination?
Professor Okidegbe began her presentation by defining algorithmic discrimination. The term refers to the way algorithmic systems produce biased predictions about members of marginalized communities, often labelling them as higher risk for negative outcomes. In doing so, these systems produce the very racial, gendered, and class inequities their implementation is supposed to reduce.
The problem with discussions surrounding algorithmic discrimination is that they focus primarily on the biased data used to construct predictive technologies but fail to engage with algorithmic construction, a major contributor to algorithmic discrimination. Algorithmic construction operates as a site of algorithmic discrimination, entrenching a way of knowing about data that is exclusionary and oppressive to marginalized communities facing intersectional discrimination, and it continues to exclude these groups from dominant knowledge production. Algorithmic discrimination also derives from the fact that algorithmic systems are built and trained exclusively with data from criminal legal institutions, known as carceral knowledge sources.
The Rise of Pretrial Algorithms
Every year, millions of charged individuals are subjected to pretrial hearings to determine whether they will be released before trial. In the United States, many jurisdictions have turned to pretrial algorithms as a popular response to the problem of mass incarceration. These tend to be risk assessment algorithms: they use information such as the likelihood of non-appearance and the risk of arrest for a new crime to produce a risk score, which is factored into a charged person’s eligibility for pretrial release.
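To make the mechanics concrete, here is a minimal illustrative sketch of how such a tool might work. It is not a reproduction of any jurisdiction’s actual instrument; every feature, weight, and threshold below is hypothetical.

```python
# Illustrative only: a toy pretrial risk score, not any real jurisdiction's tool.
# Feature names, weights, and thresholds below are hypothetical.

def toy_risk_score(prior_failures_to_appear: int,
                   prior_arrests: int,
                   age_at_arrest: int) -> int:
    """Combine a few inputs into a simple additive risk score."""
    score = 0
    score += 2 * min(prior_failures_to_appear, 3)   # non-appearance history
    score += 1 * min(prior_arrests, 5)              # arrest history
    score += 1 if age_at_arrest < 23 else 0         # youth treated as riskier
    return score

def recommendation(score: int) -> str:
    """Map the score to a coarse release recommendation."""
    if score <= 2:
        return "release on recognizance"
    if score <= 5:
        return "release with conditions"
    return "detain pending hearing"

if __name__ == "__main__":
    s = toy_risk_score(prior_failures_to_appear=1, prior_arrests=2, age_at_arrest=21)
    print(s, recommendation(s))   # 5 -> "release with conditions"
```

Even in this toy version, the inputs (prior arrests, prior failures to appear) come from court and police records, which is exactly the reliance on carceral knowledge sources that Professor Okidegbe’s critique targets.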
Professor Okidegbe identified three important shifts that result from the growing use of algorithms in the pretrial context:
There are technological shifts: advances in data collection, data processing, and computation methods have made algorithms faster, cheaper, and more readily available. For example, some public safety algorithms are offered to jurisdictions for free.
There are sociopolitical shifts in governance, privatization, resource rationing, and concerns about the efficiency, neutrality, and objectivity of government decision making. Algorithms come with the promise that they can make government less costly, less biased, and less arbitrary.
There are current reform movements around criminal administration. The increased use of pretrial algorithms is part of growing movements to use data to address racism, classism, and other forms of discrimination that have resulted in mass incarceration in the pretrial system. As a result, there are claims that technologies such as pretrial algorithms can reduce existing inequities within the pretrial system.
Algorithmic Discrimination and the Data Source Selection Problem
However, the increased use of pretrial algorithms has not reduced the overrepresentation of racially marginalized charged individuals in pretrial detention. Critics feared this outcome, as algorithms have tended to reproduce and entrench existing inequities in the United States. Algorithms tend to overestimate the riskiness of marginalized charged individuals while underestimating the riskiness of white charged persons. As a result, they disproportionately mislabel racially marginalized charged individuals as high risk for pretrial misconduct, while white charged individuals are disproportionately mislabelled as low risk.
Algorithmic discrimination is a problem affecting pretrial algorithms, and adopting jurisdictions are noticing that these tools facilitate existing racial and socioeconomic inequities. Professor Okidegbe provided three reasons why algorithmic discrimination may be occurring:
Bias: Algorithms reproduce existing inequities because they are built on biased data.
Data Selection Problem: Algorithmic systems are built exclusively with data from carceral knowledge sources.
Carceral Knowledge Sources: Carceral knowledge sources are formally connected to the political and social systems that facilitate control, punishment, and incarceration. Developers rely on four main carceral knowledge sources: the courts, the police, pretrial service agencies, and other criminal law actors. The problem with using carceral knowledge sources is that they routinely produce incomplete or inaccurate data. The data they produce is racialized, classed, and gendered because the criminal system is concentrated on marginalized communities.
Faculty, students from different disciplines, and community members join together to learn about algorithmic discrimination.
Limits of the Biased Data Diagnosis
There is broad consensus that algorithmic discrimination exists and that it is caused by the fact that algorithms are constructed and trained with biased data. A number of strategies have been designed to combat algorithmic discrimination, and Professor Okidegbe examined three major ones:
The Better Data Approach: This approach advocates for the use of more complete and representative data sets, focusing on obtaining better data from currently used knowledge sources.
The Technical Adjustment Approach: This approach requires adjusting the algorithm so it can overcome the fact that it was built with biased data. An example is the colorblind approach, which entails hiding data that is heavily correlated with race and class from the algorithm so that its predictions do not rely on those biased inputs (a simple sketch of this idea follows after this list).
The Auditing Approach: This approach advocates for measuring the inequitable impact that the use of an algorithm has in a jurisdiction. The idea is that this information can empower the jurisdiction to discontinue the algorithm’s use, or to adjust for the fact that it is maintaining existing inequities (an illustrative audit sketch also follows below).
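As an illustration of the technical adjustment idea, here is a minimal sketch of one way a “colorblind” adjustment could be implemented: removing a protected attribute, and any feature strongly correlated with it, before a model is trained. The column names, the 0.4 correlation threshold, and the demo data are all invented for illustration and are not drawn from the presentation.

```python
# Illustrative only: a "colorblind" technical adjustment, sketched as dropping
# features strongly correlated with a protected attribute before model fitting.
# Column names, threshold, and demo data are hypothetical.
import pandas as pd

def drop_race_correlated_features(df: pd.DataFrame,
                                  protected: str = "race_flag",
                                  threshold: float = 0.4) -> pd.DataFrame:
    """Remove the protected column and any feature whose absolute correlation
    with it exceeds the threshold."""
    corr = df.corr(numeric_only=True)[protected].abs()
    proxies = [c for c in corr.index if c != protected and corr[c] > threshold]
    return df.drop(columns=[protected] + proxies)

if __name__ == "__main__":
    demo = pd.DataFrame({
        "race_flag":   [1, 0, 1, 0, 1, 0],
        "zip_poverty": [0.9, 0.2, 0.8, 0.1, 0.7, 0.3],  # proxy: highly correlated
        "age":         [19, 34, 45, 22, 27, 31],
    })
    print(drop_race_correlated_features(demo).columns.tolist())  # ['age']
```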
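Similarly, the auditing approach can be pictured as a simple disparity measurement. The sketch below compares, across two hypothetical groups, how often people with no pretrial failure were nonetheless labelled high risk (a false positive rate); the records and group labels are invented for illustration, and a real audit would use a jurisdiction’s own outcomes data and a richer set of metrics.

```python
# Illustrative only: a minimal disparity audit, comparing false positive rates
# (labelled high risk although no pretrial failure occurred) across two groups.
# All records below are invented.
from typing import Dict, List

def false_positive_rate(records: List[Dict], group: str) -> float:
    """Share of people in `group` with no pretrial failure who were still
    labelled high risk by the tool."""
    negatives = [r for r in records
                 if r["group"] == group and not r["pretrial_failure"]]
    if not negatives:
        return float("nan")
    flagged = [r for r in negatives if r["labelled_high_risk"]]
    return len(flagged) / len(negatives)

if __name__ == "__main__":
    audit_data = [
        {"group": "A", "labelled_high_risk": True,  "pretrial_failure": False},
        {"group": "A", "labelled_high_risk": True,  "pretrial_failure": False},
        {"group": "A", "labelled_high_risk": False, "pretrial_failure": False},
        {"group": "B", "labelled_high_risk": False, "pretrial_failure": False},
        {"group": "B", "labelled_high_risk": True,  "pretrial_failure": False},
        {"group": "B", "labelled_high_risk": False, "pretrial_failure": False},
    ]
    for g in ("A", "B"):
        print(g, round(false_positive_rate(audit_data, g), 2))
    # A 0.67 vs B 0.33: the tool mislabels group A as high risk twice as often.
```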
However, Professor Okidegbe recognized limitations to these strategies, as they fail to contend with carceral knowledge sources, an essential driver of algorithmic discrimination.
Proposal
Professor Okidegbe proposed two steps to address algorithmic discrimination. The first step is to shift to non-carceral knowledge sources: knowledge sources that are not connected to the political and social institutions that control or facilitate punishment and incarceration. The second step is to tap into community knowledge sources: sources connected to the production, collection, and validation of knowledge by communities. These communities, which are often targeted by criminal administration, have traditionally been excluded and discredited, in part because the data they produce tends to be qualitative while algorithmic construction requires quantitative data. Some communities do produce quantitative as well as qualitative data, but their data continues to be excluded due to epistemic oppression. Shifting to community knowledge sources, which Professor Okidegbe focuses on primarily, has many potential benefits: data produced by community knowledge sources enables us to reconceptualize what we mean by public safety, which could facilitate an anti-racist framework for public safety. It also allows us to think about harm reduction as a way to lower the use of pretrial incarceration.
Finally, Professor Okidegbe concluded her presentation by acknowledging potential objections to her proposal. The first objection concerns accuracy: the worry that algorithms built on data from community knowledge sources will be less accurate than algorithms built on data from carceral knowledge sources. Professor Okidegbe responded by reminding the audience to rethink what we mean by “accuracy.” Accuracy needs to be understood in the context of what we want the pretrial system to do and how we understand public safety. Another objection is the democratizing objection: the fear that data derived from community knowledge sources will not be less biased, and may in fact be more biased. Professor Okidegbe stated that only by dismantling the dominance of carceral knowledge sources in algorithmic construction can we begin to think through a way of addressing algorithmic discrimination.
Comments and Questions with Professor Danardo Jones
Windsor Law Professor Jones followed Professor Okidegbe’s conclusion with a few comments and questions. He pointed to Professor Okidegbe’s argument about the overreliance on carceral knowledge sources and the need to break apart the idea of race as risk. He related his current research on the incorporation of race-based pre-sentence reports intended to break apart the race-risk matrix, and discussed how the existence of these reports has reinforced stereotypes against Black offenders. Professor Jones also appreciated the possibility of the colour-blind approach that Professor Okidegbe mentioned earlier in the presentation, but asked how we can move away from the race-risk matrix. Professor Okidegbe responded that an important step toward dismantling the race-risk matrix is not only to turn to non-carceral knowledge sources, but also to create an inclusive and democratic process of technological development. Only when we are able to do so can we think about the matrix of riskiness and race, as well as the pathologization of racially marginalized people.
Students speaking to Professor Okidegbe after her presentation.
The harm of algorithmic discrimination continues to threaten the ability of algorithmic systems to address inequities. If states and their governments are serious about addressing algorithmic discrimination, they have to look beyond the biases in the data currently used to build these algorithms and instead move away from carceral knowledge sources in order to reorient the criminal legal system. Ultimately, this would help decrease the harm to marginalized communities, but realizing that potential requires us to dismantle the knowledge sources that brought us to this issue in the first place.
LTEC Lab thanks Professor Okidegbe for sharing her passion, research, and insights at a time when the use of algorithms is becoming more prominent within our legal systems. We also thank Professor Jones for his insightful commentary, which contributed to a lively exchange with our presenter and seminar attendees.