
Can Artificial Intelligence Create a Fairer Justice System?


Illustration by Matthew Shadbolt via Flickr

As technology and its uses continue to advance in the criminal justice system, researchers are being tasked with analyzing not only its effectiveness but, perhaps surprisingly, its perceived fairness.

An increasing number of courtrooms and judges are using Artificial Intelligence (AI) to help determine bail, parole, and sentencing decisions, and to assess a defendant's risk.

But are the technology's measurements accurate, and fair, in legal terms?

Doaa Abu-Elyounes, a doctoral student at Harvard Law School affiliated with the Berkman Klein Center for Internet & Society at Harvard University, has been studying how algorithms affect the criminal justice system.

In a recent paper, Abu-Elyounes discusses the legal limitations and notions of algorithmic “fairness” through a legal lens.

Abu-Elyounes begins by analyzing COMPAS, a proprietary actuarial risk and needs assessment tool used by criminal justice agencies in different stages of the system, such as pretrial, sentencing, jail placement, probation, and parole.

COMPAS was developed by the for-profit company Northpointe, today owned by Equivant.

The program uses a “combination of dynamic and static factors for each defendant,” and after analyzing data, “COMPAS provides a risk score on a scale of low to high, and this score represents the likelihood that the defendant would recidivate,” Abu-Elyounes wrote in her paper.
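To make that description concrete, the sketch below shows the general shape of such an actuarial calculation. The factors, weights, and cutoffs are invented purely for illustration; COMPAS's actual inputs and model are proprietary and are not published.

```python
# Toy illustration only: COMPAS's real factors, weights, and scoring are
# proprietary. This shows the general shape of an actuarial tool that
# combines static and dynamic factors into a low/medium/high label.

# Hypothetical static factors (history) and dynamic factors (current
# circumstances), each already normalized to a 0-1 scale.
static_factors = {"prior_arrests": 0.6, "age_at_first_offense": 0.4}
dynamic_factors = {"unstable_employment": 0.7, "substance_use": 0.3}

# Hypothetical weights; a real tool would fit these to historical data.
weights = {
    "prior_arrests": 0.35,
    "age_at_first_offense": 0.25,
    "unstable_employment": 0.25,
    "substance_use": 0.15,
}

def risk_label(static, dynamic, weights):
    """Combine the factors into a weighted score and bucket it into a label."""
    factors = {**static, **dynamic}
    score = sum(weights[name] * value for name, value in factors.items())
    if score < 0.33:
        return "low", score
    if score < 0.66:
        return "medium", score
    return "high", score

print(risk_label(static_factors, dynamic_factors, weights))
# -> ('medium', 0.53) for these made-up numbers
```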

Judges and other authorities typically give the computer-produced score considerable deference, leading them to agree with its output. Therefore, Abu-Elyounes explains, the potential for a misguided outcome is elevated if the program’s data isn’t fully accurate.

A lengthy investigation by ProPublica and the Washington Post concluded that “COMPAS is racially biased because it falsely labeled black defendants as future criminals twice as much as it did so for white defendants,” Abu-Elyounes wrote.

“While among black defendants, 42 percent of those who were released from jail and did not commit any future crimes were wrongly labeled high-risk, among white defendants, the algorithm made the same mistake in only 22 percent of cases,” she noted.

Moreover, when addressing an implicit racial bias behind the COMPAS system, ProPublica also identified false-negative errors, “meaning that COMPAS falsely flagged white defendants as low-risk (although the exact percentage was not mentioned in the article),” said Abu-Elyounes.
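The error rates ProPublica compared can be illustrated with a short calculation. The records below are made up solely to show how a false-positive rate (high-risk labels among people who did not reoffend) and a false-negative rate (low-risk labels among people who did reoffend) are computed per group; they are not ProPublica's data.

```python
# Illustrative calculation of the error rates compared in the ProPublica
# analysis, using made-up records: (group, labeled_high_risk, reoffended).
records = [
    ("black", True, False), ("black", False, False), ("black", True, True),
    ("black", True, False), ("white", False, False), ("white", False, True),
    ("white", True, True), ("white", False, False),
]

def error_rates(records, group):
    rows = [r for r in records if r[0] == group]
    # False positive rate: labeled high-risk among those who did NOT reoffend.
    no_reoffense = [r for r in rows if not r[2]]
    fpr = sum(r[1] for r in no_reoffense) / len(no_reoffense)
    # False negative rate: labeled low-risk among those who DID reoffend.
    reoffended = [r for r in rows if r[2]]
    fnr = sum(not r[1] for r in reoffended) / len(reoffended)
    return fpr, fnr

for group in ("black", "white"):
    fpr, fnr = error_rates(records, group)
    print(f"{group}: false positive rate={fpr:.0%}, false negative rate={fnr:.0%}")
```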

Since then, Abu-Elyounes explained, some academics and researchers have been reluctant to support the use of algorithms in the criminal justice system.

Prior Offenses

Another common problem with AI and risk-assessment determinations has to do with analyzing a defendant’s prior offenses. Hypothetically, as Abu-Elyounes writes, imagine an African-American defendant with five prior offenses. A risk-assessment algorithm will typically deem this defendant a “higher risk” individual than a white defendant with only three priors.

On a purely numerical metric, the difference is easy to see. However, Artificial Intelligence cannot make a “fair” comparison between these two hypothetical individuals if the African-American defendant’s five offenses are all minor, while one of the white defendant’s three offenses is for murder.
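A simplified sketch of that hypothetical shows how a feature that merely counts prior offenses discards the severity information a human judge would weigh. The offenses and severity weights below are invented for illustration.

```python
# Simplified illustration: a count-only feature ranks the defendant with
# five minor offenses as riskier than the defendant whose record includes
# a murder conviction.
defendant_a = ["shoplifting", "trespassing", "disorderly conduct",
               "shoplifting", "loitering"]           # five minor offenses
defendant_b = ["burglary", "assault", "murder"]       # three, one severe

def count_only_score(priors):
    return len(priors)

# Hypothetical severity weights a more context-aware feature might need.
severity = {"shoplifting": 1, "trespassing": 1, "disorderly conduct": 1,
            "loitering": 1, "burglary": 4, "assault": 5, "murder": 10}

def severity_aware_score(priors):
    return sum(severity[offense] for offense in priors)

print(count_only_score(defendant_a), count_only_score(defendant_b))          # 5 3
print(severity_aware_score(defendant_a), severity_aware_score(defendant_b))  # 5 19
```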

A “fair risk evaluation” using a computer doesn’t seem possible, Abu-Elyounes explains.

But Abu-Elyounes argues that if at least three “notions of fairness” are considered when designing and deploying these AI systems, the technology might have a chance of producing the desired “fair” outcomes.

She acknowledges that no single notion is the “right notion,” noting that it all depends on the context of each situation, crime, or policy the AI is being applied to.

The notions are as follows:

      • “Individual Fairness Notions—aim to achieve fairness toward the individual regardless of his/her group affiliation and corresponds with the principle of equal opportunity;
      • Group Fairness Notions—aim to achieve fairness toward the group that the individual belongs to, and corresponds with the principle of affirmative action; and
      • Causal Reasoning Notions—put the focus on the causal relationship between the factors and the outcome, so that those notions correspond with the principle of due process.”
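The first two notions can be sketched in code using simplified, textbook-style definitions; the paper's formal treatment, and the causal-reasoning notion in particular, is considerably richer than these few lines. The data and threshold below are hypothetical.

```python
# Rough sketch of the first two notions, with simplified definitions.
import itertools

# Hypothetical scored defendants: (group, feature_vector, predicted_risk)
scored = [
    ("A", (3, 1), 0.7), ("A", (1, 0), 0.2),
    ("B", (3, 1), 0.4), ("B", (1, 0), 0.3),
]

def individual_fairness_violations(scored, tol=0.1):
    """Individual fairness: similar individuals should receive similar
    scores, regardless of group membership."""
    violations = []
    for (g1, x1, s1), (g2, x2, s2) in itertools.combinations(scored, 2):
        if x1 == x2 and abs(s1 - s2) > tol:
            violations.append((x1, g1, s1, g2, s2))
    return violations

def group_fairness_gap(scored):
    """Group fairness (here, demographic parity): compare average
    predicted risk across groups."""
    averages = {}
    for group in {g for g, _, _ in scored}:
        scores = [s for g, _, s in scored if g == group]
        averages[group] = sum(scores) / len(scores)
    return averages

print(individual_fairness_violations(scored))  # identical features, scores 0.3 apart
print(group_fairness_gap(scored))              # {'A': 0.45, 'B': 0.35}
```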

Abu-Elyounes emphasizes that fairness depends on the discipline in which it is applied and, most importantly, is not a “one-size-fits-all solution.”

She argues that clarifying laws and policies, and defining an approach to fairness, are essential to addressing the bias implicit in algorithms.

The paper cites a Canadian approach, the Algorithmic Impact Assessment (AIA) tool, which aims to “help institutions better understand and mitigate the risks associated with Automated Decision-Making Systems by providing the appropriate governance, oversight, and audit requirements,” according to its website as cited by Abu-Elyounes.

In the U.S., the Algorithmic Accountability Act, introduced in April of 2019, “authorizes the Federal Trade Commission to develop regulation[s] that require companies to conduct impact assessment if they create algorithms that pose high risk automated decision systems.”

Abu-Elyounes concludes her paper by saying, “AI algorithms cannot replace social or legal reforms that need to be made in order to cultivate a more just society, but collaboration between all actors in the field can at least ensure that we are on the right path.”

Her full paper can be accessed here.

Andrea Cipriano is a staff writer for The Crime Report.