
Artificial Intelligence and the Justice System

Akhil Shah[1], JD'21, Windsor Law

On November 20, 2019, Windsor Law LTEC Lab hosted a presentation by Nye Thomas, Executive Director, and Ryan Fritsch, Counsel, at the Law Commission of Ontario (LCO). The LCO is an independent law reform agency with a mandate to operate at arm's length from the judiciary and the provincial government and to propose solutions to complex legal questions.

Nye Thomas and Ryan Fritsch spoke about the role that artificial intelligence (AI) is playing, and is primed to play, in the justice system. In broad terms, the pair spoke about the use and misuse of AI algorithms in the criminal and civil justice systems. The question before legal professionals, as posed by Mr. Thomas, concerns the impact of this technology on dispute resolution, human rights, due process, and access to justice: How will AI be used in the context of legal information, research, predictive analytics, and decision-making?

Mr. Thomas explained that the application of AI tools to the justice system is much further along in its development than people may realize and that these new technologies are relevant across a broad range of legal domains. In the civil law context, AI has roles in child welfare, allocation of government benefits, fraud detection, public health, and immigration. However, the area in which AI is used most extensively is the criminal justice system, particularly in the United States, where AI tools are employed for national security, predictive policing, and, most notably, bail. As Mr. Thomas explained, a bail decision is considered by many to be one of the most significant determinations in the criminal justice context. According to some estimates, up to one-third of Americans live in a jurisdiction where AI is used in bail decisions.

Specifically, AI is used to make statistical predictions – at its core, it is a recidivism prediction tool – as to whether an accused is likely to comply with bail conditions before their court date or is likely to commit another crime in that period. The AI tool processes a set of historical data from the criminal justice system and, by applying certain risk factors, generates a score indicating whether the accused should be granted bail. The use of AI in this context was initially applauded by many observers across the spectrum, including prosecutors, judges, public defenders, and community advocates. AI was thought to be an objective, impartial, and evidence-based method of determining bail, and a reliable alternative to years of subjective decision-making that has adversely affected minorities in the criminal justice system. However, as Mr. Thomas pointed out, the sentiment has recently taken a turn, with many supporters of these tools stepping back and raising concerns.

The chief concern relates to disclosure. The use of AI by the justice system and administrative bodies is often not disclosed. Examples from the United States show that the public typically comes to know about the use of AI only through press reports, class action lawsuits, or freedom of information requests. As AI becomes increasingly utilized in the Canadian legal system, Mr. Thomas views the disclosure of its use as a due process issue. If Canadians do not know AI is used in adjudication and other administrative decisions, there is a gap in the information necessary to critique or challenge the adjudication or decision-making process.

Beyond the general concern about disclosing the use of AI tools, there are questions about what precisely ought to be disclosed. There are no meaningful parameters concerning what must be disclosed, so what exactly should be revealed? Opinions are diverse. Some commentators advocate that a simple Excel spreadsheet containing the relevant raw data would be sufficient, while others advocate that the full scope of code and sophisticated algorithms be revealed. Often, algorithms may be so extraordinarily complex and nearly indecipherable that the developers themselves may not understand how the AI tool produced a particular output.

Second, Mr. Thomas stated that a significant issue with AI is bias, a problem he summarized as “bias in, bias out.” In the criminal justice system, much of the historical data has proven – through research and experience – to be biased. Specific communities have been over-policed, overcharged, and over-convicted in both the United States and Canada, and this is reflected in the available data sets. As a result, the outcome of an algorithm that relies on that data will inevitably be biased. This potential for bias has been a major cause for caution amongst skeptics of the technology in the United States. As Mr. Thomas said, data inputted from prior cases are not neutral; they reflect subjective choices. With this in mind, the question becomes whether AI decision-making should be used at all and whether it is justified, both legally and ethically, to apply AI tools in matters of criminal law.

Third lies the issue of interpretation. How are these AI data processing outputs interpreted by prosecutors, judges, defence lawyers, and self-represented accused? Mr. Thomas stated that people may be vulnerable to “automation bias” when considering AI. There is a tendency to believe that AI reports ought to be the last word in a matter simply because they are “scientific”, “accurate”, and “evidence-based.” Mr. Thomas pointed out that historical data should not always be viewed as objective. As previously mentioned, such data is not immunized from the subjective decision-making and bias of triers of fact.

What about due process issues? The right to an explanation, the right to fairness, and the right to challenge even the most mundane judicial decision are integral elements of due process. In criminal law matters, the right to obtain the reasons for a decision is essential, especially where liberty is at stake. Such concerns take on elevated importance when, as previously mentioned, developers themselves may not be able to explain how the AI tool produced one result rather than another. Currently, the LCO is trying to build best practices for incorporating due process into AI code. In Mr. Thomas’s view, the intersection of due process concerns and constitutional rights will be one of the most litigated areas of AI in the future.

The regulation of AI must consider several issues. Mr. Thomas cited the Toronto Declaration,[2] which enumerates many recommendations to governments on how AI should be used while accounting for human rights. In the future, human rights laws such as the Ontario Human Rights Code and the Canadian Charter of Rights and Freedoms will be instrumental in setting the parameters of AI in Canada. Mr. Thomas acknowledged that questions may be raised as to whether our existing legal rules are adequate to deal with AI, or whether they are ill-equipped and in need of refinement in the face of this revolutionary technology. Nevertheless, he noted that the rules of evidence will almost certainly need to be revisited by Canadian legislatures as AI becomes increasingly used in the justice system.

AI is the new frontier of access to justice. The widespread adoption of AI is comparable to any of the previous waves of fundamental change in the justice system. Mr. Thomas opined that legal professionals will need to learn about AI: while comprehensive knowledge of coding is probably unnecessary, a basic understanding of how it works will be invaluable for future lawyers. As he said, paraphrasing a representative of Blue J Legal,[3] “AI won’t replace lawyers, but lawyers who know about AI will replace lawyers who don’t know about AI.” In the future, there will have to be more collaboration between lawyers, developers, and various community members to create AI that is purposeful and that can better serve the justice system.

Mr. Fritsch made additional remarks and discussed another project of the LCO that he is currently leading on consumer protection in the digital marketplace. The research project involves examining online terms of service and “click to consent” contracts.

In closing, Mr. Thomas encouraged attendees to think not just defensively about AI, but also opportunistically. AI is a promising and revolutionary new technology, and despite some lingering questions concerning its usage, it can be harnessed to pursue progressive objectives surrounding access to justice. Overall, the presentation provided a great introduction to the issues that AI will bring to the justice system in Canada. My colleagues and I truly enjoyed the presentation, as expressed by Adrian Zita-Bennett, J.D. Candidate 2021:

The discussion on the legal realities posed by emerging forms of AI was enlightening, insightful, and highly relevant to attendees, many of whom will face these realities as they embark on legal careers down the road.


It was truly a great experience to have both Mr. Thomas and Mr. Fritsch speak at Windsor Law.

[1] This is the author’s account of the presentation of November 20, 2019 on AI and the Justice System. All errors are the author’s.

[2] Anna Bacciarelli, Joe Westby, Estelle Massé, Drew Mitnick, Fanny Hidvegi, Boye Adegoke, Frederike Kaltheuner, Malavika Jayaram, Yasodara Córdova, Solon Barocas & William Isaac, The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems, Amnesty International and Access Now, May 2018, online: <>


