A Recap of the LTEC Lab Seminar Series: The Legal Architecture of AI
Authored By: Charlie Martin, LTEC Lab Research Assistant, JD ’28

On March 6, 2026, the Windsor Law LTEC Lab, in conjunction with the Legal Innovation Hub, hosted a panel discussion as part of its LTEC Lab Seminar Series: The Legal Architecture of AI. The panel brought together Amy Salyzyn (Law Professor, University of Ottawa), Jim Hinton (IP Lawyer and Founder of OwnInnovation), and Annette Demers (Law Reference Librarian and Professor, University of Windsor). The panel was moderated by Omer Malik (Co-President, Legal Innovation Hub, JD ’27) and Charlie Martin (LTEC Lab Research Assistant, JD ’28).
Held on the University of Windsor Faculty of Law campus, the discussion focused on how Canada and its legal system should respond to the rapid emergence of artificial intelligence (AI) and its increasing presence within the administration of justice and the legal profession.
On the very morning of the event, La Presse published an article reporting that a Quebec judge may have used AI in drafting a decision that was riddled with incorrect citations and an improper assessment of case law. The story served as a timely illustration of the risks of unregulated AI in legal decision-making and set the tone for the conversation.
This concern goes beyond isolated incidents. One of the key issues raised during the panel was that we simply do not have an accurate understanding of how often AI is currently being used within the legal system. Two main reasons were identified for this lack of transparency. First, there is currently no requirement for courts or tribunals to publicly disclose what technologies they are using in their decision-making processes. Second, there is the growing phenomenon of “shadow AI,” where artificial intelligence tools are used within workplaces even when their use has not been formally approved or regulated.
Law firms themselves have also begun integrating AI tools into their practices. While many lawyers welcome technological tools that assist with legal research, the panel noted that many practitioners remain cautious about AI systems that generate full drafts of legal documents. Lawyers who approach their work with diligence will be wary of relying on outputs they cannot fully verify. The concern is that AI-generated drafts may misinterpret case law or legislation, producing outputs that are not factually or legally correct.
Much of the argument in favour of AI adoption has focused on efficiency. There is no denying that Canada’s court system faces significant backlogs, and some have suggested that AI could help alleviate these pressures. However, the panel offered a more cautious perspective. Rather than improving the justice system, poorly implemented AI tools risk injecting new harms into it.
One such harm arises from what Amy Salyzyn described as “subtle hallucinations.” These occur when AI produces outputs that appear plausible but contain small inaccuracies or distortions. In law, minor differences in wording can significantly alter the meaning of a legal standard. For example, AI may fail to distinguish between concepts such as a “reasonable person” and a “fair person.” While the difference may appear minor in everyday language, in law it can carry substantial doctrinal consequences.
The concern is that if these subtle inaccuracies are not caught, they may slowly erode the reliability of legal reasoning. Judges themselves could eventually struggle to identify these distortions if they become embedded in legal arguments or research materials. The panelists cautioned that over time, this could undermine public confidence in the integrity of the legal system.
At the time of the panel, approximately fifteen cases on CanLII had already been identified in which a party made submissions before a court that included fabricated or incorrect AI-generated citations. The case of Reddy v Saroya, 2025 ABCA 322 (CanLII), raised by Annette Demers, was discussed as an example of the additional time and resources required to address fabricated or misrepresented authorities produced by AI tools.
This raises an important question: how should lawyers who rely on such tools be held accountable? Current accountability mechanisms include public shaming, financial penalties, and, in more serious situations, findings of contempt. Law societies can also step in to discipline members of the legal profession.
Another concern discussed was the broader problem of “de-skilling.” As AI tools become more prevalent, there is a risk that professionals may rely too heavily on them, gradually weakening their own analytical and critical thinking skills. If lawyers begin outsourcing core aspects of legal reasoning to machines, the profession itself could lose some of the expertise that has traditionally defined it.
A deeper ethical question also emerged during the discussion: who is responsible for embedding legal modes of reasoning into AI systems? At present, there is very little transparency surrounding how these tools are developed. Lawyers do not have access to the system prompts or internal training methods used by AI engineers. As a result, there is limited visibility into how these systems are being taught to interpret legal principles or replicate legal reasoning. In many cases, the primary objective appears to be maximizing efficiency rather than building in safeguards for the quality and accuracy of the outputs these systems produce.
The discussion also highlighted a growing social issue: the increasing number of self-represented litigants turning to AI tools for legal advice. As AI becomes more accessible, individuals may consult chatbots before consulting lawyers. Examples of this phenomenon are already emerging. In one incident reported in the ABA Journal and referenced by Annette Demers, a client dismissed her lawyers after receiving advice from ChatGPT that contradicted their professional guidance. Situations like this illustrate the broader societal risks of AI systems that provide confident answers without professional accountability.

Beyond the legal profession itself, the panel also explored the global implications of artificial intelligence and Canada’s place within this rapidly evolving technological landscape.
AI has become more than a technological innovation; it is now a central component of global economic competition. Countries around the world are racing to develop and patent AI technologies that collect data, build algorithms, and generate enormous economic value.
Canada, however, faces a significant challenge in this regard. Technologies developed in Canada are often commercialized elsewhere. A large proportion of AI patents linked to Canadian research ultimately end up owned by foreign companies, particularly in the United States. As a result, Canada may contribute to innovation while losing control over the economic benefits that follow.
This trend raises concerns about Canada’s long-term economic performance. Jim Hinton pointed to projections suggesting that Canada could become one of the weakest-performing advanced economies over the next decade. Part of the issue lies in Canada’s continued focus on traditional resource-based sectors rather than strategic investment in intangible assets such as intellectual property, algorithms, and data infrastructure.
There are also important questions of sovereignty and national security. Control over digital infrastructure and data increasingly translates into geopolitical power. Canada’s reliance on foreign cloud providers and technology platforms raises concerns about how much control the country truly retains over its own digital ecosystem.
Jim emphasized that AI is not just a technological issue; it is also a legal, economic, and political one. The future of AI will not be determined solely by engineers or technology companies. Regulators, policymakers, and legal professionals must play a critical role in shaping how these tools are integrated into society.


For law students in particular, the discussion left an important question: what role will the next generation of lawyers play in shaping the legal architecture of AI?
Amy put forth a framework for how lawyers and law students should approach artificial intelligence moving forward. This framework was summarized as the “Three C’s.”
The first is curiosity. Lawyers cannot afford to ignore AI or hope that it disappears. Members of the legal profession must actively learn about how these technologies function and understand both their capabilities and their limitations.
The second is confidence. Despite the rapid advancement of technology, Amy emphasized that the legal profession is not about to be replaced. Law involves judgment, ethics, and contextual reasoning, qualities that machines cannot fully replicate. While AI may assist with certain tasks, the human element of legal decision-making remains indispensable.
The final principle is caution. AI should be used carefully and deliberately. Lawyers must resist the temptation to rely on these tools simply for convenience or speed. Professional obligations to clients, courts, and the justice system require careful verification of any AI-generated material.
Artificial intelligence will undoubtedly continue to reshape the legal profession. The challenge for lawyers, judges, and policymakers will be ensuring that the pursuit of efficiency does not come at the expense of the values that have long defined our justice system.
If you were unable to attend the event in person or online, a video recording of the event can be found here.
