Brandon Beck, 2L Windsor Law Student, LTEC Lab Researcher
January 30, 2024
On November 22, 2023, Windsor Law LTEC Lab (www.lteclab.com) hosted a forum discussion: Let’s talk about Chat-GPT & Generative AI. To bring University of Windsor students and faculty together around this hot topic, the discussion panel was composed of members with expertise in AI technology and the law.
After lunch was served, the audience took their seats in a full house as Professor Pascale Chapdelaine introduced the roadmap for the discussion and the panelists: Nick Baker, Director of the Office of Open Learning at the University of Windsor; Mita Williams, Acting Law Librarian for the Don & Gail Rodzik Law Library; myself, Brandon Beck, a second-year student at Windsor Law; and herself, Associate Professor at Windsor Law.
Nick Baker kicked off the discussion by introducing key terms associated with AI. Generative AI programs learn from massive data sets and respond to prompts using the information they were trained on. The large language models we generally refer to can produce writing of virtually any type and quality, depending on the prompts. Nick explained what large language models are and how they work in general before discussing the differences between various models and how they are trained to perform.
Nick explained that the way different tools are trained can make a difference in their functionality. The original thought was that more specific prompts would produce the best results; however, recent research shows that the models work better when engaged in a back-and-forth conversation. Surprisingly, these large language models also respond positively to “emotional prompts” (such as positive reinforcement or “gentle scolding”) and to being asked to double-check their work.
The models are trained based on user feedback: the more constructive the interactions between the user and the model, the less likely the model is to produce hallucinations. Nick pointed out that hallucinations are not facts the models believe to be accurate; rather, they arise from the models attempting to deliver exactly what the user appears to be looking for. The existence of hallucinations demonstrates the importance of human intervention throughout the process of receiving assistance from generative AI, and of validating the results these models produce.
Nick then gave a live demonstration using ChatGPT, with prompts suggested by the audience.
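As context for the back-and-forth prompting Nick described: chat-style models are typically stateless, so each request resends the entire conversation history, which is what makes iterative refinement and “double-check your work” follow-ups possible. A minimal sketch in Python, where `ask_model` is a hypothetical stand-in for a real model API call (not any particular vendor’s interface):

```python
# Chat models are stateless: each request carries the full conversation
# so far, which is why iterative back-and-forth refinement works.

def ask_model(messages):
    """Hypothetical stand-in for a real chat-completion API call."""
    # A real implementation would send `messages` to a hosted model and
    # return its reply; here we simply echo the latest user prompt.
    last_user = [m for m in messages if m["role"] == "user"][-1]
    return f"(model reply to: {last_user['content']})"

def chat_turn(history, user_prompt):
    """Append the user's prompt, call the model, and record its reply."""
    history.append({"role": "user", "content": user_prompt})
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
chat_turn(history, "Summarize copyright basics.")
chat_turn(history, "Now double-check your answer.")  # refinement turn
```

Because `history` grows with every turn, each follow-up prompt is interpreted in light of everything said before, which is the mechanism behind the conversational refinement the panel discussed.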
Following Nick’s introduction to the technology, I spoke about my research project involving the use of ChatGPT, further underscoring Nick’s advice to validate the results these models produce.
To summarize, my research led to an academic article which describes how the recurrence of a business entity misnomer, “Limited Liability Corporation,” misused over 8,000 times in legislation and jurisprudence, caused ChatGPT to reproduce the misnomer when asked for details about it. ChatGPT was accurately reproducing information to which it had access; however, the information and sources it relied on were incorrect, and so was the response.
Echoing Nick’s earlier comments, I used this anecdote to caution students to use ChatGPT as a resource rather than a source in itself, and to bring background knowledge or additional research to ensure that its results are accurate.
Following my anecdote, Mita Williams turned to the business side of AI. Mita offered the surprising statistic that around 1.7 billion ChatGPT requests were made in October, yet the product continues to lose money even under its subscription model. Mita forecasted that results like this might ultimately increase the cost for every user. She asked: will these tools become something we cannot live without? What cost are we willing to pay for them?
“The promise of AI is that it replaces a free library, whether that’s the internet or an actual library, and it replaces it with a $30 a month librarian.” – Mita Williams
How is Generative AI going to transform academics?
The panel then discussed an open question about generative AI in academia. These tools have many positive uses for both students and teachers. For example, administrative tasks like preparing agendas are simplified, and professors may use these tools to assist with lecture preparation or with footnotes in their writing. As I pointed out, the real issue is balancing the institution’s current policies, ethics, and the importance of learning the skills that each course aims to teach.
What is clear, though, is that the emergence of AI has changed the landscape in academia, and these tools should be embraced rather than frowned upon.
“75-90% of students are actively using AI tools” – Nick Baker
Nick highlighted one significant use of these tools: reducing barriers to learning for students with disabilities. For example, students unable to effectively put their thoughts to paper due to learning differences can use these language generation tools to bridge the gap between their thoughts and what they are able to express.
Pascale Chapdelaine then discussed evaluation methods. She explained that appropriate uses of generative AI, and policies around it, vary depending on the skills we aim to develop. As Mita put it, “learning is about the process and not necessarily the output itself.” For example, if the purpose of an assignment or course is to nurture writing or oral advocacy, using ChatGPT to produce most of your speech or essay would be counterproductive to the process itself; although there may be useful outputs (the jury is still out on this), the student would not be learning the desired skills.
Star guest participant Anette Demers, Reference Librarian at the Don & Gail Rodzik Law Library, made an appearance to explain the current status of AI research tools already built into Westlaw and Lexis. Anette described how important it is to know where the information comes from and what its parameters are, so that students, professors and lawyers know how exhaustive the legal research outputs are. For example, how far back does the information go, and which jurisdictions does it include?
The discussion then moved to the dark side of AI. Professor Chapdelaine spoke about the biases that exist in the outputs of these models. The information the models are trained on is created by humans and contains human biases. One example is hiring left to AI: when instructed to hire candidates who resemble the company’s top performers, the machine will suggest candidates who resemble current employees of a certain race or gender, because those people were historically successful at the company, a pattern that may itself reflect discrimination. The result is a perpetuation of discriminatory practices.
Professor Chapdelaine then described the copyright issues associated with generative AI output. Who is the author? The law is quite clear that an author can only be a physical person. However, tools are often used in different fields to produce works that are protected by copyright: a photographer who uses a camera may hold the copyright in their photographs, even though the camera is the tool that actually captured the image. The extent to which an individual using ChatGPT can generate a work protected by copyright is up for much debate. It depends, for example, on the extent to which the individual meets the requirements of originality in copyright law and whether the output displays a sufficient exercise of skill and judgment by the individual.
Another harsh reality of the generative AI revolution is its environmental footprint. Nick explained how resource-intensive AI can be and how important it is that companies stick to a minimalist approach. He pointed out that big corporations such as Google and Microsoft have access to virtually unlimited resources, so they use whatever they can with a “more is better” attitude. He also pointed out that some smaller companies try to give back to local communities by offering their products for free.
“The average conversation with Chat GPT requires about half a litre of water” – Nick Baker
Professor Chapdelaine concluded with a discussion about privacy, and then the panel responded to audience questions.
Windsor Law LTEC Lab thanks all panel members and audience participants for sharing their passion, expertise and insights on such an important and evolving field that will continue to impact legal education and practice in unimaginable ways. The recording of the event can be found here.