Disruptive Technology: AI and its Impact on the Legal Profession and the Automotive Industry – What Path(s) for Regulation?

May 31st, 2024

Written by Khadija Shamisa, 2L in the Dual J.D. program at Windsor Law and Detroit Mercy Law


On March 1st, 2024, the EpiCentre, Professor Myra Tawfik, Don Rodzik Family Chair in Law and Entrepreneurship, the Federal Economic Development Agency for Southern Ontario, and Windsor Law LTEC Lab joined together for a one-day multidisciplinary forum delving into the impact of Artificial Intelligence on the legal professions and the automotive industry. The discussions brought together thought leaders, professionals and experts to explore the opportunities and challenges AI brings to these sectors. This post captures the highlights of the day’s discussions.


PANEL 1: AI and the Legal Profession

 

The first panel began as a discussion about the opportunities and challenges that AI presents for regulated professions, using the legal profession as an illustrative example. The recording can be watched here.

 

Our panelists were: Quinn Ross, Managing Partner at The Ross Firm (appearing virtually); George Wray, Partner at Borden Ladner Gervais; and Joshua Morrison, Director of the Future of Law Lab. The panel was moderated by Annette Demers, Reference Librarian at the University of Windsor, Faculty of Law.





Will AI Make Lawyers Obsolete?

 

Ross noted that despite the growth and possible future uses of AI solutions in the legal profession, the human element, a lawyer's expertise and decision-making, will always be needed. AI can streamline processes, but legal professionals will still be needed to direct those processes and to manage client relationships and expectations.

 

Wray agreed and emphasized that AI's role should complement lawyers’ work. Since automation takes away the repetitive aspects of legal work, lawyers can focus on the intellectually stimulating and relationship-building parts. He proposed that AI will allow lawyers to spend more time fostering relationships with their clients and mentees, as well as solving problems creatively. These are the more fun parts of lawyering.


Can AI assist with the access to justice problems we already have in our country?

 

Morrison highlighted AI's potential to increase access to justice. For example, platforms like ChatGPT could allow people to find quick answers to their legal questions. Although such tools cannot give legal advice, people could refine their prompts to get closer to the answer they are looking for. However, this may disrupt the self-regulated nature of the legal profession, and may deepen the gap between self-represented litigants and the people and organizations with access to top law firms.

 

Do the Rules of Professional Conduct sufficiently protect clients from harms that may arise from AI?

 

All panelists agreed that the ultimate responsibility for AI's output will rest on lawyers' shoulders. Lawyers are responsible for fact-checking that output as part of their due diligence. All believed that, although the Rules are not explicit, they ultimately safeguard clients' interests. However, the Rules may need to be rewritten to incorporate a higher level of technological competence as a requirement for legal practice.

 

Does legal education need to change to prepare law students for these technological changes?

 

All panelists agreed that legal education needs to incorporate AI into its training. AI is a tool that will be a necessary part of the practice of law, and it should therefore be part of the substantive element of legal education. The challenge is that professors may be less adept at its use than their students. Additionally, some fear that using AI will cause students to spend less time intimately learning substantive material. Allowing students to use AI during their education could permit them to bypass this learning, possibly diluting the quality of legal education.

 

Annette Demers noted that law students can consult the law library's LibGuide on how to use AI appropriately.

 

Will AI incentivize firms to forgo hiring articling students to save on costs?

 

First, Wray commented that AI may not be as cost-efficient as people believe; the technology is expensive to develop. Ross and Morrison agreed, adding that articling students are valuable to firms for other reasons. They are an investment in the firm's future human capital, not merely a way for the firm to cut costs. If firms stop hiring articling students, they will have no future senior talent to guide the firm's decision-making. Thus, any firm that opts to stop hiring articling students will likely put itself at a future disadvantage.

 

Are there any areas of law that should lean into the use of AI?

 

The panelists agreed that AI's application should align with the nature of legal tasks. Fungible aspects, such as routine document review, are suitable for automation. In contrast, nuanced, non-fungible elements, such as case-specific legal analysis, require human expertise.

 

 

Closing Thoughts

 

Ross sees AI as a game-changer: using AI, he can spend more time mentoring and fostering growth and innovation. Morrison contemplated the evolving nature of expertise in the age of AI, wondering at what point he will no longer be an expert. Wray asserted that AI is not a fad. "That's like saying the internet is a fad," he argued, encouraging businesses to contemplate the impact of AI. Overall, our panelists emphasized the importance of embracing change and leveraging innovation to drive progress in the legal profession.


PANEL 2: AI and the Automotive Industry

 

The second session explored AI’s effects on the automotive industry. The recording can be watched here.

 

Given the significance of the automotive sector to Windsor's history, this topic is of particular interest to the region. The panelists were: Dr. Mitra Mirhassani, Professor of Electrical and Computer Engineering at the University of Windsor; James Hinton, IP Lawyer and Founder of Own Innovation; Jarrod Hicks, U.S. Patent Lawyer and Director of IP at Intellectual Property Ontario; and Homeira Afshar, Research and Insight Analyst at the Ontario Vehicle Innovation Network (OVIN). The panel was moderated by Dr. Wissam Aoun, Associate Professor at the University of Windsor, Faculty of Law, and member of LTEC Lab.



How much of the automotive sector will be replaced by AI?

Dr. Aoun emphasized that when Henry Ford built cars, he was also creating jobs. The advent of AI, however, seems to take work away from workers. He asked: how true is this in the automotive sector?

 

Hinton remarked that this is a great question, and that it is truly a question about the future of work. We must recognize that good, middle-class jobs will no longer exist in the same numbers, which should push us to adapt and become higher-performing in our work. Hicks agreed and commented that the Canadian economy is shifting from manufacturing-based to service-based work.

 

 

Is there an attempt to align the “bright” and “dark” sides of AI?

Dr. Mirhassani remarked that this is a tough question with no complete answer. She shared her experience on a team that helped create AI system solutions. At the time, she assumed that AI was inherently unbiased, and therefore neutral. However, when investigating the decision-making of her AI model, she realized that the machine makes "innocent" decisions that ultimately cause its output to be biased against certain groups of people. She acknowledged that this is part of the "dark side" of AI.
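This dynamic is easy to reproduce. Below is a minimal illustrative sketch, not from the panel, showing how a model that is never given a sensitive attribute can still produce group-skewed output through an "innocent" correlated feature. All data and numbers here are synthetic and hypothetical.

```python
# Hypothetical sketch: a classifier trained without the sensitive attribute
# can still disadvantage one group via a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # sensitive attribute, never shown to the model
proxy = group + rng.normal(0, 0.5, n)  # "innocent" feature correlated with group (e.g., postal code)
skill = rng.normal(0, 1, n)            # legitimate feature
# Historical labels already encode a bias against group 0.
label = (skill + 0.8 * group + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([skill, proxy])    # note: `group` itself is excluded
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Demographic parity difference: the gap in positive-prediction rates between groups.
rate0, rate1 = pred[group == 0].mean(), pred[group == 1].mean()
print(f"positive rate, group 0: {rate0:.2f}")  # noticeably lower than group 1
print(f"positive rate, group 1: {rate1:.2f}")
print(f"demographic parity difference: {abs(rate1 - rate0):.2f}")
```

Each individual decision the model makes looks "innocent" because it only ever sees seemingly neutral features, yet the aggregate output remains biased against one group.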

 

Afshar suggested a possible solution to this problem: transparency in machine-learning technology. Transparency can help ensure that the decisions being made are accountable and free from bias. This will require dialogue with stakeholders to create effective policy on the matter.

 

Hinton introduced the issue of cars and their software. He remarked that a car's utility is no longer just getting from point A to point B; its value now lies in collecting the driver's data. Current privacy law does not provide sufficient protection to individuals in this area. Given that vehicle software incorporates AI, there is a serious need to limit and control its use of personal data.

 

What are Canadian researchers doing with AI in the automotive space to keep up with our competition in other countries?

 

An audience member raised the issue of the global race for IP protection. He described the huge number of Chinese players patenting AI, which allows them, for example, to use AI to create electric circuitry in less than one day, work that can take two years to complete without AI. This can pose a competitive problem for Canada. What are we doing to ensure that we keep up with our global competition?

 

Afshar noted that Canada is a major global player in data mining. However, Hinton added that it is a problem that no Canadian companies are among the top 100 patent filers anymore. Canada also has a problem of companies being divested to foreign interests, which eventually results in foreign IP ownership of Canadian inventions.

 

Hinton added that this is not just about patents, but about one's entire cloud of IP rights. We are also playing a global game: whether we like it or not, the head of the U.S. Patent Office will drive patent practice, as patents tend to follow the market. It is not a great construct, but the reality is that the Americans are pushing for it, and getting it treated differently will be a big challenge.

 

Hicks added that around 2017, China set out a roadmap for how it could become a dominant player in AI. Each year since 2021, China has had five times as many patent filings as the U.S. If China holds most of the IP rights, there is a legitimate question about where the jobs will be in Canada.

 

In closing, Dr. Aoun commented that there is a difference between protecting patents and commercialization. Since the Canadian government needs to develop a strategy to support technology readiness levels, getting into the patent space is only one piece of the puzzle. He also queried the unique nature of the automotive industry: will the strong political will to preserve this industry lead to different outcomes than in other industries where AI may significantly alter the labour force?


PANEL 3: The Emerging Legal and Regulatory Environment

 

The final panel examined the emerging legal and regulatory environment for AI. Readers can watch a recording of this panel here.

 

Our panelists were: Professor Céline Castets-Renard, Full Professor at the University of Ottawa, Faculty of Law, and University Research Chair on Accountable Artificial Intelligence in a Global Context; Professor Pascale Chapdelaine, Professor at the University of Windsor, Faculty of Law, and Co-Founder of LTEC Lab; Sam Ip, Partner at Osler, Hoskin & Harcourt LLP; and Jennifer Dukarski, Shareholder based in Butzel's Ann Arbor office.

 

The panel was moderated by James Hinton, Founder of Own Innovation.



What can the world learn from the European Union?

 

Professor Castets-Renard kicked off the panel discussion by presenting recent developments in AI regulation in the European Union (EU). She described the adoption process of the Artificial Intelligence Act (AIA) as being almost complete. It has taken three years to finish, which is typical of EU regulations.

 

The Act codifies four levels of risk for AI: minimal, limited, high, and unacceptable. The minimal and limited risk levels require less regulation and cover fewer uses of AI. The high-risk level, on the other hand, was designed to capture most uses of AI. This category pertains to risks to the safety, health, and human rights of individuals, and can also include risks relating to privacy, property, data protection, quality, discrimination, and freedom of speech and opinion. Examples of high-risk activities include employment decisions, e.g., using AI to hire and fire employees.

 

The last category is unacceptable risk, where the use of AI is prohibited. Examples of unacceptable risks include predictive policing and the data scraping of facial images from the internet.

 

 

What is Canada doing with AI governance regulation?

 

Sam Ip discussed ongoing legislative efforts at the federal level to regulate AI through Bill C-27 and the Artificial Intelligence and Data Act (AIDA). Canada is a leader in AI research and wants to grow in this area; knowing this, it should be thoughtful about striking a balance between promoting innovation and the need for regulation.

 

Canada is likely to enact Bill C-27. Canada's closest trading partners, the U.S. and the U.K., have taken different approaches to AI legislation: the U.S. takes a more open approach and the EU a middle road, while Bill C-27 in Canada places significant restrictions on AI. Bill C-27 sets different obligations for those who make AI available than for those who manage AI systems. Further, it defines three different types of AI: high-impact, general-purpose, and machine-learning models. Ip explained that the definitions of these categories remain uncertain, posing challenges for players across the Canadian AI industry: if players cannot understand where their activities fit into the legislation, they cannot understand their legal obligations.

 

Further, Canadian companies operate in a global context, and Canadian AI companies may be harmed by regulation that is more restrictive than that of the U.S. and the EU.

 

Ip argued that thoughtful regulation should be risk-adjusted (as in the EU legislation), tech-neutral, and proportionate in its application. Since it is ideal for Canada to foster a robust economy, he added, the legislation should not discourage the development of AI companies within its borders.

 

 

How does current Canadian copyright and privacy law apply to AI?

 

Professor Chapdelaine focused her presentation on two pressing issues for AI in Canada: copyright law and privacy law.

 

She summarized the main copyright issues raised by AI in three questions. First, who are the authors and owners of AI-generated works? Second, is the use of copyrighted works to train AI an infringement of those works? Third, who is liable when AI is found to infringe a copyrighted work? These questions are the focus of an ongoing Government of Canada consultation considering the need for copyright legislative reform.

 

In the area of privacy law, Professor Chapdelaine discussed three main issues. The first is the data scraping of personal information by private entities such as Clearview AI to train models and create data sets, the use of those data sets by police forces, and the Charter issues they raise. She also referred to the application of personal data protection law (such as PIPEDA) to such activities and the limited enforcement power against private undertakings making such use of personal information. Finally, she discussed the criminal and tort law issues raised by AI impersonation tools (deepfakes, identity theft, fraud, celebrity likeness, etc.).

 

In general, Professor Chapdelaine asked whether massive data scraping activities will lead to greater enclosures online, similar to blockchain, as predicted by authors including Alex Tapscott in his recent book Web3.

 

 

How has AI law developed in the United States?

 

Jennifer Dukarski provided an overview of the regulation of AI in the U.S., noting that not much has been done at the federal level. So far, President Biden has issued executive orders primarily addressing national security issues. Some commentators opine that this could go as far as targeting Chinese cars entering the U.S.

 

Each state in the U.S. will legislate differently on the topic. The U.S. is very sectoral across the states, which results in higher levels of divergence. Thus far, 30 states have created 50 rules related to AI governance.

 

Several major cases have dealt with technology that uses AI. Cruz v. Talmadge presented one of the first challenges to the decision-making of technology.[1] In this case, a 12-foot-tall self-driving bus carrying Harvard students attempted to drive under a bridge only 10 feet tall. In Nilsson v. General Motors LLC, a motorcyclist was hit by a self-driving car; General Motors settled for an undisclosed amount.[2] In Umeda v. Tesla, the deceased was struck and killed by a Tesla Model X in Autopilot mode.[3] In light of these cases, Dukarski asked: what is the "reasonable machine" standard for AI? What exactly is AI capable of "foreseeing" under the "reasonable foreseeability" element of the tort analysis? These are the interesting, almost comical, questions that will be crucial to the analysis of torts caused by AI.


[1] Cruz v. Talmadge, 244 F. Supp. 3d 231 (D. Mass. 2017).

[2] Nilsson v. General Motors LLC (N.D. Cal. Jan. 22, 2018).

[3] Umeda v. Tesla, No. 21-15286 (9th Cir. Jan. 3, 2022).




Overall, we thank our panelists for their contributions to our understanding of AI across various industries, professions, and jurisdictions. This exploration answered some pressing questions and posed many more. May this important conversation continue, to ensure that the benefits derived from AI do not undermine the core fabric of our democratic institutions, our fundamental rights, the environment, and our livelihoods. LTEC Lab will continue to monitor and engage with these developments.
