Judge Bridget Mary McCormack on the Impact of AI in the Legal Community

By Liz Benjamin

January 10, 2024

Bridget Mary McCormack was still on the bench, serving as chief justice of the Michigan Supreme Court, when ChatGPT was released.

She says it only took a few days for her to realize that “something pretty significant was happening, and it was going to impact the legal profession very quickly.”

McCormack made it her business to get up to speed and learn all she could about the technology, and AI in general, and how it could be used, as she puts it, “to address what I believe is a civil justice crisis in America.”

Now president and CEO of the American Arbitration Association-International Centre for Dispute Resolution, McCormack lectures, writes, educates, and advocates for the responsible and appropriate use of AI by members of the legal profession. She will discuss her work and the rapidly developing issues and promise of AI at the New York State Bar Association Presidential Summit on Jan. 17.

McCormack recently sat down for a preview of her Presidential Summit presentation, discussing the ins and outs of AI, how she believes it will transform the legal profession, and what lawyers need to do to prepare themselves to make the best use of the technology.

Q: What kind of AI is out there, and which do you recommend and why?

A: Whether it’s GPT-4, ChatGPT, CoCounsel, Bard, or Bing, they each have their strengths and weaknesses. I use GPT-4 quite a bit, but I suspect that some of the others that are built specifically on legal texts are more relevant for lawyers. I don’t mean to be advertising for GPT in particular. Any model that can make legal information accessible is a value add, in my view. Everyone is governed by the law, yet most people don’t have access to it and can’t afford representation. Generally, I think the possibility of democratizing legal information for anyone who has a civil or criminal problem – giving them the ability to figure out what is expected of them and what responsibilities and rights they have – is a net plus.

Q: Should lawyers be worried about being replaced by AI?

A: No, lawyers will always have work. There are always going to be disputes that need to be resolved in courts by a public justice system – disputes with governments, for example, and criminal cases, which will need to be resolved by lawyers and judges. And there will always be a need for lawyers to help resolve many civil disputes, too. But more than 90% of people with civil justice problems are priced out of the market, so there’s an enormous mismatch, and that mismatch is a threat to the rule of law. It’s that simple. Some lawyers may resist the idea that people can get legal information on their own, but I think that’s shortsighted. To the extent that fewer people feel left out of the legal rules we’re all governed by, I believe that’s positive for the rule of law and for the profession generally.

Q: The legal profession has been notoriously slow to adopt technology. What would you say to those who are reluctant to give AI a chance?

A: It’s an accelerant for self-help tools that lots of people have been working on for a long time. It’s true that there is sometimes resistance from some parts of the bar to innovations that allow people to get legal information and solve their own problems. But I know an awful lot of lawyers who welcome that kind of positive change. I don’t think there’s a uniform reaction in opposition to AI in particular. And to be clear, I don’t want lawyers to hear me saying that they are alone in being resistant to change. Judges may be even more prone to the same tendencies. Some of them don’t even read their own email; their assistants do it for them.

Q: What about the possibility of attorneys misusing AI? There was a high-profile incident in New York, for example, in which lawyers were sanctioned for citing fake cases generated by ChatGPT in a legal brief.

A: Every once in a while, some lawyer will be careless, but lawyers are careless with other technology, too. That New York story is more about the lawyer than it is about the technology. If a lawyer goes to ChatGPT and thinks they can copy and paste its output into a court pleading, well, that tells me a lot about that lawyer. All the large language models hallucinate, and the ones that are publicly available and not trained on a legal vertical will certainly hallucinate about the law. That’s just the technology doing what it was trained to do. But there are at least two companies now with products trained on legal texts – CoCounsel and vLex – and those hardly hallucinate at all. Not never, but a lot less often.

Q: What should members of the legal profession be doing to prepare themselves to use AI properly and ethically?

A: Lawyers have an obligation to educate themselves about technology – that’s in the ABA Model Rules, and most states have adopted the same requirement. They need to get smart about both the risks and the benefits where their clients are concerned, and there are an incredible number of resources out there to help them do that. I’m teaching a class at UPenn, for example, though the technology is moving so quickly that some of what I read this week will be irrelevant in January. Just to keep up with what’s happening with AI, I put in a number of hours every week and play with it every day. But since I strongly believe it will have a tremendous impact on the profession and my business, I believe the time is well spent.

Q: Experts in the field of AI have publicly warned about the threat this technology can pose to humanity and urged governments to act. Does the power of AI worry you, and is regulation even possible?

A: It doesn’t frighten me. There are certainly very serious people who work in the generative AI field, and spend all their time on that topic, who are sounding alarms. That’s not frivolous; I just happen not to be in that camp. I do think the technology is accelerating, and we are barreling toward a future that is hard for our human brains to understand. Artificial general intelligence, where machines are smarter than us in all ways rather than just some, scares people. I think it will happen, and likely on a timeline that is three to five years out.

On regulation, it’s going to be very hard for government to stay ahead of where the technology is going and how fast it’s going there. I applaud the White House for its recent massive executive order on this topic; that’s a good start for how government should take regulatory steps in the direction of this technology, but again, it’s going to be hard.

Q: Does the legal profession need to adopt new, AI-specific rules for its own operations?

A: The legal system is self-regulating. There are risks and benefits, and it’s our job to think about those. The current Rules of Professional Conduct already govern the use of this technology. For example, lawyers have a duty not to submit false information to the court. That was true before ChatGPT, and the fact that they’re doing it with a new technology doesn’t change the regulatory framework governing their behavior. It was unethical before ChatGPT, and it is still unethical.

Q: Do you see AI and its impacts and challenges creating a new area of practice for lawyers?

A: I don’t think it’s a new practice area; it’s a new application of an old one. It will be interesting to see where the law lands in the cases now being litigated involving, for example, artists, publishing houses, and authors. My guess is that these large language model companies will work out licensing agreements. To be clear, though, that doesn’t mean there won’t be disputes about how content can be crawled or used. Those disputes could be resolved under a new set of rules, and lawyers will have plenty of work to do.

Q: You have spoken publicly about the possibilities of AI in the area of dispute resolution. Can you expand on that?

A: You can imagine some simple disputes where users would welcome faster and cheaper processes. In a simple dispute that could be decided on documents only, for example, an AI could read all the paperwork and spit out a decision that, in some cases, people would be quite happy with. It would be quick and cheap, and they could then move on. There are plenty of disputes to go around for both public and private systems, and I’m betting the market sorts that out pretty well.

Q: Critics of AI say that it is biased – just like humans are – because it is taught by humans. What are your thoughts on that, and how can bias be combated?

A: It’s a lot easier to de-bias a data set than to de-bias a human who has been elected and gets to keep their position almost no matter what happens. That’s in part why people of color and women, for example, might be excited about the idea of an online dispute resolution system. These models are trained on the data that we’ve produced, and we are a biased species, so we’ve produced biased data. But you can fix that – you can de-bias a data set – and there are people who do that full time. The difference, I think, is that the opportunity to de-bias a data set might offer more upside potential than there is with some humans. That said, some people will never accept a decision from a machine, and others will not accept one for certain disputes. I understand that, but there are enough disputes to go around for all the resolution systems. In fact, having a new resolution option can get us closer to access to justice if disputes that now have nowhere to go for resolution gain one.
