Artificial Intelligence Is Here To Stay—Human Mediators Need To Learn the Lingo

By Hana Glasberg

Seemingly every week brings a new headline about artificial intelligence (AI). Researchers, computer scientists, engineers, and ordinary people around the world are thinking more and more about AI and its apparently endless potential in our daily lives. The legal field has not been spared. Will AI ever replace lawyers? Judges? What about the human mediator? In June 2023, a judge discovered that an AI-produced brief had cited and described three nonexistent cases.1 Before we address these evolving existential questions as they pertain to mediation and alternative dispute resolution (ADR) in general, we must all be grounded in the fundamentals of AI in order to understand the initial reactions to it.

Mediation can be defined as the process of assisting in the negotiations of others.2 A third party, typically someone with no involvement in the dispute or connection to its participants, attempts to spark discussion between the disputants so that they begin talking through the issues and working toward a resolution.3 The Charter of the United Nations identifies mediation as an important means of peaceful conflict management and dispute resolution.4 In 2012, the United Nations issued “Guidance for Effective Mediation” as an annex to the secretary-general’s report, “Strengthening the Role of Mediation in the Peaceful Settlement of Disputes, Conflict Prevention and Resolution.”5

Unlike litigation, mediation is governed by the principle of consent.6 Mediators do not issue rulings or compel parties to accept an agreement.7 Even if participation in mediation is mandatory, parties can end the process at any time and do not have to agree to a resolution.8

What Is Artificial Intelligence?

Stanford University Professor of Computer Science John McCarthy succinctly defined AI in 2007 as “the science and engineering of making intelligent machines, especially intelligent computer programs. [AI] is related to the similar task of using computers to understand human intelligence, but [it] does not have to confine itself to methods that are biologically observable.”9 Artificial intelligence combines computer science and data sets to enable problem solving.10

In 2021, the European Commission proposed legislation that would regulate the use of AI across the European Union.11 The proposed regulation adopts a risk-based approach, distinguishing between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) a low or minimal risk.12 “Unacceptable” risks include practices that would harm vulnerable populations such as children and individuals with disabilities, or that would undermine existing personal data privacy and consumer protections.13

The proposed legislation lists as high-risk those AI systems that perform biometric identification and categorization of natural persons; manage and operate critical infrastructure; are used in education and vocational training; control access to and enjoyment of essential private services and public services and benefits; are used by law enforcement or in migration, asylum, and border control management; or assist in the administration of justice and democratic processes.14 The EU proposal specifically classifies as high-risk “AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”15

AI, as it exists today, can be sorted into four general categories: generative AI, big data, machine learning, and automated decision-making. Each has its own defining characteristics and potential applications to the field of mediation.

Generative AI

Generative AI tools are designed to generate natural-language responses, images, or computer code from text prompts provided by human users.16 Generative AI tools are not limited to a single type of information input and are equipped to handle large, complex sets of data.17 The behavior of generative AI can be complex and can surprise even the developers who built the tools.18 General-purpose AI models are sometimes called “foundation models” and often serve as pre-trained bases on which more specialized use cases or software are built.19 For instance, a single general-purpose AI system for language processing can serve as the foundation for several hundred applied models (e.g., chatbots, ad generation, decision assistants, spambots, translation), some of which can then be further fine-tuned into applications tailored to the end user.20 In the mediation context, generative AI could, for example, reach out to parties for calendar availability to schedule a session, draft a mediator’s opening statement, or suggest settlement terms that would benefit both parties.
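
To make this concrete, the following is a minimal sketch of how a generative model might be prompted to draft a mediator’s opening statement. It assumes the pre-1.0 interface of the openai Python client; the prompt, model choice, and case summary are invented for illustration and are not drawn from any tool described in this article.

```python
# Illustrative sketch only: uses the pre-1.0 "openai" Python client.
# The prompt, model choice, and workflow are assumptions for demonstration,
# not a tool described in the sources cited in this article.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical credential

case_summary = (
    "Two business partners dispute the division of proceeds "
    "from the sale of a jointly owned property."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a human mediator. Draft a neutral opening "
                "statement that explains the mediator's role, confidentiality, "
                "and the voluntary, consent-based nature of the process."
            ),
        },
        {"role": "user", "content": f"Case summary: {case_summary}"},
    ],
    temperature=0.3,  # lower temperature for a more measured, consistent draft
)

print(response["choices"][0]["message"]["content"])
```

Even in a sketch like this, note that the human mediator remains the editor of record: the model produces a draft, and the consent-based framing discussed above still has to come from the mediator.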

Big Data

Big data refers to large, complex sets of data so voluminous that traditional data processing cannot manage them.21 The relationship between AI and big data is synergistic.22 AI requires massive amounts of data to learn and improve its decision-making processes, while big data leverages AI for improved data analysis.23 Big data is wide, varied, and rapidly evolving, as the sources that feed it expand seemingly daily.24 Those sources include consumer intelligence, social media, credit information, retail purchase history, geographic location tracking, mobile and satellite data, behavior monitoring, demographic data, data from RFID machines, and data from sensors and wearable devices.25 Mediation case outcomes and settlement terms, if collected at all, could be consolidated as a big data source to help AI platforms predict mediation outcomes and offer better settlement terms. Transparency about how such data is used and sourced, along with data protection, privacy, and related concerns about collecting it, will likely become a point of dispute in the field. An early example is Picture It Settled®, a database containing thousands of cases contributed by lawyers, mediators, and other sources for the purpose of helping parties improve their results by modeling different potential scenarios.26 Some settlements are publicly available, but today this material is, by and large, highly guarded. Picture It Settled® supplements its control data set with “several thousand more anonymous cases,”27 and it will be interesting to see whether and how this space grows as practitioners look to integrate aspects of AI into dispute resolution.
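
As a rough illustration of what consolidating mediation outcomes into an analyzable data set might look like, consider the following sketch using the pandas library. The schema and the figures are entirely hypothetical; no real settlement data is represented.

```python
# Hypothetical sketch: aggregating anonymized mediation outcomes so that
# patterns (e.g., typical settlement as a fraction of the initial demand)
# could feed a predictive model. The schema and figures are invented.
import pandas as pd

records = pd.DataFrame(
    {
        "case_type": ["contract", "employment", "contract", "tort"],
        "initial_demand": [100_000, 250_000, 80_000, 500_000],
        "settled_amount": [60_000, 150_000, 55_000, 300_000],
        "sessions": [2, 4, 1, 5],
    }
)

records["settlement_ratio"] = records["settled_amount"] / records["initial_demand"]

# Summary statistics a platform like Picture It Settled(R) might model at scale.
summary = records.groupby("case_type").agg(
    mean_ratio=("settlement_ratio", "mean"),
    mean_sessions=("sessions", "mean"),
)
print(summary)
```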

Machine Learning

The Bank of England has defined machine learning (ML) as “the development of models for prediction and pattern recognition from data, with limited or no human intervention.”28 Because ML involves a computer program running a model or recognizing patterns in data largely on its own, it contrasts with generative AI, where a human user steers the output by posing prompts and questions.29 Machine learning often builds on big data.30 ML applications often consist of multi-step processes: acquiring and ingesting data, processing that data, creating a model, testing the model, and then deploying it.31
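
The following minimal scikit-learn sketch maps onto those steps. The features, labels, and model choice are synthetic assumptions for illustration, not a recipe from the Bank of England report.

```python
# Minimal sketch of the multi-step ML process described above
# (acquire/ingest data, process it, create a model, test it, deploy it).
# Data here is synthetic; features and labels are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Step 1: data acquisition/ingestion (synthetic stand-in for a real source).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # e.g., three numeric case features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # e.g., settled (1) vs. not settled (0)

# Steps 2-3: processing and model creation, bundled in one pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])

# Step 4: testing on held-out data the model never saw during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Step 5: deployment would expose model.predict(...) behind an application.
```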

A graduate center in Switzerland has been experimenting with ML in the context of peace mediation.32 The researchers used a simple coding framework to identify parties’ positions, premises, and conclusions across several test scenarios: parliamentary debates in India, the border question between Ireland and Northern Ireland, the identification of federal states in South Sudan, and social media traffic around the political crisis in Cameroon.33 They found that analyzing parliamentary debates was useful for learning the reasoning of politicians and high-level decision-makers, and that analyzing social media content helped show how arguments form and travel across the web, but that actively allowing parties to “opt in” with their positions through surveys or focus groups may be the most useful approach.34
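
A toy version of that coding idea can be sketched as a text classifier that labels statements as a position, premise, or conclusion. The labeled examples below are invented, and the researchers’ actual coding scheme and models are not reproduced here.

```python
# Toy sketch of the coding-framework idea: classify statements as a
# "position," "premise," or "conclusion." The labeled examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "We demand full control of the border region.",            # position
    "Our community has lived on this land for generations.",   # premise
    "Therefore, any settlement must include land rights.",     # conclusion
    "The province must receive a share of oil revenues.",      # position
    "Revenue sharing has stabilized other federations.",       # premise
    "It follows that a revenue formula should be negotiated.", # conclusion
]
train_labels = ["position", "premise", "conclusion"] * 2

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["Hence, the ceasefire terms must be revisited."]))
```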

The United Nations has been working on a similar project. The U.N. Department of Political and Peacebuilding Affairs’ Middle East Division is working on a tool driven by machine learning to evaluate public sentiment towards a peace agreement across Arab nations.35 This tool uses machine learning to mine social media sources and conduct digital focus groups to get thousands of public opinions in real time.36 Eventually, the U.N. will be able to test different aspects of a potential peace agreement in real-time on behalf of the parties in conflict, thereby expanding the inclusivity of peace negotiations.37
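
The U.N.’s tool itself is not publicly available, but the sentiment-scoring step at its core can be illustrated generically. The sketch below uses the Hugging Face transformers library with its default sentiment model; the library choice and the sample posts are assumptions for demonstration only.

```python
# Generic sketch of the sentiment-scoring step such a tool would need.
# This is NOT the U.N.'s system; the library, model, and sample posts are
# illustrative assumptions.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

posts = [
    "The draft agreement finally gives our region a real voice.",
    "Nothing in this deal addresses the displaced families.",
]

for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```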

ML could be applied to mediation in at least two ways: by training a model on a data set of past mediation cases and using it to inform the outcomes of future cases,38 or, as in the U.N.’s application, by using ML to survey parties on a variety of possible outcomes and prompt them to rank their choices.
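
The second application, surveying parties and having them rank candidate outcomes, can be illustrated with a simple ranked-choice tally. The sketch below uses a Borda count purely as one plausible aggregation rule; the outcomes and ballots are invented.

```python
# Minimal sketch of aggregating party rankings of candidate settlement terms.
# A Borda count is used purely for illustration; outcomes and ballots are invented.
from collections import defaultdict

outcomes = ["lump-sum payment", "installment plan", "asset transfer"]

# Each ballot lists one party's outcomes from most to least preferred.
ballots = [
    ["installment plan", "lump-sum payment", "asset transfer"],
    ["lump-sum payment", "installment plan", "asset transfer"],
    ["installment plan", "asset transfer", "lump-sum payment"],
]

scores = defaultdict(int)
for ballot in ballots:
    for position, outcome in enumerate(ballot):
        scores[outcome] += len(outcomes) - 1 - position  # top choice scores highest

for outcome, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score}  {outcome}")
```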

Automated Decision-Making

The government of the United Kingdom defines automated decision-making (ADM) as the process of making a decision by automated means, without any human involvement.39 An example of ADM is a loan-approval process in which software reviews the application and third-party data to decide whether to extend credit.40
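
A deliberately simple, rule-based version of that loan example might look like the following sketch, in which software alone evaluates the application. The fields and thresholds are invented for illustration.

```python
# Deliberately simple rule-based sketch of the loan example above:
# the decision is made entirely by software, with no human involvement.
# Fields and thresholds are invented for illustration.

def decide_loan(application: dict) -> dict:
    factors = []
    approved = True

    if application["credit_score"] < 640:
        approved = False
        factors.append("credit score below 640")
    if application["debt_to_income"] > 0.43:
        approved = False
        factors.append("debt-to-income ratio above 43%")

    if approved:
        factors.append("all automated checks passed")

    # Recording the principal factors supports the kind of disclosure
    # obligations discussed in the paragraphs that follow.
    return {"approved": approved, "principal_factors": factors}

print(decide_loan({"credit_score": 612, "debt_to_income": 0.38}))
# -> {'approved': False, 'principal_factors': ['credit score below 640']}
```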

As one might imagine, ADM requires vast sets of personal data to run. Privacy laws are becoming an increasingly common vehicle for lawmakers to address the risks inherent in ADM.41 The EU’s General Data Protection Regulation (GDPR) gives data subjects the right not to be subject to a decision based solely on ADM.42 Brazil has a similar statute, which gives data subjects the right to request review of decisions made solely on the basis of “automated processing of personal data affecting his/her interests,” including any decisions intended to define their personal, professional, consumer, or credit profile, or aspects of their personality.43 Québec, Canada has enacted legislation, effective in September 2023, requiring a public body that renders a decision based exclusively on automated processing of personal information to inform the person concerned at or before the time of the decision.44 At the individual’s request, the body must also disclose the personal information used to render the decision, the reasons and the principal factors and parameters that led to it, and the person’s right to have the personal information used to render the decision corrected.45
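
Provisions like these effectively require ADM operators to be able to explain their decisions after the fact. The following sketch shows one hypothetical shape such a record could take; the field names are assumptions, not statutory terms.

```python
# Hypothetical sketch of the record an ADM operator might retain so it can
# meet notice, explanation, and correction obligations like those described
# above. Field names are assumptions, not statutory terms.
from dataclasses import dataclass, field


@dataclass
class ADMDecisionRecord:
    subject_id: str
    decision: str
    personal_data_used: list[str]
    principal_factors: list[str]
    subject_notified: bool = False
    correction_requests: list[str] = field(default_factory=list)

    def request_correction(self, detail: str) -> None:
        """Log a data subject's request to correct the inputs."""
        self.correction_requests.append(detail)


record = ADMDecisionRecord(
    subject_id="applicant-001",
    decision="loan denied",
    personal_data_used=["credit score", "debt-to-income ratio"],
    principal_factors=["credit score below 640"],
    subject_notified=True,
)
record.request_correction("credit score reflects a paid-off account")
print(record)
```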

In the United States, there has been no broad federal statutory regulation of personal data privacy. States have been taking their own approaches, with California acting first, followed by Colorado, Virginia, and Connecticut.46 Just this year, New York City issued a final rule regulating employer use of automated employment decision tools in hiring and promotions.47

Given the lack of federal regulation in the U.S., businesses using ADM for important decisions will likely not be required to notify individuals when their personal data feeds an ADM system, or when ADM drives a decision that affects them, such as a loan approval or a credit score.

As we determine how best to incorporate AI into dispute resolution, we will have no choice but to confront issues of transparency, privacy, control, and human judgment. A fundamental understanding of the processes involved is the necessary first step for us all. AI is not going away any time soon.

Hana Glasberg is a J.D. candidate at Fordham University School of Law and a vice president in the legal & compliance group at Blackstone. Hana received a bachelor of arts degree in 2015 from Johns Hopkins University in international studies and East Asian studies. The views expressed in this commentary are the personal views of the author. Hana can be reached by email at [email protected].

Endnotes

1 Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, The New York Times (May 27, 2023), https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html?smid=url-share.

2 Douglas N. Frenkel & James H. Stark, The Practice of Mediation 3 (Wolters Kluwer, 3rd ed. 2018).

3 Id.

4 U.N. Charter, Ch. VI, Art. 33.

5 U.N. Guidance for Effective Mediation (Sept. 2012); U.N. Docs. (A/66/811, 25 June 2012).

6 Jacqueline Nolan-Haley, Informed Consent in Mediation: A Guiding Principle for Truly Educated Decisionmaking, 74 Notre Dame L. Rev. 775, 777 (1999).

7 Frenkel & Stark, supra note 2.

8 Id.

9 John McCarthy, What Is Artificial Intelligence?, Stanford University (Nov. 12, 2007, 2:05 AM), https://www-formal.stanford.edu/jmc/whatisai.pdf.

10 What Is Artificial Intelligence?, IBM, https://www.ibm.com/topics/artificial-intelligence (last visited Apr. 26, 2023).

11 Proposal COM 2021/0106 (COD).

12 Id. at 5.2.2.

13 Id.

14 Id. at Annex III.

15 Id.

16 Angus Loten, PricewaterhouseCoopers To Pour $1 Billion Into Generative AI, The Wall Street Journal (Apr. 26, 2023, 5:32 AM), https://www.wsj.com/articles/pricewaterhousecoopers-to-pour-1-billion-into-generative-ai-cac2cedd.

17 General Purpose AI and the AI Act 3, Future of Life Institute, May 2022 (last accessed Apr. 28, 2023), https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf.

18 Id. at 4.

19 Id. at 3.

20 Id. (citing GPT-3 Powers the Next Generation of Apps, OpenAI (last accessed Apr. 28, 2023) https://openai.com/blog/gpt-3-apps/).

21 What Is Big Data?, Oracle (last accessed Apr. 28, 2023) https://www.oracle.com/big-data/what-is-big-data/#defined.

22 Big Data AI, Qlik (last accessed Apr. 28, 2023), https://www.qlik.com/us/augmented-analytics/big-data-ai.

23 Id.

24 State of Conn. Insurance Dep’t, Notice to All Entities and Persons Licensed by the Conn. Insurance Dep’t (April 20, 2022).

25 Id.

26 Picture It Settled, http://www.pictureitsettled.com/about-program/.

27 Id.

28 Machine Learning in UK Financial Services, Bank of England Financial Conduct Authority, Oct. 2019, https://www.fca.org.uk/publication/research/research-note-on-machine-learning-in-uk-financial-services.pdf.

29 Id. at 6. See also supra Part II.b.i (discussing generative AI).

30 See supra Part II.b.ii (discussing big data).

31 Machine Learning in UK Financial Services, supra note 28, at 21.

32 See Andreas Hirblinger, Snapshots From Our Pilot Analysis: What’s Behind Conflict Stakeholders’ Positions?, Mediating Machines (Aug. 19, 2019), https://mediatingmachines.com/updates/2020-08-19-snapshots-from-our-pilot-analysis-what’s-behind-conflict-stakeholders’-positions/.

33 Id.

34 Id.

35 Katharina E. Höne, Mediation and Artificial Intelligence: Notes on the Future of International Conflict Resolution 13, DiploFoundation (Nov. 2019), https://www.diplomacy.edu/wp-content/uploads/2021/06/Mediation_and_AI.pdf (citing U.N. Docs, Digital Technologies and Mediation in Armed Conflict 13-14 (Mar. 2019), https://peacemaker.un.org/sites/peacemaker.un.org/files/DigitalToolkitReport.pdf).

36 U.N. Docs, supra note 35, at 14.

37 Id.

38 Audrey Berland, Artificial Intelligence (AI) and Mediation: Technology-Based Versus Human-Facilitated Dispute Resolution, JDSupra (Mar. 8, 2023), https://www.jdsupra.com/legalnews/artificial-intelligence-ai-and-1573917/.

39 Information Commissioner’s Office of the UK, Guide to UK GDPR (Jun. 5, 2018), https://ico.org.uk/media/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling-1-1.pdf.

40 Id.; see also Mary Shacklett, AI Decision Automation: Where It Works, and Where It Doesn’t, Tech Republic (Nov. 17, 2020), https://www.techrepublic.com/article/ai-decision-automation-where-it-works-and-where-it-doesnt/.

41 Avi Gesser et al., New Automated Decision-Making Laws: Four Tips for Compliance, Debevoise & Plimpton Data Blog (Jun. 25, 2022), https://www.debevoisedatablog.com/2022/06/25/new-automated-decision-making-laws-four-tips-for-compliance/.

42 2016 O.J. (L119) 679, Ch. III, § 4, Art. 22.

43 Gesser, supra note 41 (citing Brazil Law No. 13,709 (August 14, 2018), Art. 20).

44 Id. (citing Quebec Bill 64, 20 §65.2).

45 Quebec Bill 64, 20 §65.2.

46 See CAL. CIV. CODE § 1798.100; see also VA. CODE § 59.1, COLO. S. B. 21-190 (2021), CONN. S. B. 6.

47 NYC Local Law 144 of 2021.