Bias and Fairness in Artificial Intelligence

By Luca CM Melchionna

June 29, 2023

An attorney has taken on a multifaceted case and can’t decide whether to use artificial intelligence to meet discovery demands involving 100,000 sensitive documents. While AI can save money by selecting only the most pertinent documents, the lawyer does not want to risk the client’s privacy by exposing sensitive documents to AI’s DIALOG DTE computer program. What to do?

It’s a good question, but it’s only the beginning. There are many other questions about AI – its lack of transparency, for example, or its potential for intentional use of false information.

Perhaps even more important, what about AI’s potential for bias and unfairness? It is well documented that AI can spread bias if the program’s designer is biased. To combat this, lawyers need to recognize and guard against computer-generated bigotry to protect their clients and their professional reputations. This article will examine the issue of bias and fairness in AI from all angles, including how it works and how it can be misused.

AI – It’s Everywhere

The presence, use and application of artificial intelligence are expanding rapidly, not only in new and traditional industries but also as a tool at the disposal of our professional lives. AI developments involve education, trading, healthcare (e.g., with the recent discovery of the structure of the protein universe[1]), e-commerce, marketing and social media, just to name a few. This is because AI has various functional applications, including, among others, speech processing, predictive analytics, distributed AI and natural language processing.[2]

Recently, the public at large has had the opportunity to interact on a regular basis with a machine learning tool within the realm of natural language processing. ChatGPT-4 is a chatbot, or natural language tool, developed by OpenAI that permits conversation (interrogation and responses) conducted in standard (or natural) language.[3] As of January 2023, ChatGPT had approximately 100 million users.[4] ChatGPT is not an isolated example: the natural language processing field is expanding rapidly. There are probably a couple dozen similar chatbot natural language processing tools on the market, including Chatsonic, YouChat, Bing AI Chat and Google Bard AI, just to name a few.[5]

New Technology, Old Liabilities

As the popularity of machine learning soars, attorneys and courts must analyze not only the application in certain industries, but also its legal implications. For example, because ChatGPT collects information on the internet, a bug recently exposed the payment information of 1.2% of its users.[6] Another recent study found that ChatGPT-4 can spread more misinformation and false narratives than its prior version (facilitating the construction of disinformation campaigns by bad actors).[7] In such cases, the analysis (conducted by humans[8]) relates to additional legal consequences, including but not limited to the liabilities linked to the actors who created these false narratives. Even if ChatGPT-4 can pass the bar exam,[9] it cannot actually perform a legal self-assessment of the various dimensions of its own attention, values, rights and liabilities, much less those of someone else.[10]

This short contribution intends to focus on a very limited aspect of AI: bias and fairness.

AI and Machine Learning Definitions

Initial frenzy (or overenthusiasm) for new chatbots may start to vanish once most humans realize that these AIs fail the mirror test, which assesses an entity’s capacity for self-awareness.[11] ChatGPT – a machine created by humans – is an autocompleting system mimicking human conversation. While individuals can learn from a failed test because they are sentient and self-aware, chatbots cannot.[12] Machines are not sentient, and if they were eventually able to acquire self-awareness, rather than simply mimicking this trait, they would be classified differently.[13] In other words, machines currently are lifeless, have no conscience and need humans to program them and to perform tasks.

How can we define artificial intelligence and machine learning? The first step in the direction of machine learning was provided by the 1950 Turing Test (aka the “imitation game”) in which an interrogator had to discover whether he or she was interrogating a human or a machine and, therefore, whether a machine can show human-like intelligence.[14] In 2007, AI was defined as the “science and engineering of making intelligent machines, especially intelligent computer programs.”[15] In 2018, Microsoft defined AI as “a set of technologies that enable computers to perceive, learn, reason and assist in decision-making to solve problems in ways that are similar to what people do.”[16] More recently, AI has been defined as “a system that thinks and acts like humans.”[17]

Similarly, machine learning is defined as a subset of AI that involves the use of data and algorithms to mimic the way in which humans learn, incrementally reducing the margin of error.[18]

In both definitions, we need to further distinguish among the training algorithm (unbiased by definition), the dataset (potentially biased) and the model created (potentially biased). If a model thinks and acts like a human, it is because someone built it by training the algorithm on a dataset. Therefore, if the dataset is factually incorrect or biased, the model will show the same bias (because the training algorithm is not aware of being biased).[19]
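
To make this distinction concrete, the following is a minimal sketch in Python (assuming the numpy and scikit-learn libraries are available; the lending scenario, feature names and numbers are purely hypothetical, not a real system). The training algorithm is neutral, yet the model it produces reproduces the bias baked into the historical dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# One legitimate feature (a standardized, hypothetical credit score) and one
# sensitive attribute (group 0 / group 1).
score = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# The historical decisions recorded in the dataset are biased: at the same score,
# members of group 1 were approved less often.
p_approve = 1 / (1 + np.exp(-score)) - 0.25 * group
y = (rng.random(n) < np.clip(p_approve, 0, 1)).astype(int)

# The training algorithm itself is neutral: it only minimizes prediction error.
model = LogisticRegression().fit(np.column_stack([score, group]), y)

# Yet the learned model reproduces the historical bias: at an identical score,
# group 1 receives a lower predicted approval probability than group 0.
same_score = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_score)[:, 1])
```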

How AI Works: Learn to Reduce Errors

Briefly, AI and machine learning use a template function, training data and a training algorithm to try to learn the “optimal parameter values” of a model that can accurately predict the outcome for a new example or a new set of facts. Past experiences and facts are used as a basis to instruct the machine to predict future outcomes. A template function can be linear or non-linear. Non-linear relationships are harder to train because of their intrinsic complexity. A neural network is one of the available models with non-linear relationships.[20]
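
As an illustration only (hypothetical numbers, written in Python with numpy), the sketch below shows a linear template function y = w·x + b whose two parameter values are adjusted step by step against known past examples until the prediction error becomes small.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                  # past facts (inputs)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)    # known past outcomes (with noise)

w, b = 0.0, 0.0                              # initial parameter values of the template y = w*x + b
lr = 0.01                                    # learning rate (step size)

for _ in range(2_000):                       # repeatedly nudge the parameters to reduce the error
    pred = w * x + b
    err = pred - y
    w -= lr * (2 * err * x).mean()           # gradient step for the slope
    b -= lr * (2 * err).mean()               # gradient step for the intercept

print(round(w, 2), round(b, 2))              # converges toward the underlying 3.0 and 2.0
```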

A neural network is arranged in various layers, each with a number of nodes, and an architecture geared toward the task the system is designed to address. The network is trained to solve a particular problem and is tested against known outcome values so that adjustments can be made and the marginal outcome error reduced close to zero.[21]
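
A minimal sketch, assuming scikit-learn is installed: a small network with two hidden layers of nodes is trained against the known outcomes of a hypothetical non-linear problem (one a linear model cannot solve) until its error on those outcomes is close to zero.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (1_000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # a non-linear relationship a linear model cannot capture

# Two hidden layers of 16 nodes each; the architecture is chosen for the task at hand.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2_000, random_state=0)
net.fit(X, y)                                # weights are adjusted to reduce the error on known outcomes

print(net.score(X, y))                       # accuracy on the known outcomes, ideally close to 1.0
```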

AI and Bias

The creation of a model and the use of a particular dataset depend on the free will/choice of the human creator or on the creator’s need to perform a contractual obligation. This input affects the machine much like the process of imprinting.

AI bias is the voluntary or involuntary imprinting of one or more human biases in one or more datasets. The model delivers biased results because of fallacious assumptions embedded in the training data provided to the neural network.

Bias can be found in a model trained on a biased dataset composed of biased human decisions, historical and social inequities and/or ignored variables such as gender, race or national origin, with unreliable results as the consequence.[22]

Once instilled into an algorithm or system, bias can be corrected if the biased source is detected, or through anonymization and direct calibration.[23] However, once bias and/or misrepresentations are in the system, the damaged output is already in the world. Studies have demonstrated biases in pharmaceutical healthcare as well as in law enforcement facial recognition algorithms.[24]
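
The following Python sketch (hypothetical scores, not a real system) illustrates the two steps mentioned above: detecting bias by comparing group selection rates, and one crude form of direct calibration, namely setting each group’s decision threshold at the same score quantile.

```python
import numpy as np

rng = np.random.default_rng(3)
group = rng.integers(0, 2, 10_000)                      # sensitive attribute (0 or 1)
scores = rng.beta(5, 2, 10_000) - 0.15 * group          # model scores that disadvantage group 1

def selection_rate(thresholds):
    # apply a per-group threshold and report how often each group is selected
    decided = scores >= np.where(group == 1, thresholds[1], thresholds[0])
    return [round(decided[group == g].mean(), 2) for g in (0, 1)]

# Detection: one common threshold yields very different selection rates per group.
print(selection_rate((0.5, 0.5)))

# Direct calibration (one crude form): set each group's threshold at the same
# score quantile, which roughly equalizes the selection rates.
q0, q1 = (np.quantile(scores[group == g], 0.5) for g in (0, 1))
print(selection_rate((q0, q1)))
```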

Bias, misrepresentation and errors generated by AI are still numerous, so AI as a product may fail to meet certain expectations.[25] In the facial recognition sector, the scholar Najibi suggests that, to overcome AI’s bias, it would be necessary to enlarge the dataset used as a training ground for the algorithm.[26] However, Gebru et al. warned that the larger the dataset used, the higher the risk of embedded biases and misrepresentations.[27] The current level of misinformation produced by ChatGPT-4 has proved them correct.

In the area of recidivism prediction, a 2016 ProPublica report showed that the use of the COMPAS algorithm (Correctional Offender Management Profiling for Alternative Sanctions[28]) was biased against Black individuals. In its conclusions, the report states: “Black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, and white recidivists were misclassified as a low risk 63.2% more often than black defendants.”[29]
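
For illustration, here is a minimal Python sketch of the kind of error-rate comparison ProPublica performed. The data are entirely hypothetical and are not the COMPAS records, but the metric, the false positive rate per group, is the one at issue in the quoted finding.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
group = rng.integers(0, 2, n)                  # 0 and 1 stand in for two demographic groups
reoffended = rng.random(n) < 0.35              # hypothetical ground truth
# A hypothetical risk tool that flags group 1 more aggressively for the same behavior.
flagged_high_risk = rng.random(n) < np.where(reoffended, 0.6, 0.2) + 0.15 * group

for g in (0, 1):
    did_not_reoffend = (group == g) & ~reoffended
    fpr = flagged_high_risk[did_not_reoffend].mean()   # misclassified as high risk
    print(f"group {g}: false positive rate = {fpr:.2f}")
```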

In the field of pain medication, AI failed to detect patients in need.[30] In the area of loan applications and mortgages, AI proved to discriminate systematically against Black applicants at a higher rate than white applicants.[31]

If bias in AI is not recognized, isolated and corrected, the risks, repercussions and damages for our society (and for AI as a technology) outweigh the money and time savings AI was intended to realize through its original goals of problem-solving and prediction. Bias in AI sows prejudice against groups and ideas and limits the advancement of the technology.

Countering Bias: AI Fairness

To resolve these failures, legislators, regulators and researchers have identified and proposed several measures and initiatives geared towards fairness and reducing prejudice.

In 2023, the U.S. National Institute of Standards and Technology (NIST) introduced the first Artificial Intelligence Risk Management Framework, conceived “to better manage risks to individuals, organizations and society associated with artificial intelligence (AI).”[32] In its executive summary, the framework states that

“AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. [. . .] AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development and uses with intended aim and values. Core concepts in responsible AI emphasize human centricity, social responsibility and sustainability. [. . .] The Framework is designed to [. . .] AI actors [. . .] to help foster the responsible design, development, deployment.”

Previously, in 2022, the White House Office of Science and Technology Policy released a white paper intended to be a reference point in the design, use and deployment of machine learning systems to “protect the American public in the age of artificial intelligence.”[33] Privacy and protection against discrimination take center stage.

Tasked by Congress, the Federal Trade Commission entered this space in August 2022 with the goal of creating new regulations to combat online scams, deepfakes, child sexual abuse, terrorism, hate crimes and election-related disinformation. Regarding commercial surveillance, the FTC requested comments from the public in order to stop the use of AI to collect, analyze and profit from information about consumers’ private lives. According to the FTC, surveillance with AI leads to inaccuracies, bias and discrimination.[34]

From a legislative perspective, Congress passed, and President Biden signed into law, two pieces of legislation: in October 2022, the Artificial Intelligence Training for the Acquisition Workforce Act,[35] on federal agency procurement of AI, and, in December 2022, the National Defense Authorization Act,[36] which directs the defense and intelligence agencies to integrate AI systems and capabilities.

Among the legislative measures proposed to combat bias, it is also important to mention the Algorithmic Accountability Act of 2022,[37] which, if and when signed into law, would allow the FTC to verify bias analyses of AI in various fields, including employment, finance, healthcare and legal services. Additionally, California, New Jersey, Colorado and New York City have introduced various measures to combat bias.[38]

Pagano et al. state that “more research is needed to identify the techniques and metrics that should be employed in each particular case in order to standardize and ensure fairness in machine learning models.”[39] Additionally, Charles suggested that AI should use more representative datasets, inclusive of more diverse human groups, coupled with human monitoring.[40]

AI, Fairness and Litigation

Fairness is not inherent in a model even when the training algorithm is fair by design. Qualitatively, the model can be black-box or transparent. If fairness is the reference point for qualifying a model as reliable, the same should hold during its deployment, whether by the private sector or the government. If transparency is an issue, then human supervision is the last resort to control, manage and correct an algorithm.[41]

Because algorithms are nothing more than a set of instructions for solving a problem or accomplishing a task, parameters and options can be selected, or even manipulated at a later point in time, to reach certain results.
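
As a purely hypothetical sketch in Python (the items, categories and “category penalty” parameter are invented for illustration), the instructions of a simple ranking algorithm make the creator’s choices visible: a single parameter added at a later point in time is enough to demote a targeted category of content.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float          # how well the item matches the query
    category: str

def rank(items, category_penalty):
    # score = relevance minus a tunable penalty for a targeted category of content
    return sorted(items,
                  key=lambda it: it.relevance - category_penalty.get(it.category, 0.0),
                  reverse=True)

items = [Item("A", 0.90, "news"), Item("B", 0.85, "opinion"), Item("C", 0.80, "news")]

print([it.title for it in rank(items, {})])             # neutral instructions: A, B, C
print([it.title for it in rank(items, {"news": 0.2})])  # manipulated to demote "news": B, A, C
```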

In the last two years, litigation on algorithms has developed rapidly and largely centers on the biases of datasets and/or instructions. From the instructions provided, it is possible to determine the real intention of an algorithm’s creator.[42] It is possible to instruct the algorithm to reach a specific result (and sow prejudice). On Feb. 21, 2023, the U.S. Supreme Court heard oral arguments in Gonzalez v. Google.[43] The issue presented is whether Section 230(c)(1) of the Communications Decency Act[44] shields interactive computer services from liability arising from content posted on their platforms and created by third-party providers using the providers’ algorithms. Here, the justices are also called upon to consider whether a model’s goal is to affect the behavior of a targeted group of individuals. Manipulation can take many forms, including abusing bias or taking advantage of human insecurities.[45] This is the case with non-standard training algorithms created by developers on a case-by-case basis with the goal of preparing certain data for training.

Case law in this area is growing rapidly.[46]

Conclusions

The rapid development of AI has consequences for human wealth, democracy, government stability, research and education, health, employment and social welfare, just to name a few. Technology is an important component of human lives, and humans are becoming dependent on such tools. Are we in control, or do we want others to control us? That’s substantially the question that, on March 29, 2023, a group of technology experts raised when they recommended a pause on AI research.[47]

Humans are still in the driver’s seat when it comes to verifying the fairness of a machine learning system at the time of creation, deployment and application, and attorneys are clearly called upon to assess the liability of machine learning creators under various theories, including product defect, lack of transparency, abuse of privacy, fraud, unjust enrichment and intentional use of false information.

Luca CM Melchionna is managing member of the New York-based Melchionna. He has more than 25 years of experience in both private practice and academia in Italy and in the United States. He is a transactional attorney with a focus on regulatory, compliance, and M&A/tax. He was a visiting scholar at Columbia University.

[1] Demis Hassabis, AlphaFold Reveals the Structure of the Protein Universe, DeepMind, July 28, 2022, https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe.

[2] WIPO Technology Trend – Artificial Intelligence (2019), at 26.

[3] GPT-4 Is OpenAI’s Most Advanced System, Producing Safer and More Useful Responses, OpenAI, https://openai.com/product/gpt-4.

[4] Let’s Chat About ChatGPT, UBS Wealth Mgmt. Global, Feb. 23, 2023, https://www.ubs.com/global/en/wealth-management/our-approach/marketnews/article.1585717.html.

[5] Aaron Drapkin, Best ChatGPT AI Alternatives You Need to Try in 2023, Tech.co, March 29, 2023, https://tech.co/news/best-chatgpt-alternatives.

[6] Cecily Mauran, The ChatGPT Bug Exposed More Private Data Than Previously Thought, OpenAI Confirms, Mashable, March 24, 2023, https://mashable.com/article/openai-chatgpt-bug-exposed-user-data-privacy-breach.

[7] Sara Fisher, GPT-4 Readily Spouts Misinformation, Study Finds, Axios Media Trends, March 21, 2023, https://www.axios.com/2023/03/21/gpt4-misinformation-newsguard-study.

[8] As it stands today, humans should not state a disclaimer in the opening of an article, study or research to shield from liability.

[9] Samantha Murphy Kelly, ChatGPT Passes Exams From Law and Business Schools, CNN Business, Jan. 26, 2023, https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html.

[10] The mirror test (a.k.a. the mirror self-recognition test, MSR) is a technique developed by psychologist Gordon Gallup Jr. to determine whether an animal possesses the ability to recognize itself and to measure such physiological and cognitive self-awareness. Animals have negligible self-awareness. Humans take many years simply to pass the first five stages of self-awareness during early life. Philippe Rochat, Five Levels of Self-Awareness as They Unfold Early in Life, 12 Consciousness and Cognition 717–31 (2003).

[11] James Vincent, Introducing the AI Mirror Test, Which Very Smart People Keep Failing, The Verge, Feb. 17, 2023, https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test.

[12] Like in a mirror chamber, I pretend to be a machine using the pronouns “their” and “they.”

[13] Frank John Ninivaggi, Consciousness and Awareness in Learned Mindfulness (2020), https://www.sciencedirect.com/topics/psychology/self-awareness (“Self-awareness includes multiple dimensions of how one experiences the self (Duvall and Wicklund, 1972; Northoff, 2011). Self-awareness involves paying attention to oneself and consciously knowing one’s attitudes and dispositions. This mindful understanding comprises awareness of sensations, emotions, feelings, thoughts, the physical body, relationships with others, and how these interact.”).

[14] Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433–60 (1950).

[15] John McCarthy, What Is Artificial Intelligence? Formal Reasoning Grp., Nov. 12, 2007, https://www-formal.stanford.edu/jmc/whatisai.pdf.

[16] The Future Computed: Artificial Intelligence and Its Role in Society, Microsoft, 2018, https://blogs.microsoft.com/uploads/2018/02/The-Future-Computed_2.8.18.pdf.

[17] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed. (2020). The authors also add the definition of “a system that thinks and acts rationally.”

[18] Arthur Samuel, Some Studies in Machine Learning Using the Game of Checkers, 3 IBM J. 535–54 (July 1959), http://people.csail.mit.edu/brooks/idocs/Samuel.pdf.

[19] In 1843, Lady Lovelace stated that machines could be programmed to perform a sequence of operations, but also recognized their limitations and the fact that they could not generate new ideas or concepts on their own. As reported by Turing, this is also known as the Lovelace Objection: “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.” Lady Lovelace “Notes” on Luigi Menabrea Notions sur la machine analytique de M. Charles Babbage, in Bibliotheque Universelle de Geneve, (1842).

[20] Michael Mauriel, Andrew Noble and Rory Radding, Patenting Artificial Intelligence Inventions: Introduction and Selected Issues, NYSBA Bright Ideas, Summer 2020, vol. 29, no. 2, 4–10, 6.

[21] Id. at 6, 9.

[22] James Manyika, Jake Silberg, Brittany Presten, What Do We Do About the Biases in AI? Harvard Bus. Rev. Oct. 25, 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

[23] Paul Barba, 6 Ways To Combat Bias in Machine Learning, Builtin, March 2, 2021, https://builtin.com/machine-learning/bias-machine-learning.

[24] T. Gebru & J. Buolamwini, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, MIT Media Lab, Feb. 4, 2018, 1–15, https://www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification.

[25] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, A. Galstyan, A Survey on Bias and Fairness in Machine Learning, 54 ACM Computing Surveys 6, 1–35 (July 13, 2021), https://dl.acm.org/doi/10.1145/3457607; W. Sun, O. Nasraoui, P. Shafto, Evolution and Impact of Bias in Human and Machine Learning Algorithm Interaction, PLOS One (Aug. 13, 2020), https://doi.org/10.1371/journal.pone.0235502.

[26] Alex Najibi, Racial Discrimination in Face Recognition Technology, Harvard University Blog, Oct. 24, 2020, https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology.

[27] Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, Shmargaret Shmitchell, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, 610–23, https://dl.acm.org/doi/10.1145/3442188.3445922.

[28] https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/compas.pdf.

[29] Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, How We Analyzed the COMPAS Recidivism Algorithm, ProPublica, May 23, 2016, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

[30] Maia Szalavitz, The Pain Was Unbearable. So Why Did Doctors Turn Her Away?, Wired, Aug. 11, 2021, https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain.

[31] E. Martinez and L. Kirchner, The Secret Bias Hidden in Mortgage-Approved Algorithms, The Markup, Aug. 25, 2021, https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.

[32] Artificial Intelligence Risk Management Framework, Nat’l Inst. of Stds. and Tech., Jan. 2023, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[33] Blueprint for an AI Bill of Rights, The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights.

[34] FTC Explores Rules Cracking Down on Commercial Surveillance and Lax Data Security Practices, Fed. Trade Comm.: Press Release, Aug. 11, 2022, https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-explores-rules-cracking-down-commercial-surveillance-lax-data-security-practices.

[35] Pub. Law No: 117-207.

[36] FY23 Nat’l Defense Auth. Act.

[37] H.R. 6580.

[38] For a comprehensive view on federal, state and European state of the art, see Artificial Intelligence and Automated Systems 2022 Legal Review, Gibson Dunn Blog, Jan. 25, 2023, https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-2022-legal-review.

[39] Tiago Palma Pagano et al., Bias and Unfairness in Machine Learning Models: A Systematic Literature Review, Cornell University, arXiv:2202.08176, Nov. 3, 2022, https://arxiv.org/abs/2202.08176.

[40] Sergio Charles, The Algorithmic Bias and Misrepresentation of Mixed Race Identities by Artificial Intelligence Systems in the West, Research Based Argument, in Vol. 1 No. 1 (2023): AI Frameworks Discussion of Abeba Birhane’s “Algorithmic Injustice” and Social Impact Articles (Feb. 16, 2023), https://ojs.stanford.edu/ojs/index.php/grace/article/view/2592.

[41] Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms, 45 Computer Law & Security Review 1–22 (2022).

[42] The AI Now Institute provides a useful introduction. https://ainowinstitute.org/.

[43] Reynaldo Gonzalez et al. v. Google LLC et al., No. 21-1333.

[44] 47 U.S.C. § 230.

[45] George Petropoulos, The Dark Side of Artificial Intelligence: Manipulation of Human Behaviour, Bruegel Blog, Feb. 2, 2022, https://www.bruegel.org/blog-post/dark-side-artificial-intelligence-manipulation-human-behaviour

[46] A useful starting point is the AI Litigation Database of George Washington Law School, https://blogs.gwu.edu/law-eti.

[47] L. Mohammad, P. Jerenwattananon, J. Summers, An Open Letter Signed by Tech Leaders, Researchers Proposes Delaying AI Development, NPR, March 29, 2023, https://www.npr.org/2023/03/29/1166891536/an-open-letter-signed-by-tech-leaders-researchers-proposes-delaying-ai-developme.

 
