
Will AI Render Lawyers Obsolete?

By Annabel V. Teiling

October 7, 2025

Artificial intelligence is driving a new conversation about the workforce, specifically about its potential to automate tasks and replace humans across many professions. This ongoing discussion, fueled by both excitement over technological advances and anxiety about job displacement, has prompted critical debate and examination of how AI will integrate into and transform various professions, including the legal profession.

While the practice of law is human-centered and requires nuanced judgment, many of its foundational tasks, like legal research and data analysis, are now being automated by AI. The key question for the legal community is not whether AI will have an impact, but rather how it will reshape the profession. Instead of being seen as a future threat, at least for now, AI should be viewed as a tool that can complement and enhance lawyers’ work. The challenge is to adapt proactively to a constantly changing technological landscape, ensuring that the legal profession evolves in a way that preserves its core values while embracing new efficiencies.

Today’s Reality and the Use of AI in the Law

The use of artificial intelligence has become a reality, reshaping how certain legal tasks are performed daily. AI is more than just a search engine; lawyers now regularly employ machine learning tools for a range of functions, including e-discovery, contract analysis, and automating parts of the due diligence process.[1] Lawyers who do not incorporate relevant technological competence into their work will likely be outpaced by those who use these tools effectively. The legal community’s challenge is not just to acknowledge AI but to adopt it thoughtfully, modernizing and enhancing practices, boosting efficiency, and increasing client value.[2]

One of the key areas where law firms are using machine learning is e-discovery and document review. AI can quickly sift through massive volumes of documents, such as emails, contracts, and other case-related materials. It can identify relevant keywords, flag privileged information, and even estimate the likelihood that a document is relevant to a case. This automation drastically speeds up a previously time- and labor-intensive process, leading to substantial reductions both in legal fees for clients and in the internal resources and operational costs a firm must dedicate to the task.[3]
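
To make this concrete, the following sketch, offered purely as an illustration with invented documents and labels, shows in miniature how a review tool can rank unreviewed documents by likely responsiveness after learning from a small attorney-coded seed set:

```python
# A minimal sketch of ML-assisted document review (illustrative only).
# Assumes a small "seed set" of documents an attorney has already coded;
# commercial e-discovery platforms are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "Re: indemnification cap in the master services agreement",
    "Lunch order for the team offsite",
    "Draft term sheet attached; privileged and confidential",
    "Quarterly newsletter: office recycling program",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed corpus so likely-responsive documents surface first.
corpus = [
    "Amendment to the indemnification clause, draft v3",
    "Holiday party RSVP reminder",
]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for doc, p in sorted(zip(corpus, scores), key=lambda pair: -pair[1]):
    print(f"{p:.2f}  {doc}")
```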

Firms are also leveraging AI for contract analysis. AI tools can pinpoint and flag regulatory risks within contracts, enhancing both the speed and accuracy of due diligence. Rather than having lawyers manually work through hundreds or thousands of pages of contracts, AI can rapidly scan and interpret them, identifying and isolating specific clauses and data points. For example, AI can instantly identify a clause that violates or conflicts with a new data privacy law, a detail a human might miss. This is particularly valuable during the due diligence phase of mergers and acquisitions. Through AI tools, a firm can review a target company’s entire portfolio of contracts in a fraction of the time it would take a human, with reduced potential for human error, ensuring a faster and more thorough risk assessment.[4] The result is a more efficient, accurate, and cost-effective legal service for the client.[5]
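
The clause-spotting workflow can be pictured with an equally simplified sketch. The patterns and contract language below are hypothetical, and commercial tools rely on trained language models rather than hand-written rules, but the flag-and-review loop is the same:

```python
# A minimal rule-based sketch of clause flagging in due diligence
# (illustrative only; patterns and contract text are hypothetical).
import re

FLAGS = {
    "auto-renewal": re.compile(r"automatically\s+renews?", re.I),
    "unlimited liability": re.compile(r"unlimited\s+liability", re.I),
    "personal data transfer": re.compile(r"transfer\b.{0,60}personal\s+data", re.I),
}

contract = (
    "This Agreement shall automatically renew for successive one-year terms. "
    "Vendor may transfer customer personal data to affiliates outside the EEA."
)

for label, pattern in FLAGS.items():
    if pattern.search(contract):
        print(f"FLAG [{label}]: route to counsel for review")
```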

Finally, another major application of AI in law is predictive analytics used to gain a strategic advantage. Law firms are utilizing specialized platforms designed to forecast case outcomes. AI can process and analyze vast amounts of historical legal data, including court records, judicial opinions, and a judge’s prior rulings, far faster than any human team. By analyzing this information, AI can identify patterns and trends that are difficult for humans to spot and generate the statistical probability of a particular outcome, such as the likelihood of a motion being granted or a case settling, along with the risks and benefits of continuing litigation.[6]
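
In rough outline, such platforms fit statistical models to features drawn from past matters. The sketch below, using invented numbers and a deliberately tiny feature set, shows the general shape of a forecast; it is not any vendor’s actual method:

```python
# A minimal sketch of outcome forecasting on invented historical data.
# Features per past motion: [judge's historical grant rate, 1 if dispositive].
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.70, 1], [0.30, 1], [0.55, 0], [0.20, 0], [0.80, 1], [0.40, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = motion granted

model = LogisticRegression().fit(X, y)

# Probability that a new dispositive motion, before a judge with a 0.65
# historical grant rate, will be granted.
p = model.predict_proba([[0.65, 1]])[0, 1]
print(f"Estimated probability of grant: {p:.0%}")
```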

AI’s Capabilities and Limitations

AI’s primary strengths lie in its speed, scalability, and ability to recognize patterns within vast datasets.[7] However, legal work is not solely technical; it encompasses interpretive, relational, and ethically complex dimensions. The application of statutes, case law, and regulations frequently occurs in emotionally charged or morally ambiguous situations and draws on years of experience, knowledge, and understanding of the law.[8]

Crucially, legal advice depends on professional judgment and the establishment of interpersonal trust. AI cannot replicate the essential role of a lawyer who advocates in court, negotiates deals face-to-face, and counsels clients through crises where experience, human insight and empathy are indispensable.[9]

In the corporate sphere, AI’s limitations become particularly apparent when navigating regulatory landscapes that are ambiguous. While AI excels at identifying clear-cut compliance issues, it struggles with the subtle interpretations and strategic judgments often required in highly nuanced regulatory areas.[10] Assessing risk in these contexts frequently depends not only on the letter of the law, but also on the specific type of corporation, the nature of its products or services, and their potential impact on customers. Crucially, corporate culture plays a significant role.[11] A company’s appetite for risk, its ethical framework, and its long-term strategic goals all influence decisions in gray areas. For instance, if a regulatory interpretation is open to debate and a particular approach offers substantial customer benefits with only minor, manageable risks, a company might strategically choose to lean into that ambiguity. Conversely, if the potential consequences of misinterpretation include severe penalties, reputational damage, or significant harm to customers, a more conservative legal stance would likely be advised.[12] These complex balancing acts, requiring human judgment, ethical consideration, and an understanding of organizational specificities, are beyond AI’s capabilities.[13]

Practicing Without a License

Beyond the direct practice of law, the accessibility and apparent fluency of AI tools have inadvertently fostered a dangerous overconfidence among some non-lawyers, leading them to believe they can draft legal opinions or provide legal advice without a license.[14] This phenomenon is particularly concerning in corporate environments, where in-house business teams, lured in by quick AI-generated insights, might bypass legal counsel and rely on AI for complex legal interpretations or contract drafting, potentially leading to significant and unforeseen legal risks for a company.[15]

This false sense of legal expertise, propagated by AI’s ability to produce plausible-sounding but legally inaccurate or incomplete responses, extends to the public as well. Individuals might turn to AI for personal legal issues, misinterpreting its output as authoritative advice, which can lead to misguided decisions, unrepresented legal actions, or a fundamental misunderstanding of their rights and obligations.[16] This democratization of possibly misleading legal information poses a broad danger, undermining the regulated practice of law and potentially leading to serious negative consequences for individuals and organizations alike.

This may be precisely why the judgment of experienced lawyers, including seasoned in-house counsel who deeply understand the company’s specific context and risk profile, is more critical than ever to navigate these complexities and mitigate potential risks.[17] Instead of rushing to make AI available to all employees, corporations should consider whether all employees in fact require access to advanced AI tools and should, at the very least, implement clear and specific internal guidelines and policies governing their use, especially for legal or quasi-legal tasks.[18]

Professional Responsibility and Technological Competence

The integration of AI into legal practice places greater emphasis on a lawyer’s professional responsibility. The New York Rules of Professional Conduct mandate that lawyers maintain competence (Rule 1.1), a standard that now includes understanding the benefits and risks associated with relevant technology. This principle is rooted in ABA Model Professional Conduct Rule 1.1, Comment 8. The New York State Bar Association’s Task Force on Artificial Intelligence has echoed these concerns, cautioning lawyers to ensure that the use of these tools does not compromise confidentiality, diligence, or their independent judgment.[19]

The ethical risks of overreliance on AI are significant. Automated tools and systems can perpetuate existing biases, misinterpret legal language, and create overconfidence in their outputs. This can lead to flawed legal strategies and advice.[20]

Finally, it is crucial to emphasize that an AI tool is not licensed to practice law and, for now, cannot be. The practice of law is a highly regulated profession reserved for individuals who have met strict educational and ethical requirements. An AI system cannot pass the bar exam, swear an oath, or be held professionally accountable.[21] This is a fundamental distinction that places the full responsibility for legal advice and its ethical implications squarely on the shoulders of an attorney, who must always supervise and verify AI’s work.[22]

Perpetuating Existing Biases

Another potential pitfall is that automated tools, especially those leveraging AI, can perpetuate existing biases within the legal system, primarily because they learn from historical data that may reflect societal prejudices and inequalities. When this biased data is fed into an algorithm, AI can internalize and even amplify those biases in its outputs, leading to unfair or discriminatory outcomes.[23]

For example, algorithms that use historical crime data to predict where future crimes might occur or who might be involved can perpetuate bias.[24] If historical policing data reflects over-policing in certain minority neighborhoods, the algorithm might direct more resources to those areas, leading to more arrests and reinforcing the cycle of disproportionate targeting, even if actual crime rates are not higher. This can also lead to increased surveillance and discriminatory interactions for specific communities.[25]

While not exclusively a legal concern, AI tools used in recruitment processes for legal positions have also led, in certain instances, to the reproduction of biases from past hiring practices. If a firm historically favored certain demographics, an AI tool trained on that data might disproportionately screen out qualified candidates from underrepresented groups, regardless of their actual qualifications. This could lead to a less diverse workforce.[26]

These examples highlight why human oversight and careful auditing of AI systems and outputs are crucial to mitigate the risk of perpetuating discrimination within the legal system.[27]

AI Legal Misinterpretation

Automated tools, particularly large language models used in legal contexts, can misinterpret legal language in several critical ways due to their underlying architecture and the nature of legal discourse.[28] Unlike human lawyers who employ deductive logic and real-world understanding, AI models primarily operate on probabilistic language prediction, constructing text that sounds plausible but may lack substantive legal accuracy or contextual understanding.[29]

An AI hallucination occurs when an AI model, especially a large language model, generates false or misleading information while presenting it as fact. This happens because the AI’s primary function is to predict the most likely next words based on its training data, not to verify factual accuracy. If the data is incomplete or the model cannot find a correct answer, it will invent one that sounds plausible. AI models can fabricate case law, statutory provisions, or doctrinal interpretations that appear credible but are entirely false.[30] Hallucinated citations to non-existent legal precedent or statutes are now the most dangerous form of false or misleading information.
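
A toy model makes the mechanism easy to see. The following sketch builds the crudest possible “predict the next word” generator from a few lines of made-up training text; its output reads fluently precisely because it is assembled from statistical patterns, with nothing looked up or verified:

```python
# A toy next-word predictor (illustrative only): it continues each word
# with whatever most often followed it in the training text, so the result
# is fluent and plausible-sounding but never checked against any source.
from collections import Counter, defaultdict

training_text = ("the court held in smith v jones that the court held in "
                 "doe v roe that the contract was void").split()

nexts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    nexts[current][following] += 1

word, output = "the", ["the"]
for _ in range(8):
    word = nexts[word].most_common(1)[0][0]  # most likely continuation
    output.append(word)
print(" ".join(output))  # reads smoothly; none of it was verified
```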

In Mata v. Avianca, Inc.,[31] the plaintiff’s counsel filed a submission that included citations to several non-existent cases. The attorney admitted to using ChatGPT to “supplement” his legal research and that AI had provided the fabricated cases. Even after being asked by the court to verify the cases, the attorney went back to ChatGPT, which “complied by inventing a much longer text.” The court found that the attorney had submitted “bogus judicial decisions, with bogus quotes and bogus internal citations.” The attorneys and their firm were ordered, jointly and severally, to pay a $5,000 penalty, with the court emphasizing that while AI tools are not inherently improper, their output must be verified. This case became highly publicized and prompted many courts to issue new standing orders requiring disclosure of AI use or verification of AI-generated content.[32]

In Gauthier v. Goodyear Tire & Rubber Co.,[33] the plaintiff’s counsel submitted a response to a summary judgment motion that included citations to two non-existent cases and multiple fabricated quotations. The lawyer admitted to using a generative AI tool, “Claude,” without verifying its output. The court sanctioned the attorney, ordering him to pay a $2,000 penalty and complete a continuing legal education course on AI in the legal field. The court emphasized that Rule 11 (requiring filings to be grounded in existing law) requires attorneys to “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.”[34]

These cases serve as stark warnings about the risks of using AI in legal research and drafting without human verification. They highlight that the responsibility for the accuracy of submissions ultimately rests with the human attorney.

Aside from hallucinations, AI may also suggest case law that, at first glance, seems supportive of an argument but, upon closer examination by a human, undermines it. AI lacks true legal reasoning and generates results based on pattern recognition rather than a deep contextual understanding of how a judge might interpret principles in a specific, unique case. More problematic, these tools also tend to agree with a user’s incorrect assumptions, reinforcing existing biases rather than challenging them objectively.[35]

AI’s knowledge is limited by its training data. If the legal texts it was trained on are outdated, incomplete, or contain biased information, the AI program can “hallucinate” by producing “nonexistent opinions, inaccurate analysis of authority, and use of misleading arguments.” It may not pick up on recent repeals, amendments, or publications of new legislation.[36] These issues underscore why human oversight and critical verification of AI outputs are indispensable in legal practice.

Overconfidence in Outcome

As noted above, AI tools can also create overconfidence in suggested outcomes primarily due to a phenomenon known as automation bias. This bias describes the human tendency to over-rely on automated aids and decision support systems, leading individuals to favor their suggestions and potentially disregard contradictory information or their own judgment.[37]

Large language models and other AI tools are designed to generate responses that are highly fluent, coherent, and often appear authoritative, even when the information is incorrect or fabricated. This polished presentation can mask underlying inaccuracies, leading users to implicitly trust the output without sufficient scrutiny.[38] Humans naturally tend to take the path of least cognitive effort. When AI provides a quick, seemingly complete answer, it reduces the need for the user to engage in deeper critical thinking, independent research, or verification.[39] This fosters a sense of overconfidence in AI’s output.[40]

The ‘AI Crutch’ and the Lack of Grunt Work for Future Generations

While AI promises unprecedented efficiencies, it also presents significant pedagogical challenges for the next generation of lawyers. The immediate availability of AI-generated answers, summaries, and even automated document drafts could inadvertently create an “AI crutch,” circumventing the very “grunt work” that has traditionally formed the bedrock of foundational legal training.[41]

Historically, junior lawyers have built their skills through labor-intensive tasks like e-discovery, document review, in-depth case analysis, and drafting early versions of legal documents. This grunt work has been a crucial part of their training, as it helps them develop critical skills like analytical reasoning and attention to detail. This process is also key to their professional development, as their initial work is often refined or critiqued by senior attorneys. Grunt work is also crucial for developing pattern recognition and a nuanced understanding of legal principles in practice.[42] AI automation is taking away these fundamental learning experiences, and there is a risk that younger attorneys are developing an overreliance on these tools without fully grasping the underlying legal concepts, the context of the information, or the critical thinking necessary to identify AI’s inherent limitations, such as hallucinations or biased outputs.[43] This could lead to a generation of lawyers with diminished independent judgment and competence and an inability to “dig deeper” when AI falls short, potentially compromising the quality of legal advice and advocacy.[44]

Addressing this issue will require a multi-pronged approach involving law schools, law firms, in-house legal departments, and bar associations.[45]

Law Schools Will Need To Reimagine Legal Pedagogy

Law schools will need to reimagine legal pedagogy by “reverse engineering” AI output. Law schools and training programs should incorporate exercises where students receive AI-generated legal work and are tasked with deconstructing it. This should involve verifying every citation, scrutinizing the legal reasoning, identifying potential biases, and evaluating the completeness and accuracy of the arguments. By critically verifying AI output, students learn to apply the same deep analysis and rigorous effort that was expected before AI existed.[46]
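
Part of such an exercise can even be scaffolded in code. The sketch below, with a placeholder memo, a simplified citation pattern, and an invented stand-in for a real citator, shows the skeleton of a first-pass citation check that flags anything it cannot verify:

```python
# A minimal sketch of a first-pass citation check (illustrative only).
import re

AI_MEMO = """Under Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023),
counsel must verify AI output. See also Varghese v. China S. Airlines,
925 F.3d 1339 (11th Cir. 2019), one of the citations fabricated in Mata."""

# Hypothetical stand-in for a real citator lookup (Westlaw, Lexis, etc.).
VERIFIED = {"678 F. Supp. 3d 443"}

# Simplified pattern covering a few federal reporters only.
CITE_RE = re.compile(r"\d+\s+F\.(?:\s?Supp\.\s?3d|\s?3d|\s?2d)?\s+\d+")

for cite in CITE_RE.findall(AI_MEMO):
    status = "verified" if cite in VERIFIED else "NOT FOUND - verify manually"
    print(f"{cite}: {status}")
```

Nothing like this replaces reading the authorities; it only surfaces which citations demand the manual verification the exercise is meant to teach.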

Law schools should continue to emphasize core legal research methods, analytical frameworks, and the ability to synthesize information from primary sources, even if AI can assist. Just as law students once had to learn legal research using traditional books and methods, even after tools like Westlaw and LexisNexis became available, students today should be assigned tasks that require them to do research without relying on AI. This ensures they develop fundamental skills like critical thinking, identifying relevant sources, and analyzing primary materials before using AI.[47] While AI is a powerful tool, it is not a substitute for core competency.[48]

Courses should also cover a lawyer’s duties regarding data privacy and confidentiality. Students need to understand how AI tools, particularly large language models, handle data and the potential for client information to be inadvertently exposed if not managed properly. This includes learning about “closed-system” versus “open-system” AI and when to seek a client’s informed consent.[49] Curricula should also provide a comprehensive understanding of how AI can perpetuate bias. Students must learn that AI models are trained on historical data that may contain societal prejudices. The focus should be on how to identify and mitigate these biases in AI outputs, ensuring that the legal work remains fair and equitable.[50] And finally, law schools must teach students how to critically evaluate AI’s output. This means not just accepting AI-generated information, but understanding its limitations, such as the potential for hallucinations. The goal is to train lawyers to be competent and diligent supervisors of AI, treating it as an assistant rather than a source of infallible truth.[51]

The Responsibility of Law Firms

Law firms should do their own part in continuing to form young lawyers by implementing strict protocols for supervising AI-generated work, akin to how senior attorneys supervise junior associates.[52] This means partners and senior associates should actively review and verify AI outputs, using it as a teaching moment rather than a delegation of responsibility. Instead of full delegation, younger lawyers could initially use AI for “first-pass” tasks (e.g., initial document review, preliminary research suggestions) and then be required to perform traditional manual verification and deeper analysis to ensure they build foundational knowledge.[53]

This commitment to training should not be viewed as a cost to be passed on to clients, but as a strategic investment in the firm’s own growth and future. When firms invest in the development of young lawyers, they are cultivating the next generation of leaders, experts, and rainmakers who will sustain the practice for years to come.[54] By ensuring young lawyers fully understand the foundational legal work, even if AI does the “first pass,” the firm builds a team with deep, verifiable skills. This leads to higher quality work and a stronger reputation, which in turn attracts more clients.[55]

The Responsibility of Bar Associations

Bar associations such as the American Bar Association and the New York State Bar Association must continue to evolve their professional conduct rules to clearly define technological competence and the ethical boundaries of AI use.[56] By developing and sharing best practices, case studies (including examples of AI misuse), and practical guides for AI adoption, they can standardize responsible use across the legal profession. In addition, mandating continuing legal education credits focused on AI ethics, AI tools, and the critical assessment of AI output ensures lawyers at all stages of their careers remain current and responsible users of technology.[57]

NYSBA’s Task Force on Artificial Intelligence and its comprehensive Report and Recommendations serve as an excellent example of a bar association proactively addressing the ethical and practical implications of AI, rather than waiting for problems to arise. The task force recommended that NYSBA focus on educating lawyers, judges, and law students, recognizing that AI technology is evolving too rapidly for rigid legislation and that the focus should instead be on providing lawyers with the tools to apply existing ethical rules to new technologies.[58] The task force’s report proposed expanding Comment [8] to American Bar Association Model Rule 1.1 to clarify that the duty to stay “abreast of changes in the law and its practice” explicitly includes an understanding of AI tools.[59] It also provided a set of clear and actionable guidelines for lawyers using AI, such as the need to protect client data, to avoid blindly accepting AI output, and to consider disclosing the use of AI tools to clients, especially where client confidential information is shared.[60] By taking these steps, NYSBA demonstrates a path for other bar associations to follow.

The Unique Challenge of In-House Legal Departments

In-house legal departments are motivated by AI’s potential for cost savings and efficiency, but this must also be balanced with human oversight.[61] While AI can streamline routine tasks, it cannot replace the nuanced judgment of experienced in-house lawyers who understand the company’s risk tolerance, strategic goals, and unique culture.

Maintaining high-quality legal guidance within a company requires robust training for younger lawyers. This training must go beyond technological proficiency, focusing instead on a profound understanding of the business, its products, and its corporate culture. This contextual knowledge is where AI often falls short.

In addition, in-house counsel should develop internal policies for AI use across the entire company. These policies should set clear rules for handling confidential data and mandate that all AI outputs be verified before being relied on for business decisions or external advice. By doing this, legal departments can foster a culture of responsible AI use.[62]

Conclusion

The legal profession stands at an undeniable inflection point, not on the precipice of obsolescence, but rather on the cusp of profound transformation. Routine, high-volume tasks will increasingly be delegated to machines, freeing lawyers to focus on high-value advisory, negotiation, and litigation work. This shift is already impacting how law firms structure their billing, general counsel assess external legal services, and individual practitioners remain competitive in the market.[63]

Law schools like New York University, Columbia, and Fordham are now offering courses in legal technology, while the state’s court system experiments with digital filings and remote proceedings.[64] Simultaneously, bar associations, including NYSBA, are actively examining how to regulate AI use while preserving public trust in the legal system.[65]

This is an opportunity for lawyers to modernize their practices, increase their value to clients, and ensure that technology serves to enhance, rather than undermine, the delivery of justice. Embracing AI strategically will allow legal professionals to adapt, innovate, and continue providing the essential human elements of legal counsel that AI cannot replicate.


Annabel V. Teiling is senior managing counsel at Booking.com, where she focuses on insurance regulatory compliance, litigation, digital transactions, and privacy. She previously served as general counsel at Samsung Fire & Marine Management Corp. and assistant general counsel at Chubb. She is admitted to practice in federal and state courts in New York, New Jersey, and the District of Columbia, including the U.S. Supreme Court.

Endnotes

[1] See Stu White, How AI Is Reshaping the Future of Legal Practice, The Law Soc’y (Nov. 20, 2024), https://www.lawsociety.org.uk/topics/ai-and-lawtech/partner-content/how-ai-is-reshaping-the-future-of-legal-practice.

[2] See A.B.A. Standing Comm. on Ethics & Pro. Resp., Formal Op. 512 (July 30, 2024), https://www.lawnext.com/wp-content/uploads/2024/07/aba-formal-opinion-512.pdf; Paul S. Hunter, ABA Says Lawyers Must Understand AI, Foley & Lardner LLP (July 31, 2024), https://www.foley.com/p/102jf3v/aba-says-lawyers-must-understand-ai/.

[3] See, e.g., Clio, How AI Enhances Legal Document Review (PAID CONTENT), A.B.A. (Feb. 13, 2025), https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2025/how-ai-enhances-legal-document-review/; see also David K. Shook, From Automation to Generative AI, How E-Discovery Tools Are Evolving, A.B.A. J. (Feb. 24, 2025), https://www.abajournal.com/columns/article/from-automation-to-generative-ai-how-e-discovery-tools-are-evolving.

[4] See Andrew W. Zickert, Diligencing AI-Enabled M&A Targets: Seven Things To Understand, A.B.A. (Jan. 29, 2024), https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-january/diligencing-ai-enabled-ma-targets/.

[5] See Marjorie Richter, J.D., How AI Is Transforming the Legal Profession, Thomson Reuters Legal Solutions (Aug. 18, 2025), https://legal.thomsonreuters.com/blog/how-ai-is-transforming-the-legal-profession/.

[6] See Joshua S. Gans, Demand for Artificial Intelligence in Settlement Negotiations 1–2 (Nat’l Bureau of Econ. Rsch., Working Paper No. 32685, July 2024), https://www.nber.org/system/files/working_papers/w32685/w32685.pdf.

[7] See Thomson Reuters, supra note 5.

[8] See Samuel Estreicher and Lior Polani, AI’s Limitations in the Practice of Law, Justia (Aug. 8, 2025), https://verdict.justia.com/2025/08/08/ais-limitations-in-the-practice-of-law; see also Mark Eldridge, The Limitations of AI in Legal Operations: Why Technology Alone Won’t Solve All Legal Challenges, Assoc. of Corp. Couns. (Apr. 27, 2025), https://www.acc.com/resource-library/limitations-ai-legal-operations-why-technology-alone-wont-solve-all-legal.

[9] See Kristin B. Gerdy, The Heart of Lawyering: Clients, Empathy, and Compassion, 57 J. Legal Educ. 195, 195 (2007), https://web.law.duke.edu/sites/default/files/clinics/healthjustice/gerdy_-_the_heart_of_lawyering_clients_empathy_and_compassion.pdf.

[10] See Amit Batra, When AI Gets It Wrong: Why LLMs Can’t Fully Navigate Banking’s Legal and Regulatory Gray Areas, Medium (Mar. 18, 2025), https://thought-walks.medium.com/when-ai-gets-it-wrong-why-llms-cant-fully-navigate-banking-s-legal-and-regulatory-gray-areas-b18c7bec0407.

[11] Eldridge, supra note 8.

[12] See Katherine B. Forrest, SHIELD: Guidelines for Navigating the AI Regulatory Landscape, N.Y.L.J. (Aug. 18, 2025), https://www.paulweiss.com/media/tovf3r10/shield_guidelines_for_navigating_the_ai_regulatory_landscape.pdf.

[13] See Batra, supra note 10; see also Eldridge, supra note 8.

[14] See Matt Loeffelholz, 5 Risks of Relying on Artificial Intelligence Instead of Attorney Insight, FBFK L. (June 3, 2025), https://www.fbfk.law/5-risks-of-relying-on-artificial-intelligence-instead-of-attorney-insight.

[15] See Diane Moss and Ken Fishkin, Top AI Risks General Counsels Should Address, Lowenstein Sandler LLP (Feb. 18, 2025), https://www.lowenstein.com/news-insights/publications/client-alerts/top-ai-risks-general-counsels-should-address-privacy; see also Francisco Morales Barron, Op-Ed: What In-House Counsel Needs To Know About Generative AI, CorpGov (Dec. 2, 2024), https://corpgov.com/op-ed-what-in-house-counsel-needs-to-know-about-generative-ai/.

[16] See Modernizing Unauthorized Practice of Law Regulations To Embrace Technology, Improve Access to Justice, Nat’l Ctr. for St. Cts. (Aug. 20, 2025), https://www.ncsc.org/resources-courts/modernizing-unauthorized-practice-law-regulations-embrace-technology-improve.

[17] See Ryan Black, Morgan McDonald, Keri Bennett and Tyson Gratton, Using AI Responsibly as In-House Counsel: Law Society of BC Releases Guidance on Professional Responsibilities, DLA Piper (Nov. 27, 2023), https://www.dlapiper.com/en-ae/insights/publications/2023/11/using-ai-responsibly-as-inhouse-counsel.

[18] See Bradford J. Kelley, Mike Skidgel, and Alice Wang, Considerations for Artificial Intelligence Policies in the Workplace, Littler (Mar. 10, 2025), https://www.littler.com/news-analysis/asap/considerations-artificial-intelligence-policies-workplace.

[19] See N.Y. R. Prof. Conduct 1.1(c) (2023) (stating a lawyer must maintain “the requisite knowledge and skill” to practice law competently); N.Y. State Unified Ct. Sys., Rules of Professional Conduct, Rule 1.1 cmt. [8], https://www.nycourts.gov/legacypdfs/rules/jointappellate/NY-Rules-Prof-Conduct-1200.pdf (last visited Aug. 27, 2025).

[20] See Luca CM Melchionna, Bias and Fairness in Artificial Intelligence, N.Y. St. B. Ass’n J. (July 2023), https://nysba.org/bias-and-fairness-in-artificial-intelligence/.

[21] See Thomson Reuters, supra note 5; see also N.Y. St. Bar Ass’n, Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence, 5 (April 2024), https://nysba.org/wp-content/uploads/2022/03/2024-April-Report-and-Recommendations-of-the-Task-Force-on-Artificial-Intelligence.pdf.

[22] See State Bar of Cal. Standing Committee on Professional Responsibility and Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, 3 (2025), https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf; see also ABA Issues First Ethics Guidance on a Lawyer’s Use of AI Tools, A.B.A., (July 29, 2024), https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/.

[23] See Thomson Reuters, Addressing Bias in Artificial Intelligence, 2 (2023), https://www.thomsonreuters.com/en-us/posts/wp-content/uploads/sites/20/2023/08/Addressing-Bias-in-AI-Report.pdf.

[24] See Melchionna, supra note 20; see also Algorithmic Discrimination: Examining Its Types and Regulatory Measures With Emphasis on US Legal Practices, 1 PMC 1, 3 (2024), https://pmc.ncbi.nlm.nih.gov/articles/PMC11148221/.

[25] See Yale L. Sch., Algorithms in Policing: An Investigative Packet 2 (2025), https://law.yale.edu/sites/default/files/area/center/mfia/document/infopack.pdf.

[26] See Kadin Mesriani, AI & HR: Algorithmic Discrimination in the Workplace, Cornell J.L. & Pub. Pol’y, The Issue Spotter (Oct. 31, 2024), https://jlpp.org/ai-hr-algorithmic-discrimination-in-the-workplace.

[27] See Melchionna, supra note 20.

[28] See Marjorie Richter, J.D., Concerns and Legal Issues Surrounding AI, Thomson Reuters (July 29, 2025), https://legal.thomsonreuters.com/blog/the-key-legal-issues-with-gen-ai/.

[29] See Mark Jennings-Bates, The Truth About AI – Why Artificial Intelligence Cannot Really ‘Understand’ Context, BIG Media (Aug. 14, 2025), https://big-media.ca/the-truth-about-ai-why-artificial-intelligence-cannot-really-understand-context/.

[30] See Zach Warren, GenAI Hallucinations Are Still Pervasive in Legal Filings, But Better Lawyering Is the Cure, Thomson Reuters (Aug. 18, 2025), https://www.thomsonreuters.com/en-us/posts/technology/genai-hallucinations/.

[31] No. 22-cv-1461 (PKC), 2023 WL 4114995 (S.D.N.Y. June 22, 2023).

[32] Id. at *3.

[33] No. 6:23-CV-00609-ADA, 2024 WL 1636187 (E.D. Tex. Apr. 16, 2024).

[34] Id. at *4.

[35] See Melchionna, supra note 20; see also Warren, supra note 30.

[36] Formal Opinion 512, supra note 2, at 10.

[37] See Lauren Kahn, Emelia S. Probasco and Ronnie Kinoshita, AI Safety and Automation Bias 1, Ctr. for Sec. & Emerging Tech. (2024), https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-and-Automation-Bias.pdf.

[38] See Jason Bittel, AI Chatbots Remain Overconfident – Even When They’re Wrong, Carnegie Mellon U. Dietrich Coll. of Humans. & Soc. Scis. (July 22, 2025), https://www.cmu.edu/dietrich/news/news-stories/2025/july/trent-cash-ai-overconfidence.html.

[39] See Samir Passi and Mihaela Vorvoreanu, Overreliance on AI Literature Review 5, Microsoft (2025), https://www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf.

[40] See Melchionna, supra note 20.

[41] See N.Y. St. Bar Ass’n, supra note 21; see also Travis Whitsitt, How AI-Powered Legal Assistants Are Transforming Entry-Level Legal Work, Vault (May 13, 2025), https://vault.com/blogs/vaults-law-blog-legal-careers-and-industry-news/how-ai-powered-legal-assistants-are-transforming-entry-level-legal-work.

[42] See N.Y. St. Bar Ass’n, supra note 21; see also Jeff Scurry, AI Advice for Young Lawyers, A.B.A. (July 15, 2025), https://www.americanbar.org/groups/law_practice/resources/law-practice-today/2025/july-2025/ai-advice-for-young-lawyers/.

[43] See N.Y. St. Bar Ass’n, supra note 21.

[44] See Samantha A. Moppett, Preparing Students for the Artificial Intelligence Era: The Crucial Role of Critical Thinking Skills, Suffolk U. L. Sch. Rsch. Paper No. 25-4, at 2 (Mar. 25, 2025), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5193298.

[45] See Tim Wilbur, AI in Law Firms Should Be a Training Tool, Not a Threat, for Young Lawyers (Opinion), Canadian Lawyer (July 25, 2025), https://www.canadianlawyermag.com/news/opinion/ai-in-law-firms-should-be-a-training-tool-not-a-threat-for-young-lawyers/392807.

[46] See N.Y. St. Bar Ass’n, supra note 21.

[47] See Donna Campbell, AI Enters the Classroom as Law Schools Prep Students for a Tech-Driven Practice, The Nat’l Jurist (July 9, 2025), https://nationaljurist.com/ai-enters-the-classroom-as-law-schools-prep-students-for-a-tech-driven-practice/.

[48] See Ian Morris, 7 Ways Artificial Intelligence Is Already Changing Law School and Legal Careers, The Colleges of Law (Mar. 24, 2025), https://www.collegesoflaw.edu/blog/2025/03/24/7-ways-artificial-intelligence-is-already-changing-law-school-and-legal-careers/.

[49] See George Wash. Univ. L. Sch., Privacy and Technology Courses, https://www.law.gwu.edu/privacy-and-technology-courses (last visited Aug. 28, 2025); see also N.Y. St. Bar Ass’n, supra note 21.

[50] See N.Y. St. Bar Ass’n, supra note 21; see also Richard Hua, AI and Racial Bias in Legal Decision-Making: A Student Fellow Project, Center on the Legal Profession, Harvard L. Sch., https://clp.law.harvard.edu/knowledge-hub/insights/ai-and-racial-bias-in-legal-decision-making-a-student-fellow-project/ (last visited Aug. 28, 2025).

[51] See A.B.A. Standing Comm. on Ethics & Pro. Resp., supra note 2; see also Scurry, supra note 42.

[52] See N.Y. St. Bar Ass’n, supra note 21; see also Scurry, supra note 42.

[53] See Stephen E. Seckler and David Rosenblatt, Why Law Firms Should Spend Time Investing in Their Talent, A.B.A. (Jan. 15, 2024), https://www.americanbar.org/groups/law_practice/resources/law-practice-today/2024/2024-january/why-law-firms-should-spend-time-investing-in-their-talent/; see also Stefan Nigam, Al-Karim Makhani and Reuben Miller, Is AI Finally Going To Take Our Jobs? Meeting Client AI/Technological Demands While Supporting Junior Lawyers’ Development, Int’l Bar Ass’n (Nov. 29, 2024), https://www.ibanet.org/is-AI-finally-going-to-take-our-jobs.

[54] See Seckler and Rosenblatt, supra note 53; see also Top-Performing Law Firms Investing in Their People and Firm Culture Amidst Ongoing Talent War (Press Release), Thomson Reuters (Mar. 14, 2022), https://www.thomsonreuters.com/en/press-releases/2022/march/top-performing-law-firms-investing-in-their-people-and-firm-culture-amidst-ongoing-talent-war-says-thomson-reuters-report.

[55] See The Hon. Maritza Dominguez Braswell, Legal Training in the Age of AI: A Leadership Imperative, Thomson Reuters (Apr. 30, 2025), https://www.thomsonreuters.com/en-us/posts/ai-in-courts/legal-training-ai-leadership/.

[56] See A.B.A. Standing Comm. on Ethics & Pro. Resp., supra note 2; see also N.Y. St. Bar Ass’n, supra note 21.

[57] See Thomson Reuters, ABA Ethics Rules and Generative AI (Mar. 27, 2025), https://legal.thomsonreuters.com/blog/generative-ai-and-aba-ethics-rules/.

[58] See N.Y. St. Bar Ass’n, Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence, 5 (April 2024), https://nysba.org/wp-content/uploads/2022/03/2024-April-Report-and-Recommendations-of-the-Task-Force-on-Artificial-Intelligence.pdf.

[59] Id.

[60] Id.; see also A.B.A. Model Rules of Prof. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2024).

[61] See Richter, supra note 5.

[62] See Dr. Annette Demmel, Kyle R. Fath, Alan L. Friel, Julia B. Jacobson, Bartolomé Martín and David Naylor, AI Considerations for In-House Counsel, Cybersecurity Law & Strategy (June 2023), https://www.squirepattonboggs.com/en/insights/publications/2023/06/ai-considerations-for-in-house-counsel.

[63] See Tomas Arvizu, From Billable Hours to Agentic Outcomes: Rethinking Legal Value in the Age of AI, Thomson Reuters (July 15, 2025), https://www.thomsonreuters.com/en-us/posts/legal/rethinking-legal-value/.

[64] See NYU Sch. of L., NYU Technology Law & Policy Clinic, https://www.law.nyu.edu/academics/clinics/tech-law-policy (last visited Aug. 28, 2025); Fordham U. Sch. of L., Intellectual Property and Information Technology Law, https://www.fordham.edu/school-of-law/academics/curriculum/llm-curriculum/llm-areas-of-study/intellectual-property-and-information-technology-law/ (last visited Aug. 28, 2025); Columbia Law Sch., Intellectual Property and Technology, https://www.law.columbia.edu/areas-of-study/intellectual-property-and-technology (last visited Oct. 3, 2025); N.Y. State Unified Court Sys., NYCOURTS.GOV, https://www.nycourts.gov/ (last visited Oct. 3, 2025).

[65] See A.B.A. Standing Comm. on Ethics & Pro. Resp., supra note 2; see also N.Y. St. Bar Ass’n, supra note 21.
