AI Can Do Many Tasks for Lawyers – But Be Careful

By Rebecca Melnitsky

May 16, 2025

Paul J. Unger explained that more lawyers are using AI in their practices.

Artificial intelligence can perform several tasks to aid lawyers and save time. But lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.

The New York State Bar Association hosted a hybrid program discussing AI’s potential and its pitfalls for the legal profession. More than 300 people watched the livestream.

Paul J. Unger, a prominent speaker, writer and thought leader in the legal technology industry, led the seminar, which included demonstrations of several AI tools. He is a founding partner of Affinity Consulting Group, a nationwide legal technology consulting company.

AI enables machines to learn from experience, adapt to new inputs, and execute tasks in a human-like way. While older models could only predict outcomes based on input – like autocomplete in a word processor – the newest models are constantly learning as they receive more information and data.

With prompts, AI can take a handwritten to-do list and prioritize it, with notes of how long each task will take. AI can create a meeting summary with a checklist for next steps. AI can review and identify important emails, and draft responses based on tone and intent.

And beyond that, AI can perform legal tasks like drafting a deposition, or taking a 300-page court decision and summarizing it in three pages.

“You could see it’s just writing as if it were a law student or a young lawyer,” said Unger. “I wouldn’t say a seasoned lawyer, but it’s still quite impressive.”

However, Unger cautioned that it’s still important to check over the work of an AI tool, as it can hallucinate by creating false or misleading information. This can be especially perilous when using AI for legal research.

For that reason, Unger suggests using legal AI tools, like LexisNexis AI, Westlaw Edge, and vLex Fastcase, for legal research instead of general generative AI tools. While legal-specific tools still hallucinate, they hallucinate much less. A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.

However, it is still important to review the output and remove any hallucinations.

Unger advised rereading and rewriting AI-generated text. “It gets us a first draft,” he said. “It is never a final draft.”

It is also important to protect confidentiality and privacy while using AI tools.

“Do not enter anything confidential or client-related into one of these consumer generative AI tools,” said Unger. “ChatGPT, Perplexity, Claude, Sonar, Grok, Gemini – those are all what we call open systems. Meaning anything that we type in, or any document that we upload is shared to train the large language model. There’s a risk that we violate our duty of confidentiality. Keep it clean, keep it general.”

Unger said that attorneys should disclose the use of AI to their clients, train their staff to use it effectively and ethically, and set up policies and standards for using the technology. AI itself can even write the first draft of such a policy.

It also helps to be polite to the AI, like saying please before making a request. “We know that polite prompts consistently outperform the impolite prompts,” said Unger. “There’s a 30% drop in performance when we are impolite.”

