What Happens When AI Is Evidence

By Rebecca Melnitsky

October 13, 2023

As artificial intelligence becomes more prevalent, judges, juries and attorneys will need to know how to handle these technologies when they appear as evidence in court cases. A recent New York State Bar Association Continuing Legal Education course focused on what may happen, and on how lawyers can present evidence in court when judges and juries are not familiar with the technology behind AI.

The panelists were:

  • James C. Francis, mediator/arbitrator/special master at JAMS and former U.S. magistrate judge in the Southern District of New York.
  • Paul R. Gupta, partner at Rimon and co-chair of the Dispute Resolution Section’s Technology Committee.
  • Ronald J. Hedges, principal of Ronald Hedges.
  • Christian P. Levis, partner at Lowey Dannenberg.

“The concept behind AI is that it utilizes technology to simulate some of the decision-making that would otherwise be conducted by human beings,” said Francis. “So the range of uses of AI is virtually as broad as the range of uses for the human mind.”

Admissibility of AI-Generated Reports

The panelists discussed a hypothetical situation in which a high school uses AI tools to scan students’ activity on school-issued devices and identify students at risk for harmful behaviors. A student is recommended for counseling based on the AI tool’s analysis of the student’s language and web activity. A parent then sues the school over the administrators’ decision, and the school introduces the AI tool’s report in defense of its actions.

Levis said that an expert or a school official should be able to explain what the software is, why the school uses it, and how it is being used. “I think you can go a long way with framing your questions correctly when you seek to admit stuff like this in terms of establishing the credibility of why it should be relied on,” he said. “Never underestimate what good foundation questions can do to help you get something admitted.”

Francis added that it would be useful to think about how such a situation would play out without AI, such as an assistant principal overhearing a student’s conversation and concluding that the student was at risk of self-harm.

“That assistant principal becomes the witness and can be cross-examined about what he heard, why he gave it credence, what about his experience led him to believe that that statement was an indicator of potential self-harm,” he said. “So while the information that the witness is providing may be subject to cross-examination and is transparent, it may also be viewed by the finder of fact as insufficiently grounded in reality. How is the assistant principal really able to predict what this child is going to do? With AI, you end up with at least the veneer of scientific certainty. And that’s why there is a value in probing how that result was obtained through AI. Through the datasets, the algorithm and so forth.”

Determining if Data Is Real

There’s also the concern that AI-generated images could be used to manipulate and fool juries.

Levis said that while there is technology to detect if images are fake or authentic, those tools can be fooled, citing a New York Times article. “Keep in mind that the detection technology is not perfect,” he said. “Use it as carefully as you use AI-generated content itself. But just keep in mind that it’s an evolving area to be aware of.”

Gupta said that juries will have to rely on experts to act as character witnesses for the technology, vouching that an image is real.

Sponsors of the CLE included the Committee on Technology and the Legal Profession and the Dispute Resolution Section. The CLE is available on demand.
