AI Creates Challenges for Judges and Courts Around the World
2.3.2026

As more attorneys incorporate artificial intelligence into their practices, courts around the world are working out how to accommodate the technology while remaining alert to its risks. The New York State Bar Association hosted a global seminar for judges to share tips and strategies for handling AI in the courts.
The speakers were:
- Judge Seo Yoon Lee, Judicial Research & Training Institute of the Supreme Court of Korea in Seoul, South Korea.
- Judge Sidney H. Stein of the United States District Court, Southern District of New York.
- Judge Friedrich Joachim Mehmel, former president of the Hamburg Constitutional Court in Hamburg, Germany.
Jonathan Armstrong, partner at Punter Southall Law in London, served as moderator. Nearly 300 people attended.
Judge Stein shared that he recently had a case in which a brief contained incorrect quotations due to an attorney’s use of AI. “My clerk alerted me to the fact that a number of the quotations from cases were not accurate,” he said. “The cases existed, and the case citation was correct. But the quotations from the cases were made up.”
Judge Stein said the lawyer in charge of the case took full responsibility for the error and promised that his firm would train its lawyers in AI so it would not happen again. As a result, the judge did not pursue disciplinary measures.
“I think as the risks of false citations [and] hallucinations become more well known, the occasions of that happening will decrease,” said Judge Stein. “I can’t guarantee it. We’ll see. But it would be extraordinary for me to try to impose criminal penalties for what really is a civil wrong. I think fines may be appropriate… The public embarrassment should be enough to avoid it happening.”
Judge Lee said that courts in South Korea have proposed a model for AI in judicial proceedings. In this model, AI can be used only if all parties are aware of and consent to its use. Most importantly, everything produced by AI is reviewed by humans, and final decisions are made by human judges.
“Every judgment comes with a judge’s name on it,” said Judge Lee. “It carries the weight of the name. That’s not just the signature, it means that the person takes responsibility, and a machine cannot do that. And… courts don’t just decide right versus wrong. They decide how to balance things that both matter. Such as free speech versus reputation, public safety versus privacy. That requires human judgment, and machines cannot make those calls, and it’s not desirable either. And judging requires understanding people’s lives, their situations, what they’re going through. AI can analyze data. However, it can’t grasp human meaning, that’s why a judgment should be left to humans.”
The panelists also raised the concern that while AI can assist with low-level tasks, it is taking away opportunities for junior lawyers to learn by completing those tasks.
“If we block law graduates from these experiences, I think in 10 years, we will see not really desirable consequences,” said Judge Lee. “We will have mid-level lawyers who look good on paper, but crack under pressure – lawyers who cannot write a brief without AI, or lawyers who never developed real judgment… So, we are sacrificing the next generation professional for short-term efficiency. So, I think we should be aware of it and figure out how to educate our young lawyers.”
The program was sponsored by the New York State Bar Association’s International Section and its Committee on Artificial Intelligence and Emerging Technologies.