In what has become a distressingly familiar pattern in courtrooms across America, two more cases have emerged of lawyers submitting briefs containing non-existent legal citations generated by AI tools.

At this point, one wonders if the legal profession needs a mandatory continuing legal education course titled, “How to Avoid Becoming the Next AI Hallucination Headline.”

Even more distressing, perhaps, is that one of the cases involves a major international law firm known for its litigation prowess.

Sanctions for AI-Generated Research

In the first of these two most recent cases, arising out of the U.S. District Court for the Central District of California, attorneys from the law firms Ellis George LLP and K&L Gates LLP submitted a brief to Special Master Michael Wilner containing numerous hallucinated citations.

The attorneys had used AI tools including CoCounsel, Westlaw Precision, and Google Gemini to outline and generate their brief.

Trent Copeland, an attorney at Ellis George, admitted that he had used these AI tools to create an outline that he then shared with colleagues, including lawyers at K&L Gates, without disclosing its AI origins or verifying the citations. His colleagues incorporated the fabricated authorities into the final brief without checking them.

When the special master initially questioned two suspicious citations, the attorneys filed a “corrected” version, with a K&L Gates associate thanking the special master for catching the errors and assuring that citations in the revised brief had been “addressed and updated.”

Problem was, the corrected brief still contained at least six other AI-generated errors. In their declarations, the attorneys confessed that about nine of the 27 legal citations in their 10-page brief were incorrect in some way, including two completely non-existent cases.

Special Master Wilner didn’t mince words, finding that the attorneys had “collectively acted in a manner that was tantamount to bad faith.” He chastised the lawyer who first generated the brief for relying on AI without verifying its accuracy, and for then sending the brief to his colleagues without disclosing its “sketchy AI origins.”

He also chastised the K&L Gates attorneys, calling their conduct “deeply troubling” for their failure to check the validity of the research sent to them, both as to the original brief and the so-called corrected brief — after they had been put on notice of phony citations.

As a result, the special master decided to impose sanctions, specifically:

  • Striking all versions of the attorneys’ supplemental brief.
  • Denying the discovery relief they sought.
  • Ordering the law firms to jointly pay $31,100 in the defendant’s legal fees.
  • Requiring disclosure of the matter to their client.

Wilner called the situation “scary,” noting that he was “persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”

In Toronto, Phantom Citations

In Toronto, in the case of Ko v. Li, lawyer Jisuh Lee found herself in hot water when Ontario Superior Court Judge Fred Myers discovered that her legal factum contained citations to two cases that simply do not exist.

When the judge asked Lee about this, and whether she had used AI to prepare the factum, she responded that her office does not usually use AI but that she would check with her clerk. She was unable to provide corrected citations to the cases or copies of the cases.

After the hearing, the judge again reviewed the factum and found two more incorrect citations — one to a non-existent case and another to a case that stood for the opposite of the proposition for which it had been cited.

“It appears that Ms. Lee’s factum may have been created by AI and that before filing the factum and relying on it in court, she might not have checked to make sure the cases were real or supported the propositions of law which she submitted to the court in writing and then again orally,” the judge concluded.

“It should go without saying that it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities,” he continued. “At its barest minimum, it is the lawyer’s duty not to submit case authorities that do not exist or that stand for the opposite of the lawyer’s submission.”

As a result, the judge ordered the attorney to show cause why she should not be cited for contempt. “She will have a fair opportunity to submit evidence to explain what happened if she wishes to do so,” he wrote.

When Will Lawyers Learn?

Despite the mounting pile of sanctions and public embarrassment, attorneys continue to submit AI-generated hallucinations to courts. They do so despite bar association ethics opinions, judicial warnings, and enough legal tech articles to fill a virtual library.

One wonders how many more judges will need to impose sanctions before the message sinks in: AI tools may be useful for brainstorming, but they are not substitutes for traditional legal research and the age-old practice of actually checking your citations.

Until lawyers get this message, we can expect the string of AI hallucination horror stories to continue. Meanwhile, keep your eyes open for next month’s inevitable headline: “Yet Another Lawyer Sanctioned for AI-Generated Fake Cases.”

Bob Ambrogi

Bob is a lawyer, veteran legal journalist, and award-winning blogger and podcaster. In 2011, he was named to the inaugural Fastcase 50, honoring “the law’s smartest, most courageous innovators, techies, visionaries and leaders.” Earlier in his career, he was editor-in-chief of several legal publications, including The National Law Journal, and editorial director of ALM’s Litigation Services Division.