In an unusual judicial response to the ongoing epidemic of AI-generated fictitious citations, Washoe County, Nev., District Court Judge David Hardy has crafted what he calls a “creative and unpredictable” solution that offers a different way for courts to address attorney misconduct involving artificial intelligence.
The unusual order, first reported by Mark Robison at the Reno Gazette Journal, emerged from a complex case involving Uprise, a company at the center of a failed $9 million fiber optic project that resulted in criminal charges against the company’s owner.
But it was the attorneys’ legal filings, not the underlying fraud case, that caught Judge Hardy’s attention and prompted his innovative response.
After Judge Hardy became aware of fictitious citations in their filing, he demanded that the attorneys — Jan Tomasik and Daniel Mann of the law firm Cozen O’Connor — appear in court last Friday to explain themselves and face possible disciplinary action.
Last May, Mann and Tomasik filed a brief with the court containing bogus legal references generated by ChatGPT. Hardy’s investigation revealed multiple fabricated citations that had been used to support the attorneys’ legal arguments.
A ‘Creative and Unpredictable’ Solution
Although Judge Hardy imposed traditional sanctions on the two lawyers, he also gave them an “out” if they agreed to his unusual request.
“This is larger than any of us,” he said. “I care more about what occurs systemically after today. Courts across the country are wrestling with this very issue.”
For traditional sanctions, the judge ordered that the attorneys be removed from the underlying lawsuit, be referred to the Nevada bar for discipline, and each pay a fine of $2,500, which would be donated to legal aid.
However, he then immediately suspended those sanctions, provided the attorneys agreed to his alternative.
Citing the theory of “reintegrative shame,” by which an offender is publicly shamed but nonetheless reintegrated into the community, the judge ordered the two lawyers to teach others about their mistakes and lessons learned.
Progress, Not Shame
Specifically, according to the Reno Gazette Journal article, he ordered them, within 60 days, to:
“Send a letter to the Nevada State Bar president and vice president, informing them of what they did in this case. Also, make themselves available as a resource or mentor to a committee that analyzes AI policies, volunteer to speak at continuing education classes on the topic, and maybe write an article for a legal publication about their mistakes and lessons learned.
“Send a letter to the deans of their respective law schools about their actions. Also offer to be guest lecturers at an ethics course on professional conduct and the use of AI.”
Further sweetening the offer, he added that if those deans and ethics professors choose to invite the lawyers to speak, “I would be proud to join you on a panel to share my perspective as a judge.”
As to why he took this approach to sanctions, Hardy said: “I think it reflects the greatest possibility of system improvement and redemption.”
“My main objective is not to shame, not to punish gratuitously, but to help our profession progress,” he said.
Firm Fires Lawyer, Apologizes
Meanwhile, Cozen O’Connor, the law firm for which the two attorneys worked, said in a court filing last week that it has fired Mann, who joined the firm only last April.
It did not fire Tomasik, who was lead attorney on the case and who signed off on the filing without knowing that Mann had used ChatGPT in preparing the brief.
The firm said it has a “strict and unambiguous” policy on the use of AI. The policy specifies:
“Public AI may not be used or relied upon to produce client work, and it is the responsibility of each attorney to produce client work that is the result of the attorney’s own legal analysis and professional judgement.”
“Cozen O’Connor and the attorneys deeply regret both the inexcusable use of ChatGPT and the erroneous and fictitious citations that resulted from that usage, and have taken immediate steps to ensure that there is no recurrence,” the firm said in a court filing.
Bottom Line
Way back in 2023, the early days of gen AI, Chief Justice John Roberts warned in his year-end report, “Any use of AI requires caution and humility,” specifically noting that commonly used AI applications can be prone to hallucinations that have caused lawyers to submit briefs with citations to non-existent cases.
The Uprise case, like many others, demonstrates that these warnings have not been sufficient to prevent continued incidents. Judge Hardy’s response suggests that more creative judicial interventions may be necessary to address what has become a profession-wide challenge.