For all the discussion of how generative AI will impact the legal profession, maybe one answer is that it will weed out the lazy and incompetent lawyers.
By now, in the wake of several cases in which lawyers have landed in hot water for citing hallucinated cases generated by ChatGPT, most notoriously Mata v. Avianca, and given all the publicity those cases have received, you would think most lawyers would have gotten the message not to rely on ChatGPT for legal research, at least not without checking the results.
Yet it happened again this week — and not once, but in two separate cases, one in Missouri and the other in Massachusetts. In fairness, the Missouri case involved a pro se litigant, not a lawyer, but that litigant claimed to have gotten the citations from a lawyer he hired through the internet.
The Massachusetts case did involve a lawyer, as well as the lawyer’s associate and two recent law school graduates not yet admitted to practice.
In the Missouri case, Kruse v. Karlen, the unwitting litigant filed an appellate brief in which 22 of the 24 cases it cited were fictitious. Worse, they were fictitious in ways that should have raised red flags, including generic, made-up-sounding names such as Smith v. ABC Corporation and Jones v. XYZ Corporation.
In the Massachusetts case, Smith v. Farwell, the lawyer filed three separate legal memoranda that cited and relied on fictitious cases. He blamed the mistake on his own ignorance of AI and attributed the inclusion of the cases to two recent law school grads and an associate who worked on the memoranda.
Let’s dive into the details.
Kruse v. Karlen
Jonathan Karlen, who is not an attorney, filed a pro se appeal in the Missouri Court of Appeals. His initial filing was deficient in several respects, but after the court gave him several deadline extensions, he ultimately filed an appellate brief and a reply brief. The respondent moved to strike the brief based on its multiple failures to comply with the court’s requirements, among them its failure to provide accurate legal citations.
As to that last point, the court, in an opinion written by Presiding Judge Kurt S. Odenwald, found:
“Particularly concerning to this Court is that Appellant submitted an Appellate Brief in which the overwhelming majority of the citations are not only inaccurate but entirely fictitious. Only two out of the twenty-four case citations in Appellant’s Brief are genuine. The two genuine citations are presented in a section entitled Summary of Argument without pincites and do not stand for what Appellant purports.”
In some instances, neither the cited case nor quotes taken from the case “exist in reality,” the court said. In others, the citations had real case names — “presumably the product of algorithmic serendipity,” the court said — but did not stand for the propositions asserted by Karlen.
In a reply brief, Karlen apologized for citing fictitious cases and said that they came from an online consultant he hired to write the brief who claimed to be an attorney licensed in California. He said he did not know the person would use “artificial intelligence hallucinations” and denied any intent to mislead the court.
The court was not sympathetic.
“Filing an appellate brief with bogus citations in this Court for any reason cannot be countenanced and represents a flagrant violation of the duties of candor Appellant owes to this Court. Appellant submitted the Appellate Brief in his name and certified its compliance with [the court’s rules] as a self-represented person. …
“We regret that Appellant has given us our first opportunity to consider the impact of fictitious cases being submitted to our Court, an issue which has gained national attention in the rising availability of generative A.I.”
The court concluded that Karlen’s submission of fictitious cases constituted “an abuse of the judicial system.” For that, it made him pay the price.
First, the court dismissed his appeal. Then, deeming his appeal frivolous, it ordered him to pay $10,000 in damages towards his opponent’s attorneys’ fees.
“We find damages … to be a necessary and appropriate message in this case, underscoring the importance of following court rules and presenting meritorious arguments supported by real and accurate judicial authority.”
Smith v. Farwell
In this Massachusetts Superior Court case, plaintiff’s counsel filed four memoranda in response to four separate motions to dismiss. In reviewing the memoranda, Judge Brian A. Davis wrote, he noted that the legal citations “seemed amiss.” After spending several hours investigating the citations, he was unable to find three of the cases cited in two of the memoranda.
At a hearing on the motions to dismiss, the judge started out by informing plaintiff’s counsel of the fictitious cases he’d found and asking how they’d been included in the filings. When the lawyer said he had no idea, the judge ordered him to file a written explanation of the origin of the cases.
In that letter, the attorney acknowledged that he had “inadvertently” included citations to multiple cases that “do not exist in reality.” He attributed the citations to an unidentified “AI system” that someone in his law office had used to “locat[e] relevant legal authorities to support our argument[s].” He apologized to the judge for the fake citations and expressed regret for failing to “exercise due diligence in verifying the authenticity of all caselaw references provided by the [AI] system.”
The court then scheduled another hearing to learn more about how the cases came to be cited and to consider whether to impose sanctions. As the judge further reviewed the attorney’s filings, he found an additional nonexistent case in a third memorandum, bringing the total to four fictitious cases across three separate memoranda.
At the hearing, the attorney again apologized. He said that the filings had been prepared by three people in his office — two recent law school graduates and an associate attorney.
“Plaintiff’s Counsel is unfamiliar with AI systems and was unaware, before the Oppositions were filed, that AI systems can generate false or misleading information,” Judge Davis wrote. “He also was unaware that his associate had used an AI system in drafting court papers in this case until after the Fictitious Case Citations came to light.”
While plaintiff’s counsel had reviewed the filings for style, grammar and flow, he told the court, he had not checked the accuracy of the citations.
The judge wrote that he found the lawyer’s explanation truthful and accurate, that he believed the lawyer had not knowingly submitted the citations, and that the lawyer’s expression of contrition was sincere.
“These facts, however, do not exonerate Plaintiff’s Counsel of all fault, nor do they obviate the need for the Court to take responsive action to ensure that the problem encountered in this case does not occur again in the future.”
Citing the original and now-famous hallucinated-citations case, Mata v. Avianca, in which the court said, “Many harms flow from the submission of fake opinions,” the judge wrote:
“With this admonition in mind, the Court concludes that, notwithstanding Plaintiff’s Counsel’s candor and admission of fault, the imposition of sanctions is warranted in the present circumstances because Plaintiff’s Counsel failed to take basic, necessary precautions that likely would have averted the submission of the Fictitious Case Citations. His failure in this regard is categorically unacceptable.”
After a thoughtful discussion of Mata and other prior cases involving hallucinated citations, the judge distinguished this case on the ground that the lawyer was “forthright in admitting his mistakes” and had done nothing to compound them, as happened in Mata. Even so, he said, the conduct required sanctions of some sort.
“Plaintiff’s Counsel’s knowing failure to review the case citations in the Oppositions for accuracy, or at least ensure that someone else in his office did, before the Oppositions were filed with this Court violated his duty under Rule 11 to undertake a ‘reasonable inquiry,’” Judge Davis said. “Simply stated, no inquiry is not a reasonable inquiry.”
For that reason, the judge imposed a $2,000 sanction on the lawyer, payable to the court rather than the opposing party.
The judge ended his opinion with what he described as the “broader lesson” for attorneys generally:
“It is imperative that all attorneys practicing in the courts of this Commonwealth understand that they are obligated under Mass. Rule Civ. P. 11 and 7 to know whether AI technology is being used in the preparation of court papers that they plan to file in their cases and, if it is, to ensure that appropriate steps are being taken to verify the truthfulness and accuracy of any AI-generated content before the papers are submitted. …
“The blind acceptance of AI-generated content by attorneys undoubtedly will lead to other sanction hearings in the future, but a defense based on ignorance will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known.”