Lawyers Blame ChatGPT for Including Bogus Case Law, Face Possible Sanctions
Attorneys Steven A. Schwartz and Peter LoDuca found themselves in hot water after a court filing they submitted cited non-existent court cases.
The lawyers, apologizing to a judge in Manhattan federal court, attributed the error to ChatGPT, an artificial intelligence-powered chatbot.
Schwartz used ChatGPT to search for legal precedents supporting his client’s case against Avianca, a Colombian airline.
However, the chatbot suggested several cases that turned out to be fabricated, some of them involving airlines that do not exist.
Schwartz explained to the judge that he had mistakenly believed ChatGPT obtained the cases from an undisclosed source inaccessible through conventional research methods.
He admitted that his follow-up research failed to verify the accuracy of the citations. Expressing surprise and regret, Schwartz acknowledged that he had not understood ChatGPT was capable of fabricating cases.
The lawyers now face potential sanctions for their inclusion of fictitious legal research in the court filing.
Lawyers Cite ChatGPT as the Source of Fictitious Legal Research

U.S. District Judge P. Kevin Castel expressed both confusion and concern over the lawyers’ reliance on ChatGPT and their failure to promptly correct the bogus legal citations.
Avianca’s lawyers and the court had alerted them to the problem, yet the citations were not rectified.
Judge Castel confronted Schwartz with one specific case invented by ChatGPT, highlighting its nonsensical content, and asked whether he had understood its confusing presentation. Schwartz offered an erroneous explanation based on excerpts from different cases.
Schwartz and LoDuca apologized sincerely to the judge, expressing personal and professional remorse for their actions.
Schwartz stated that he had learned from the blunder and implemented safeguards to prevent a similar occurrence in the future.
LoDuca, who trusted Schwartz’s work, acknowledged his failure to adequately review the compiled research.
The lawyers’ defense argued that the submission resulted from carelessness rather than bad faith and should not warrant sanctions.
Legal experts and observers have highlighted the dangers of using AI technologies without a thorough understanding of their limitations and potential risks.
The case involving ChatGPT illustrates how lawyers may not fully comprehend how an AI system works, leading them to include fictional information that appears realistic.
The incident has raised concerns about the need for awareness and caution when utilizing promising AI technologies in the legal field.
In short, the two lawyers facing potential sanctions attributed the fictitious legal research in their court filing to ChatGPT, an AI-powered chatbot.
Both apologized to the judge, acknowledging their misconceptions about the tool and their failure to verify the accuracy of its citations.