Sam Bennett

11 Jun 2023


Lawyers Blame ChatGPT for Including Bogus Case Law, Face Possible Sanctions

Attorneys Steven A. Schwartz and Peter LoDuca found themselves in hot water when a court filing they made included references to non-existent court cases.

The lawyers, apologizing to a judge in Manhattan federal court, attributed the error to ChatGPT, an artificial intelligence-powered chatbot.

Schwartz used ChatGPT to search for legal precedents supporting his client’s case against Avianca, a Colombian airline.

However, the chatbot suggested several cases that turned out to be fabricated or to involve non-existent airlines.

Schwartz explained to the judge that he mistakenly believed that ChatGPT obtained the cases from an undisclosed source inaccessible through conventional research methods.

He admitted to failing in his follow-up research to verify the accuracy of the citations. Schwartz expressed surprise and regret, acknowledging that he did not comprehend ChatGPT’s capability to fabricate cases.

The lawyers now face potential sanctions for their inclusion of fictitious legal research in the court filing.


U.S. District Judge P. Kevin Castel expressed both confusion and concern over the lawyers’ reliance on ChatGPT and their failure to promptly correct the bogus legal citations.

Avianca’s lawyers and the court had alerted them to the problem, yet the citations were not rectified.

Judge Castel confronted Schwartz with one of the specific legal cases invented by ChatGPT, highlighting its nonsensical nature.

He questioned Schwartz on his understanding of the confusing passage, and Schwartz offered an erroneous explanation based on excerpts from different cases.

Schwartz and LoDuca apologized sincerely to the judge, expressing personal and professional remorse for their actions.

Schwartz stated that he had learned from the blunder and implemented safeguards to prevent a similar occurrence in the future.

LoDuca, who trusted Schwartz’s work, acknowledged his failure to adequately review the compiled research.

The lawyers’ defense argued that the submission resulted from carelessness rather than bad faith and should not warrant sanctions.

Legal experts and observers have highlighted the dangers of using AI technologies without a thorough understanding of their limitations and potential risks.

The case involving ChatGPT illustrates how lawyers may not fully comprehend how the AI system works, leading to the inclusion of fictional information that appears realistic.

The incident has raised concerns about the need for awareness and caution when utilizing promising AI technologies in the legal field.

Two lawyers facing potential sanctions attributed the inclusion of fictitious legal research in a court filing to ChatGPT, an AI-powered chatbot.

The lawyers apologized to the judge, acknowledging their misconceptions and their failure to verify the accuracy of the citations.
