Sam Bennett

11 Jun 2023

Lawyers Blame ChatGPT for Including Bogus Case Law, Face Possible Sanctions

Attorneys Steven A. Schwartz and Peter LoDuca found themselves in hot water when a court filing they made included references to non-existent court cases.

The lawyers, apologizing to a judge in Manhattan federal court, attributed the error to ChatGPT, an artificial intelligence-powered chatbot.

Schwartz used ChatGPT to search for legal precedents supporting his client’s case against Avianca, a Colombian airline.

However, the chatbot suggested several cases that turned out to be fabricated or to involve non-existent airlines.

Schwartz explained to the judge that he mistakenly believed that ChatGPT obtained the cases from an undisclosed source inaccessible through conventional research methods.

He admitted that his follow-up research failed to verify the accuracy of the citations. Schwartz expressed surprise and regret, acknowledging that he had not understood that ChatGPT was capable of fabricating cases.

The lawyers now face potential sanctions for their inclusion of fictitious legal research in the court filing.


U.S. District Judge P. Kevin Castel expressed both confusion and concern over the lawyers’ reliance on ChatGPT and their failure to promptly correct the bogus legal citations.

Avianca’s lawyers and the court had alerted them to the problem, yet the citations were not rectified.

Judge Castel confronted Schwartz with one of the invented cases, highlighting its nonsensical content, and asked whether he had understood what he was reading. Schwartz offered an erroneous explanation based on excerpts from different cases.

Schwartz and LoDuca apologized sincerely to the judge, expressing personal and professional remorse for their actions.

Schwartz stated that he had learned from the blunder and implemented safeguards to prevent a similar occurrence in the future.

LoDuca, who trusted Schwartz’s work, acknowledged his failure to adequately review the compiled research.

The lawyers’ defense argued that the submission resulted from carelessness rather than bad faith and should not warrant sanctions.

Legal experts and observers have highlighted the dangers of using AI technologies without a thorough understanding of their limitations and potential risks.

The case illustrates how lawyers may not fully comprehend how an AI system works, leading to the inclusion of fictional information that appears realistic.

The incident has raised concerns about the need for awareness and caution when utilizing promising AI technologies in the legal field.

In short, two lawyers facing potential sanctions attributed the inclusion of fictitious legal research in a court filing to ChatGPT, an AI-powered chatbot. The lawyers apologized to the judge, acknowledging their misconceptions and their failure to verify the accuracy of the citations.
