
When AI Gets It Wrong: A Lawyer’s Misstep with ChatGPT

The year 2023 marked a turning point for generative AI tools like ChatGPT, as their potential to reshape industries—from education to healthcare—captured global attention. But alongside the excitement came hard lessons, particularly in fields like law, where precision and accuracy are non-negotiable.

Attorney Steven Schwartz, of the New York firm Levidow, Levidow & Oberman, learned this the hard way. While working on a personal injury lawsuit for Roberto Mata, a passenger who said he was injured by a metal serving cart on an Avianca Airlines flight, Schwartz used ChatGPT to find prior cases that might support his arguments. What he didn’t realize was that ChatGPT, despite sounding authoritative, had invented at least six of the cases it cited.

When Schwartz submitted these cases in a legal brief, the errors quickly caught the eye of U.S. District Judge Kevin Castel. Not only were the cases fictitious, but the fabricated names, docket numbers, and quotes were so obviously wrong that Judge Castel called them out in a formal order and demanded an explanation.

Schwartz, for his part, admitted he was completely unaware that the AI could produce false information. He said it was his first time using ChatGPT for legal research and that he had failed to cross-check the results. “It was a mistake I deeply regret,” Schwartz explained to the court.

His colleague, Peter LoDuca, who had signed the brief as Mata’s lawyer of record, was also implicated. The two attorneys and their firm were ordered to pay a $5,000 penalty, and Mata’s lawsuit against Avianca was ultimately dismissed.

The Bigger Problem: AI “Hallucinations”

This case brought a glaring issue into focus: AI “hallucinations.” Despite their advanced design, generative AI tools like ChatGPT can fabricate convincing but entirely false information, including citations to cases that never existed. Research on legal uses of these tools has found that such errors occur far more often in specialized fields like law than many users expect.

Legal experts and judges have taken notice. Some, like U.S. District Judge Brantley Starr in Texas, have introduced standing requirements that attorneys certify either that no generative AI was used in drafting a filing or that any AI-generated content was verified for accuracy by a human.

Still, others see potential in AI when used carefully. U.S. Circuit Judge Kevin Newsom conducted a personal experiment using ChatGPT to clarify the meaning of legal terms. While he found the results promising, he emphasized the importance of human oversight.

A Wake-Up Call for the Legal Profession

Schwartz’s case serves as a cautionary tale for lawyers everywhere. It’s a reminder that while AI can be a powerful tool, it isn’t infallible—and mistakes in the legal world can have serious consequences.

The lesson here? AI is no substitute for thorough, human-driven research. Whether in law, medicine, or any other critical field, the key to success lies in using technology responsibly—double-checking, verifying, and never forgetting the value of human judgment.
