Lawyer Cites Fake Cases After ChatGPT 'Research'

by John Lister

A US lawyer is facing disciplinary sanctions after relying on ChatGPT for legal "research." The chatbot's output led his side to present a legal argument that cited several non-existent cases.

Steven A. Schwartz was supporting the legal team of a man suing in a personal injury case. His side wanted to cite similar cases that had set a precedent, evidence that would help show their own case was strong enough to proceed. (Source: nytimes.com)

The opposing legal team challenged the argument, saying they were unable to find the cases cited. That led the judge to write that "six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations." (Source: bbc.co.uk)

Plausibility Is the Key

Schwartz later admitted he had used ChatGPT to search for relevant cases. He will now appear at a disciplinary hearing along with the lead lawyer, Peter LoDuca, who presented the argument but was not involved in the research and was unaware of how it had been obtained.

Schwartz appeared to have fundamentally misunderstood the purpose of tools such as ChatGPT. Although they may appear to work that way, such AI tools are not designed to answer questions accurately.

Technically they are not even attempting to "write" documents as a human would understand it. Instead they effectively simulate writing, predicting which words are most likely to follow one another. The main goal is to produce something plausible in a particular style rather than something true or informative.

Good Uses and Bad Uses

The technology certainly has many uses. It can generate "ideas" for content and outlines of documents, along with headlines and advertising copy. Its ability to produce text in seconds also makes it useful for generating multiple versions of a piece of text, with a human then selecting the one that is most useful while remaining accurate. Indeed, many automated spelling and grammar checkers now use a form of artificial intelligence.

It appears Schwartz was relying on the free public version of ChatGPT, which generates text based on training data collected from the Internet up to 2021. Although it can often summarize online information quickly and accurately, it can also display behavior dubbed "hallucination," in which it confidently gives answers that cite sources and details that don't actually exist.

What's Your Opinion?

Should the lawyer face disciplinary action? Do you buy his argument that he didn't realize the "information" provided could be inaccurate? Are such tools useful as long as you double-check the supposed sources?
