
OpenAI Defamation Lawsuit: The first of its kind

Written by Rebecca Cahill

What Happened?

Mark Walters, the founder of Armed America Radio and the self-proclaimed "loudest voice in America fighting for gun rights," filed a defamation lawsuit against OpenAI LLC ("OpenAI") last week in Georgia state court. Walters claims that ChatGPT, OpenAI's generative chatbot, produced a complete fabrication about him that was libelous and harmful to his reputation.

It all started when Fred Riehl, editor-in-chief of the gun news website Ammoland.com, asked ChatGPT to summarize a complaint that the Second Amendment Foundation (SAF), a gun rights nonprofit, had filed in federal court. The complaint accused Washington Attorney General Robert Ferguson of misusing legal process to pursue private vendettas and stamp out dissent. Walters was not a party to the suit, nor was he even mentioned in it. ChatGPT disregarded that. When Riehl asked ChatGPT to point out the specific paragraphs of the complaint that mentioned Walters, or to provide the full text of the document, the bot obliged, except that what it produced was a complete fabrication that "bears no resemblance to the actual complaint, including an erroneous case number."

The ChatGPT results stated that Walters had "misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF's leadership." Walters does not work for the SAF and never has, though he did receive a distinguished service award from the organization in 2017 for promoting firearm rights.

After receiving the information, Riehl contacted SAF to verify what ChatGPT had told him; SAF confirmed that none of it was true. He never repeated any of it in an article of his own. He presumably shared the information with Walters as well, although that detail has not been confirmed.

How does this happen?

When chatbots fabricate information, it is referred to as a hallucination. According to The New York Times, Google C.E.O. Sundar Pichai said that "hallucinations are a recognized problem." OpenAI recognizes the problem as well: its terms of use state that the chatbot cannot be trusted to generate accurate information, and the company has admitted that hallucinations are a limitation of the product. The question is whether those terms of use will shield OpenAI from liability.

Walters’ attorney John Monroe stated that “while research and development in AI are worthwhile endeavors, it is irresponsible to unleash a platform on the public that knowingly makes false statements about people.”

This is not the first incident of its kind. In April, an Australian mayor made news when he announced he was preparing to sue OpenAI after ChatGPT falsely claimed that he had been convicted of and imprisoned for bribery. In New York, an attorney is facing possible sanctions for filing legal briefs in federal court that cited fake legal precedents generated by ChatGPT.

Legal Liability

This is the first case of its kind for artificial intelligence. Lyrissa Lidsky, who holds the Raymond & Miriam Ehrlich Chair in U.S. Constitutional Law at the University of Florida Levin College of Law, asks, "how do you apply our fault concepts to a nonhuman actor?" Because ChatGPT is a product rather than a human speaker, ordinary liability analysis is not straightforward. Lidsky believes cases like Walters' will present a real challenge for the courts.

Courts normally treat harms caused by speech differently than they treat physical harms caused by products. ChatGPT is a product, yet the harm alleged here is not physical. Courts will therefore have to assign fault "between two sets of legal principles": product liability and defamation. They will have to decide whether to adapt ordinary legal principles to fit the situation, or to conclude that the issue is too far removed from any ordinary situation for those principles to apply. Either way, the precedent set will likely shape future lawsuits.

Walters is seeking monetary damages, arguing that the fabricated claims could harm his reputation, costing him future job opportunities and listeners of his radio commentary. Ari Cohn, a Chicago attorney specializing in First Amendment and defamation law, is skeptical about the lawsuit's viability. To sustain a claim for defamation, Walters will have to prove several elements. First, he must prove that he is the "Mark Walters" identified in the bot's responses; Cohn notes that the name is not unique. He must also show that "the average reader could reasonably think it's about him." Cohn says those are the easy parts.

Next, Walters must prove that a reasonable person would take the bot's response as a statement of fact, a harder task given the widespread coverage of the hallucination problem. Walters' status as a public figure creates another hurdle: he must also prove actual malice, meaning that "the statement was made with knowledge of its falsity or having entertained serious doubts about its truth." The problem is that ChatGPT is a program that lacks knowledge and intent. That is why Walters sued OpenAI instead, but proving malice against the company may be an obstacle, since no one there directly input the false information into the system.

Damages may be another issue. Riehl was the only person who saw the output, and he did not believe it to be true. To sustain a defamation lawsuit, a plaintiff must show "specific, quantifiable damages caused by a false statement." Still, Walters' attorney says he feels strongly about the case.

A case like this seemed inevitable. The result of this suit may have a lasting influence on how we treat AI-generated falsehoods, and on whether the companies that create the software can be held liable.

Sources

Ashley Belanger, OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit, Ars Technica (June 9, 2023).

Matt Binder, OpenAI sued for defamation after ChatGPT allegedly fabricated fake embezzlement claims, Mashable (June 8, 2023).

Ryan J. Farrick, Georgia Radio Host Files Unprecedented Lawsuit Accusing OpenAI, ChatGPT of Defamation, Legal Reader (June 9, 2023).

Miles Klee, ChatGPT is Making Up Lies – Now it's Being Sued for Defamation, Rolling Stone (June 9, 2023).

Isaiah Poritz, First ChatGPT Defamation Lawsuit to Test AI's Legal Liability, Bloomberg Law (June 12, 2023).
