Armed American Radio touts one of its hosts, Mark Walters, as the "loudest voice in America fighting for gun rights." Now it appears that Walters' prominent commentary on gun rights and the Second Amendment Foundation (SAF), a gun rights nonprofit that gave him a distinguished service award in 2017, has led the generative AI chatbot ChatGPT to wrongly connect the dots and make false and allegedly malicious statements about the radio host, including potentially libelous claims that Walters once served as SAF's treasurer and chief financial officer and that he was accused of embezzling funds and defrauding SAF.
Walters is now suing ChatGPT owner OpenAI in a Georgia state court for unspecified monetary damages in what's likely the first defamation lawsuit resulting from ChatGPT's so-called "hallucinations," instances where the chatbot fabricates information outright.
The misinformation was first uncovered by journalist Fred Riehl, who asked ChatGPT to summarize a complaint that SAF filed in federal court.
That SAF complaint actually accused Washington Attorney General Robert Ferguson of "misuse of legal process to pursue private vendettas and stamp out dissent." Walters was never a party to that case and is not mentioned anywhere in the suit, but ChatGPT disregarded the actual facts of the case when prompted to summarize it, Walters' complaint said. Instead, it generated a wholly inaccurate response to Riehl's prompt, falsely claiming that the case was filed against Walters for embezzlement he never committed while serving in an SAF post he never held.
Even when Riehl asked ChatGPT to point to specific paragraphs mentioning Walters or to provide the full text of the SAF complaint, ChatGPT generated a "complete fabrication" that "bears no resemblance to the actual complaint, including an erroneous case number," Walters' complaint said.
"Every statement of fact" in ChatGPT's SAF case summary "pertaining to Walters is false," Walters' complaint said.
OpenAI did not immediately respond to Ars' request for comment.
Is OpenAI responsible when ChatGPT lies?
It's not the first time that ChatGPT has fabricated legal information out of whole cloth. A lawyer is currently facing harsh consequences in court after citing six cases that ChatGPT made up without first verifying them; a judge called the fabricated case details obvious "legal gibberish," Fortune reported.
Although many people, from students researching essays to lawyers researching case law, use the sophisticated chatbot to search for accurate information, ChatGPT's terms of use make clear that its output cannot be trusted to be accurate. The terms state:
Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.
Walters' lawyer John Monroe told Ars that "while research and development in AI are worthwhile endeavors, it is irresponsible to unleash a platform on the public that knowingly makes false statements about people."
OpenAI was previously threatened with a defamation lawsuit by an Australian mayor, Brian Hood, after ChatGPT generated false claims that Hood had been imprisoned for bribery. In that case, Hood asked OpenAI to remove the false information as a meaningful remedy, arguing that otherwise the reputational damage could harm his political career.
Monroe told Ars that Walters is seeking only monetary damages as a remedy at this time, noting that the potential damage to Walters' reputation could cost him future job opportunities or listeners of his radio commentary.