Who’s Submitting AI-Tainted Filings in Court?

Riana Pfefferkorn
October 16, 2025

Photo by Glenn Carstens-Peters / Unsplash

It seems like every day brings another news story about a lawyer caught unwittingly submitting a court filing that cites nonexistent cases hallucinated by AI. The problem persists despite courts' standing orders on the use of AI, formal opinions and continuing legal education (CLE) courses on the ethical use of AI in law practice, and revelations that AI-powered legal research tools are more fallible than they purport to be.

Who are the attorneys submitting AI-tainted briefs? A recent 404 Media article about lawyers' use of AI drew my attention to a database of AI Hallucination Cases compiled and maintained by Damien Charlotin, a French lawyer and scholar. Charlotin classifies each incident by the type of inaccuracy involved: fabricated cases, false quotes from or misrepresentations of real cases, or outdated invocations of cases that have been overturned. Besides helping the public understand how lawyers are getting tripped up by AI, Charlotin's database also enables a better view of who is getting tripped up by AI.

Using the database, I analyzed 114 cases from U.S. courts where, according to opposing counsel, the court's own investigation, or both, an attorney's filing included inaccuracies that were suspected or shown to have been caused by the use of AI. I find that the vast majority of the law firms involved (90%) are either solo practices or small firms. What's more, in 56% of the cases, the AI hallucinations were attributed to the plaintiff's counsel, compared with 31% to the defense. And, while most cases in the sample did not specify the AI tool used, of those that did, fully half involved some version of ChatGPT.

Methodology

I based my analysis on cases I downloaded in a .csv file from Charlotin's database on October 9, 2025. The time period covers court orders issued from June 2023 (the month of the landmark order in Mata v. Avianca) through October 7, 2025. [Note: October 9 was a Thursday; by the following Monday, when I began drafting this write-up, Charlotin had added three new matters involving pro se litigants (which I exclude from my analysis) as well as two updates on cases that were already in the database (and thus already in my sample), and there was news coverage of an oral argument where an attorney was grilled about hallucinations in his briefing. I did not add that last matter, which had not yet yielded a written opinion at the time I wrote this, to my sample. This is all to give the reader some idea of just how frequently these incidents are happening, and consequently to highlight that my sample should not be considered comprehensive: the data became outdated almost immediately.]

Charlotin has helpfully coded the data by a number of selectors, including country. I restricted my download to the USA only. After importing the .csv file into Google Sheets, I filtered by Party to include all cases involving a Federal Defender, Lawyer, and/or Paralegal. (Prosecutor is also an option in the database, but there were zero such cases in the USA for that time period.) This resulted in 117 cases, which fell to 114 after I excluded three cases from the sample (two that actually involved pro se litigants rather than lawyers and one duplicate).
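For readers who prefer a scriptable version of that filtering step, here is a minimal sketch using pandas. The file name and the column names ("Jurisdiction", "Party") are my own placeholders rather than the actual headers of Charlotin's .csv export, which may differ.

```python
# Minimal sketch of the sample-selection step described above.
# The file name and column names are hypothetical placeholders;
# adjust them to match the actual headers in the .csv export.
import pandas as pd

df = pd.read_csv("ai_hallucination_cases.csv")

# Keep U.S. cases only.
usa = df[df["Jurisdiction"] == "USA"]

# Keep cases attributed to a Federal Defender, Lawyer, or Paralegal;
# the Party field can list more than one role per case.
roles = ("Federal Defender", "Lawyer", "Paralegal")
mask = usa["Party"].fillna("").apply(lambda p: any(r in p for r in roles))
sample = usa[mask]

# This stage yielded 117 cases; the final sample of 114 came from manually
# dropping two pro se matters and one duplicate entry.
print(len(sample))
```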
I reviewed the court orders that Charlotin included for each database entry to determine which party was accused of submitting hallucinated citations, along with the name and law firm affiliation of the attorney(s) for that party. If that information was unavailable in the court order, I looked up the case docket in federal or state court records.

Most of the cases in the sample are typical adversarial matters where the parties can be classified as either plaintiff or defendant. For matters that fall outside that usual structure (such as bankruptcy cases), I created an "other" category. Where the court order came from an appellate case, I tried to classify the party as plaintiff or defendant according to the parties' trial-court posture.

My data for the number of attorneys at each firm came from either the firm's website or some other authoritative source (such as the NALP, Vault, or Law.com). I sometimes had to guess that an attorney was solo, typically where the attorney does not have a dedicated website and the firm name listed in court records, if any, is indicative of a solo practice (e.g., "Law Office of Jane Smith").

For firm size, I used the following bands: solo; 2-25; 26-100; 101-200; 201-500; 501-700; 701-1000; 1001+. These are the bands the NALP uses for its Directory of Legal Employers, except that the NALP uses "1-25" as its smallest band. I chose to split out solo attorneys as a separate category because I believe solo attorneys deserve recognition as a standalone group with characteristics that differentiate their practices from firms of 10 or 20 lawyers. I added a "government" category for the rare cases involving government attorneys (two: a public defender and attorneys for a county), but did not attempt to count how many attorneys were part of that particular government unit.

There may be errors in my data, given that I had to guess about some things (such as whether someone is a solo practitioner) and rely on sources that may be inaccurate or outdated (for example, third-party reporting on firm size). If you find an error, please email me (riana at stanford dot edu) and I'll fix it and update this post.
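Applying those bands is a simple lookup. The function below is a minimal sketch, assuming an integer headcount has already been collected for each firm; the band labels and the separate government flag are my own notation rather than fields from the database.

```python
# Minimal sketch of the firm-size banding described above: NALP-style bands,
# with solo practitioners split out and a separate government category.
def size_band(headcount: int, is_government: bool = False) -> str:
    if is_government:
        return "Government"
    bands = [
        (1, "Solo"),
        (25, "2-25"),
        (100, "26-100"),
        (200, "101-200"),
        (500, "201-500"),
        (700, "501-700"),
        (1000, "701-1000"),
    ]
    for upper, label in bands:
        if headcount <= upper:
            return label
    return "1001+"

# Example: a 14-lawyer firm falls in the "2-25" band.
print(size_band(14))
```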
The Party Submitting Hallucination-Tainted Filings Is Usually the Plaintiff

The plaintiff is more commonly the party allegedly responsible for submitting filings containing AI hallucinations. Out of 114 cases, 64 were attributed to the plaintiff (56.1%), compared with 35 to the defendant (30.7%). There were 15 "other" cases (13.2%): bankruptcy, family, probate, and tax court matters, agency matters, a habeas petition, and an attorney disciplinary proceeding. (In that disciplinary proceeding, the lawyer allegedly submitted AI-tainted filings in the disciplinary matter itself, unlike the other disciplinary proceedings in the sample, which arose from conduct in an underlying case. Where an attorney was facing discipline for misusing AI while representing a client, I classified the lawyer according to the party they represented in the underlying case.)

Party Represented   # of Cases
Plaintiff           64
Defendant           35
Other               15
Total               114

AI Hallucination Cases Overwhelmingly Involve Solo or Small Firms

Some of the 114 cases in the sample involved attorneys from more than one firm (for example, local counsel filing briefs drafted by a different firm). I counted each firm separately, except that if the court's order faulted only one firm's attorney, I did not count the other firm(s). The total number of firms (including government entities) was 129. Solo practices and small firms represent the overwhelming majority of that number: solos account for half (50.4%) and small firms of 2-25 lawyers for another 39.5%.

Of the remaining 10% of firms, 3.1% are firms of 26-100 lawyers; 2.3% are firms of 201-500 lawyers; 1.6% are firms of 1001+ attorneys; 1.6% are government entities; and firms of either 101-200 or 501-700 lawyers each represent less than 1%. There were no cases involving firms of 701-1000 lawyers.

The number of firms in the sample with more than 25 lawyers is small enough to count on two AI-generated hands. Four have up to 100 lawyers: Ellis George, Hagens Berman Sobol Shapiro, Merlin Law Group, and Williams Kastner. Five have 101-700 lawyers: Butler Snow, Goldberg Segalla, Morrison Mahoney, Quintairos Prieto Wood & Boyer, and Spencer Fane. Two have more than 1,000 attorneys: K&L Gates and Morgan & Morgan.

Five lawyers are implicated in more than one case in the sample. All are either solo practitioners or small-firm lawyers: solo Maren Miller Bam of Salus Law; Jane Watson of Watson & Norris (who was only admitted to the bar in 2024); Chris Kachouroff of McSweeney Cynkar & Kachouroff (who gained notoriety for appearing pantsless at a Zoom court hearing); solo Tyrone Blackburn (who was arrested for assault in June in connection with a different case of his); and William Panichi, a family-court attorney. While the first four allegedly misused AI in two separate cases each, Panichi was called out in an astonishing four cases in one 30-day period; he has reportedly begun winding down his law practice and surrendering his license.

Firm Size    # of Incidents
1001+        2
701-1000     0
501-700      1
201-500      3
101-200      1
26-100       4
2-25         51
Solo         65
Government   2
Total        129

ChatGPT Was the Most Commonly Used AI Tool

Of the 114 cases in the sample, only 34 (30%) identified the specific AI tool(s) used by the attorneys. Some cases involved the use of more than one AI tool. OpenAI's ChatGPT (any version, including in-house versions and the ChatGPT-powered app Ghostwriter Legal) was far and away the most common: it was implicated in fully half (18) of the 34 cases that specified a tool. Coming in a distant second were AI tools offered by Westlaw, followed by Anthropic's Claude (any version), Microsoft Copilot, Google Gemini, and LexisNexis's AI tools.

Tool                        # of Cases
ChatGPT (any)               18
Westlaw (any)               6
Claude (Anthropic) (any)    5
Copilot (Microsoft) (any)   4
Gemini (Google) (any)       3
Lexis (any)                 3
Archie (Smokeball)          1
ChatOn                      1
CoCounsel                   1
EyeLevel                    1
Grammarly                   1
Grok (xAI)                  1
Perplexity                  1
ProWritingAid               1
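Because a single case can name more than one tool, the figures above are per-case mentions of each tool rather than a partition of the 34 cases. A minimal sketch of that tally, reusing the `sample` DataFrame from the earlier sketch and assuming a hypothetical semicolon-separated "AI Tool" column, might look like this:

```python
# Minimal sketch of the tool tally. "AI Tool" is a hypothetical column name;
# a case that names two tools counts once toward each tool's total.
from collections import Counter

tool_counts = Counter()
for raw in sample["AI Tool"].dropna():
    for tool in raw.split(";"):
        tool_counts[tool.strip()] += 1

for tool, n in tool_counts.most_common():
    print(f"{tool}: {n}")
```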
Discussion

This analysis confirms what many lawyers and judges may have suspected: the archetype of misplaced reliance on AI in drafting court filings is a small or solo law practice using ChatGPT in a plaintiff's-side representation.

Ultimately, the buck stops with the attorney to make sure that she can stand behind every word of every brief filed over her signature. But the 404 Media article that led me to Charlotin's database paints a picture of how hard it is to live up to that obligation, particularly for solo or small-firm attorneys. Lawyers struggle with busy caseloads; with the trustworthiness of their co-counsel, junior attorneys, and support staff; and with personal issues (health problems, caregiving obligations, etc.) that compete with work for their time and attention. Of course, that was already true long before AI. Lawyers, even very good ones, have always made the occasional mistake or oversight in their work. AI tools have merely provided a new way to make those errors, while also promising a way out of the underlying issues that contribute to them, like time crunches and insufficient support.

As the 404 Media article observed, "the legal industry is under great pressure to use AI." To overworked attorneys at small law offices, these tools must seem like a godsend. However, as the lawyers in this analysis learned the hard way, these tools are not reliable at their core purpose of producing accurate, comprehensive legal research. Several of my Stanford colleagues are coauthors on a recent paper that investigated AI legal tools' claims to be "hallucination-free" or to "eliminate" or "avoid" hallucinations. To the contrary, they found disturbingly high levels of hallucinations in all the tools they studied: OpenAI's GPT-4, Lexis+ AI (offered by LexisNexis), Westlaw's AI-Assisted Research, and Ask Practical Law AI (which, like Westlaw, is owned by Thomson Reuters). All of those companies are represented in the 34 cases analyzed above.

The incidents in Charlotin's database illustrate the real-world impact of AI legal tools' shortcomings, and not just on the lawyers, who end up humiliated and sanctioned for relying on tools they thought were trustworthy. AI-tainted legal briefs negatively affect those lawyers' clients, who depend on them for high-quality representation, including in incredibly high-stakes matters such as criminal prosecutions or the termination of parental rights. They affect opposing counsel, who must waste their time tracking down nonexistent case citations. And they affect the courts, which are busy enough already without also having to police this new form of attorney ethics violation and take care not to let nonexistent cases cited by counsel creep into court opinions.

What Is To Be Done?

These cases keep happening at an alarming pace. Dozens have been added to Charlotin's database since the American Bar Association (ABA) issued its formal opinion warning about generative AI tools in July 2024. For all the news stories about lawyers caught flat-footed by these tools, clearly there are lawyers who never read those stories and subsequently become the headline of the next one.

It may be that nothing will sufficiently penetrate lawyers' consciousness about the pitfalls of relying on AI tools until every practicing lawyer is personally confronted with that knowledge through some combination of (1) every single type of court (federal, state, tribal, agency; civil, criminal, bankruptcy, family, probate, you name it) requiring every lawyer who appears in every case to file a declaration attesting that they understand and acknowledge the fallibility of AI tools and have educated all their staff as well, and (2) every single state bar (including D.C. and the U.S. territories) imposing CLE requirements specifically about AI tools for legal research, as they now do for topics like substance abuse and elimination of bias.

Even then, there will be failures. Inevitably, some lawyers will dutifully certify that they understand that AI tools are unreliable, then file an AI-tainted brief anyway. But perhaps the incidence of lawyers sanctioned for unwittingly misusing AI will slacken with time and more pervasive awareness of AI's perils. And hopefully AI legal research tools themselves will improve over time (as their paying customers surely expect them to), though it is as unreasonable to expect perfection from them as from humans. "Trust, but verify" must remain the watchword.

With all that said, no amount of CLE courses and state bar ethics opinions will fix the problem I haven't discussed until now: the use of AI by pro se litigants.
I wanted to figure out which lawyers were getting tripped up by AI, so I analyzed only U.S. cases involving lawyers or paralegals, for a sample of 114 cases. But in the .csv file I downloaded from Charlotin's database, there are 160 cases involving a pro se litigant. That is: pro se litigants account for the majority of the cases in the United States in which a party submitted a court filing containing AI hallucinations.

In a country where legal representation is unaffordable for most people, it is no wonder that pro se litigants are depending on free or low-cost AI tools. But it is a scandal that so many have been betrayed by those tools, to the detriment of the cases they are litigating all on their own.

Conclusion

This analysis speaks to both the urgent need for high-quality legal research tools in a legal profession dominated by small and solo practices, and the yawning gap between current AI tools' actual and perceived reliability. In many of the cases in the analysis, the attorney had not understood that AI tools may produce inaccurate results. True, lawyers are ethically obligated to ensure the accuracy of their work product. But it is also incumbent upon the companies offering AI tools, especially those tailored specifically for legal research, not to oversell them or hide their shortcomings; that is, their marketing shouldn't outgun their disclaimers. So long as these tools remain flawed without lawyers understanding that, AI tools for legal research threaten to be not a timesaver but a source of unnecessary extra work for lawyers and the courts.
