Preprint
Article

This version is not peer-reviewed.

Citing the Unseen: AI Hallucinations in Tax and Legal Practice: A Comparative Analysis of Professional Responsibility, Procedural Legitimacy, and Sanctions

Submitted: 02 January 2026

Posted: 05 January 2026


Abstract
Advances in generative AI have brought powerful tools to tax & legal practice, but with them come AI hallucinations: fabricated citations, quotes, or facts that appear plausible but are entirely false. In 2023, a notable U.S. case (Mata v. Avianca) revealed this risk when attorneys, relying on ChatGPT for research, submitted a brief containing fictitious case law and were sanctioned as a result. This incident revealed substantial risks for the tax & legal profession. Increased AI use in tax & legal submissions and decision drafting subsequently led to numerous similar incidents worldwide. By late 2025, a collection of datasets had logged nearly 800 cases of AI-related citation errors or “hallucinations” in at least 25 countries, with a marked increase in 2025 alone. These cases span court filings by lawyers and pro se litigants, as well as orders drafted by judges or tribunals. This development necessitates an examination of professional responsibility and procedural fairness concerning AI-generated falsehoods. This article analyzes how courts and administrative bodies across jurisdictions have responded to AI-generated hallucinations in tax & legal submissions and decisions, and what these responses indicate regarding emerging verification standards under existing law. The analysis compares incidents from the United States, Canada, the United Kingdom, India, Israel, and other jurisdictions, focusing on the imposition or withholding of sanctions, the treatment of various actors (e.g., lawyers, self-represented parties, experts, judges), and the adaptation of tax & legal doctrines to this challenge.
Subject: Social Sciences - Law

Introduction

Advances in generative AI have brought powerful tools to tax & legal practice, but with them come “AI hallucinations”, which are fabricated citations, quotes, or facts that appear plausible but are entirely false1. In 2023, a notable U.S. case (Mata v. Avianca) revealed this risk when attorneys, relying on ChatGPT for research, submitted a brief containing fictitious case law and were sanctioned as a result2. This incident revealed substantial risks for the tax & legal profession. Increased AI use in tax & legal submissions and decision drafting subsequently led to numerous similar incidents worldwide. By late 2025, a collection of datasets had logged nearly 800 cases of AI-related citation errors or “hallucinations” in at least 25 countries, with a marked increase in 2025 alone3. These cases span court filings by lawyers and pro se litigants, as well as orders drafted by judges or tribunals. This development necessitates an examination of professional responsibility and procedural fairness concerning AI-generated falsehoods. This article analyzes how courts and administrative bodies across jurisdictions have responded to AI-generated hallucinations in tax & legal submissions and decisions, and what these responses indicate regarding emerging verification standards under existing law. The analysis compares incidents from the United States, Canada, the United Kingdom, India, Israel, and other jurisdictions, focusing on the imposition or withholding of sanctions, the treatment of various actors (e.g., lawyers, self-represented parties, experts, judges), and the adaptation of tax & legal doctrines to this challenge.

Core Research Question

This inquiry addresses the following questions:
  • When do courts impose sanctions, issue warnings, or excuse AI hallucinations?
  • Does user intent, or lack thereof, affect outcomes, or is the duty to verify AI outputs effectively strict?
  • How do responses differ based on the actor involved, including lawyers, pro se litigants, expert witnesses, and judges?
  • Are new rules and doctrines emerging to address AI use in the tax & legal process, or are existing professional and procedural standards being adapted?
Answers to these questions define benchmarks for AI integration into tax & legal practice and illustrate practical implications for the judicial process.

The Surge of AI Hallucinations in Tax & Legal Practice (2023–2025)

AI hallucinations rapidly became a pervasive issue in global courtrooms. The vast majority of these incidents involve “Type 1” hallucinations, which are wholly fabricated case citations or legal authorities4.
Figure 1. Author’s Tax AI Research, Texas A&M University. See supra note 3.
Other instances include fictitious quotations or misrepresentations of real precedents, but fake citations are the most prevalent error. Data shows a significant increase in cases during 2025, with over 85% of known AI-related citation mishaps occurring in that year alone5. Starting with a few examples in 2023 (19 tracked cases), incidents increased to hundreds by 2025, coinciding with the widespread adoption of tools like ChatGPT in tax & legal research and drafting6.
The demographics of involved actors have shifted. Initially, licensed attorneys were responsible for many AI-generated errors (often junior lawyers experimenting with AI); by late 2025, however, pro se litigants outnumbered lawyers as principal actors in these cases (approximately 50% versus 43%)7. While AI tools enable non-lawyers to draft court filings, this shift also exposes pro se litigants, who may be less aware of the technology’s unreliability, to avoidable harm. A typical scenario involves a litigant (or lawyer) asking an AI tool for relevant cases or for help writing a brief; the AI then produces text citing plausible but fabricated precedents. If those fabricated authorities make it into a filing unchecked, the problem surfaces when a judge or opposing counsel cannot locate the cited cases. This wastes time and misleads the court (albeit unintentionally). The judiciary must then determine an appropriate response.
Across jurisdictions, courts have consistently condemned the submission of fake citations; however, responses range from educational or lenient in some situations to disciplinary and punitive in others. Approximately 90% of these incidents have not resulted in formal professional sanctions against the filer8. Instead, courts address many cases by striking the offending material or issuing warnings. A significant minority (about 10%), however, have led to sanctions or referrals for professional discipline9. To understand this variance, this analysis examines factors that determine whether a court sanctions, warns, or excuses the conduct.

Factors Influencing Sanctions, Warnings, or Excuses for AI Hallucinations

Judicial responses have largely hinged on the culpability of the responsible party and the context of the case. Early on, some courts showed leniency, especially if the error was caught and corrected quickly or if the filer was a non-lawyer. As awareness of AI’s propensity for generating inaccurate information increased, courts emphasized that tax & legal filings must be accurate and truthful, irrespective of intent. Licensed attorneys face a stricter standard and risk sanctions for unverified AI output; pro se litigants, however, often receive warnings or mild consequences for a first offense10.
The following examples demonstrate these varying standards:

Leniency and Warnings

In the United Kingdom, a self-represented claimant who unknowingly relied on an AI-generated false case citation did not incur costs or contempt charges; the tribunal noted that the claim had been pursued unreasonably but declined to award costs because “AI is a relatively new tool which the public is still getting used to” and “the Claimant acted honestly.”11 The court viewed the incident as an opportunity for instruction rather than a basis for sanctions. In a U.S. immigration case, a judge suspected a pro se filing was drafted by AI and contained fabricated citations. The judge admonished the litigant, reminded all court users of the duty of candor, and chose not to impose sanctions12. Courts often treat a party’s lack of tax & legal training as a mitigating factor, recognizing unfamiliarity with AI’s inherent risks13. Good faith and prompt correction mitigate penalties. Judges often issue warnings and order correction of the record instead of punishment when a filer admits and rectifies an error. Data from the global tracker shows that pro se litigants frequently receive warnings rather than formal sanctions14.

Sanctions for Lawyers

Courts have shown less leniency toward licensed attorneys who file AI-fabricated information. Attorneys are presumed to know their duty to vet sources. In the Avianca case in New York, which first brought this issue to prominence, the judge found the lawyers had “failed to verify [the] AI output” and imposed sanctions, including a $5,000 fine and a requirement to notify all involved judges and clients about the misconduct15. This case, termed the “first major AI hallucination case,” prompted significant discussion within the tax & legal community16. Since then, numerous courts have sanctioned attorneys for similar lapses. Sanctions have ranged from modest fines (typically $1,000 to $5,000) to orders requiring attendance at continuing legal education (CLE) courses on ethics and AI17, and even referrals to bar disciplinary authorities in egregious cases. For example, an Israeli magistrate court fined lawyers 3,000 ILS and 7,500 ILS in separate cases for submitting non-existent citations, even after one attorney attempted to attribute fault to “the software and an intern”; the court pointedly rejected this excuse, stating it “was unimpressed” by attempts to shift responsibility18. In Trinidad & Tobago, a judge struck the AI-generated authorities from a lawyer’s filing and referred the matter to the law association’s disciplinary committee, emphasizing that citing non-existent cases constitutes a serious abuse of process and a breach of professional duty19. These cases impose a de facto strict verification duty on tax & legal professionals. Submitting unvetted AI output violates basic expectations of competence and candor, even if an attorney did not intend to mislead; in most reported cases, carelessness, rather than intent, caused the violations. An English judge observed, “It would have been negligent for this barrister, if she used AI and did not check it,” in a case involving a barrister suspected of using ChatGPT to produce a flawed brief20. The barrister was ordered to pay wasted costs and referred to regulators. This outcome demonstrates a lack of tolerance for professional negligence in this context21.

Intermediate Cases: Partial Sanctions

Certain situations warrant a nuanced judicial approach, leading courts to impose partial sanctions or specific corrective measures. For instance, in a 2025 U.S. federal case (Oneto v. Watson), an attorney under time pressure included an AI-sourced argument with fabricated citations. The court noted the lawyer’s acknowledgment of error and lack of intent, and imposed a proportionate response: a reprimand and an order to correct the record, without a monetary fine22. In another U.S. case, an attorney’s submission containing fabricated citations led the judge to issue an Order to Show Cause, giving the lawyer an opportunity to explain their actions before any sanction was imposed23. Courts in these cases enforce standards while avoiding excessive punishment for inadvertent errors. Prompt candor, cooperation, and the AI tool’s novelty often mitigate penalties. However, these lapses are recorded and serve as a warning to the tax & legal profession.
Courts tend to sanction AI hallucinations committed by officers of the court (lawyers) who demonstrate a pattern of misuse, whereas one-off mistakes by laypersons are often excused or lightly rebuked24. As AI use becomes more prevalent, the “new tool” leniency diminishes25. Courts increasingly require all litigants to verify the authenticity of citations and quotations26. This duty applies to all court participants, including lawyers, pro se litigants, experts, and judges; however, the consequences vary by role27.

Role-Based Differences in Accountability

AI hallucinations affect various justice system participants. Institutional responses differ based on role:

Lawyers and Law Firms

Attorneys face the most stringent expectations. Professional responsibility rules mandate competent representation and prohibit deceiving the court. Use of an AI assistant increases scrutiny of a lawyer’s diligence; it does not diminish professional obligations. Courts affirm that lawyers cannot outsource the duty of accuracy to an algorithm. In a U.S. sanctions order from May 2025, a judge explicitly stated, “Attorneys cannot delegate [the] verification role to AI, computers, robots, or any other form of technology”28. Lawyers remain responsible for verifying every citation and assertion in a filing. Failure in this gatekeeping role incurs sanctions.
Sanctions for AI-generated falsehoods vary among lawyers. Many U.S. judges impose monetary fines, frequently in the low thousands of dollars, alongside mandatory continuing legal education (CLE) training on AI or ethics29. Beyond monetary penalties, some courts have imposed more stringent measures. For example, a federal judge in Oklahoma issued public reprimands and required attorneys to adhere to specific remedial filing and certification requirements30. Reputational sanctions may exceed the impact of monetary penalties31. Courts have also required lawyers to notify their clients about the error and sanctions32. These orders inform clients of procedural failures caused by counsel’s AI use. In severe cases, judges have referred lawyers to state bar or professional conduct offices for potential further discipline33. Some attorneys have also faced revocation of pro hac vice status due to AI-related misconduct34. International jurisdictions report similar incidents. An Australian lawyer who filed a submission with fabricated citations was referred to the bar regulator, although he was ultimately only cautioned35. In South Africa, a lawyer’s misuse of ChatGPT led the court to refer the matter to the Legal Practice Council for investigation36. Professional bodies in other jurisdictions are likewise relying on self-regulatory frameworks to address AI hallucinations introduced under a lawyer’s supervision.
Law firms and senior attorneys are implementing internal controls. Following the Avianca incident, many firms issued directives prohibiting citation of AI-suggested cases without independent verification. Some courts acknowledge lawyers’ implementation of firm-wide AI usage policies and staff training to prevent recurrence37. These remedial actions influence a judge’s sanction decision; evidence of a firm’s proactive approach may mitigate sanctions38.

Pro Se Litigants

Courts must balance competing considerations when addressing pro se litigants (individuals representing themselves). Pro se filings containing fabricated legal citations waste court resources and can disrupt proceedings, just as attorney errors do. Lay litigants often misunderstand the fictitious nature of AI output and lack formal tax & legal training. Comparative data indicates that most AI-hallucination cases now involve pro se parties; however, judges rarely sanction pro se filers as they do lawyers39. Instead, the typical response is to strike or disregard the faulty material and issue a warning. Warnings typically state that AI is an unreliable source of tax & legal research and caution litigants that future errors will incur significant penalties40.
For example, a Canadian small claims court case involved a self-represented plaintiff who presented entirely fictitious AI-generated case law. Upon discovering the truth, the judge denied the claim but declined to order the individual to pay opposing counsel’s costs, recognizing the litigant’s reliance on a new technology41. In Israel, where pro se AI usage has also been observed, courts have occasionally imposed modest fines (ranging from 250 to 1,000 ILS, or up to a few hundred dollars) on pro se litigants who persistently misuse AI-generated content42. Some U.S. courts, faced with repeat frivolous filings from pro se litigants that appeared to be AI-drafted, have started to threaten Rule 11 sanctions or have required the litigant to seek leave before filing further motions43. These instances, however, remain exceptional. Courts generally proceed cautiously, acknowledging the public’s unfamiliarity with AI use. As one UK tribunal observed of a self-represented litigant who produced incorrect citations, “we do not consider the Respondent to be highly culpable because he is not legally trained”; the tribunal issued only a general warning to all court users about unchecked AI information44.
Although such conduct may be excused, the underlying case results in adverse outcomes. Pro se litigants who base their suits on non-existent authority typically face dismissal of their cases or denial of their claims on the merits45. In a U.S. federal case, a pro se plaintiff’s complaint was dismissed as frivolous. The court also ordered the plaintiff to pay the opposing counsel’s attorney fees; this constituted a sanction because the complaint’s legal foundation relied on fictitious citations46. The court noted the litigant’s filings wasted judicial time and party resources, serving as a warning against similar conduct47. Even if formal penalties are avoided, losing a case, sometimes with an adverse fee award, deters future misuse by pro se individuals48.

Expert Witnesses and Other Non-Attorney Contributors

Expert witnesses, consultants, and other non-attorney contributors have used AI, leading to inaccuracies, although such instances are less frequent in reported cases. For example, a U.S. case log entry records an incident where an AI tool (Anthropic’s Claude) was used by an expert or lawyer to generate a fictitious article citation; the judge ordered a written explanation for the error49. In another matter, a pro se litigant hired an online legal consultant (not a licensed attorney) who apparently used AI to draft an appellate brief containing fictitious citations; the appeal was dismissed as frivolous, and the litigant was ordered to pay damages for the opponent’s costs50. These scenarios demonstrate that any contributor to court submissions, regardless of professional status, risks sanctions or adverse outcomes if AI-generated inaccuracies are included. Courts prioritize the submission’s content and its effect on the proceeding over the author’s professional status. A Singaporean family court case involved a self-represented litigant who used ChatGPT to prepare filings. The court not only issued a costs order (approximately SGD 1,000) against that party but also mandated a written declaration for any future filings disclosing generative AI use51. This remedy mandates transparency, enabling courts to verify AI-influenced material and alerting them to AI use without imposing harsh penalties on lay users.

Judges and Adjudicators Using AI

AI hallucinations extend beyond filings, impacting judicial decision-making. Several notable incidents illustrate the risks of judicial reliance on AI. In late 2024, India experienced a notable instance: the Bengaluru bench of the Income Tax Appellate Tribunal (ITAT) issued a detailed order in a complex tax dispute, then withdrew it within a week upon discovering that it cited several non-existent Supreme Court and High Court precedents52. The withdrawal cited inadvertent errors, and observers strongly suspected that parts of the tribunal’s reasoning, including fabricated supportive case law, were drafted by ChatGPT or a similar tool. The matter prompted the Karnataka High Court to intervene and transfer the case to a different tribunal bench53, reflecting the court’s view that AI had compromised the integrity of the decision. The AI-generated order prompted concerns about bias and reliability: uncritical adoption of AI-generated text by a judge or judicial officer risks reliance on non-existent authority, which compromises the decision’s legitimacy. Legal experts uncovered the fake citations, leading the tribunal to revoke its order and schedule a fresh hearing; the tribunal appeared to have copied portions of the government lawyer’s AI-generated submission, incorporating this unverified information into a judgment. The incident demonstrated the necessity of authentic sources in judicial decisions.
Other jurisdictions have encountered similar challenges with judges or arbitrators employing AI. In Colombia, a judge openly admitted in early 2023 to using ChatGPT for parts of a ruling on minors’ medical rights. This admission prompted scrutiny of AI’s impact on judicial transparency and propriety. By 2025, the Colombian Supreme Court had quashed a lower court order suspected of relying on AI-generated reasoning54; this action demonstrated appellate review’s capacity to correct AI-induced errors and uphold due process. In Canada, an arbitrator’s decision in a dispute (Osman Medical Centre v. Santé Québec) contained anomalous citations and text, prompting one party to seek to set aside the award55. It emerged that the arbitrator had likely employed an AI tool to draft the award, which inserted non-existent references. The pending case suggests that even quasi-judicial officers like arbitrators risk invalidation of their decisions if AI hallucinations are present56. Judicial bodies have also issued preemptive guidance: several countries’ judicial councils have published advisories on AI use. For example, the U.K. Courts and Tribunals Judiciary released guidance in 2025 cautioning judges about the limits of AI and stating that AI assistance in decision-writing requires rigorous care and transparency57. Judges cannot delegate judicial decision-making responsibility to a machine; parties must trust that rulings rest on the evidence and law presented. In the United States, individual judges (particularly those with technological expertise) openly discuss experimenting with AI for rote tasks; however, they generally acknowledge that any outputs must be verified and that using AI to decide cases raises ethical issues if done without the parties’ knowledge. Procedural fairness demands that judicial decisions not introduce facts or authorities from outside the record; an AI constitutes an “outside source” if it provides information not subjected to adversarial testing58. AI hallucinations in judgments demonstrate the critical role of human judges in independently validating any research or text supplied by an AI tool.

Professional Responsibility and the Emerging Duty of Verification

Legal institutions have not immediately developed new laws or ethical codes to address AI-related errors; rather, existing doctrines of professional responsibility and procedural fairness have been extended to cover AI usage. Existing rules apply: lawyers must competently research the law and avoid misleading the court; litigants, including pro se parties, cannot submit frivolous or false information; and judges must base decisions on the record and sound law. Generative AI use without rigorous human oversight violates these established duties59. For lawyers, the duty of competence now clearly encompasses technological competence. Bar associations globally have issued commentary on this development. The American Bar Association (ABA) and numerous U.S. state bars have cautioned attorneys that AI tools have limits, particularly their capacity to produce believable fabrications. Consequently, attorneys must carefully validate AI-derived research before relying on it60. Failing to do so can breach ABA Model Rule 1.1 (Competence) and Model Rule 3.3 (Candor to the Tribunal), among others61. The fallout from the Avianca case prompted the New York State courts and bar to circulate warnings to attorneys: citing fake cases, even inadvertently, can lead to sanctions under existing court rules (like Rule 11 in federal court or its state equivalents)62. An attorney’s lack of intent may not prevent sanctions if a filing objectively violates the certification that legal contentions are warranted by existing law or by a non-frivolous argument for its extension63. Ethics CLE programming emphasizes that rigorous verification is mandatory when AI is used. International jurisdictions have adopted similar approaches. The Law Society of England and Wales and the Bar Standards Board (BSB) have updated their professional guidance to explicitly mention AI. They have not created new “AI rules,” but they clarify that using an AI assistant does not excuse a lawyer from the fundamental duty not to mislead the court64. In Ayinde v. Haringey in the UK, the judge condemned the barrister’s unverified use of ChatGPT as negligent; this condemnation applied the Bar’s existing Code of Conduct, which requires barristers to exercise skill, care, and candor in their pleadings65. Following that case, a barrister was formally referred to the BSB for potential discipline, likely under provisions that already prohibit deceiving the court66. The BSB and the Solicitors Regulation Authority (SRA) in the UK have since jointly noted that AI use in practice must align with a lawyer’s duty to maintain public trust in the profession and the administration of justice. This ensures lawyers remain accountable for their professional duties, even when utilizing technology67.

New Procedural Rules

Courts and agencies are implementing new procedural rules for AI use. Some courts now require explicit certifications or disclosures regarding AI. U.S. District Judge Brantley Starr in Texas issued a much-publicized standing order in May 2023 mandating that any attorney filing in his court certify either that no generative AI was used in preparing the filing, or that, if it was used, a human verified all the AI-generated content for accuracy68. The order asserts that while AI is powerful, it is unsuitable for legal briefing, warning of the technology’s tendency to fabricate information, including quotes and citations69. Other judges have adopted similar requirements; for example, Magistrate Judge Fuentes in Illinois mandated disclosure of AI use. This mandate underscores that a Rule 11 certification requires a human attorney’s attestation to factual and legal grounding, not a machine’s70.
Administrative bodies have implemented comparable requirements. The U.S. Patent and Trademark Office (USPTO) and other agencies have issued notices requiring practitioners to rely on verifiable sources in filings, a response to incidents where patent attorneys submitted AI-invented case law71. In the European Union, the AI Act classifies AI systems used in the administration of justice as “high risk,” which mandates quality controls and possibly audit requirements for any AI tools deployed by courts or lawyers in the EU72. The AI Act acknowledges that AI use in tax & legal advice or decisions implicates fundamental rights, such as the right to a fair trial. This requires regulatory oversight exceeding current professional discipline73.
These developments adapt existing tax & legal principles to a new technological context without establishing fundamentally new doctrine. Law firms, courts, and agencies must adopt “institutional verification” as the minimum standard for AI use. This standard updates traditional diligence and authenticity requirements. AI serves as a helpful aid only if a human professional verifies its output before entry into the tax & legal record. Absence of such verification triggers existing sanctions for false or frivolous filings74. One U.S. court stated that AI’s advent does not diminish Rule 11’s requirements; attorneys retain responsibility for verifying the existence and accuracy of every cited authority75. Another order cautions that allowing AI-generated spurious judicial decisions into filings jeopardizes the courts’ mission76. Therefore, the evolving standard requires continuous human oversight for AI use.

Conclusions and Implications

AI hallucination incidents from 2023 to 2025 have exposed vulnerabilities in tax & legal systems worldwide. Court and institutional responses demonstrate a converging standard of care: jurisdictions consistently require human verification and accountability for AI-derived content in tax & legal proceedings. Professional responsibility doctrines, including competence, candor, and the duty to the court, govern this new challenge. Sanctions and remedies depend on the severity of the lapse and the offender’s role. While a novice pro se litigant might receive an educational rebuke for unknowingly submitting AI falsehoods, a seasoned lawyer faces monetary fines or other disciplinary action for the same conduct. A judge who copies AI-generated text into an order may see that decision vacated to preserve the integrity of the process77. Legal outcomes must derive from truth and established law; AI fabrications are unacceptable.
To date, courts have relied on established rules, treating AI hallucinations as analogous to existing problems, such as citing overruled or non-existent cases or presenting false evidence. Implementation has introduced novel requirements, such as certifications of AI review or mandatory AI-specific continuing legal education training, but substantive legal theory remains consistent. Formal standards are expected to develop: bar rules will likely explicitly mention AI, and malpractice insurers may require law firms to adopt AI-use policies. If AI tools become ubiquitous in law practice, failing to use them carefully could itself constitute an ethical violation. Institutions require human control over AI output as a minimum verification standard. Verification of every citation and quote against trusted sources is necessary; anything less constitutes a breach of the duty of care78.
The implications for judges and procedural fairness are significant. The judiciary’s response, including withdrawing AI-tainted rulings and issuing AI guidance, reveals how readily the legitimacy of courts can be jeopardized. An errant AI hallucination in a judgment risks undermining public confidence, and a party’s right to be heard is violated because fictional authority, never introduced by either side, cannot be contested. Courts will continue to emphasize transparency: if AI is used in any form, it should be disclosed and its outputs verified independently. Formalized rules of court or legislation may require disclosure of AI assistance in litigation documents and judicial drafting. These measures, coupled with robust professional discipline for violations, seek to subordinate technology to tax & legal principles79.
From 2023 to 2025, tax & legal institutions encountered challenges and adapted in response to AI hallucinations. The comparative survey identifies consistent regulatory responses: jurisdictions including the United States, the United Kingdom, India, and Israel require human verification of AI contributions to legal work to avoid adverse outcomes. Intent may mitigate the outcome, but it does not absolve the breach of duty; while outright fraud is still rare, even good faith is no defense to a lack of diligence. Courts apply traditional norms of professionalism and fairness to AI use. The tax & legal profession is adjusting its practices: lawyers now exercise greater caution, litigants approach AI tax & legal advice skeptically, and judges maintain the integrity of the adjudicative process. AI hallucinations necessitate that the tax & legal community treat AI as a tool, rather than an authoritative source.
Jurisdictions increasingly recognize an implicit rule: citing unverified ’authority’ entails substantial risk. Institutions committed to the rule of law deem fabricated legal authority, irrespective of its source, inadmissible in courts80.
1
ABA Comm. on Ethics & Pro. Resp., Formal Op. 512, at 3 (2024); see also Shatrunjay Kumar Singh, Risk-Weighted Hallucination Scoring for Legal Answers: A Conceptual Framework for Trustworthy AI in Law, 10 Int’l J. Innovative Sci. & Res. Tech. 1907, 1917 (2025) (defining hallucinations as core system behaviors emerging from predictive modeling rather than factual retrieval).
2
Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023); see also Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models, 16 J. Legal Anal. 64, 66 (2024).
3
Damien Charlotin, AI Hallucination Cases Database (last updated Nov. 13, 2025), https://www.damiencharlotin.com/hallucinations/ (logging 596 unique tax & legal proceedings involving unverified artificial intelligence output); see also Matthew Lee, 12 False Citations/AI Hallucinations in UK Courts (Ayinde), Natural and Artificial Intelligence in Law (2025) (noting over 50 international cases in July 2025 alone); author’s dataset (last updated Dec. 31, 2025), Tax AI Research, Tex. A&M Univ., https://sites.google.com/tamu.edu/pramod-kumar-siva/ai-hallucination.
4
Matthew Dahl et al., Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models, 16 J. Legal Anal. 64, 66 (2024); see also Damien Charlotin, AI Hallucination Cases Database (last updated Nov. 13, 2025), https://www.damiencharlotin.com/hallucinations/ (indexing 596 distinct judicial proceedings involving confirmed fabricated content).
5
Charlotin, supra (documenting that 506 of the 596 tracked incidents occurred between January and November of 2025).
6
Id.; see also Matthew Lee, 12 False Citations/AI Hallucinations in UK Courts (Ayinde), Natural and Artificial Intelligence in Law (2025) (noting that over 50 international cases were logged in July 2025 alone); author’s dataset (last updated Dec. 31, 2025), Tax AI Research, Tex. A&M Univ., https://sites.google.com/tamu.edu/pramod-kumar-siva/ai-hallucination.
7
Charlotin, supra (listing approximately 298 pro se litigants and 256 licensed attorneys in the verified registry).
8
Damien Charlotin, AI Hallucination Cases Database (last updated Nov. 13, 2025), https://www.damiencharlotin.com/hallucinations/.
9
Id.
10
See Ayinde v. Borough of Haringey EWHC 1383 (Admin) at [page number], (formalizing the asymmetry between litigants-in-person and regulated lawyers and noting that while thresholds for contempt may be met by unverified AI use, professional discipline is the preferred initial recourse for attorneys); Harber v. HMRC UKFTT 1007 (TC) (declining sanctions for a pro se litigant unaware of hallucination risks); Delano Crossing 2016 v. County of Wright, No. 86-CV-23-2147, 2025 WL 1539250 (Minn. Tax. Ct. May 29, 2025) (finding a Rule 11 violation but opting for a board referral over monetary sanctions for an attorney); see also Damien Charlotin, AI Hallucination Cases Database (last updated Nov. 13, 2025), https://www.damiencharlotin.com/hallucinations/ (logging approximately 596 proceedings and noting that roughly 90% of documented incidents do not result in formal professional sanctions against the filer).
11
Kuzniar v. Gen. Dental Council (UK Trib.) (unreported); see also Damien Charlotin, AI Hallucination Cases Database (last updated Nov. 13, 2025), https://www.damiencharlotin.com/hallucinations/.
12
Ex parte Lee, 673 S.W. 3d 755, 757 n.2 (Tex. App. 2023) (declining sanctions for a suspected AI-generated habeas petition); see also In re Marriage of Melinda Johnson v. Sabastian Johnson, No. 18A-DR-123, 2025 WL 2198352, at *4 (Ind. Ct. App. Oct. 30, 2025) (issuing a candor warning to a pro se litigant who admitted ChatGPT use).
13
HMRC v. Gunnarsson UKUT 247 (TCC) (declining sanctions because the respondent was not legally trained and was under time pressure); see also Harber v. HMRC UKFTT 1007 (TC).
14
Charlotin, supra (documenting that roughly 90% of unique legal proceedings involving AI hallucinations result in warnings or striking of material rather than formal professional sanctions).
15
Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023).
16
Id.
17
Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281, 2024 WL 4882651, at *3 (E.D. Tex. Nov. 25, 2024).
18
Case No. 14748-08-21 (Isr. July 9, 2025) (dismissing fabricated evidence and imposing 3,000 ILS fine); Backhoe Center Ltd. v. Abu Gwaid (Beersheba Mag. Ct. Sept. 1, 2025) (imposing 7,500 ILS penalty for nine non-existent citations).
19
Nexgen Pathology Services Ltd. v. Darcueil Duncan, CV2023-04039 (Trinidad & Tobago High Ct. Apr. 30, 2025).
20
R (Ayinde) v. London Borough of Haringey EWHC 1383 (Admin).
21
Id.
22
Oneto v. Watson, No. 1:24-cv-09662, 2025 WL 3698822, at *1 (N.D. Cal. Oct. 10, 2025). Note that while the court in Oneto characterized its action as a proportionate response to a negligent error, the same case record contains a conflicting entry indicating a $1,000 penalty was imposed.
23
Doe v. Noem, No. 1:25-cv-01352, 2025 WL 2828015 (D.D.C. July 1, 2025); see also United States v. Burke, No. 8:24-cr-00068 (M.D. Fla. May 20, 2025) (striking a motion to dismiss containing nine nonexistent passages and requiring a written explanation detailing the use of artificial intelligence).
24
Ayinde v. London Borough of Haringey EWHC 1383 (Admin) (formalizing the asymmetry between litigants-in-person and regulated lawyers); Harber v. HMRC UKFTT 1007 (TC) (suggesting unrepresented parties may be excused for ignorance); Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023) (holding that technological assistance does not supplant the duty of counsel to ensure the accuracy of their filings).
25
Smith v. Farwell, No. 2282CV01197, at 15-16 (Mass. Super. Ct. Feb. 12, 2024) (warning that ignorance of AI risks will be a less credible defense as those risks become widely known); Al-Hamim v. Star Hearthstone, LLC, 564 P.3d 1123, 1125–26 (Colo. App. 2024).
26
Reddy v. Saroya, 2025 ABCA 322 (CanLII), at para. 81; Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. Ct. App. 2024) (observing that citing nonexistent law is a false statement to a court regardless of the source).
27
Reddy, 2025 ABCA 322, at para. 83 (lawyers and self-represented litigants); Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *5 (D. Minn. Jan. 10, 2025) (expert witnesses); Buckeye Trust v. PCIT, ITA No. 1051/Bang/2024 (ITA Bengaluru Bench Dec. 30, 2024) (judges); Shahid v. Esaam, No. A24A1054 (Ga. Ct. App. June 30, 2025).
28
Versant Funding LLC v. Teras Breakbulk Ocean Navigation Enters., LLC, No. 17-CV-81140, 2025 WL 1440351, at *4 (S.D. Fla. May 20, 2025); see also Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023) (emphasizing the “gatekeeping role” of attorneys to ensure the accuracy of their filings).
29
Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023) (imposing a $5,000 penalty); Gauthier v. Goodyear Tire & Rubber Co., 2024 WL 4882651 (E.D. Tex. Nov. 25, 2024) (imposing a $2,000 penalty); Coomer v. MyPillow, Inc., (D. Colo. July 7, 2025) (imposing a $6,000 penalty).
30
Mattox v. Product Innovation Research, LLC, (E.D. Okla. Oct. 22, 2025); see also In re Loletha Hale, (N.D. Ga. Oct. 28, 2025) (requiring counsel to file a notice of the sanction order in all pending and future cases within the district for five years).
31
Zhang v. Chen, 2024 BCSC 285, at para. 42 (CanLII) (noting reputational harm associated with "fake cases" is career-defining); Ko v. Li, 2025 ONSC 2965, at para. 73 (CanLII) (observing that public shaming has a deterrent effect beyond a small fine).
32
Park v. Kim, 91 F.4th 610, 616 (2d Cir. 2024); Benjamin v. Costco Wholesale Corp., (E.D.N.Y. Apr. 24, 2025); Ramirez v. Humala, (E.D.N.Y. May 13, 2025).
33
People v. Zachariah C. Crabill, No. 23PDJ067 (Colo. O.P.D.J.) (imposing a one-year and one-day suspension); Nexgen Pathology Servs. Ltd. v. Darcueil Duncan, CV2023-04039 (Trin. & Tobago High Ct. Apr. 30, 2025); R (Ayinde) v. London Borough of Haringey, EWHC 1383 (Admin).
34
Wadsworth v. Walmart Inc., 2025 WL 608073 (D. Wyo. Feb. 24, 2025); Mavy v. Comm’r of Soc. Sec. Admin., (D. Ariz. Aug. 14, 2025).
35
Valu v. Minister for Immigration & Multicultural Affairs (No 2), FedCFamC2G 95 (Austl. Jan. 31, 2025).
36
Mavundla v. MEC, ZAKZPHC 2 (S. Afr. Jan. 8, 2025).
37
Wadsworth v. Walmart Inc., 2025 WL 608073 (D. Wyo. Feb. 24, 2025) (noting the court’s appreciation for the implementation of internal firm policies to prevent errors); UB v Secretary of State for the Home Department, UI-2025-000834 (UKUT) (describing the solicitor’s development of AI policy and staff training as “commendable steps”).
38
Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023) (declining to mandate further education because the firm had already arranged for internal AI training); In re Whitehall Pharmacy LLC, No. 4:20-bk-12345 (E.D. Ark. 2025) (declining sanctions where counsel admitted negligence and promptly implemented new safeguards); Mid Cent. Operating Eng’rs Health & Welfare Fund v. Hoosiervac LLC, 2025 U.S. Dist. LEXIS 31073 (S.D. Ind. May 28, 2025) (reducing recommended monetary sanctions from $15,000 to $6,000 in light of remedial education efforts).
39
Damien Charlotin, AI Hallucination Cases Database (last updated Nov. 13, 2025), https://www.damiencharlotin.com/hallucinations/ (noting pro se litigants comprise approximately 50% of tracked cases); HMRC v. Gunnarsson UKUT 247 (TCC) (declining sanctions because the respondent was not legally trained or qualified); Hall v. The Academy Charter School, No. 23-CV-2016, 2025 WL 3702522, at *2 (E.D.N.Y. Aug. 7, 2025) (observing that warnings or reprimands are typically meted out in cases involving pro se litigants).
40
GNX v. Children’s Guardian NSWCATAD 117 (cautioning applicant against relying on ChatGPT for tax & legal advice); Bischoff v. S.C. Dep’t of Education, No. 23-ALJ-30-0123 (S.C. Admin. L. Ct. Apr. 10, 2025) (warning that fictitious citations waste judicial resources); LYJ v. Occupational Therapy Board of Australia (emphasizing that unverified AI use diminishes credibility and undermines the judicial process).
41
NCR v. KKB, 2025 ABKB 417 (CanLII); see also Kuzniar v. General Dental Council, (UK) (declining to award costs because AI is a relatively new tool and the claimant acted honestly).
42
Ploni v. Wasserman, (Isr. June 1, 2025) (250 ILS fine); 534246-07-22 (Isr. Nov. 3, 2025) (1000 ILS fine).
43
Monster Energy Co. v. Owoc, No. 20-cv-62515, 2025 WL 3705511 (S.D. Fla. Aug. 14, 2025) (imposing community service and certification requirements); Celli v. New York City, No. 1:24-cv-09743, 2025 WL 3698822 (S.D.N.Y. June 9, 2025) (warning of pre-filing restrictions); Northern Marianas College v. Zajradhara, No. 2024-SCC-0019-CIV (N. Mar. I. July 22, 2025) (declaring appellant a vexatious litigant).
44
HMRC v. Gunnarsson, UKUT 247 (TCC).
45
Nash v. Director of Public Prosecutions, WASCA 75; Ivins v KMA Consulting Engineers Pty Ltd & Ors, QIRC 164.
46
Thomas v. Pangburn, No. CV423-046, 2023 WL 9425765 (S.D. Ga. Oct. 6, 2023); Crespo v. Tesla, Inc., No. 24-61529-CIV-DAMIAN, 2025 WL 10625 (S.D. Fla. Jan. 10, 2025).
47
Muhammad v. Gap Inc., No. 2:24-cv-3676, 2025 WL 2828015 (S.D. Ohio July 3, 2025); Johnson v. Dunn, No. 2:21-cv-01701-AMM (N.D. Ala. May 19, 2025).
48
Scott v. Federal National Mortgage Association, (Maine Superior Court, June 14, 2023); Strong v. Rushmore Loan Management Services LLC, No. 8:24-cv-00100 (D. Neb. Jan. 15, 2025).
49
Concord Music Grp., Inc. v. Anthropic PBC, No. 5:24-cv-03811-EKL, 2025 WL 3111818, at *1 (N.D. Cal. May 23, 2025).
50
Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. Ct. App. 2024).
51
XAI v. XAH, SGFJC 33 (Sing.).
52
Buckeye Trust v. PCIT, ITA No. 1051/Bang/2024 (ITAT Bengaluru Bench Dec. 30, 2024).
53
Buckeye Trust (Petitioner) v. Registrar, ITAT, WP No. 25280 of 2025 (Karnataka High Ct. Sept. 18, 2025).
54
STC17832-2025 (Colombian Supreme Court).
55
Osman Medical Centre v. Santé Québec.
56
Id.
57
Courts and Tribunals Judiciary, Artificial Intelligence (AI) – Judicial Guidance (Oct. 31, 2025).
58
Id.
59
See Reddy v. Saroya, 2025 ABCA 322 (CanLII); see also R (on the application of Ayinde) v. London Borough of Haringey EWHC 1383 (Admin).
60
ABA Comm. on Ethics & Pro. Resp., Formal Op. 512, at 3–4 (2024); see also State Bar of Mich., Judicial Officers Must Maintain Competence with Advancing Technology, Including But Not Limited to Artificial Intelligence (Oct. 27, 2023) (noting that the increasing use of AI requires judicial officers and lawyers to understand how tools affect their conduct).
61
ABA Formal Op. 512, supra note 2, at 10; see also State Bar of Cal., Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (2024) (emphasizing that professional judgment cannot be delegated to generative AI).
62
Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence 39 (Apr. 6, 2024); see also Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023).
63
See Shelton v. Parkland Health, No. 3:24-cv-2190, 2025 WL 3012828 (N.D. Tex. Nov. 10, 2025) (holding that an attorney’s good faith is not enough to protect her from Rule 11 sanctions).
64
The Law Society, Generative AI: The Essentials (Oct. 1, 2025); see also Bar Council, Considerations when using ChatGPT and generative artificial intelligence software based on large language models (Nov. 25, 2025).
65
R (Ayinde) v. London Borough of Haringey EWHC 1383 (Admin).
66
Id.; see also The BSB Handbook, rC9.
67
Francesca Whitelaw KC, Judging the use of AI, Local Government Lawyer (Dec. 12, 2025).
68
Judge Brantley Starr, N.D. Tex., Mandatory Certification Regarding Generative Artificial Intelligence (2023); see also Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023) (noting that while technological advances are commonplace, there is nothing inherently improper about using reliable AI tools provided they are verified by counsel).
69
Judge Brantley Starr, supra; see also EDRM, Repository of Judicial Standing Orders Including AI Segments, https://edrm.net/judicial-orders-2/ (last visited June 13, 2024).
70
Magistrate Judge Gabriel A. Fuentes, N.D. Ill., Standing Order For Civil Cases Before Magistrate Judge Fuentes (May 31, 2023); see also Magistrate Judge Jeffrey Cole, N.D. Ill., Standing Order regarding The Use of “Artificial Intelligence” In the Preparation of Documents Filed Before this Court (July 25, 2023) (clarifying that mere reliance on an AI tool will not be presumed to constitute a reasonable inquiry under Fed. R. Civ. P. 11).
71
In the Matter of Anthony Matos, Proceeding No. D2025-13 (USPTO March 6, 2025).
72
Regulation 2024/1689, 2024 O.J. (L 1689) (EU) [hereinafter AI Act], Annex III, point 8(a); see also id. arts. 9, 10, 11 (prescribing mandatory requirements for risk management, data governance, and technical documentation for high-risk systems).
73
AI Act, supra note 2, recitals 59, 61 (stating that AI systems used in the administration of justice are qualified as high-risk to address the potential for bias and errors that may impact the right to a fair trial); see also id. recital 1 (stating the objective of ensuring a high level of protection for fundamental rights).
74
Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023); Delano Crossing 2016, LLC v. Cnty. of Wright, No. 86-CV-23-2147, 2025 WL 1539250, at *3 (Minn. Tax Ct. May 29, 2025) (concluding that the inclusion of fake case citations creates sham legal authority and violates Rule 11.02(b)).
75
Versant Funding LLC v. Teras Breakbulk Ocean Navigation Enters., LLC, No. 17-CV-81140, 2025 WL 1440351, at *4 (S.D. Fla. May 20, 2025) (“Attorneys cannot delegate [the] verification role to AI, computers, robots, or any other form of technology.”); see also Standing Order of Magistrate Judge Jeffrey Cole, The Use of “Artificial Intelligence” in the Preparation of Documents Filed Before this Court (N.D. Ill. 2023) (noting that a certification on a filing constitutes a representation that counsel has personally read and analyzed cited authorities to ensure they exist).
76
Standing Order of Magistrate Judge Jeffrey Cole, supra note 2 (“The mission of the federal courts to ascertain truth is obviously compromised by the use of an AI tool that generates legal research that includes false or inaccurate propositions of law and/or purport to cite non-existent judicial decisions”).
77
See Buckeye Trust v. PCIT, ITA No. 1051/Bang/2024 (ITAT Bengaluru Bench Dec. 30, 2024) (recalling an order containing non-existent precedents); STC17832-2025 (Colombian Supreme Court) (quashing a lower court order relying on AI-generated reasoning); Reddy v. Saroya, 2025 ABCA 322 (CanLII) (emphasizing the necessity of manual verification); Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023) (imposing sanctions for unverified citations).
78
Court of Appeal of Alberta, Notice to the Profession and Public: Ensuring the Integrity of Court Submissions When Using Large Language Models (Oct. 6, 2023); ABA Comm. on Ethics & Pro. Resp., Formal Op. 512, at 3–4 (2024); CCBE, Guide on the Use of Generative AI by Lawyers (Oct. 2, 2025).
79
Courts and Tribunals Judiciary, Artificial Intelligence (AI) – Judicial Guidance (Oct. 31, 2025); Cal. Rules of Court, rule 10.430 (effective Sept. 1, 2025) (mandating generative AI use policies for court personnel); see also Buckeye Trust (Petitioner) v. Registrar, ITAT, WP No. 25280 of 2025 (T-IT) (Karnataka High Ct. Sept. 18, 2025).
80
See R (on the application of Ayinde) v. London Borough of Haringey, EWHC 1383 (Admin) (issuing wasted costs orders for fictitious references); Tajudin bin Gulam Rasul v. Suriaya bte Haja Mohideen, SGHCR 33 (imposing costs for unverified AI output); Pro Health Solutions Ltd. v. ProHealth Inc., BL O/0559/25 (UKIPO June 20, 2025) (dismissing appeal based on fabricated AI citations); Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.