Submitted: 18 February 2026
Posted: 26 February 2026
I. Introduction
II. AI Applications in Transfer Pricing
A. Current Uses of AI in TP
B. Tax Authority Uses and Audit Selection
III. Opportunities Presented by AI in TP
1. Boosted Efficiency and Accuracy: The most immediate benefit is automating labor-intensive TP tasks. AI can gather, process, and analyze data far faster than humans, “alleviat[ing] the burden on tax professionals and expedit[ing] processes such as tax filing and auditing while diminishing the likelihood of human error.”45 In transfer pricing, this yields quicker benchmarking and documentation with fewer clerical errors. Firms report substantial time savings; in one case, AI-driven tax preparation cut processing time by about 40 percent.46 These gains allow TP practitioners to focus on complex issues such as business restructurings and dispute resolution instead of manual number-crunching.
2. Improved Compliance and Predictive Analytics: AI can flag compliance issues before they materialize. Using predictive analytics, it analyzes historical tax and financial data to forecast TP exposures or adjustments and project future tax liabilities, allowing businesses to refine TP policies.47 For example, AI can model how small transfer price changes affect each entity’s taxable income and warn when results deviate from arm’s length. This supports audit readiness by allowing proactive remediation. A 2024 survey reports that leading tax functions already use GenAI tools in tax controversy to identify risks early and strengthen audit defense files.48 By simulating tax authority risk models (where known) and running “what-if” scenarios, AI helps multinationals maintain “compliance-ready documentation” for inquiries.49
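The "what-if" modelling described here can be illustrated with a minimal sketch: vary an intercompany transfer price, recompute the tested party's operating margin, and flag results outside an arm's-length range. Every number, function name, and the 2%–6% interquartile range below is invented for illustration; this is not any vendor's actual engine.

```python
# Minimal "what-if" sketch: vary an intercompany transfer price and flag
# results that fall outside an arm's-length range. All figures are invented.

def distributor_margin(resale_revenue, transfer_price, units, opex):
    """Operating margin of the buying entity at a given transfer price."""
    cogs = transfer_price * units
    operating_profit = resale_revenue - cogs - opex
    return operating_profit / resale_revenue

def flag_scenarios(prices, resale_revenue, units, opex, arm_length_range):
    """Return (price, margin, in_range) for each candidate transfer price."""
    low, high = arm_length_range  # e.g. interquartile range from a comparables study
    results = []
    for p in prices:
        m = distributor_margin(resale_revenue, p, units, opex)
        results.append((p, round(m, 4), low <= m <= high))
    return results

# Hypothetical comparables study: arm's-length operating margin of 2%-6%.
scenarios = flag_scenarios(
    prices=[80.0, 90.0, 100.0],
    resale_revenue=1_000_000.0,
    units=9_000,
    opex=150_000.0,
    arm_length_range=(0.02, 0.06),
)
for price, margin, ok in scenarios:
    print(price, margin, "arm's length" if ok else "REVIEW")
```

A real engine would draw the range from a benchmarked comparables set and model every entity in the value chain, but the core sensitivity test is this simple.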
3. Higher Consistency and Fairness: Properly designed AI can reduce the subjectivity and inconsistency of human decision-making. Automated systems apply uniform criteria to all transactions, supporting more consistent TP outcomes. Evidence indicates that AI-driven tax decisions can be perceived as fairer and less biased than human ones.50 In a survey experiment, Decuypere and Van de Vijver found that taxpayers rated audit-selection scenarios with more AI involvement as more procedurally fair, citing AI’s bias suppression and consistency.51 A well-calibrated AI does not “cut special deals” or react to irrelevant factors, but treats comparable cases alike, in line with the arm’s length principle. Jordan’s tax authority, for instance, reported that AI-based audit selection “significantly improved perceptions of detection probability and procedural fairness” among corporate taxpayers, thereby strengthening voluntary compliance.52 Transparently used, AI can thus enhance the fairness and integrity of the TP system.
4. Strategic Insights and ESG Alignment: Beyond compliance, AI tools generate strategic insights from TP data. Advanced analytics can show how supply chain or operational changes might legally optimize the group’s tax position. For instance, AI scenario analysis can propose alternative intercompany pricing models that better align profit with value creation, supporting management decisions. This links to ESG (Environmental, Social, Governance) priorities, as stakeholders increasingly expect tax transparency and fair intercompany pricing as elements of good governance.53 Companies face pressure to ensure that tax practices, including TP, reflect ESG values and that they pay a fair share in each jurisdiction. AI can support this by tracking and reporting effective tax rates and profit allocation by country, helping show that TP outcomes reflect real economic activity rather than artificial profit shifting. EY notes that “given the increased focus on tax transparency, TP policies aligned with ESG outcomes will assume significance for stakeholders.”54
5. Faster Dispute Resolution and Audit Defense: AI can improve dispute outcomes by rapidly analyzing large datasets to find evidence supporting the taxpayer in audits or litigation. AI-based analytics can reveal patterns or precedent that manual review might miss, strengthening competent authority negotiations or court cases. Because AI-driven TP adjustments are expected to increase double taxation disputes, bilateral resolution mechanisms become critical; companies should therefore “align data and narratives across jurisdictions to minimize MAP exposure, and deploy APAs where recurrent risk profiles are flagged by their own AI-driven engines.”55 AI can thus identify recurring transfer pricing issues (e.g., a transaction repeatedly flagged by tax authority models), enabling taxpayers to seek Advance Pricing Agreements (APAs) for greater certainty and fewer disputes.56
IV. Risks and Challenges of AI in TP
1. Explainability and “Black Box” Algorithms: Many AI models (especially machine learning) function as black boxes with opaque internal logic. In TP, this creates explainability problems for both taxpayers and tax authorities. The duty to give reasons for tax decisions is a core legal requirement, so AI outputs without human-interpretable rationale are vulnerable to challenge. The UK Elsbury tribunal, for example, held that opacity in HMRC’s AI tool undermined trust and could be legally contested.57 Explainability is therefore a critical risk: AI models should be supported by interpretable documentation (such as “model cards” or feature-importance explanations) to meet lawful standards and enable mutual understanding in controversy.
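One way to meet the explainability standard discussed here is to prefer inherently interpretable scoring over a black box. The sketch below shows a hypothetical linear risk score whose output decomposes exactly into per-feature contributions, so the "reason" for a flag is always reportable; the feature names, weights, and threshold are invented for illustration.

```python
# Sketch of an explainable risk score: a linear model whose prediction can be
# decomposed feature by feature. Weights and feature names are invented.

WEIGHTS = {
    "royalty_rate_vs_peers": 2.5,   # standardised deviation from peer median
    "loss_years_in_a_row":   0.8,
    "intercompany_share":    1.2,   # share of revenue from related parties
}

def risk_score(features):
    """Return (total_score, per-feature contributions) for one taxpayer."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

def explain(features, threshold=3.0):
    """Produce a human-readable explanation, largest driver first."""
    total, parts = risk_score(features)
    lines = [f"total risk score {total:.2f} (flag threshold {threshold})"]
    for name, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  {name}: {value:+.2f}")
    lines.append("FLAGGED for review" if total >= threshold else "not flagged")
    return "\n".join(lines)

print(explain({"royalty_rate_vs_peers": 1.4,
               "loss_years_in_a_row": 0,
               "intercompany_share": 0.6}))
```

More complex models can approximate the same property with post-hoc feature-attribution methods, but a decomposable score of this kind is the easiest to defend under a duty to give reasons.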
2. Data and Bias Issues: AI models are only as sound as their training data. Biased or unrepresentative data can produce biased outcomes, which is especially problematic in tax. A notorious example is the Dutch “toeslagenaffaire” (childcare benefits scandal), where a tax authority algorithm discriminated based on ethnicity, causing severe harm.58 In transfer pricing, bias may arise if training data (e.g., past audit adjustments or industry margins) reflect historical biases or selective enforcement. Automation bias compounds this problem: tax officers and company staff may over-rely on AI outputs and fail to question them. Research shows that even experts may “uncritically accept computer-generated outcomes,” struggling to recognize algorithmic errors.59 Experts recommend that tax administrations “perform audits every three months of AI models to detect bias and confirm fairness in tax enforcement.”60
3. Legal and Regulatory Exposure: AI in TP operates in a complex and developing legal environment. Automated tax decisions can trigger data protection rules, administrative law, and new AI regulations. Under the GDPR, individuals have rights against solely automated decisions that significantly affect them.61 The EU AI Act (2024) draws a distinction for tax: AI used by tax authorities in “administrative proceedings” (e.g., civil tax assessments) is not treated as a high-risk law enforcement tool, provided it is not used for criminal enforcement.62 This carve-out eases compliance for tax authorities but heightens regulatory risk: a system not labelled “high-risk” can still cause serious harm without the Act’s full safeguards.63
4. Antitrust and Coordination Risks: A further risk is antitrust or collusion exposure when many companies or advisors use similar AI tools. If numerous firms rely on the same AI pricing engine, the algorithm may “converge” on similar pricing strategies, creating tacit collusion risks. In TP, regulators may question whether widespread use of a single AI tool produces uniform profit outcomes that undermine the arm’s length principle.
5. Litigation and Liability Issues: AI introduces new liability questions in TP disputes. If an AI analysis is wrong (e.g., poor comparables, faulty risk flags), responsibility may fall on the user, the company, or the developer. The Elsbury case already shows taxpayers seeking disclosure of AI use to test reliability.64 Courts may soon have to decide whether assessments based on undisclosed algorithms breach administrative law. If a tax authority’s AI suggests a TP adjustment, taxpayers may seek the model’s code, training data, and error rates in discovery,65 while authorities often resist on confidentiality grounds.66
6. Operational and Implementation Challenges: Implementing AI in TP entails practical risks: cost overruns, integration failures, and talent gaps. Advanced AI can be expensive, and small tax departments may question the return on investment. Human expertise remains essential, yet many organizations face resistance to change among tax professionals.67
V. Legal and Policy Frameworks Governing AI in TP
1. OECD Transfer Pricing Guidelines (2022/2024): The OECD TP Guidelines are the core reference.68 Although the Guidelines do not mention AI explicitly, they implicitly require human review to ensure AI outputs align with approved methods.69 Any AI tool must support, not undermine, the arm’s length principle: AI-driven comparables selection, for instance, should adhere to established criteria and required adjustments. The 2024 Amount B update provides clear target returns for routine distributors, supporting real-time pricing or testing, while complex cases still require expert judgment and traditional methods. Documentation of AI contributions is also required. OECD standards remain the benchmark for evaluating algorithmic TP analyses, and AI should enhance, not replace, sound economic analysis.
2. Domestic Laws and Administrative Guidance: Several jurisdictions now issue guidance on advanced technology in tax. HMRC, for example, uses its “Connect” system, which correlates data from many sources to support data-driven risk profiling; HMRC maintains that these automated systems prioritize enforcement resources while human judgment is preserved for final tax assessment. Germany’s 2023 TP administrative principles (Verwaltungsgrundsätze Verrechnungspreise 2023)70 similarly recognize “digital tools” for functional analysis and comparables searches, but stress that tax authorities will scrutinize the tools’ inputs and assumptions. The U.S. IRS has launched AI programs for large partnership compliance and signaled use of similar analytics in TP examinations. In mid-2023, the IRS announced it was using AI to detect abusive tax schemes, including possible transfer mispricing, indicating a shift toward high-tech enforcement. Where explicit rules are absent, existing tax principles still govern: decisions must reflect economic reality, taxpayers can appeal and seek explanations, and tax administrators must use information reasonably. Oversight bodies are increasingly involved: the U.S. GAO’s 2024 report on the IRS’s use of artificial intelligence71 urges the IRS to improve transparency and assess model bias, and national audit offices (e.g., in Australia) have reviewed whether tax authorities’ AI projects comply with law and governance standards. Overall, an emerging framework is taking shape in which AI used in tax must be auditable and accountable, even if not yet fully codified.
3. EU AI Act (2024) and High-Risk Systems: The EU AI Act will affect some tax-related AI, especially tools from vendors and possibly those used by large corporate taxpayers in the EU. Under its risk-based approach, most business AI decision-support tools will be “limited risk” or “general purpose” systems, subject mainly to light obligations (e.g., transparency when interacting with humans). One might assume AI used by tax authorities to select audit targets is “high-risk” because it can significantly affect individuals’ rights. However, Recital 59 excludes purely administrative tax AI from the law-enforcement high-risk category.72 Tax-related AI could still be high-risk under other headings: for example, AI used in hiring or credit decisions is high-risk; if tax compliance scoring were treated like credit scoring, similar rules might apply. Even non-high-risk AI in tax must meet the Act’s general quality requirements (e.g., no real-time biometric ID; encouragement of voluntary codes of conduct for low-risk AI). The Act also mandates transparency for AI interacting with people: if a taxpayer uses a chatbot (as in some AI tax assistants), it must disclose that it is not human.73 Once the Act is fully in force, tax solution providers may voluntarily align TP AI tools with “trustworthy AI” principles (accuracy, robustness, explainability) to satisfy compliance-focused corporate buyers. The non-binding OECD AI Principles (2019) similarly call for transparency, robustness, accountability, and respect for human rights, and have shaped many national AI strategies. Tax departments deploying AI should therefore consider voluntary measures such as bias testing, explanation features, and human-in-the-loop decision protocols.
4. Data Protection (GDPR) and Confidentiality: GDPR restricts AI processing of personal data and guarantees a right to human review of automated decisions with legal effects. Although corporate tax and TP focus on entity data, they may involve identifiable individuals (e.g., ultimate owners or payroll data in cost allocations). European tax authorities are also bound by GDPR; for example, the Dutch Data Protection Authority imposed its largest-ever fine on the tax administration for unlawful AI-driven fraud detection.74 This shows that robust data governance is essential: TP AI systems must handle data securely and in compliance with GDPR. The use of sensitive attributes (e.g., citizenship, as in the Dutch case) is strictly prohibited unless properly justified and disclosed.75 Tax secrecy laws add another constraint: tax data are confidential in most jurisdictions, so sharing them with AI vendors or on cloud platforms raises legal issues. MNEs must ensure that external AI tools for TP do not breach tax confidentiality rules or contracts, typically by using trusted platforms, anonymizing data where possible, and implementing secure data transfer agreements.
5. OECD and EU Tax Initiatives: At the policy level, the OECD’s Forum on Tax Administration (FTA) has been studying the role of AI. The OECD’s Tax Administration 2025 report highlights digital transformation, widespread AI adoption, and the need for safeguards.76 It urges tax authorities to share best practices on AI governance. In the EU, beyond the AI Act, there are calls for specific rules on government use of AI. The European Parliament and scholars have criticized the omission of tax from the AI Act’s high-risk list.77 Future implementing acts or guidance may address this gap, potentially designating some tax AI systems as needing extra oversight. Further, initiatives such as the EU’s VAT in the Digital Age and the OECD’s Pillar One and Pillar Two reforms rely heavily on technology; although not AI-specific, they show that complying with complex new rules will likely require advanced software and possibly AI for calculations and reporting. Policymakers appear to recognize that human review and accountability must evolve with AI deployment. For example, in 2022 the French Conseil d’État advocated a doctrine of “trusted public AI” based on transparency, auditability, and human liability for decisions, a view that could extend to tax enforcement tools.78
VI. Stakeholder Perspectives and Best Practices
1. Multinational Enterprises (MNEs): Corporate tax departments are cautiously optimistic about AI. They expect automation to cut compliance costs and errors and analytics to improve tax planning, but most are still in early adoption. A 2024 global TP technology survey found that 46% of organizations still relied on basic tools like Excel for TP processes, though this is gradually falling as more invest in dedicated TP software and AI.79 Common barriers include budget limits, lack of in-house AI expertise, and uncertainty about how tax authorities will treat AI-assisted analyses. Tax executives also feel pressure to “get it right,” since missteps with new technology could trigger adjustments or penalties. Larger MNEs therefore pilot AI tools with Big Four firms and specialist vendors, often using AI only to validate traditionally prepared results before relying on it more fully. Companies stress human-in-the-loop governance: they generally reject fully autonomous TP systems. AI outputs are reviewed by tax managers, and key judgments (such as final selection of comparables or whether to adjust a price) remain with humans. This corresponds to governance principles that “taxpayers have to maintain human review, override logs, and accountable sign-offs” when AI is used in decisions.80 From a risk-management perspective, many MNEs are adopting internal AI use policies, such as requiring IT and legal vetting for data security and compliance, and documenting AI recommendations for possible audits. Overall, they view AI as a source of efficiency and insight, but only under strong controls to avoid unexpected risks.
2. Tax Authorities: Revenue bodies worldwide are among the heaviest users of AI in tax. Their focus is enforcement effectiveness and resource efficiency, with AI credited for uncovering far more underpriced intercompany royalties or service fees than traditional audits.81 AI also shortens audit cycles by processing years of data in seconds instead of months. High-profile failures (the Dutch scandal, the UK tribunal case, Australia’s “Robodebt” welfare incident) have illustrated public trust risks and the need for a “human face.” Emerging best practices include: thorough impact assessments before deployment, public explanation of the nature and purpose of algorithms (subject to security limits), and guaranteed human review or appeal. HMRC, for example, requires “an independent review by an HMRC officer” before final decisions,82 reflecting a human-in-the-loop safeguard. Tax administrations collaborate via forums like the OECD FTA and CIAT to share governance frameworks, and many adopt ethics guidelines similar to the private sector’s, highlighting responsibility, transparency, fairness, and legality. Some countries, such as France, are considering disclosing parts of the algorithm or at least providing affected taxpayers with the “principles of the algorithm” on request.83 Overall, tax authorities seek to use AI to improve compliance and revenue while protecting taxpayer rights and maintaining confidence in the fairness of the tax system.
3. Advisory Firms and Tech Providers: Big Four firms and boutique tax tech companies are key promoters of AI in transfer pricing, seeing it as both a market opportunity and an inevitable evolution of tax practice. EY, KPMG, Deloitte, and PwC have launched proprietary AI platforms (e.g., EY’s “Agentic” AI for tax, KPMG’s generative AI TP assistant, Deloitte’s TP digital workbench), marketed as tools that augment consultants. They highlight success stories, such as a Big Four report claiming an AI benchmarking tool reached 95% accuracy in selecting comparables, well above a junior analyst’s rate.84 Advisors emphasize AI’s ability to uncover patterns across global operations and flag risk factors that inform planning. Reputable firms stress a hybrid “AI + Human” model, developing best practices like model validation (testing AI against known cases) and disclosure strategies (how to explain AI-derived analyses in documentation and audits). Many are training tax consultants in AI literacy to oversee and interpret these tools. The sector also lobbies policymakers for clear, flexible rules so that measures like safe harbors or Amount B are “easily operationalized” by AI, encouraging compliance. A central concern is liability, prompting advisors to define in engagement letters how responsibility is allocated when AI tools cause errors. Overall, advisors are simultaneously expanding AI use and building guardrails to preserve their role as trusted experts in a tech-enabled tax environment.
4. Public Interest and Judicial Perspective: Legal scholars, NGOs, and courts adopt a more careful position. Administrative law scholars insist that algorithmic opacity must not undermine the rule of law. Courts in several jurisdictions (the Netherlands, Australia, the UK) have already intervened when algorithmic tax systems overreached.85 From this perspective, best practices center on disclosure and contestability: taxpayers should know if and how AI influenced their assessment and be able to challenge the result as they would a human decision. A common principle is that “the administration can be held liable for decisions if AI-caused errors harm citizens.”86 This requires traceability and effective remedies. Public interest groups further stress data protection: tax authorities must not trade privacy for efficiency; AI systems should minimize personal data use and prevent leaks. They call for independent audits of tax algorithms and, in some cases, open-sourcing code so external experts can detect bias or flaws. Though still rare, such measures mark one end of the spectrum advocating strong oversight. Recent case law (e.g., SyRI, Elsbury) suggests that if tax AIs cause unfair treatment, courts may annul decisions or compel disclosure, confirming that automation does not shield tax administration from legal review.
VII. Best Practices for Governance
1. Maintain Human Oversight: “Human-in-the-loop” is essential. AI should support, not replace, tax professionals. Critical judgments, such as selecting final comparables, deciding on adjustments, or approving an APA position, must be made or at least reviewed by a qualified human who can override AI outputs when they conflict with expertise or context. Audit trails should document these interventions.
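The audit trail of human interventions can be kept as simple structured records. The sketch below shows one possible record layout; the field names, identifiers, and sample values are invented for illustration, not a prescribed standard.

```python
# Sketch of an override log for human-in-the-loop review: every AI
# recommendation is recorded with the human decision and its reason.
# Record layout, IDs, and values are illustrative only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    transaction_id: str
    ai_recommendation: str
    human_decision: str
    reviewer: str
    reason: str
    timestamp: str

def log_review(log, transaction_id, ai_recommendation,
               human_decision, reviewer, reason):
    """Append one reviewed decision to the audit trail."""
    record = ReviewRecord(
        transaction_id=transaction_id,
        ai_recommendation=ai_recommendation,
        human_decision=human_decision,
        reviewer=reviewer,
        reason=reason,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    log.append(asdict(record))
    return record

audit_log = []
log_review(audit_log, "IC-2026-041",
           ai_recommendation="adjust royalty rate to 4.0%",
           human_decision="keep 5.5% rate",
           reviewer="tp.manager",
           reason="AI comparables exclude licences with embedded services")
print(json.dumps(audit_log[0], indent=2))
```

The point of the structure is that every override is traceable: who departed from the AI output, when, and why, which is exactly what an examiner or court would ask for.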
2. Ensure Transparency and Explainability: Organizations should maximize transparency around their AI systems by documenting how they work, what data they use, and why they make certain recommendations. Externally, companies must be able to justify TP positions without relying solely on AI, translating algorithmic outputs into traditional analysis understandable to tax authorities and courts. Tax authorities should disclose their AI use (as some do via annual reports) and consider issuing simplified “explanation reports” to taxpayers selected for audit by AI.
3. Data Governance and Quality: Taxpayers and authorities must invest in high-quality data, as AI reliability depends on it. Data should be cleaned, validated, and checked for outliers or anomalies before use to avoid skewed results. Strong data security is essential to protect sensitive tax information. AI should be limited to data relevant to the TP analysis to avoid using prohibited attributes (such as demographics). For example, an AI benchmarking tool should use financial metrics only and not receive racial or gender data, which might otherwise slip in via business owners’ names.
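The restriction to relevant financial data can be enforced mechanically with an allow-list applied before any record reaches the model. The sketch below is illustrative; all field names and values are invented.

```python
# Sketch: restrict AI inputs to an approved allow-list of financial metrics so
# prohibited attributes never reach the model. Field names are illustrative.

ALLOWED_FEATURES = {"revenue", "operating_margin", "cogs", "headcount_cost"}

def sanitize(record):
    """Split a raw record into model inputs and dropped (excluded) fields."""
    inputs = {k: v for k, v in record.items() if k in ALLOWED_FEATURES}
    dropped = sorted(set(record) - ALLOWED_FEATURES)
    return inputs, dropped

raw = {
    "revenue": 5_200_000,
    "operating_margin": 0.045,
    "owner_name": "J. Smith",      # must never reach the model
    "owner_nationality": "XX",     # prohibited attribute
}
inputs, dropped = sanitize(raw)
print("model inputs:", inputs)
print("dropped fields:", dropped)
```

Logging the dropped fields, rather than silently discarding them, gives the compliance team evidence that prohibited attributes were in fact excluded.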
4. Regular Audits and Bias Checks: Continuous monitoring of AI performance is essential. Organizations should periodically audit outputs to confirm they remain reasonable and consistent with standards, with any systematic deviation prompting review or retraining. They should also test for bias, for example by ensuring the AI is not disproportionately flagging transactions in certain countries without justification. As suggested in the literature, tax authorities might publish fairness reports on their AI to build public trust.87 Companies could likewise track when AI advice is followed or overridden, and why.
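A periodic bias check of the kind described here can be as simple as comparing per-country flag rates against the overall rate. The counts and the tolerance factor below are invented for illustration; a production check would use statistical tests and documented thresholds.

```python
# Sketch of a periodic bias check: compare the AI's flag rate per country with
# the overall rate and report large deviations. All figures are invented.

def flag_rate_report(counts, tolerance=2.0):
    """counts maps country -> (flagged, total). Return the overall flag rate
    and the countries whose rate exceeds `tolerance` times that rate."""
    total_flagged = sum(f for f, t in counts.values())
    total = sum(t for f, t in counts.values())
    overall = total_flagged / total
    outliers = {}
    for country, (flagged, n) in counts.items():
        rate = flagged / n
        if rate > tolerance * overall:
            outliers[country] = round(rate, 3)
    return round(overall, 3), outliers

overall, outliers = flag_rate_report({
    "DE": (12, 400),
    "NL": (10, 380),
    "MT": (30, 120),   # small jurisdiction, disproportionately flagged
})
print("overall flag rate:", overall)
print("needs justification:", outliers)
```

An outlier is not automatically bias: the report's role is to force a documented justification (or retraining) whenever a jurisdiction is flagged far above the baseline.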
5. Training and Change Management: Effective AI integration requires upskilling staff. Tax professionals must learn to interpret AI outputs and their limits: a 95% confidence score, for example, is not a 95% certainty of correctness, and reviewers should know how to probe what drives that score. Cross-functional collaboration among tax experts, data scientists, and IT helps align AI with tax logic. Engaging stakeholders (internal units or external advisors) through workshops can demystify AI, reduce mistrust,88 and calibrate the tool with real-world insight.
6. Clear Governance Framework: Organizations should set up governance for AI in TP, including approval processes for new tools, tax-specific ethical guidelines, and clear accountability (e.g., a “Tax AI Officer”). Governance should also include scenario simulations such as “war-gaming” failures: How would we respond if the AI made a major error in a key filing, and how would we detect and correct it quickly? Such contingency planning keeps AI a support to the process, not a single point of failure.
VIII. Conclusion
| 1 | Org. for Econ. Coop. & Dev., Tax Administration Digitalisation and Digital Transformation Initiatives 13 (2025) [hereinafter OECD 2025]. |
| 2 | Id. |
| 3 | See OECD 2025, supra note 1, at 13; Thomas Herr, AI May Take Transfer Pricing Up a Notch but Carries Some Risk, Bloomberg Tax (Oct. 21, 2024), https://news.bloombergtax.com/transfer-pricing/ai-may-take-transfer-pricing-up-a-notch-but-carries-some-risk. |
| 4 | Marius Boiță et al., Leveraging Artificial Intelligence to Enhance Documentation Management and Transfer Pricing Compliance in Multinational Corporations: A Strategic Sustainability Perspective, Preprints.org 2 (June 10, 2025) (unpublished manuscript) [hereinafter Boiță et al.]. |
| 5 | Elsbury v. Info. Comm’r [2025] UKFTT 915 (GRC) [2] (UK). |
| 6 | Id. at [13]. |
| 7 | Id. |
| 8 | Id. |
| 9 | Roberto Moro-Visconti, Artificial Intelligence and Transfer Pricing: A Multilayer Network Model for Compliance and Risk Mitigation, Econ. Int’l (forthcoming 2025) (manuscript at 3–4); B. Steens et al., Transfer Pricing Comparables: Preferring a Close Neighbor over a Far-Away Peer?, 47 J. Int’l Acct. Auditing & Tax’n, art. 100471, at 1 (2022). |
| 10 | K.H. Chan et al., An Empirical Analysis of the Changes in Tax Audits Focuses on International Transfer Pricing, 24 J. Int’l Acct. Auditing & Tax’n 94, 95 (2015); see also Org. for Econ. Coop. & Dev., Tax Administration 2025: Comparative Information on OECD and Other Advanced and Emerging Economies (2025). |
| 11 | Anouk Decuypere & Anne Van de Vijver, AI: Friend or Foe of Fairness Perceptions of the Tax Administration?, 42 Gov’t Info. Q., art. 102002, at 1 (2025). |
| 12 | See David Hadwick, Slipping Through the Cracks: The Carve-outs for AI Tax Enforcement Systems in the EU AI Act, 9 Eur. Papers 936 (2024); Cristiana Bulbuc, AI and Tax: Litigation, Risk, Use Cases, Fieldfisher (Oct. 21, 2025), https://www.fieldfisher.com/en/insights/ai-and-tax-litigation-risk-use-cases. |
| 13 | Boiță et al., supra note 4, at 2. |
| 14 | Boiță et al., supra note 4, at 3; The Transformative Potential of GenAI in Transfer Pricing and Valuation, KPMG Int’l (2025). |
| 15 | Mimi Song, AI is Transforming Transfer Pricing, Tax Adviser (Sept. 1, 2025), https://www.thetaxadviser.com/issues/2025/sep/ai-is-transforming-transfer-pricing.html; AI and the Future of Transfer Pricing: Risks and Opportunities, Aprio (Aug. 14, 2025), https://www.aprio.com/ai-and-the-future-of-transfer-pricing. |
| 16 | Moro-Visconti, supra note 9, at 3. |
| 17 | Song, supra note 15. |
| 18 | Id. |
| 19 | Boiță et al., supra note 4, at 6. |
| 20 | Id. |
| 21 | Global Transfer Pricing Services, PwC (2025); TPGenie: All-in-One Transfer Pricing Documentation Software, Intra Pricing Sols. (2025). |
| 22 | Boiță et al., supra note 4, at 6. |
| 23 | Id. |
| 24 | Song, supra note 15; Boiță et al., supra note 4, at 13. |
| 25 | Erin March & Rafi Berkson, Navigating the Shift: AI in Transfer Pricing—Greater Accuracy and Enhanced Insights, PwC (Oct. 20, 2025). |
| 26 | Samit Shah & AJ Kindley, How Technology Is Transforming Transfer Pricing Compliance, Bloomberg Tax (Sept. 17, 2025). |
| 27 | Aibidia Report 2025: The State of Transfer Pricing, Aibidia (2025) [hereinafter Aibidia 2025]. |
| 28 | Id. |
| 29 | Id. |
| 30 | March & Berkson, supra note 25. |
| 31 | Id. |
| 32 | Herr, supra note 3. |
| 33 | Id. |
| 34 | Id.; OECD 2025, supra note 1. |
| 35 | AI in Tax Administration: Governing with Artificial Intelligence, OECD (2025) [hereinafter OECD AI]; Role of AI in Transforming How Tax Authorities Work, PwC (2025). |
| 36 | Id. |
| 37 | Id. |
| 38 | IRS Using AI for Tax Audits in 2025, Ryan & Wetmore (Sept. 8, 2025), https://www.ryanandwetmore.com/blog/irs-using-ai-for-tax-audits-in-2025. |
| 39 | Id. |
| 40 | Id. |
| 41 | Id. |
| 42 | IRS Audits and the Emerging Role of AI in Enforcement, Holland & Knight (Nov. 13, 2025). |
| 43 | OECD 2025, supra note 1; Herr, supra note 3. |
| 44 | March & Berkson, supra note 25. |
| 45 | Muhammed Zakir Hossain et al., The Role of Artificial Intelligence in Taxation and Compliance: Challenges and Future Prospects, 1 Eur. J. Sci. & Mod. Tech. 73, 76 (2025). |
| 46 | Id. |
| 47 | Id. |
| 48 | EY 2025 Tax Risk and Controversy Survey, EY (2025), https://www.ey.com/en_ch/insights/tax/tax-risk-and-controversy-survey (last visited Jan. 12, 2026). |
| 49 | Ghaith Ahmad Salem Al-Khalaileh, Artificial Intelligence in Tax Administration and Corporate Tax Compliance: Evidence from Jordan, 23 Lex Localis 3382 (2025). |
| 50 | Decuypere & Van de Vijver, supra note 11. |
| 51 | Id. |
| 52 | Al-Khalaileh, supra note 49. |
| 53 | ESG and Transfer Pricing, EY (2025), https://www.ey.com/en_fi/insights/tax/esg-and-transfer-pricing (last visited Jan. 12, 2026). |
| 54 | Id. |
| 55 | Bulbuc, supra note 12. |
| 56 | Id. |
| 57 | Elsbury, supra note 5 (noting that undisclosed automated logic can undermine taxpayer confidence in fair treatment). |
| 58 | Hadwick, supra note 12. |
| 59 | Decuypere & Van de Vijver, supra note 11. |
| 60 | Al-Khalaileh, supra note 49. |
| 61 | Hossain et al., supra note 45; Bulbuc, supra note 12. |
| 62 | Regulation 2024/1689 of the European Parliament and of the Council, 2024 O.J. (L 1689) (EU) [hereinafter EU AI Act]; Hadwick, supra note 12. |
| 63 | Id. |
| 64 | Elsbury, supra note 5. |
| 65 | Id. |
| 66 | Id. |
| 67 | Hossain et al., supra note 45. |
| 68 | Org. for Econ. Coop. & Dev., Transfer Pricing Guidelines for Multinational Enterprises and Tax Administrations (2022) [hereinafter OECD 2022 Guidelines]. |
| 69 | March & Berkson, supra note 25. |
| 70 | Bundesministerium der Finanzen [BMF] [Federal Ministry of Finance] June 6, 2023, Bundessteuerblatt I [BStBl I] at 986 (Ger.). |
| 71 | U.S. Gov’t Accountability Off., GAO-24-105477, IRS Artificial Intelligence: Strategic Planning and Workforce Measurement Needed to Realize Potential (2024). |
| 72 | Hadwick, supra note 12. |
| 73 | Bulbuc, supra note 12. |
| 74 | Hadwick, supra note 12. |
| 75 | OECD 2025, supra note 1. |
| 76 | Hadwick, supra note 12. |
| 77 | Bulbuc, supra note 12. |
| 78 | Conseil d’État, Intelligence artificielle et action publique : construire la confiance, servir la performance (2022) (Fr.). |
| 79 | Tom Baker, TP Costs Rocket for MNEs, Report Reveals, Int’l Tax Rev. (Sept. 9, 2025), https://www.internationaltaxreview.com/article/2fb1v8cwpmadb057jsg74/transfer-pricing/tp-costs-rocket-for-mnes-report-reveals. |
| 80 | Bulbuc, supra note 12. |
| 81 | Hossain et al., supra note 45. |
| 82 | Kunal Nathwani, Artificial Intelligence in Automated Decision-Making in Tax Administration: The Case for Legal, Justiciable and Enforceable Safeguards (Inst. for Fiscal Stud., Tax L. Rev. Comm. Discussion Paper, Sept. 2024). |
| 83 | Bulbuc, supra note 12. |
| 84 | Cornelia Năstase, Transfer Pricing and Tax Compliance in Romania and Poland: A Comparative Study with Insights on AI’s Role in Modern Tax Administration. |
| 85 | Bulbuc, supra note 12; Hadwick, supra note 12. |
| 86 | Bulbuc, supra note 12. |
| 87 | Al-Khalaileh, supra note 49. |
| 88 | Id. |
| 89 | Boiță et al., supra note 4, at 6; OECD AI, supra note 35. |
| 90 | Boiță et al., supra note 4; Bulbuc, supra note 12. |
| 91 | Decuypere & Van de Vijver, supra note 11; Al-Khalaileh, supra note 49. |
| 92 | Uyen Chu, AI in Tax: Top Use Cases You Need To Know, SmartDev (Oct. 1, 2025). |
| 93 | Hadwick, supra note 12; Bulbuc, supra note 12; see also Elsbury, supra note 5. |
| 94 | Błażej Kuźniacki et al., Towards eXplainable Artificial Intelligence (XAI) in Tax Law: The Need for a Minimum Legal Standard, 14 World Tax J. 4 (2022). |
| 95 | Al-Khalaileh, supra note 49. |
| 96 | Hadwick, supra note 12; Al-Khalaileh, supra note 49. |
| 97 | See, e.g., Decuypere & Van de Vijver, supra note 11; Moro-Visconti, supra note 9. |
| 98 | See Aibidia 2025, supra note 27. |
| 99 | See Decuypere & Van de Vijver, supra note 11. |
| 100 | See Bulbuc, supra note 12. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
