Preprint
Review

This version is not peer-reviewed.

Integrating Artificial Intelligence in Audit Workflow: Opportunities, Architecture, and Challenges: A Systematic Review

Submitted: 27 January 2026

Posted: 27 January 2026


Abstract
This paper presents a systematic review of 100 peer-reviewed studies (2015-2025) on Artificial Intelligence (AI) applications in auditing, covering machine learning (58%), natural language processing (31%), robotic process automation (24%), and other AI techniques (15%). Among other important results, AI-powered anomaly detection was reported to improve detection rates by as much as 70 percent over manual sampling, and pilot projects report efficiency gains of up to 50 percent. The review breaks down AI methods by audit stage, such as planning, risk assessment, and reporting, highlighting the importance of machine learning in fraud detection and natural language processing in document analysis. Despite these improvements, challenges such as data quality, model explainability, and regulatory compliance persist. This paper proposes a reference architecture for AI-driven audit workflows that describes how data integration, AI model development, and human-in-the-loop oversight can be combined. It highlights key research gaps, such as the absence of longitudinal studies on the impact of AI, comparisons of AI techniques, and the absence of regulatory frameworks. The review offers practical suggestions for integrating AI into auditing that could improve audit quality, increase coverage, and optimize resources in the digital audit space.

1. Introduction

The introduction of artificial intelligence (AI), machine learning, and data analytics is changing the auditing profession [1]. Traditionally, audits relied on representative sampling, expert judgment, and rules-based control testing to form audit opinions [2,3]. However, the emergence of digital transactional data and sophisticated AI algorithms has led to a shift toward population-level analysis, anomaly detection, continuous monitoring, and predictive risk analysis [4,5]. AI-enabled platforms are now applied across different stages of the audit process, such as identifying high-risk accounts in the planning phase, generating automated internal control tests, detecting anomalies in journal entries, analyzing unstructured text, and building real-time dashboards for control deviations [6,7]. The Big Four accounting firms, as well as mid-tier and smaller firms, are making significant investments in AI, whether by acquiring AI startups, building in-house teams, or licensing specialized audit platforms [8,9].
Despite these advancements, the adoption of AI technologies in auditing has raised concerns among regulatory bodies, such as the IAASB and PCAOB, about auditors’ judgment, explainability, and the oversight of automated tools [10,11]. Standard setters have started to offer guidance on the oversight auditors should exercise over the appropriateness of AI tools and on their accountability for conclusions reached through AI-assisted procedures [12,13]. While AI in auditing is gaining momentum, the available academic and professional literature is fragmented, addressing individual technologies or applications such as fraud detection or internal audit rather than the integration of AI into the entire audit process [14,15,16]. Existing reviews tend to focus on subdomains, leaving a gap in knowledge about how AI components integrate into the overall audit workflow and organization [17,18].
To fill this gap, a systematic review is required to pool empirical evidence on the role of AI in auditing. This review categorizes AI techniques and their applications at every stage of auditing, identifies research gaps, and draws out both academic and practical implications. The aim is to provide not only a conceptual framework for incorporating AI throughout the audit process but also an integrated approach to orchestrating the technology. This systematic review makes three main contributions: (i) an AI technique taxonomy aligned with audit stages, (ii) a reference architecture describing functional layers, control points, and boundaries between humans and AI, and (iii) an analysis of adoption barriers and research gaps for system designers and audit practitioners. This comprehensive framework goes beyond descriptive surveys and offers actionable recommendations for system design and implementation in practice.

1.1. Research Objectives and Questions

This systematic review addresses the following primary research questions:
  • RQ1: What AI techniques are used at which stages of the audit workflow and with what reported objectives and outcomes?
  • RQ2: How are AI capabilities architecturally integrated into audit systems, and what architectural patterns support effective interaction between humans and AI?
  • RQ3: What empirical evidence exists on AI’s effects on audit effectiveness, efficiency, and quality?
  • RQ4: What are the technical, organizational, and regulatory barriers to the large-scale deployment of AI in audit workflow?

2. Methodology

This systematic review was designed and reported following the guidelines for systematic literature reviews [19,20,21], modified for software engineering and information systems research in the audit field. The protocol was registered and documented before the searches were initiated to increase transparency, decrease bias, and support critical appraisal and reproducibility [22,23,24].

2.1. Search Strategy and Information Sources

Electronic searches were conducted in five major multidisciplinary databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, and Google Scholar). These databases were deemed to represent publications in the domains of accounting, auditing, information systems, computer science, artificial intelligence, and management, as AI in auditing is, by its nature, interdisciplinary [25,26,27,28].
The search strategy was a combination of controlled vocabulary (MeSH, Scopus subject terms) and keyword searches. Core search strings included:
  • Search String 1: (“machine learning” OR “deep learning” OR “neural network” OR “artificial intelligence”) AND (“auditing” OR “auditor” OR “audit” OR “internal audit”)
  • Search String 2: (“NLP” OR “text mining” OR “natural language processing”) AND (“auditing” OR “audit” OR “audit workflow”)
  • Search String 3: (“RPA” OR “process automation” OR “robotic process automation”) AND (“audit” OR “internal audit”)
  • Search String 4: (“continuous monitoring” OR “real-time audit*” OR “continuous audit*”) AND (“artificial intelligence” OR “machine learning”)
  • Search String 5: (“audit effectiveness” OR “audit quality”) AND (“automation” OR “data analytics” OR “artificial intelligence”)
  • Timeframe: The review covers 1 January 2015 - 31 December 2025 (an 11-year period). This period was selected because it spans the most recent wave of AI-enabled audit innovation while also capturing contemporary trends in machine learning and AI techniques.
  • Language: English-language articles only
In addition to database searches, backward snowballing (reviewing the references of the included studies) and forward snowballing (identifying studies that cite the included records) were undertaken to identify additional relevant studies [29,30]. Grey literature from major audit firms (e.g., published research from Big Four and mid-tier firms, professional white papers, and audit standard-setting bodies) was searched, but inclusion was limited to documents containing substantive technical or methodological detail [31,32,33].

2.2. Inclusion and Exclusion Criteria

Inclusion Criteria
  • The study addresses the application of artificial intelligence, machine learning, deep learning, natural language processing, robotic process automation, or related AI techniques in audit or assurance.
  • The study presents AI applications, architectures, frameworks, tools, human assessments, or empirical evaluations relating to at least one identifiable stage of the audit workflow (planning, risk assessment, controls testing, substantive procedures, reporting, or continuous monitoring).
  • The study is a peer-reviewed journal article, peer-reviewed conference paper, or high-quality institutional/professional report from a recognized audit firm, standard-setting body, or research organization.
  • For empirical studies, adequate methodological information is given to allow a judgment on study design, sample characteristics, and criteria for evaluation.
Exclusion Criteria
  • Studies on generic data analytics, business intelligence, or business process management without explicitly mentioning AI/machine learning approaches.
  • Articles focusing on applications of AI or machine learning in accounting, finance, or other areas of business other than audit or assurance.
  • Pure opinion pieces, editorial commentaries, or speculative pieces without substantive technical, methodological, or empirical content.
  • Research conducted in non-English languages or studies that lacked sufficient details to extract relevant data.
  • Duplicate articles or multiple publications of the same work.

2.3. Study Selection Process

Two independent reviewers screened the titles and abstracts of all records obtained from the database and supplement searches for the inclusion/exclusion criteria using DistillerSR or similar systematic review software [34,35]. Disagreements were resolved by discussion or by consulting a third reviewer. Full-text articles for all potentially relevant records were retrieved, and their eligibility was independently evaluated by two reviewers using the same inclusion criteria. Reasons for exclusion at the full-text stage were recorded [36,37].
The process resulted in a final sample of 100 studies to be thoroughly reviewed, from which data were extracted. This sample size was deemed appropriate to ensure breadth across audit contexts, technologies, and research approaches and to ensure that this would be manageable for in-depth analysis and synthesis [38].

2.4. Quality Assessment

A structured quality assessment checklist was created and adapted from the systematic review guidelines in software engineering and information systems [39,40]. The key dimensions of the checklist were as follows:
  • Clarity and specificity of research objectives and scope
  • Transparency of methodology, including description of data sources and sample selection
  • Adequacy of data description and auditable quality checks
  • Specification of AI techniques, models, and parameters
  • Appropriateness of evaluation design (e.g., empirical methods, metrics, comparators)
  • Completeness of reporting of results
  • Acknowledgement and discussion of limitations and possible sources of bias
  • Clarity about generalizability and applicability to other audit contexts
Each study was independently assessed on each dimension by two reviewers using a three-point scale (high: clear, comprehensive, low risk of bias; moderate: adequate, with some gaps in clarity or completeness; low: significant gaps in transparency or rigor). The overall quality rating of each study was determined from the pattern of its dimension ratings. Studies rated as moderate or low quality were not excluded but were noted, and the strength of evidence for particular findings was qualified by the underlying quality of the studies [41,42,43].

2.5. Data Extraction

A standardized data extraction form, in spreadsheet or specialized software format, was created and pilot-tested on a subsample of studies to ensure standardization. The form contained the following information:
  • Bibliographic information: author(s), year of publication, study type (empirical, design-science, conceptual, or review), and source (journal, conference, or report)
  • Audit context: type of audit (internal audit, external/financial statement audit, public-sector audit, tax/compliance audit, forensic/fraud audit, or other)
  • Participants/setting: organization type (multinational firm, large audit firm, small and medium-sized entity (SME), public/government, or non-profit) and audit domain
  • AI techniques and tools: specific AI techniques used (machine learning algorithms, NLP approaches, RPA platforms, knowledge-based systems, and hybrids) and any software/platforms mentioned
  • Audit workflow coverage: how AI supported tasks in the stages of planning, risk assessment, controls testing, substantive procedures, reporting, and continuous monitoring
  • Key findings and outcomes: reported benefits, efficiency measures, detection rates, accuracy measures, user satisfaction, and lessons learnt
  • Architectural/design features: system architecture, data sources, model governance, explainability mechanisms, human-AI interaction design, and integration points
  • Challenges and barriers: technical (data quality, model performance), organizational (adoption, skills, change management), regulatory/ethical (compliance, bias, transparency), and governance
  • Research gaps and future directions: open questions and recommendations emerging from the research
Data extraction was performed by one reviewer and validated by a second reviewer on a random 20% sample to determine extraction accuracy [43,44].

2.6. Synthesis Approach

Given the heterogeneity of the study designs, audit context, and AI techniques, the synthesis was a combination of the narrative thematic analysis and the mapping and tabulation approaches [45,46].
  • Thematic analysis: extracted data were coded by research question, audit workflow stage, AI technique, and challenge type, and themes and subthemes were refined iteratively during synthesis [47,48].
  • Mapping: structured tables and visual diagrams were created to map AI techniques to audit workflow stages, architectural elements, opportunities, and challenges, facilitating pattern recognition and gap identification.
  • Quantitative summary: study characteristics (e.g., year of publication, audit context, and AI techniques) were tabulated and summarized quantitatively [49,50].
A meta-analysis of quantitative outcomes was considered but found inappropriate because of a great deal of heterogeneity in study designs, outcome measures, and contexts; instead, findings were synthesized in narrative form with tabular summaries of effect estimates and outcome measures where available [50].

3. Results

3.1. Study Selection and Characteristics

Figure 1 shows the PRISMA flow diagram for study selection. Database and supplementary searches yielded 3847 records, of which 1256 remained after de-duplication. After title and abstract screening, 387 full-text articles were assessed for eligibility. Detailed evaluation identified 100 studies that satisfied the final inclusion and exclusion criteria.
The 100 included studies comprised:
  • Empirical studies (45%): case studies, controlled experiments, field evaluations, surveys, and mixed-methods studies
  • Design-science and development studies (28%): prototype development, system design descriptions, and technical architecture papers
  • Conceptual and literature review studies (17%): frameworks, position papers, and narrative syntheses
  • Practice-oriented reports (10%): white papers and technical guidance from major audit firms and professional organizations
  • Publication timeline: The earliest matching publications date from 2015; output accelerated from 2018, with notable growth between 2020 and 2025, reflecting increased attention to AI in audit driven by technological advances and competitive adoption.
  • Geographic distribution: Authors and studies originated mainly from Europe (38%), North America (35%), Asia-Pacific (20%), and Africa/Middle East (7%), with the largest contributions coming from jurisdictions with a strong audit firm presence and advanced financial markets [51,52,53].
  • Audit contexts addressed: The studies covered a wide range of contexts. Internal audit accounted for 42 studies, emphasizing the monitoring and evaluation of internal organizational processes. External financial statement audits were addressed in 35 studies, focusing on the transparency and accuracy of financial reporting.
Public-sector and government audits were the topic of 12 studies, which assess the efficiency and accountability of government expenditure in specific sectors. Tax and compliance audits were covered in eight studies, reflecting the importance of compliance with financial and tax laws and regulatory standards. Finally, forensic and fraud audits were examined in three studies, focusing on the detection and prevention of fraudulent activities in organizations.
Inter-reviewer reliability was assessed at the title/abstract screening and quality evaluation steps using percent agreement and Cohen’s kappa and was substantial (kappa > 0.75). The review process followed the PRISMA reporting guidelines, and the completed PRISMA checklist is included as supplementary material.
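As a concrete illustration of the reliability check described above, the short Python sketch below computes Cohen's kappa for two reviewers' screening decisions; the decision vectors are hypothetical and stand in for the actual screening data.

```python
# Illustrative only: the reviewer decision vectors below are hypothetical,
# not the actual screening records of this review.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "include", "exclude", "include", "exclude", "include"]
reviewer_b = ["include", "exclude", "include", "include", "include", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values above 0.75 are generally read as substantial agreement
```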

3.2. AI Techniques and Tools Identified

Machine learning and predictive analytics (58 studies): Supervised and unsupervised machine learning methods featured in the largest share of studies, including:
  • Classification algorithms (random forests, gradient boosting, support vector machines, logistic regression) for anomaly detection, fraud and error classification, and risk scoring
  • Clustering methods (k-means, hierarchical clustering, DBSCAN) for transaction segmentation and pattern discovery
  • Deep learning neural networks (multilayer perceptrons, convolutional neural networks, recurrent neural networks, and LSTMs) for sequential pattern recognition and time-series forecasting
  • Ensemble methods that combine multiple models for improved robustness
Natural language processing (31 studies): NLP techniques applied to unstructured audit data:
  • Named entity recognition (NER) and information extraction for contract analysis & regulatory compliance document review.
  • Sentiment and tone analysis for management commentary, earnings calls, and internal communications.
  • Topic modeling (Latent Dirichlet Allocation, Non-negative Matrix Factorization) based on categorization and summarization of the audit documentation.
  • Document matching and validation using text classification and semantic similarity
  • Large language models (GPT variants, BERT, transformer architectures) for document summarization and question answering
Robotic process automation (24 studies): Rule-based automation and workflow orchestration:
  • RPA for the development of bots that can extract data, navigate systems easily, and produce reports upon request.
  • Integration of RPA with machine learning for “intelligent automation” enables context-aware decision-making.
  • Workflow orchestration engines coordinating multiple bots and AI services.
  • Exception handling and escalation mechanisms are also important.
Other AI techniques (15 studies): expert systems and knowledge-based reasoning, reinforcement learning for dynamic sampling and testing strategy optimization, process mining and discovery, and computer vision for physical asset verification.
Table 1 summarizes 100 studies, which are mostly empirical and more recent publications. Research primarily focuses on internal and external audits, is primarily from Europe and North America, and focuses on machine learning, NLP, and automation, often combining multiple applications of AI techniques.

3.3. Audit Workflow Stages and AI Application Patterns

Planning and risk assessment (42 studies): AI is used to aid the earliest stages of audit engagement.
  • Integrated risk scoring models use combinations of financial metrics, control scoring, process indicators, and management narrative analysis to prioritize accounts, entities, or processes to focus audits.
  • Time-series forecasting and anomaly detection on past financial data to detect out-of-the-ordinary trends in potential risk areas.
  • Sentiment and tone analysis of management commentary and regulatory filings to determine tone at the top and quality of the disclosure.
  • Entity linking and network analysis to identify the related parties and complex structure requiring increased attention during the audit.
  • Machine learning-based inherent risk models drawing on client industry, regulatory context, and organizational factors.
Tests of controls and control monitoring (38 studies): Continuous or periodic AI-enabled control monitoring includes the following:
  • Real-time analysis of system logs and user access patterns to identify segregation of duties violations and unauthorized system transactions.
  • Process mining algorithms to reconstruct real flows in processes based on transaction logs and compare them to the designed controls, flagging deviations.
  • Rule-based and machine learning models to monitor transaction approvals and authorization limits, and patterns.
  • Continuous monitoring dashboards that raise alarms for auditors concerning breaches of controls, anomalies, or exceptions in near real-time.
Substantive procedures and transaction testing (48 studies): The most significant number of studies dealt with AI in the substantive testing, including:
  • Unsupervised and supervised anomaly detection used for journal entries, accounts receivable, inventory, and other transaction populations to find unusual transactions for focused audit investigation
  • Automated verification and validation of supporting documents (invoices, purchase orders, receipts, contracts) using NLP and computer vision
  • Journal entry testing combining rule-based criteria (unusual timing, round amounts, top accounts) with machine learning-based anomaly detection
  • Predictive models estimating the likelihood of misstatement or expected account balances to provide a basis for audit judgment and identify unexpected variances.
  • Fraud risk scoring, which is based on characteristics of the transaction, user behavior, and historical patterns, to prioritize items for substantive review.
Reporting, communication, and continuous assurance (31 studies): AI supports post-fieldwork and ongoing activities, including:
  • Automated generation of audit documentation, workpaper summaries, and management letters with findings and recommendations
  • Interactive dashboards and visualization tools for communicating risk heat maps, anomaly profiles, control status, and findings to the audit committee and management
  • Continuous auditing and monitoring systems that allow ongoing assessment of control effectiveness and emerging risks rather than point-in-time audit opinions
  • Predictive models for forecasting future control performance/misstatement likelihood to inform audit strategies and allocate resources.
Table 2 shows the AI techniques mapped to audit workflow stages.
A mapping of AI techniques to audit workflow stages is illustrated in Figure 2, which shows where machine learning, NLP, RPA, expert systems, and hybrid approaches add value. Circle size indicates the level of impact. Applications concentrate in planning, controls testing, and substantive procedures, with growing roles in reporting, continuous monitoring, and risk-focused assurance activities across the enterprise.

4. AI Technologies in Audit: Findings

4.1. Machine Learning and Anomaly Detection

Machine learning techniques dominate the AI-in-auditing literature, as they are central to detecting anomalies and risks. Supervised learning approaches use historical labeled data (e.g., transactions later determined to be fraudulent or erroneous) to learn a model for classifying new transactions as normal or anomalous [28,39,54,55].
  • Transaction-level anomaly detection: Random forests, gradient boosting machines (XGBoost, LightGBM), and logistic regression are widely employed to detect outlier transactions in journal entries, receivables, payables, and inventory [16,33,35,46]. These models learn patterns from millions of routine transactions and identify transactions with unusual characteristics (amount, timing, counterparty, approval chain, and account combination) [25,33,35]. Studies report that such models, when properly trained and validated, can detect fraud and errors at rates exceeding those achievable through manual sampling [25,35].
  • Unsupervised anomaly detection (clustering, isolation forests, and autoencoders) is useful when labeled fraud data are limited, as they often are in audit environments [35]. These methods identify transactions that deviate significantly from learned patterns of normal behavior without requiring explicit fraud labels [33,38].
  • Key findings: Empirical investigations report improvements in detection rates of 20-70% compared with manual sampling methods, although absolute detection rates vary greatly with data quality, feature engineering, and the base rate of anomalies in the dataset [25,35]. However, several studies also report high false-positive rates that require auditors to review and triage alerts, underscoring the importance of investing in data quality, feature engineering, and model validation [39,41].
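As an illustration of the unsupervised approach described above, the following Python sketch scores a small, hypothetical set of journal entries with an isolation forest; the columns, contamination rate, and data values are assumptions for demonstration, not drawn from the reviewed studies.

```python
# Illustrative sketch of unsupervised journal-entry anomaly detection.
# The journal-entry data and feature set below are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

entries = pd.DataFrame({
    "amount":         [1200.0, 85.5, 990000.0, 430.0, 15000.0],
    "posting_hour":   [10, 14, 23, 11, 9],
    "is_weekend":     [0, 0, 1, 0, 0],
    "approver_count": [2, 2, 1, 2, 2],
})

features = StandardScaler().fit_transform(entries)

# contamination reflects an assumed (low) share of anomalous entries
model = IsolationForest(n_estimators=200, contamination=0.05, random_state=42)
entries["anomaly_score"] = -model.fit(features).score_samples(features)  # higher = more unusual
entries["flagged"] = model.predict(features) == -1                       # True = candidate exception

# Rank flagged items for auditor triage rather than treating them as conclusions
print(entries.sort_values("anomaly_score", ascending=False))
```

As emphasized in the key findings above, such scores serve to prioritize items for human review; they do not replace auditor judgment.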

4.2. Natural Language Processing and Document Analysis

NLP applications address the audit challenge of handling large volumes of unstructured text. Contracts, board minutes, policy documents, email communications, and management narratives are increasingly subjected to automated analysis [50,51,53].
  • Contract and regulatory document analysis: NLP models identify important clauses (covenants, termination conditions, related-party terms, and contingencies) in contracts and flag deviations from templates or standard language [51,52,54]. Named entity recognition identifies entities such as persons, dates, and monetary amounts [55,56]. Such capabilities support audit procedures for confirming the completeness of contracts, identifying unusual terms, and ensuring disclosure adequacy [57,58,59].
  • Sentiment and tone analysis: Tools measure the tone, complexity, and linguistic indicators of possible bias or management override in earnings call transcripts, management commentary, and internal communications [59,60,61]. Studies suggest that combining quantitative sentiment measures with human review can improve auditors’ ability to evaluate management’s attitude toward controls and the tone at the top [62,63,64].
  • Document classification and clustering: Topic modeling and text classification assign audit documents to categories (such as control narratives, risk assessments, and regulatory filings) and support retrieval and prioritization of documents for review [51,52,65]. Large language models (LLMs), such as GPT and BERT, can summarize long documents and answer questions about their content, which could save auditors time when reading and synthesizing information [56,61].
  • Challenges and limitations: NLP performance relies on domain adaptation; general-purpose language models often perform poorly on technical accounting texts without domain-specific tuning [51,52,61]. Multilingual scenarios, sarcasm, and implicit meanings pose further challenges [56,65]. Studies highlight the importance of carefully validating NLP tools in audit settings and of communicating confidence levels and explanations clearly to auditors and clients [60,61].
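The sketch below illustrates the named entity extraction and clause-completeness checks described above; it assumes spaCy with its general-purpose en_core_web_sm model is installed, whereas a production audit tool would require the domain-adapted models discussed under challenges. The contract excerpt and expected clause terms are hypothetical.

```python
# Minimal sketch, assuming spaCy and the general-purpose "en_core_web_sm" model
# are installed; domain adaptation would be needed for real accounting texts.
import spacy

nlp = spacy.load("en_core_web_sm")

contract_excerpt = (
    "This agreement between Acme Ltd and Beta GmbH, dated 14 March 2024, "
    "requires repayment of EUR 2,500,000 if the leverage covenant is breached."
)

doc = nlp(contract_excerpt)

# Extract entities relevant to completeness and disclosure checks
for ent in doc.ents:
    # e.g. ORG -> counterparties, DATE -> effective dates, MONEY -> amounts
    print(f"{ent.label_:8s} {ent.text}")

# Simple deviation check against an expected clause keyword list (illustrative only)
expected_terms = {"covenant", "termination", "related party"}
missing = [term for term in expected_terms if term not in contract_excerpt.lower()]
print("Clause keywords not found:", missing)
```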

4.3. Robotic Process Automation and Workflow Orchestration

RPA is used to automate repetitive, rule-based tasks in the audit workflow [65,66]. Audit bots extract data from ERP systems, perform reconciliations, populate spreadsheets, and produce standardized reports, and are often executed on a schedule or in response to events [66].
  • Data preparation and integration: RPA bots navigate different systems to retrieve and consolidate data for audit analysis, reducing manual compilation time and errors [67,68,69]. This improves the efficiency and reliability of the data foundation for further AI analysis [65,70].
  • Intelligent automation: RPA can be combined with machine learning to enable “intelligent automation,” where bots apply decision logic to route transactions, approve exceptions, or populate fields based on learned patterns or model scores [65,71]. For example, a bot could classify sampled transactions as normal or anomalous using a deployed machine learning model and forward high-risk items to human auditors for analysis [66,72].
  • Continuous control monitoring: RPA scripts can run continuously or at high frequency, monitoring control logs and flagging control violations, unauthorized activities, or policy breaches in near real-time [66,73,74]. This enables a transition from periodic audit testing to continuous assurance [66,72].
  • Challenges: Governance of RPA scripts and change management are key issues, encompassing versioning, exception handling, and documentation [72,74]. Studies emphasize the need for robust audit controls over RPA bots to ensure that they operate as designed and that exceptions are managed properly [65,75].
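The following schematic sketch illustrates the “intelligent automation” pattern described above: a rule layer and a model-derived risk score jointly route exceptions to a human review queue. The class and field names are hypothetical; a real deployment would run inside an RPA platform and call a governed model service.

```python
# Schematic sketch of rule-based plus model-based exception routing.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transaction:
    txn_id: str
    amount: float
    approved_by: str
    risk_score: float  # assumed to come from a deployed ML model

@dataclass
class ReviewQueue:
    items: List[Transaction] = field(default_factory=list)

    def escalate(self, txn: Transaction, reason: str) -> None:
        print(f"Escalating {txn.txn_id}: {reason}")
        self.items.append(txn)

def process(txns: List[Transaction], queue: ReviewQueue,
            amount_limit: float = 50_000, risk_threshold: float = 0.8) -> None:
    for txn in txns:
        if txn.amount > amount_limit and not txn.approved_by:
            queue.escalate(txn, "rule: large amount without approver")      # rule-based exception
        elif txn.risk_score >= risk_threshold:
            queue.escalate(txn, f"model: risk score {txn.risk_score:.2f}")  # ML-based exception
        # otherwise the bot files the item as routinely processed (audit trail omitted here)

queue = ReviewQueue()
process([Transaction("JE-001", 75_000, "", 0.35),
         Transaction("JE-002", 1_200, "j.doe", 0.91)], queue)
```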

4.4. Hybrid and Emerging Approaches

Hybrid and emerging approaches in auditing use advanced technologies to improve the audit process. Process mining tools reconstruct actual process flows from system event logs and compare them with the designed process, surfacing deviations to aid control testing and identify unusual process execution patterns [66,74]. Reinforcement learning supports dynamic, adaptive audit sampling that optimizes the testing sequence based on feedback, although few practitioners apply it to date. Additionally, computer vision is emerging as a method for inventory observation, asset condition evaluation, and physical checks, particularly where items are remote or in high-volume settings. These innovations aim to make audits more efficient and accurate, but they have not yet been widely realized across industry sectors [72].
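To illustrate the process-mining idea, the sketch below performs a minimal conformance check of observed purchase-to-pay traces against a designed control sequence; the event log and activity names are hypothetical, and practical deployments would typically rely on dedicated process-mining tooling.

```python
# Illustrative conformance check: compare observed traces against a designed flow.
# The event log and activity names below are hypothetical.
from collections import defaultdict

DESIGNED_FLOW = ["create_po", "approve_po", "receive_goods", "post_invoice", "pay_invoice"]

event_log = [  # (case_id, activity) pairs extracted from system logs
    ("C1", "create_po"), ("C1", "approve_po"), ("C1", "receive_goods"),
    ("C1", "post_invoice"), ("C1", "pay_invoice"),
    ("C2", "create_po"), ("C2", "receive_goods"), ("C2", "post_invoice"),
    ("C2", "pay_invoice"),  # approval step skipped -> control deviation
]

traces = defaultdict(list)
for case_id, activity in event_log:
    traces[case_id].append(activity)

for case_id, trace in traces.items():
    missing = [step for step in DESIGNED_FLOW if step not in trace]
    out_of_order = trace != [s for s in DESIGNED_FLOW if s in trace]
    if missing or out_of_order:
        print(f"{case_id}: deviation - missing={missing}, out_of_order={out_of_order}")
    else:
        print(f"{case_id}: conforms to designed process")
```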

5. Opportunities and Benefits

5.1. Enhanced Detection Capability

One of the main opportunities of AI in auditing is its capability to analyze entire transaction populations (rather than samples), recognize sophisticated patterns, and detect anomalies and exceptions that human review could miss [72,75]. Empirical studies show that appropriately designed machine learning models can reveal fraud, mistakes, and misstatements at rates and speeds that are not possible using manual methods [65,72].
Implications: This capability has the potential to enhance the effectiveness of auditing in terms of identifying a wider variety of issues, including subtle or low-value anomalies that aggregate to material amounts [65,75].

5.2. Expanded Audit Coverage and Population-Level Analysis

Rather than using representative samples, AI allows auditors to test entire populations or very high percentages, creating more confidence and facilitating a transition from exception-based testing to comprehensive analysis [75]. This is particularly useful for high-volume routine transactions [76].
Implications: Expanded coverage may increase audit quality by decreasing the risk of missing material items and may improve auditor efficiency by automating routine testing of populations that are at low risk of problems to free up time for higher-value, judgment-intensive work [75].

5.3. Continuous and Real-Time Monitoring

AI enables continuous auditing and monitoring systems that assess control effectiveness, transaction compliance, and risk status on a rolling basis rather than only at period-ends [77,78,79]. This brings audit assurance in line with real-time business cycles and decision-making [80,81].
Implications: Continuous monitoring supports faster detection and resolution of control failures, dynamic risk assessment, and reductions in audit lag and cycle time [75,78,81].

5.4. Improved Efficiency and Resource Optimization

Automation of routine data-intensive procedures (data gathering, reconciliation, document matching, and rule-based testing) minimizes manual effort, possible errors, and audit cycle time [82]. This allows audit professionals to focus on more valuable activities, such as interpretation, judgement, and client engagement [75,81].
Evidence: Pilot projects and case studies report efficiency gains of 10-50%, with variation attributable to differences in implementation scope and baseline efficiency [81,82].

5.5. Deeper Insights and Richer Analysis

AI analysis of unstructured data (contracts, communications, and documents) reveals insights that would otherwise be impractical to obtain through time-consuming manual analysis [80,82]. Combined with structured data analysis, this supports a more holistic understanding of risk, control effectiveness, and management intent [75,79].
Implications: Richer analysis potentially allows auditors to deliver more valuable advisory insights to audit committees and management [76,79].

6. Reference Architecture for AI-Enabled Audit Workflow

6.1. Conceptual Layers and Components

The conceptual framework for AI-enabled audit workflows consists of five major interrelated layers, each playing a distinct role in the overall process. These layers are meant to guarantee the integration, development, and deployment of artificial intelligence (AI) models in an organized, controlled, and efficient manner. Ultimately, these layers can improve the audit process. Table 4 lists the layers and components of the AI-enabled audit workflow, and Figure 3 depicts the reference architecture for the AI-enabled audit workflow.
Layer 1: Data Integration and Data Governance
The foundation of any AI-enabled audit system is the Data Integration and Governance layer [18,28,57]. The task of this layer is to ingest, validate, and integrate structured and unstructured data from disparate sources, including ERP systems, data warehouses, logs, documents, and external data sources [66]. The important components present in this layer are data connectors and ETL (Extract, Transform, Load) pipelines that are used to reliably extract data from multiple systems and in different formats. Additionally, data quality checks and anomaly detection mechanisms are implemented at the point of data ingestion to flag any issues with data quality in the ingestion process, which may impact the analysis [73]. Data lineage and provenance tracking are important to support the audit trail and documentation of evidence to ensure transparency and traceability of the data [77]. Access and security controls are also implemented in this layer to safeguard confidential audit and client data, where data governance policies set rules about data ownership, retention, and use. This governance structure ensures that data are handled in a consistent, reliable, and secure manner, which builds trust in the entire audit process [79].
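A minimal sketch of the ingestion-time quality checks this layer performs is shown below; the column names, rules, and thresholds are illustrative assumptions rather than a prescribed schema.

```python
# Minimal sketch of Layer 1 ingestion-time data quality checks.
# Column names and validity rules are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame, key_column: str, amount_column: str) -> dict:
    """Return simple completeness, uniqueness, and validity indicators."""
    return {
        "row_count": len(df),
        "missing_rate": df.isna().mean().round(3).to_dict(),        # completeness per column
        "duplicate_keys": int(df[key_column].duplicated().sum()),   # uniqueness of record key
        "nonpositive_amounts": int((df[amount_column] <= 0).sum()), # basic validity rule
    }

ledger = pd.DataFrame({
    "entry_id": ["JE1", "JE2", "JE2", "JE4"],
    "amount": [100.0, -20.0, 50.0, None],
    "posted_by": ["a.kim", "a.kim", None, "b.lee"],
})

report = quality_report(ledger, key_column="entry_id", amount_column="amount")
print(report)  # flagged issues would be logged with lineage metadata before analysis
```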
Layer 2: Feature Engineering and AI Model Development
The second layer, Feature Engineering and AI Model Development, converts raw data into actionable insights by building AI models [25,38,53]. This layer includes the workflows responsible for data transformation, feature engineering, and model training, which are essential for turning raw data into predictions and classifications. Key components include feature engineering pipelines that compute derived variables, aggregate statistics, and temporal features from raw data [46]. Machine learning models, both supervised and unsupervised, are then trained on historical labeled data or used for pattern discovery. Model architecture selection and hyperparameter tuning are performed to optimize performance and ensure that the models provide accurate results. Model validation and back-testing are important to ensure that models perform well on holdout data or historical test cases [75]. Continuous monitoring is implemented to detect possible degradation of performance (model drift) or data drift over time. Moreover, explainability and interpretability mechanisms, such as SHAP values or Local Interpretable Model-agnostic Explanations (LIME), are incorporated so that auditors can understand why specific predictions are made, which plays an essential role in making AI-driven decisions transparent.
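The sketch below illustrates this layer's feature engineering and explainability hooks on synthetic data; permutation importance is used here as a simple stand-in for the SHAP or LIME explanations mentioned above, and all feature names and data are assumptions.

```python
# Sketch of Layer 2: derive features, train a risk model, attach an explanation.
# Synthetic data; permutation importance stands in for SHAP/LIME here.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
raw = pd.DataFrame({
    "amount": rng.lognormal(8, 1, 500),
    "posting_hour": rng.integers(0, 24, 500),
    "manual_entry": rng.integers(0, 2, 500),
    "misstated": rng.integers(0, 2, 500),  # historical labels (synthetic here)
})

# Feature engineering: derived variables used by the risk model
raw["log_amount"] = np.log1p(raw["amount"])
raw["off_hours"] = ((raw["posting_hour"] < 6) | (raw["posting_hour"] > 20)).astype(int)

features = ["log_amount", "off_hours", "manual_entry"]
model = GradientBoostingClassifier(random_state=0).fit(raw[features], raw["misstated"])

# Explainability hook: which engineered features drive the model's predictions?
imp = permutation_importance(model, raw[features], raw["misstated"], n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```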
Layer 3: Orchestration and Intelligent Automation
The third layer, Orchestration and Intelligent Automation, ensures the smooth execution of AI models and the automated processes that carry out audit procedures [45,55]. This layer coordinates the execution of robotic process automation (RPA) bots, workflow engines, and API-based services that trigger AI models and automation throughout the audit process [59]. Its components include a workflow engine that orchestrates multi-step audit procedures so that data extraction, model inference, result interpretation, and escalation are coordinated seamlessly [81]. RPA bots automate routine tasks, enhance efficiency, and reduce manual errors. An event-driven architecture initiates AI models at particular points in the audit process, such as when new transactions are added or at the beginning of substantive testing. Exception handling and escalation rules pass high-priority items or unexpected results to human reviewers for further investigation. Audit trail logging is also a key element of this layer, ensuring that all automated actions and AI decisions are recorded for future review, supporting transparency and accountability.
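A schematic sketch of the event-driven orchestration pattern is given below; the event names, handlers, and payload fields are hypothetical, and a production system would delegate to governed model services and RPA bots rather than in-process functions.

```python
# Schematic event-driven orchestration with audit trail logging.
# Event names, handlers, and payloads are hypothetical.
from typing import Callable, Dict, List

class AuditOrchestrator:
    def __init__(self) -> None:
        self.handlers: Dict[str, List[Callable[[dict], None]]] = {}
        self.audit_trail: List[str] = []

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, payload: dict) -> None:
        self.audit_trail.append(f"event={event} payload={payload}")  # audit trail logging
        for handler in self.handlers.get(event, []):
            handler(payload)

def run_anomaly_model(payload: dict) -> None:
    # placeholder for invoking a deployed model service
    print(f"Scoring batch {payload['batch_id']} for anomalies")

def escalate_exception(payload: dict) -> None:
    print(f"Routing {payload['item_id']} to human reviewer: {payload['reason']}")

orchestrator = AuditOrchestrator()
orchestrator.on("new_transactions_loaded", run_anomaly_model)
orchestrator.on("exception_raised", escalate_exception)

orchestrator.emit("new_transactions_loaded", {"batch_id": "2025-06-30"})
orchestrator.emit("exception_raised", {"item_id": "JE-104", "reason": "risk score 0.93"})
print(orchestrator.audit_trail)
```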
Layer 4: Application and User Interface
The fourth layer, Application and User Interface, focuses on providing auditors with tools to interact with AI outputs, supporting in-depth investigation and learning [32]. This layer provides dashboards and visualizations, including risk heat maps, anomaly profiles, control status, and key findings, to inform auditors and decision-makers [33,36]. Workpaper systems integrate AI-generated summaries, alerts, and evidence directly into the audit documentation process, streamlining documentation. Explainability interfaces help auditors understand why particular transactions or accounts were flagged by presenting model confidence scores and the factors that contributed to each flag. Feedback mechanisms are also built into this layer, allowing auditors to annotate examples, mark false positives or false negatives, and flag cases for model retraining [50,72]. These interfaces help ensure that auditors can interact effectively with AI models to improve the efficiency and accuracy of the audit process.
Layer 5: Governance, Compliance, and Security
The Governance, Compliance, and Security layer provides oversight and control mechanisms across the operation of AI systems to ensure that they operate reliably and ethically and remain compliant with the relevant audit standards and regulations [2,5,6]. This includes model governance policies that span the life cycle of AI models from development and validation to deployment and retirement. Change management and version control processes are implemented to track changes in the models and ensure that they remain reliable over time [21,23]. Performance monitoring and KPI dashboards are implemented to monitor model accuracy, coverage, and compliance with service level agreements (SLAs). Bias and fairness assessments are also carried out to detect and mitigate any discriminatory outcomes from the AI models [39]. Regulatory compliance checks are built into this layer to ensure that industry standards (such as ISA, PCAOB, and SEC regulations) and data protection laws (such as the GDPR) are followed. Security controls are also vital, including access controls, data encryption, authentication, and data protection. Incident response procedures are defined for handling model failures, data breaches, or anomalous AI behavior to preserve the integrity of the audit process [40,80]. Finally, documentation and evidence management practices are established to meet audit quality standards and regulatory expectations, ensuring that all AI-driven decisions are traceable and verifiable.
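The sketch below illustrates one element of this layer, a minimal model registry that records versions, validation evidence, and status changes so that deployments and retirements remain traceable; the record fields are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of model governance: a registry with traceable status changes.
# Field names and workpaper references are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ModelRecord:
    name: str
    version: str
    validated_on: date
    validation_evidence: str          # e.g. link to a back-testing workpaper
    status: str = "registered"        # registered -> deployed -> retired
    history: List[str] = field(default_factory=list)

    def change_status(self, new_status: str, reason: str) -> None:
        self.history.append(f"{self.status} -> {new_status} ({reason})")
        self.status = new_status

registry: List[ModelRecord] = []
m = ModelRecord("journal_entry_anomaly", "1.3.0", date(2025, 6, 1), "WP-AI-014")
registry.append(m)
m.change_status("deployed", "validation approved by audit methodology group")
m.change_status("retired", "superseded by version 1.4.0 after drift review")
print(m.history)
```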
Each of these five layers works together to create a strong AI-enabled audit system that will not only improve the efficiency and reliability of the audit process but also ensure compliance and transparency.

6.2. Human-In-The-Loop Design and Professional Judgment

A common theme in the literature is the emphasis on maintaining human judgment, auditor responsibility, and professional skepticism in audits that use AI-enabled tools. To ensure that these elements are maintained, reference architectures for AI-assisted auditing should account for human-in-the-loop (HITL) designs, in which auditors are actively involved in major decision-making processes. The following are the critical aspects of these frameworks:
  • Review and Interpret AI Outputs: Auditors must review flagged transactions, anomaly explanations, and risk scores from AI before drawing conclusions, retaining responsibility for professional judgment and final decisions [2,6,25,36].
  • Override AI Recommendations: Where AI recommendations conflict with auditor judgment grounded in other relevant information, auditors may decline to accept the AI conclusions and must document the reasons for their own conclusions for accountability [2,5,39].
  • Provide Feedback for Model Improvement: Auditors can provide feedback to improve AI models, such as annotated examples and marked misclassifications, and can work with data scientists to improve model behavior and accuracy [16,61].
  • Maintain Zones of Judgment: Some audit tasks remain purely manual and require full professional judgment because they demand deep contextual understanding or skepticism, or are otherwise unsuited to automation [21,36,39].
The reference architecture, as shown in Figure 3, should make clear which tasks are automated (rules-based, low risk), augmented (AI recommendations subject to auditor validation), and manual (requiring the auditor’s professional judgment), and should ensure accountability while avoiding excessive reliance on AI.
Human-in-the-loop mechanisms provide explicit boundaries between automated analysis and professional judgement. AI systems produce risk scores, flags of anomalies, and summaries, but auditors maintain the power of acceptance, override, and documentation of conclusions. All overrides must be logged with a rationale to retain accountability. Tasks that involve reasoning about a context, ethical assessment, or determination of materiality are still not automated. This allocation maintains adherence to auditing standards, with the potential to build AI assistance on a scalable basis.
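A minimal sketch of such a human-in-the-loop decision record is shown below: the AI flag is advisory, the auditor's conclusion is authoritative, and overrides are logged with a rationale. The structure and field names are assumptions for illustration.

```python
# Sketch of a HITL decision record with override logging.
# Structure and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class HitlDecision:
    item_id: str
    ai_flag: bool
    ai_rationale: str
    auditor_conclusion: str            # "confirmed", "dismissed", or "needs_more_evidence"
    override_reason: Optional[str] = None

    def is_override(self) -> bool:
        # An override occurs when the auditor's conclusion contradicts the AI flag
        return (self.ai_flag and self.auditor_conclusion == "dismissed") or (
            not self.ai_flag and self.auditor_conclusion == "confirmed"
        )

    def log_entry(self) -> str:
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        note = f" OVERRIDE: {self.override_reason}" if self.is_override() else ""
        return f"{stamp} {self.item_id} ai_flag={self.ai_flag} conclusion={self.auditor_conclusion}{note}"

decision = HitlDecision("JE-104", ai_flag=True, ai_rationale="unusual weekend posting",
                        auditor_conclusion="dismissed",
                        override_reason="period-end adjustment approved by controller")
print(decision.log_entry())
```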

7. Challenges and Implementation Barriers

The implementation of artificial intelligence (AI) in auditing can deliver significant benefits. However, its practical application is restricted by several interrelated challenges. These barriers span the availability and quality of data, technical limitations of models, organizational and human factors, and regulatory and governance considerations. Understanding these challenges is crucial for developing realistic, responsible, and effective AI-enabled audit frameworks. Table 5 presents the key challenges and barriers to implementation.

7.1. Data-Related Challenges

Data quality and completeness remain fundamental issues in AI-driven auditing. Machine learning models are sensitive to the quality of the input data, and problems such as incomplete records, missing values, inconsistent data definitions across systems, and incorrect data entries can significantly degrade model performance and introduce systematic bias [83,84]. Many organizations lack mature data governance structures, standardized data taxonomies, and effective data stewardship practices, making it difficult to achieve the level of data reliability needed for AI applications in auditing [85,86,87].
Another important issue is the scarcity of labeled data in this field. Supervised machine learning methods for detecting fraud or material misstatements rely on labeled examples of confirmed fraud or error. In practice, such data are scarce, highly imbalanced, and often skewed, because most transactions are legitimate and only a small percentage of fraud cases are identified and confirmed [87,88]. This scarcity limits model training, validation, and performance benchmarking, thereby limiting trust in AI results [89].
System integration and data access further complicate AI deployment. Many audit clients have fragmented IT environments with legacy systems, siloed databases, and poor integration of financial, operational, and transactional data sources [89,90]. Restricted data access, inconsistent data formats, and manual data extraction processes add considerable effort to data preparation, compromising the scalability and efficiency of AI-based audit procedures [90,91].

7.2. Model Challenges and Technical Challenges

A significant technical challenge is model explainability and interpretability. Advanced AI models, especially deep neural networks and ensemble techniques, can act as “black boxes” that produce accurate predictions without transparent reasoning [91,92]. In audit settings, where auditors, regulators, and stakeholders require transparent justification for flagged transactions or assessed risk levels, this opacity undermines trust and complicates audit documentation and review [92,93].
Monitoring of model performance and drift is a further concern. AI models trained on past data can suffer performance degradation when the underlying data distributions shift owing to changes in business practices, economic conditions, or fraud tactics [93]. Continuous monitoring, drift detection mechanisms, and structured retraining processes are therefore necessary; however, many organizations do not yet possess the infrastructure or governance maturity to operate them reliably [94,95].
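One common way to operationalize drift detection is the population stability index (PSI), sketched below for a single feature; the synthetic distributions and the 0.25 alert threshold are illustrative conventions, not requirements of any auditing standard.

```python
# Illustrative drift check: population stability index (PSI) for one feature.
# The distributions and thresholds below are assumptions for demonstration.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], actual.min())          # extend edges to cover the new data
    edges[-1] = max(edges[-1], actual.max())
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)              # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
training_amounts = rng.lognormal(8.0, 1.0, 10_000)   # distribution the model was trained on
current_amounts = rng.lognormal(8.6, 1.2, 10_000)    # shifted distribution in the new period

psi = population_stability_index(training_amounts, current_amounts)
print(f"PSI = {psi:.3f}")  # values above ~0.25 are often taken to warrant review and possible retraining
```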
Additionally, generalization across contexts is often poor. Models trained on data from one organization, industry, or audit domain usually do not transfer well to other environments [95], which impedes the reusability of AI solutions and increases the cost and effort of customizing them for a specific setting. Concerns about robustness and adversarial manipulation make deployment even more challenging, as fraudsters can intentionally change their behavior to bypass AI-based detection systems [96].

7.3. Organizational and Human Issues

Beyond technical issues, organizational factors and change management are important for AI adoption. Successful implementation requires cultural change, redesigned audit processes, and long-term investment in infrastructure and talent, all of which are difficult for many audit organizations to achieve [55,61,82]. Resistance from audit professionals, fear of job replacement, and reliance on established methods can slow or block adoption [83,85,88].
Skill and competency gaps also constitute major barriers. Effective AI-enabled auditing requires close cooperation between auditors, data scientists, and IT professionals; however, many audit firms lack in-house data science expertise or structured programs to make auditors AI-literate and able to evaluate AI outputs critically [83,91,92]. Developing auditors who can exploit the benefits of AI tools while maintaining professional skepticism is a significant training challenge [91,92,94].
Furthermore, interoperability with existing audit methods remains a complex task. AI tools must be consistent with existing audit standards, workpaper systems, and quality control processes. This requires updating audit guidance, specifying appropriate use cases, and formally incorporating AI-assisted procedures into audit methodologies.

7.4. Regulatory, Compliance, and Governance Issues

Regulatory expectations and guidance regarding the use of AI in auditing are evolving but are currently fragmented. Regulators and standard-setters have started paying more attention to audits with the help of artificial intelligence, although there is uncertainty about what audit evidence is acceptable, how trust in AI should be recorded, and what validation criteria will be used [82,86].
Issues of professional responsibility and accountability are closely related. When AI systems produce biased or erroneous outputs, it is unclear whether the auditor, audit firm, or AI vendor is responsible [94,95]. Existing professional standards and liability frameworks have not been fully developed to address AI-based decision-making.
Ethical considerations and data protection add to the complexity of implementation. AI models can reproduce historical biases embedded in their training data, leading to discriminatory outcomes that compromise fairness and trust [95,96]. Moreover, the use of sensitive financial and personal data requires strict adherence to data protection rules (the GDPR and local privacy laws) [88,92].

7.5. AI Governance and Quality Assurance

Proper model governance and lifecycle management are essential for the effective implementation of AI in auditing. Organizations need to document model assumptions, track versions, keep models up to date, and decommission obsolete models to avoid uncontrolled “model sprawl” and to ensure accountability [93,95,97]. However, many audit organizations do not yet have mature governance frameworks to support these activities [87,98].
Finally, validation, testing, and incident management are crucial for audit quality. AI systems must be rigorously validated, through back-testing, sensitivity analysis, and stress testing against adverse conditions, to comply with professional audit standards [80,84,99,100]. Continuous monitoring and incident response procedures should be implemented to identify failures, data-related problems, or unexpected behaviors and to sustain trust in AI-aided auditing [91,93].
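A minimal back-testing sketch is shown below: the model is trained on earlier periods and evaluated on later ones, which is one way the validation evidence described above can be produced; the synthetic data and period split are assumptions for illustration.

```python
# Minimal back-testing sketch: train on earlier periods, evaluate on later ones.
# Data, labels, and the period split are synthetic assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2_000
data = pd.DataFrame({
    "period": np.repeat(["2022", "2023", "2024"], n // 3 + 1)[:n],
    "log_amount": rng.normal(8, 1, n),
    "off_hours": rng.integers(0, 2, n),
})
# Synthetic misstatement label loosely related to the features
data["misstated"] = (rng.random(n) < 0.05 + 0.05 * data["off_hours"]).astype(int)

features = ["log_amount", "off_hours"]
for test_period in ["2023", "2024"]:
    train = data[data["period"] < test_period]
    test = data[data["period"] == test_period]
    model = LogisticRegression().fit(train[features], train["misstated"])
    auc = roc_auc_score(test["misstated"], model.predict_proba(test[features])[:, 1])
    print(f"back-test on {test_period}: AUC = {auc:.2f}")
```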

8. Research Gaps

The existing literature on AI in auditing reveals several notable gaps, particularly concerning real-world impact and comparisons of AI methodologies. The lack of longitudinal studies on the long-term consequences of AI tools for audit quality, efficiency, and judgment is a major concern; because many studies are confined to short-term, controlled environments, the sustained impact of AI remains poorly understood. There is also scarce research comparing the effectiveness of different AI approaches, such as different anomaly detection algorithms and natural language processing methods, across audit tasks. The organizational and behavioral aspects of AI adoption, such as how auditors interact with AI recommendations, are also underexplored. Moreover, the lack of regulatory and standardization frameworks acts as a barrier to AI integration, and audit standard-setting bodies need to create guidelines for AI validation and documentation to overcome it.

9. Conclusion and Future Work

This research synthesizes ten years of studies on AI-assisted auditing and offers a systems-level view of how AI can be used in the audit process. By relating AI methods to the different phases of the audit process and formulating a proposed reference architecture grounded in empirical evidence, this study goes beyond isolated cases to propose a holistic design for audit systems. The findings reveal that the efficacy of AI in auditing depends far more on governance, information standards, and human supervision than on algorithms alone. The proposed reference architecture includes important components such as data integration, feature engineering, and human-in-the-loop supervision, and positions them as key to the responsible adoption of AI in auditing. Successful integration of AI requires overcoming technical, organizational, regulatory, and governance hurdles. To achieve this, organizations must prioritize building robust data governance foundations, implementing comprehensive AI governance frameworks, and providing auditors with targeted AI training. Upskilling auditors so that they can use AI tools effectively is essential for seamless integration and for improved audit quality and efficiency.
Therefore, future studies must address the gaps presented in this study to enable further development and implementation of AI in auditing. One area of future work is Explainable AI (XAI), which is essential for building trust among auditors and stakeholders while preserving high levels of accuracy. Techniques such as few-shot learning, transfer learning, and federated learning should be investigated to make AI models more flexible and to protect data privacy; these methods would help AI systems perform well with limited data and in decentralized environments. In addition, incorporating reinforcement learning and causal inference into audit processes could improve decision-making and process optimization, resulting in more efficient audits. This study also calls for the development of dedicated governance frameworks for AI in auditing, aligned with professional standards and ethical guidelines, to ensure the transparency and fairness of AI systems. Furthermore, there is a need to develop AI literacy among auditors and to amend professional responsibility frameworks to facilitate the ethical, legal, and effective deployment of AI. Future studies should also pursue longitudinal research on the long-term effects of AI on auditing, comparative research to determine which AI methods work best, and the development of more advanced XAI methods. By targeting these areas, AI has the potential to contribute significantly to improving the quality and efficiency of audits and to add value for stakeholders in a data-rich, digital business context.

Author Contributions

Conceptualization, A.A.; methodology, A.A. and M.O.A.; validation, A.A. and M.O.A.; formal analysis, A.A. and M.O.A.; investigation, A.A. and M.O.A.; writing—original draft preparation, A.A. and M.O.A.; writing—review and editing, A.A. and M.O.A.; visualization, A.A.; supervision, M.O.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

All figures and tables were created by the authors; no GenAI was used. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI: Artificial Intelligence
ML: Machine Learning
NLP: Natural Language Processing
RPA: Robotic Process Automation
HITL: Human-in-the-Loop
XAI: Explainable AI
ERP: Enterprise Resource Planning
RPA Bots: Robotic Process Automation Bots
CLV: Customer Lifetime Value
SLAs: Service Level Agreements
GDPR: General Data Protection Regulation
PCAOB: Public Company Accounting Oversight Board
IAASB: International Auditing and Assurance Standards Board
LSTM: Long Short-Term Memory
XGBoost: Extreme Gradient Boosting
RF: Random Forest
Autoencoders: A type of artificial neural network used for unsupervised learning
KPI: Key Performance Indicator
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
AML: Anti-Money Laundering

References

  1. Adelakun, B.O.; Fatogun, D.T.; Majekodunmi, T.G.; Adediran, G.A. Integrating machine learning algorithms into audit processes: Benefits and challenges. Finance Account. Res. J. 2024, 6, 1000–1016. [Google Scholar] [CrossRef]
  2. Kokina, J.; Blanchette, S.; Davenport, T.H.; Pachamanova, D. Challenges and opportunities for artificial intelligence in auditing: Evidence from the field. Int. J. Account. Inf. Syst. 2025, 56, 100734. [Google Scholar] [CrossRef]
  3. Pérez-Calderón, E.; Alrahamneh, S.A.; Montero, P.M. Impact of artificial intelligence on auditing: an evaluation from the profession in Jordan. Discov. Sustain. 2025, 6, 1–18. [Google Scholar] [CrossRef]
  4. Chen, S.; Yang, J. Intelligent manufacturing, auditor selection and audit quality. Manag. Decis. 2024, 63, 964–997. [Google Scholar] [CrossRef]
  5. Al-Omush, A.; Almasarwah, A.; Al-Wreikat, A. Artificial intelligence in financial auditing: redefining accuracy and transparency in assurance services. EDPACS 2025, 70, 1–20. [Google Scholar] [CrossRef]
  6. Leocádio, D.; Reis, J.; Malheiro, L. Trends and Challenges in Auditing and Internal Control: A Systematic Literature Review. 2024. [Google Scholar] [CrossRef]
  7. Almufadda, G.; Almezeini, N.A. Artificial Intelligence Applications in the Auditing Profession: A Literature Review. J. Emerg. Technol. Account. 2021, 19, 29–42. [Google Scholar] [CrossRef]
  8. Saly, J.; Ektik, D. Auditors’ opinion about AI and the impact of AI on audit quality: A study on qualified auditors in Africa. Siirt Sosyal Araştırmalar Dergisi 2025, 4, 1–17. [Google Scholar]
  9. Ham, C.; Hann, R.N.; Rabier, M.; Wang, W. Auditor Skill Demands and Audit Quality: Evidence from Job Postings. Manag. Sci. 2025, 71, 5805–5829. [Google Scholar] [CrossRef]
  10. Shazly, M.A.; AbdElAlim, K.; Zakaria, H. The Impact of Artificial Intelligence on Audit Quality. In Technological Horizons; Emerald Publishing Limited, 2025; pp. 1–10. [Google Scholar] [CrossRef]
  11. Khayoon, D.F.; Hawi, H.T.; Abdullh, L.Q. The Impact of Using Artificial Intelligence Technology (Expert Systems) on Audit Quality. In Proceedings of the 2025 XXVIII International Conference on Soft Computing and Measurements (SCM), May 2025; IEEE; pp. 266–273. [Google Scholar] [CrossRef]
  12. Awad, K.A.; Ali, W.A.M. Utilizing Robotic Process Automation and Artificial Intelligence in Auditing to Mitigate Audit Risks. Tech. Soc. Sci. J. 2024, 66, 1–14. [Google Scholar] [CrossRef]
  13. Afroze, D.; Aulad, A. Perception of Professional Accountants about the Application of Artificial Intelligence (AI) in Auditing Industry of Bangladesh. J. Soc. Econ. Res. 2020, 7, 51–61. [Google Scholar] [CrossRef]
  14. Chen, X.; Guo, Y. Does the executive pay gap affect audit risk? BCP Bus. Manag. 2023, 49, 308–317. [Google Scholar] [CrossRef]
  15. Suyono, W.P.; Puspa, E.S.; Anugrah, S.; Firnanda, R. Artificial Intelligence in Auditing: A Systematic Review of Tools, Applications, and Challenges. RIGGS: J. Artif. Intell. Digit. Bus. 2025, 4, 3393–3401. [Google Scholar] [CrossRef]
  16. X.Z., T; Yuan, X.C. Machine learning based enterprise financial audit framework and high-risk identification. 2025. [Google Scholar]
  17. Saidi, J.; Sari, R.N.; Hendriani, S.; Machasin, M.; Savitri, E.; Efni, Y.; Iznillah, M.L. The Role of Audit Report Readability in Linking Management Characteristics to Corporate Sustainability Performance: A Study of Indonesian SOEs. Qubahan Acad. J. 2025, 5, 61–83. [Google Scholar] [CrossRef]
  18. Abu Huson, Y.; García, L.S.; Benau, M.A.G.; Aljawarneh, N.M. Cloud-based artificial intelligence and audit report: the mediating role of the auditor. VINE J. Inf. Knowl. Manag. Syst. 2025, 55, 1553–1574. [Google Scholar] [CrossRef]
  19. Abu Huson, Y.; Sierra-García, L.; Garcia-Benau, M.A. A bibliometric review of information technology, artificial intelligence, and blockchain on auditing. Total. Qual. Manag. Bus. Excel. 2023, 35, 91–113. [Google Scholar] [CrossRef]
  20. Alareeni, B.; Hamdan, A. The Impact of Artificial Intelligence on Accounting and Auditing in Light of the COVID-19 Pandemic. 2022; 3–7. [Google Scholar] [CrossRef]
  21. Albawwat, I.; Al Frijat, Y. An analysis of auditors’ perceptions towards artificial intelligence and its contribution to audit quality. Accounting 2021, 755–762. [Google Scholar] [CrossRef]
  22. Aslan, L. The evolving competencies of the public auditor and the future of public sector auditing. Contemporary Studies in Economic and Financial Analysis 2021, 113–129. [Google Scholar]
  23. Benhayoun, I.; Bougrine, S.; Sassioui, A. Readiness for artificial intelligence adoption by auditors in emerging countries – a PLS-SEM analysis of Moroccan firms. J. Financial Rep. Account. 2025, 23, 1486–1508. [Google Scholar] [CrossRef]
  24. Choi, S.U.; Lee, K.C.; Na, H.J. Exploring the deep neural network model’s potential to estimate abnormal audit fees. Manag. Decis. 2022, 60, 3304–3323. [Google Scholar] [CrossRef]
  25. Fedyk, A.; Hodson, J.; Khimich, N.; Fedyk, T. Is artificial intelligence improving the audit process? Rev. Account. Stud. 2022, 27, 938–985. [Google Scholar] [CrossRef]
  26. Goryunova, T.; Goryunova, V.; Kukhtevich, I. Modeling of Complexly Structured Reporting Forms and Requests in the Tasks of Automated Provision of Public Services. In Proceedings of the 2020 2nd International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA), Nov. 2020; IEEE; pp. 318–322. [Google Scholar] [CrossRef]
  27. Hu, K.-H.; Chen, F.-H.; Hsu, M.-F.; Tzeng, G.-H. Identifying key Factors for Adopting Artificial Intelligence-Enabled Auditing Techniques by Joint Utilization of Fuzzy-Rough Set Theory and MRDM Technique. Technol. Econ. Dev. Econ. 2020, 27, 459–492. [Google Scholar] [CrossRef]
  28. Manita, R.; Elommal, N.; Baudier, P.; Hikkerova, L. The digital transformation of external audit and its impact on corporate governance. Technol. Forecast. Soc. Chang. 2020, 150, 119751. [Google Scholar] [CrossRef]
  29. Melnychenko, O. Application of artificial intelligence in control systems of economic activity. Virtual Econ. 2019, 2, 30–40. [Google Scholar] [CrossRef] [PubMed]
  30. Noordin, N.A.; Hussainey, K.; Hayek, A.F. The Use of Artificial Intelligence and Audit Quality: An Analysis from the Perspectives of External Auditors in the UAE. J. Risk Financial Manag. 2022, 15, 339. [Google Scholar] [CrossRef]
  31. Rahman, J.; Ziru, A. Clients’ digitalization, audit firms’ digital expertise, and audit quality: evidence from China. Int. J. Account. Inf. Manag. 2022, 31, 221–246. [Google Scholar] [CrossRef]
  32. Sachan, S.; Almaghrabi, F.; Yang, J.-B.; Xu, D.-L. Human-AI collaboration to mitigate decision noise in financial underwriting: A study on FinTech innovation in a lending firm. Int. Rev. Financial Anal. 2024, 93, 103149. [Google Scholar] [CrossRef]
  33. Sun, T.; Vasarhelyi, M. Embracing Textual Data Analytics in Auditing with Deep Learning. Int. J. Digit. Account. Res. 2018, 49–67. [Google Scholar] [CrossRef] [PubMed]
  34. Wijaya, J.R.T.; Prasetyo, I.; Rahmatika, D.N.; Indriasih, D. Artificial Intelligence and Audit Quality: An Empirical Literature Review from Scopus Database. Fokus Èkon. J. Ilm. Èkon. 2025, 20, 61–76. [Google Scholar] [CrossRef]
  35. Özbaltan, N. Applying Machine Learning to Audit Data: Enhancing Fraud Detection, Risk Assessment and Audit Efficiency. EDPACS 2024, 69, 70–86. [Google Scholar] [CrossRef]
  36. Puthukulam, G.; Ravikumar, A.; Sharma, R.V.K.; Meesaala, K.M. Auditors' Perception on the Impact of Artificial Intelligence on Professional Skepticism and Judgment in Oman. Univers. J. Account. Finance 2021, 9, 1184–1190. [Google Scholar] [CrossRef]
  37. Boskou, G.; Kirkos, E.; Spathis, C. Classifying internal audit quality using textual analysis: the case of auditor selection. Manag. Audit. J. 2019, 34, 924–950. [Google Scholar] [CrossRef]
  38. Sun, T. Applying Deep Learning to Audit Procedures: An Illustrative Framework. Account. Horizons 2019, 33, 89–109. [Google Scholar] [CrossRef]
  39. Tiron-Tudor, D.D. Reflections on the human-algorithm complex duality perspectives in the auditing process. Qualitative Research in Accounting & Management 2021, 19. [Google Scholar]
  40. Zemankova, A. Artificial Intelligence in Audit and Accounting: Development, Current Trends, Opportunities and Threats - Literature Review. In Proceedings of the 2019 International Conference on Control, Artificial Intelligence, Robotics & Optimization (ICCAIRO), May 2019; IEEE; pp. 148–154. [Google Scholar] [CrossRef]
  41. Alles, M.; Gray, G.L. Incorporating big data in audits: Identifying inhibitors and a research agenda to address those inhibitors. Int. J. Account. Inf. Syst. 2016, 22, 44–59. [Google Scholar] [CrossRef]
  42. Tang, J.; Karim, K.E. Financial fraud detection and big data analytics – implications on auditors’ use of fraud brainstorming session. Manag. Audit. J. 2019, 34, 324–337. [Google Scholar] [CrossRef]
  43. Luo, J.; Hu, Z.; Wang, L. Research on CPA Auditing Reform Strategy Under the Background of Artificial Intelligence. In Proceedings of the 2018 2nd International Conference on Management, Education and Social Science (ICMESS 2018); Atlantis Press: Paris, France, 2018. [Google Scholar] [CrossRef]
  44. Kunwar, M. Artificial intelligence in finance : Understanding how automation and machine learning is transforming the financial industry. Theseus 2019. [Google Scholar]
  45. Byrnes, P.E.; et al. Evolution of Auditing: From the Traditional Approach to the Future Audit. In Continuous Auditing; Emerald Publishing Limited, 2018; pp. 285–297. [Google Scholar] [CrossRef]
  46. Omar, N.; Johari, Z.A.; Smith, M. Predicting fraudulent financial reporting using artificial neural network. J. Financ. Crime 2017, 24, 362–387. [Google Scholar] [CrossRef]
  47. Mihret, D.G.; Grant, B. The role of internal auditing in corporate governance: a Foucauldian analysis. Accounting, Audit. Account. J. 2017, 30, 699–719. [Google Scholar] [CrossRef]
  48. Nurhidayati, F.; Sensuse, D.I.; Noprisson, H. Factors influencing accounting information system implementation. 2017 International Conference on Information Technology Systems and Innovation (ICITSI), Oct. 2017; IEEE; pp. 279–284. [Google Scholar] [CrossRef]
  49. Stergiaki, E.; Vazakidis, A.; Stavropoulos, A. Development and Evaluation of a Prototype web XBRL-Enabled Financial Platform for the Generation and Presentation of Financial Statements according to IFRS. Int. J. Account. Tax. 2015, 3. [Google Scholar] [CrossRef]
  50. Cunningham, L.M.; Stein, S.E. Using Visualization Software in the Audit of Revenue Transactions to Identify Anomalies. Issues Account. Educ. 2018, 33, 33–46. [Google Scholar] [CrossRef]
  51. Torfi, A.; Shirvani, R.A.; Keneshloo, Y.; Tavaf, N.; Fox, E.A. Natural language processing advancements by deep learning: A survey. ArXiv 2020, arXiv:2003.01200. [Google Scholar]
  52. Grandstaff, J.L.; Solsma, L.L. An Analysis of Information Systems Literature: Contributions to Fraud Research. Account. Finance Res. 2019, 8, 219. [Google Scholar] [CrossRef]
  53. Nuritdinovich, M.A.; Bokhodirovna, K.M.; Kavitha, V.O.; Ugli, S.A.O. Advanced AI algorithms in accounting: Redefining accuracy and speed in financial auditing 2025, 050008. [CrossRef]
  54. Ganapathy, V. AI in Auditing: A Comprehensive Review of Applications, Benefits and Challenges. Shodh Sari-An Int. Multidiscip. J. 2023, 02, 328–343. [Google Scholar] [CrossRef]
  55. Ghafar, I.; Perwitasari, W.; Kurnia, R. The Role of Artificial Intelligence in Enhancing Global Internal Audit Efficiency: An Analysis. Asian J. Logist. Manag. 2024, 3, 64–89. [Google Scholar] [CrossRef]
  56. Bolanos, F.; Salatino, A.; Osborne, F.; Motta, E. Artificial intelligence for literature reviews: Opportunities and challenges. ArXiv 2024, arXiv:2402.08565. [Google Scholar] [CrossRef]
  57. Appelbaum, D.; Nehmer, R.A. Auditing Cloud-Based Blockchain Accounting Systems. Journal of Information Systems 2020, 34, 5–21. [Google Scholar] [CrossRef]
  58. Peres, R.S.; Jia, X.; Lee, J.; Sun, K.; Colombo, A.W.; Barata, J. Industrial Artificial Intelligence in Industry 4.0 - Systematic Review, Challenges and Outlook. IEEE Access 2020, 8, 220121–220139. [Google Scholar] [CrossRef]
  59. Patrício, L.; Varela, L.; Silveira, Z. Integration of Artificial Intelligence and Robotic Process Automation: Literature Review and Proposal for a Sustainable Model. Appl. Sci. 2024, 14, 9648. [Google Scholar] [CrossRef]
  60. Zhong, H.; Yang, D.; Shi, S.; Wei, L.; Wang, Y. From data to insights: the application and challenges of knowledge graphs in intelligent audit. J. Cloud Comput. 2024, 13, 114. [Google Scholar] [CrossRef]
  61. Issa, H.; Sun, T.; Vasarhelyi, M.A. Research Ideas for Artificial Intelligence in Auditing: The Formalization of Audit and Workforce Supplementation. J. Emerg. Technol. Account. 2016, 13, 1–20. [Google Scholar] [CrossRef]
  62. Mamoshina, P.; Ojomoko, L.; Yanovich, Y.; Ostrovski, A.; Botezatu, A.; Prikhodko, P.; Izumchenko, E.; Aliper, A.; Romantsov, K.; Zhebrak, A.; et al. Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare. Oncotarget 2017, 9, 5665–5690. [Google Scholar] [CrossRef] [PubMed]
  63. Saltz, J.S.; Shamshurin, I. Big data team process methodologies: A literature review and the identification of key factors for a project's success. 2016 IEEE International Conference on Big Data (Big Data), Dec. 2016; IEEE; pp. 2872–2879. [Google Scholar] [CrossRef]
  64. Elf, M.; Nordin, S.; Wijk, H.; Mckee, K.J. A systematic review of the psychometric properties of instruments for assessing the quality of the physical environment in healthcare. J. Adv. Nurs. 2017, 73, 2796–2816. [Google Scholar] [CrossRef]
  65. Li, Z.; Zheng, L. The Impact of Artificial Intelligence on Accounting. In Proceedings of the 2018 4th International Conference on Social Science and Higher Education (ICSSHE 2018); Atlantis Press: Paris, France, 2018. [Google Scholar] [CrossRef]
  66. Eulerich, M.; Kalinichenko, A. The Current State and Future Directions of Continuous Auditing Research: An Analysis of the Existing Literature. J. Inf. Syst. 2017, 32, 31–51. [Google Scholar] [CrossRef]
  67. Schroeder, J.H. The Impact of Audit Completeness and Quality on Earnings Announcement GAAP Disclosures. Account. Rev. 2015, 91, 677–705. [Google Scholar] [CrossRef]
  68. Bruno, A.; Lapsley, I. The emergence of an accounting practice. Accounting, Audit. Account. J. 2018, 31, 1045–1066. [Google Scholar] [CrossRef]
  69. Hartmann, B.; Marton, J.; Söderström, R. The Improbability of Fraud in Accounting for Derivatives: A Case Study on the Boundaries of Financial Reporting Compliance. Eur. Account. Rev. 2018, 27, 845–873. [Google Scholar] [CrossRef]
  70. Swan, M. Blockchain for Business: Next-Generation Enterprise Artificial Intelligence Systems. Adv. Comput. 2018, 111, 121–162. [Google Scholar] [CrossRef]
  71. Badmus, O.; Ikumapayi, O.J.; Toromade, R.O.; Adebayo, A.S. Integrating AI-powered knowledge graphs and NLP for intelligent interpretation, summarization, and cross-border financial reporting harmonization. World J. Adv. Res. Rev. 2025, 27, 042–062. [Google Scholar] [CrossRef]
  72. Mitan, J. Enhancing audit quality through artificial intelligence: An external auditing perspective. Accounting Undergraduate Honors Theses, 2024. [Google Scholar]
  73. Dashkevich, N.; Counsell, S.; Destefanis, G. Blockchain Financial Statements: Innovating Financial Reporting, Accounting, and Liquidity Management. Futur. Internet 2024, 16, 244. [Google Scholar] [CrossRef]
  74. Damar, M.; Aydın, Ö.; Özoğuz, E.; Aydın, Ü.; Özen, A. Turkish Court of Accounts: Analyzing Financial Audit, Digitalization, AI Impact. EDPACS 2024, 69, 16–40. [Google Scholar] [CrossRef]
  75. Xiong, F.; Han, Q.; Zhang, C. Performance Improvement for Large Language Models: Retrieval-Augmented Generation AI Agent for Tabular Data Processing in Auditing Procedures. 2025. [Google Scholar] [CrossRef]
  76. Liu, R.; Wang, Y.; Zou, J. Research on the Transformation from Financial Accounting to Management Accounting Based on Drools Rule Engine. Comput. Intell. Neurosci. 2022, 2022, 1–8. [Google Scholar] [CrossRef]
  77. Alkan, B.Ş. How Blockchain and Artificial Intelligence Will Effect the Cloud-Based Accounting Information Systems? 2022; 107–119. [CrossRef]
  78. Zhang, Y.; Xiong, F.; Xie, Y.; Fan, X.; Gu, H. The Impact of Artificial Intelligence and Blockchain on the Accounting Profession. IEEE Access 2020, 8, 110461–110477. [Google Scholar] [CrossRef]
  79. Sarwar, M.I.; Iqbal, M.W.; Alyas, T.; Namoun, A.; Alrehaili, A.; Tufail, A.; Tabassum, N. Data Vaults for Blockchain-Empowered Accounting Information Systems. IEEE Access 2021, 9, 117306–117324. [Google Scholar] [CrossRef]
  80. Cohen, J.R.; Joe, J.R.; Thibodeau, J.C.; Trompeter, G.M. Audit Partners' Judgments and Challenges in the Audits of Internal Control over Financial Reporting. Audit. A J. Pr. Theory 2020, 39, 57–85. [Google Scholar] [CrossRef]
  81. Dai, J.; Vasarhelyi, M.A. Continuous Audit Intelligence as a Service (CAIaaS) and Intelligent App Recommendations. J. Emerg. Technol. Account. 2020, 17, 1–15. [Google Scholar] [CrossRef]
  82. V., N.; Chukwuani, M.A.E. Automation of accounting processes: Impact of artificial intelligence. International Journal of Research and Innovation in Social Science (IJRISS) 2020, 4, 444–449. [Google Scholar]
  83. Ozlanski, M.E.; Negangard, E.M.; Fay, R.G. Kabbage: A Fresh Approach to Understanding Fundamental Auditing Concepts and the Effects of Disruptive Technology. Issues Account. Educ. 2020, 35, 26–38. [Google Scholar] [CrossRef]
  84. Milana, C.; Ashta, A. Artificial intelligence techniques in finance and financial markets: A survey of the literature. Strat. Chang. 2021, 30, 189–209. [Google Scholar] [CrossRef]
  85. Janvrin, D.J.; Mascha, M.F.; Lamboy-Ruiz, M.A. SOX 404(b) Audits: Evidence from Auditing the Financial Close Process of the Accounting System. J. Inf. Syst. 2019, 34, 77–103. [Google Scholar] [CrossRef]
  86. Albu, C.N.; Albu, N.; Hoffmann, S. The Westernisation of a financial reporting enforcement system in an emerging economy. Account. Bus. Res. 2020, 51, 271–297. [Google Scholar] [CrossRef]
  87. Adekunle, B.I.; Chukwuma-Eke, E.C.; Balogun, E.D.; Ogunsola, K.O. Machine Learning for Automation: Developing Data-Driven Solutions for Process Optimization and Accuracy Improvement. Int. J. Multidiscip. Res. Growth Evaluation 2021, 3. [Google Scholar] [CrossRef]
  88. Bonsón, E.; Lavorato, D.; Lamboglia, R.; Mancini, D. Artificial intelligence activities and ethical approaches in leading listed companies in the European Union. Int. J. Account. Inf. Syst. 2021, 43. [Google Scholar] [CrossRef]
  89. King, T.C.; Aggarwal, N.; Taddeo, M.; Floridi, L. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Sci. Eng. Ethics 2019, 26, 89–120. [Google Scholar] [CrossRef] [PubMed]
  90. Gierbl, A.S. Data analytics in external auditing; University of St. Gallen, 2021. [Google Scholar]
  91. Boritz, J.E.; Kochetova, N.V.; Robinson, L.A.; Wong, C. Auditors' and Specialists' Views About the Use of Specialists During an Audit. Behav. Res. Account. 2020, 32, 15–40. [Google Scholar] [CrossRef]
  92. Alonge, E.O.; Eyo-Udo, N.L.; Ubanadu, B.C.; Daraojimba, A.I.; Balogun, E.D.; Ogunsola, K.O. Enhancing Data Security with Machine Learning: A Study on Fraud Detection Algorithms. J. Front. Multidiscip. Res. 2021, 2, 19–31. [Google Scholar] [CrossRef]
  93. Gepp, A.; Kumar, K.; Bhattacharya, S. Lifting the numbers game: identifying key input variables and a best-performing model to detect financial statement fraud. Account. Finance 2020, 61, 4601–4638. [Google Scholar] [CrossRef]
  94. Jabarulla, M.Y.; Lee, H.-N. A Blockchain and Artificial Intelligence-Based, Patient-Centric Healthcare System for Combating the COVID-19 Pandemic: Opportunities and Applications. Healthcare 2021, 9, 1019. [Google Scholar] [CrossRef]
  95. Billah, M.M. Accounting and Auditing Standards for Islamic Financial Institutions; Routledge: London, United Kingdom, 2021. [Google Scholar] [CrossRef]
  96. DeZoort, T.; Doxey, M.; Pollard, T. Root Cause Analysis and Its Effect on Auditors' Judgments and Decisions in an Integrated Audit*. Contemp. Account. Res. 2020, 38, 1204–1230. [Google Scholar] [CrossRef]
  97. Järvinen, J.T. Role of management accounting in applying new institutional logics. Accounting, Audit. Account. J. 2016, 29, 861–886. [Google Scholar] [CrossRef]
  98. Khatri, S.; Alzahrani, F.A.; Ansari, T.J.; Agrawal, A.; Kumar, R.; Khan, R.A. A Systematic Analysis on Blockchain Integration With Healthcare Domain: Scope and Challenges. IEEE Access 2021, 9, 84666–84687. [Google Scholar] [CrossRef]
  99. Nazar, M.; Alam, M.M.; Yafi, E.; Su'UD, M.M. A Systematic Review of Human–Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques. IEEE Access 2021, 9, 153316–153348. [Google Scholar] [CrossRef]
  100. Juluru, K.; Shih, H.-H.; Murthy, K.N.K.; Elnajjar, P.; El-Rowmeim, A.; Roth, C.; Genereaux, B.; Fox, J.; Siegel, E.; Rubin, D.L. Integrating AI Algorithms into the Clinical Workflow. Radiol. Artif. Intell. 2021, 3. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PRISMA Flow Diagram for Study Selection.
Figure 2. AI Techniques by Audit Workflow Stage.
Figure 3. Reference Architecture for AI-Enabled Audit Workflow.
Table 1. Characteristics of Included Studies (N=100).
Characteristic | Frequency | Percentage
Study Type
  Empirical studies | 45 | 45%
  Design-science and development | 28 | 28%
  Conceptual and review | 17 | 17%
  Practice-oriented reports | 10 | 10%
Publication Year
  2015–2017 | 12 | 12%
  2018–2019 | 18 | 18%
  2020–2021 | 31 | 31%
  2022–2025 | 39 | 39%
Audit Context
  Internal audit | 42 | 42%
  External/financial statement audit | 35 | 35%
  Public sector and government audit | 12 | 12%
  Tax and compliance audit | 8 | 8%
  Forensic and fraud audit | 3 | 3%
Geographic Origin
  Europe | 38 | 38%
  North America | 35 | 35%
  Asia-Pacific | 20 | 20%
  Africa/Middle East | 7 | 7%
Primary AI Techniques
  Machine learning | 58 | 58%
  Natural language processing | 31 | 31%
  Robotic process automation | 24 | 24%
  Other (expert systems, computer vision, etc.) | 15 | 15%
Note: Studies may employ multiple techniques; percentages exceed 100%.
Table 2. AI Techniques Mapped to Audit Workflow Stages.
Audit Stage | Primary AI Techniques | Key Use Cases | Number of Studies
Planning & Risk Assessment | ML, predictive analytics, NLP | Entity-level risk scoring, account prioritization, management commentary analysis, and emerging risk identification | 42
Tests of Controls | ML, process mining, RPA, rules engines | Control monitoring, exception detection, segregation-of-duties testing, and continuous control monitoring | 38
Substantive Procedures | ML, deep learning, NLP, computer vision | Transaction anomaly detection, journal entry testing, document matching, invoice verification, and inventory observation | 48
Reporting & Continuous Assurance | Dashboards, ML, predictive models | Risk visualization, anomaly profiles, control status dashboards, and continuous monitoring alerts | 31
Note: Studies may cover several stages; totals exceed 100. A brief code sketch of one substantive-procedure use case follows the table.
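To make the document-matching use case under Substantive Procedures concrete, the sketch below pairs invoice descriptions with candidate purchase orders using TF-IDF cosine similarity. It is only one of many NLP approaches reported in the reviewed studies, and the document texts are fabricated examples.

```python
# Minimal sketch, assuming TF-IDF text similarity as one possible NLP approach
# to invoice-to-purchase-order matching; the strings below are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

invoices = [
    "Consulting services for Q3 ERP migration, 120 hours",
    "Office supplies - paper, toner, and folders",
]
purchase_orders = [
    "PO-1042: ERP migration consulting engagement, Q3",
    "PO-1043: Stationery and printer consumables",
    "PO-1044: Annual software license renewal",
]

# Fit one vocabulary over both document sets, then compare the vector spaces.
vec = TfidfVectorizer().fit(invoices + purchase_orders)
sim = cosine_similarity(vec.transform(invoices), vec.transform(purchase_orders))

# For each invoice, propose the best-matching purchase order for auditor review.
for i, row in enumerate(sim):
    best = row.argmax()
    print(f"Invoice {i} -> {purchase_orders[best]} (similarity {row[best]:.2f})")
```

Low-similarity pairs would be escalated to the auditor rather than auto-matched, consistent with the human-in-the-loop layer of the reference architecture.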
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.