Submitted: 21 January 2026
Posted: 22 January 2026
Abstract
Keywords:
1. Introduction
2. Background
2.1. Cybersecurity
‘Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation’ [2].
‘The organization and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems from occurrences that misalign de jure from de facto property rights’ [1].
2.2. Information System Audit
2.3. Cybersecurity Auditing
2.4. Artificial Intelligence in Cybersecurity
3. Methodology
4. Model Presentation
4.1. High-Level Anti-Sherif AI-Driven Cybersecurity Model
4.1.1. Cybersecurity (Controls)
4.1.2. Conventional Cybersecurity Auditing (Binary Check)
4.1.3. Anti-Sherif Cybersecurity Audit Components
4.2. Detailed Model Representation
4.2.1. Cybersecurity Controls (a Detailed Description)
4.2.2. Cybersecurity Auditing (Binary Check for Compliance)
4.2.3. AI Integration and Expert Judgment in Cybersecurity Auditing
| CMM level | Description | Interpretation in the cybersecurity audit context | Assigned base score (Hᵢ) |
|---|---|---|---|
| Level 1 Initial (ad hoc) | Organizational processes are chaotic and disorganized, and initiatives are conducted haphazardly. | Cybersecurity controls are not defined and depend heavily on manual measures. | 0.2 |
| Level 2 Repeatable/managed | Basic processes are established, but only at an entry level. | Basic cybersecurity controls exist; however, no enforcement processes are implemented. | 0.4 |
| Level 3 Defined | Processes are standardized, documented, and integrated throughout the organizational structure. | Cybersecurity controls are implemented and reasonably enforced across the ICT environment. | 0.6 |
| Level 4 Quantitatively managed | Key performance indicators (KPIs) are defined against the processes for qualitative and quantitative measurement of success factors. | Cybersecurity controls are defined, and the effectiveness of each control can readily be measured using defined metrics. | 0.8 |
| Level 5 Optimizing | Processes are well established, and the organization focuses on continuous improvement. | Most cybersecurity controls are automated and advanced, for example through continuous control monitoring (CCM) and CCA. | 1.0 |
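To make the maturity-to-score mapping above easier to reuse, the following minimal Python sketch encodes the table as a lookup; the dictionary and function names are illustrative assumptions, not part of the published model implementation.

```python
# Minimal sketch (illustrative only): map a CMM maturity level to the assigned
# base score H_i from the table above. Names are assumptions for demonstration.

CMM_BASE_SCORES = {
    1: 0.2,  # Initial (ad hoc): controls undefined, heavily manual
    2: 0.4,  # Repeatable/managed: basic controls, no enforcement
    3: 0.6,  # Defined: controls implemented and reasonably enforced
    4: 0.8,  # Quantitatively managed: effectiveness measured via KPIs/metrics
    5: 1.0,  # Optimizing: automated, continuously monitored controls (CCM/CCA)
}

def human_judgment_score(cmm_level: int) -> float:
    """Return the human-judgment base score H_i for a CMM level between 1 and 5."""
    if cmm_level not in CMM_BASE_SCORES:
        raise ValueError(f"CMM level must be 1-5, got {cmm_level}")
    return CMM_BASE_SCORES[cmm_level]

print(human_judgment_score(3))  # 0.6
```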
4.3. Empirical Research
4.3.1. Fictional Case Scenario
- strengthen business alignment in cybersecurity audits;
- improve accuracy and consistency in audit scoring; and
- retain the application of human judgment.
4.3.2. Experimental Demonstration
- N = 10, the total number of audited cybersecurity controls.
| CVE ID | EPSS (Exploit probability) |
|---|---|
| CVE-2024-23692 | 0.9430 |
| CVE-2014-6287 | 0.94316 |
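The EPSS probabilities listed above are published by FIRST.org and can be retrieved programmatically; the sketch below is a hedged illustration using the public EPSS API endpoint and field names as documented by FIRST, and is not the tooling used in the experiment.

```python
# Illustrative sketch: fetch EPSS exploit probabilities for the CVEs in the table above
# from the public FIRST.org EPSS API. This is not the authors' experimental tooling;
# the endpoint and JSON field names follow FIRST's public documentation.

import requests

def fetch_epss(cve_ids):
    """Return {cve_id: EPSS probability} for the given CVE identifiers."""
    response = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=10,
    )
    response.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in response.json().get("data", [])}

print(fetch_epss(["CVE-2024-23692", "CVE-2014-6287"]))
```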

4.4. Model Critical Evaluation
| Evaluation criteria | Rationale | Result | Explanation |
|---|---|---|---|
| The proposed model should be validated against other relevant models | The proposed model should align logically with international standards and benchmarks, such as CIS, NIST CSF, and ISO 27001. | 🗸 | In the present study, the authors cited international standards, such as the CIS. The cybersecurity controls adopted in the model are derived from the NIST CSF. This was done to ensure that the model is not interpreted as an alternative to, or replacement for, currently recognized cybersecurity standards, but rather as a supplement to them. |
| The proposed model's data sources should be dependable | The data sources used should be reliable and easily accessible to support study repeatability and future research. | – | The data used in the proposed model were obtained from an experimental setup created specifically for the model demonstration. Ideally, a simulated environment with more than one server would provide a more realistic representation of the concept behind the proposed model. |
| The model should be centered around the human judgment-influencing factor | Evaluate the proportionality of human judgment toward the improvement of the model. | 🗸 | The model clearly demonstrated the importance of human-centered AI capabilities, in which the expert's input played a vital role in the final risk score. |
| The model should integrate AI enhancement | Evaluate the extent to which AI factors contribute to model improvement. | – | Although the model integrates AI capabilities through OSINT, more influential factors could be built using real-life AI threat-detection capabilities and user-centric input to enhance the AI insights. |
| The proposed model should be scalable and automated | Examine whether the model can be generalized, expanded, or automated for large-scale audits with a larger dataset. | – | The proposed model automated most of the entailed steps; however, the binary check of cybersecurity control existence was conducted manually. Further automation of the cybersecurity auditing process can be considered. |
| The model should be explainable, interpretable, and transparent | Measures the clarity of model outputs for non-technical stakeholders. | 🗸 | The proposed model was presented in a way that can be understood by both technical and non-technical personnel. |
| The applicability of the proposed model should be tested through empirical validation | Examine the adaptability of the model's outcomes against real-world audit or incident data, reflecting on the adopted case scenario. | 🗸 | A fictional case scenario was used to test and validate the applicability of the proposed model in a realistic setting. |
5. Model Limitations and Future Work
5.1. Study Limitations
5.2. Future Direction
6. Conclusion
7. Disclaimer
Abbreviations
| AI | Artificial intelligence |
| APT | Advanced persistent threats |
| CCM | Continuous control monitoring |
| CIA | Confidentiality, integrity, and availability |
| CIS | Center for Internet Security |
| CISA | Certified Information Systems Auditor |
| CMM | Capability maturity model |
| CSF | Cybersecurity Framework |
| CVE | Common vulnerabilities and exposures |
| DLP | Data loss prevention |
| DR | Disaster recovery |
| DSR | Design science research |
| DSRM | Design science research methodology |
| EDR | Endpoint detection and response |
| EPSS | Exploit prediction scoring system |
| ICT | Information and communication technology |
| IDS | Intrusion detection systems |
| IoT | Internet of Things |
| IPS | Intrusion prevention systems |
| IR | Incident response |
| IS | Information systems |
| ISACA | Information Systems Audit and Control Association |
| KPI | Key performance indicators |
| MFA | Multi-factor authentication |
| MI | Material irregularities |
| NIST | National Institute of Standards and Technology |
| NVD | National Vulnerability Database |
| OSINT | Open-source intelligence |
| PoC | Proof-of-concept |
| POPIA | Protection of Personal Information Act |
| PR | Pull requests |
| RPO | Recovery point objective |
| RTO | Recovery time objective |
| SIEM | Security information and event management |
| SME | Small and medium-sized enterprises |
| SOAR | Security orchestration, automation, and response |
| UEBA | User and entity behavior analytics |
References
1. Craigen, D.; Diakun-Thibault, N.; Purse, R. Defining Cybersecurity. Technol. Innov. Manag. Rev. 2014, 4(10), 13–21.
2. NIST. Cybersecurity. Computer Security Resource Center Glossary. 12 August 2024. Available online: https://csrc.nist.gov/glossary/term/cybersecurity.
3. vom Brocke, J.; Hevner, A.; Maedche, A. Introduction to Design Science Research. 2020, pp. 1–13.
4. Hevner, A.R. Design Science Research. In Computing Handbook, Two-Volume Set; 2022; pp. 1–23.
5. Cai, M.; Yang, J.; Gao, J.; Lee, Y.J. Matryoshka Multimodal Models. 2024. Available online: http://arxiv.org/abs/2405.17430.




| DSR model steps (adopted from [3]) | Description (what the step means) [3,4] | Application in the present study | Output description |
|---|---|---|---|
| Problem identification and motivation | This step provides the foundational motivation for the study by identifying the problem. It presents the gaps and the reason a solution is needed to resolve them. | In the present study, the authors observed an over-reliance on binary checks in the cybersecurity audit landscape. This exercise, i.e., the binary control check, often falls short because it does not address risk at the business-objective level. As a result, the effectiveness and relevance of the cybersecurity audit become questionable to stakeholders, and the audit can be perceived as a policing exercise rather than a risk-oriented process. | Clearly defined cybersecurity audit problem |
| Definition of solution objectives | This step narrows the study objective toward specific end goals and the new artifacts the study aims to contribute to the body of knowledge. | The proposed model introduces a multi-layered cybersecurity audit approach that goes beyond the traditional binary check. These layers include AI-driven insights, human judgment, governance maturity scoring, risk modeling, and audit outcomes aligned with business objectives. | Model functionalities and components |
| Design and development | This is the first main contribution of the study, in which the proposed model is presented as a conceptual model and/or a systematic framework. | The proposed model is designed with components meant to strike a balance between the binary check, AI integration, and human (expert) judgment. | Functional Anti-Sherif Model |
| Demonstration | The practical applicability of the solution is shown through a scientific approach, such as experiments. This step demonstrates the applicability of the solution in practice. | The model is applied to the fictional Nany FinTrust MicroBank to generate governance and business insights. | Demonstrated model solving real cybersecurity audit challenges using a fictional scenario |
| Evaluation | This step ensures that gaps in the proposed solution are identified and resolved. It also guards against the limitations of the proposed solution. | Quantitative evaluation (AI model accuracy, anomaly detection performance); qualitative evaluation by cybersecurity experts and auditors; refinements made based on feedback. | Model critical evaluation outcome confirming effectiveness and areas for improvement |
| Communication | Finally, the output of the study is made available to the intended audience. | Dissertation chapters and conference submissions are published. | Publishable research outputs and communicated findings |
| Function(s) | Cybersecurity audit check | Control example | Standard reference | Binary check question |
|---|---|---|---|---|
| Detect | Compliance | SIEM | NIST CSF: DE.AE-1 (Anomalous activity detected), DE.CM-1 (Monitoring for events); CIS: Control 8 – Audit Log Management; ISO/IEC 27001: A.12.4 (Logging and monitoring) | Is logging/alerting enabled and retained as per compliance requirements? (1 = Yes, 0 = No) |
| Prevent | Effectiveness | Firewall | NIST CSF: PR.AC-5 (Network integrity maintained), PR.IP-1 (Baseline configuration); CIS: Control 4 – Secure Configuration of Enterprise Assets and Software; ISO/IEC 27001: A.13.1 (Network security management) | Do firewall rules effectively reduce unauthorized access risks? (1 = Yes, 0 = No) |
| Respond | Resilience | Incident response (IR) plan | NIST CSF: RS.RP-1 (Incident response plan executed), RS.CO-1 (Response coordination); CIS: Control 17 – IR Management; ISO/IEC 27001: A.16.1 (Information security incident management) | Can the IR plan be executed and remain effective during outages? (1 = Yes, 0 = No) |
| Recover | Predictive | Backups/DR strategy | NIST CSF: RC.IM-1 (Recovery planning), RC.RP-1 (Recovery plan tested); CIS: Control 11 – Data Recovery; ISO/IEC 27001: A.17.1 (Information security continuity), A.17.2 (Redundancies) | Is the RTO accurately forecasted and tested? (1 = Yes, 0 = No) |
| Function(s) | Cybersecurity audit check | Control example | Binary check question | AI integration (Aᵢ) | Human judgment (Hᵢ) |
|---|---|---|---|---|---|
| Detect | Compliance | SIEM | Is logging/alerting enabled and retained as per compliance requirements? (1 = Yes, 0 = No) | AI validates log coverage & anomaly detection patterns | Auditor checks alignment with business needs & regulatory context |
| Prevent | Effectiveness | Firewall | Do firewall rules effectively reduce unauthorized access risks? (1 = Yes, 0 = No) | AI tests rules against known & emerging threat signatures (threat intel, ML-based traffic analysis) | The auditor evaluates the adequacy of rules against entity-specific risks |
| Respond | Resilience | IR Plan | Can the IR Plan be executed and remain effective during outages? (1 = Yes, 0 = No) | AI simulates IR scenarios, MTTR forecasting | Auditor verifies practicality, governance fit, stakeholder readiness |
| Recover | Predictive | Backups/ DR Strategy | Is the RTO accurately forecasted and tested? (1 = Yes, 0 = No) | AI models downtime costs & predicts the likelihood of successful restores | Auditor validates business continuity alignment & real-life test results |
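For readers who want to see how a single row of the table above could be represented in an audit tool, the following sketch bundles the binary check result with the AI-derived score (Aᵢ) and the CMM-based human judgment score (Hᵢ); the class and field names are hypothetical.

```python
# Hypothetical record for one audited control, combining the three inputs from the
# table above: binary compliance check, AI integration (A_i), and human judgment (H_i).
# All names are illustrative; they do not come from the published model code.

from dataclasses import dataclass

@dataclass
class AuditedControl:
    control_id: str     # e.g. "C1"
    function: str       # NIST CSF function, e.g. "Detect" or "Prevent"
    binary_check: int   # 1 = control implemented, 0 = not implemented
    ai_score: float     # A_i: ML/OSINT-derived risk probability in [0, 1]
    human_score: float  # H_i: CMM-based maturity score (0.2-1.0, per Table 3)

example = AuditedControl(
    control_id="C2", function="Detect",
    binary_check=1, ai_score=0.94, human_score=0.6,
)
print(example)
```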
| Construct | Symbol | Measurement approach | Data type |
|---|---|---|---|
| Cybersecurity technical control binary check | – | 0 = not implemented, 1 = implemented | Nominal |
| AI integration | Aᵢ | ML-based risk probability (normalized to [0, 1]) | Predefined |
| Human judgment integration (using CMM levels) | Hᵢ | Derived from the Table 3 maturity scale (0.2–1.0) | Predefined |
| Controlled weights | – | – | Predefined |
| AI-influenced | – | – | Continuous |
| Probability of attack success | – | – | Continuous |
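The exact aggregation formula behind the constructs above is given by the model's equations and is not reproduced in this section; purely as a hedged illustration, the sketch below shows one plausible weighted combination of the binary check, Aᵢ, and Hᵢ, with the weights assumed for demonstration only.

```python
# Hedged illustration only: one plausible way to aggregate the constructs above into a
# per-control score. The Anti-Sherif model defines its own equations and weights; the
# weighted sum and example weights below are assumptions made for demonstration.

def hybrid_control_score(binary_check: int, ai_score: float, human_score: float,
                         w_binary: float = 0.3, w_ai: float = 0.4,
                         w_human: float = 0.3) -> float:
    """Weighted aggregation of the binary check, AI integration (A_i), and human judgment (H_i)."""
    if abs(w_binary + w_ai + w_human - 1.0) > 1e-9:
        raise ValueError("controlled weights should sum to 1")
    return w_binary * binary_check + w_ai * ai_score + w_human * human_score

# Example: implemented control, AI-assessed exploit likelihood 0.94, CMM level 3 (H_i = 0.6).
print(round(hybrid_control_score(1, 0.94, 0.6), 3))  # 0.856
```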
| Control ID | Audit function (NIST CSF) | Description |
|---|---|---|
| C1 | Prevent | Multi-factor authentication (MFA) is implemented |
| C2 | Detect | Implemented and functional EDR/antivirus capabilities |
| C3 | Prevent | Regular firewall rule review |
| C4 | Prevent | Patch management processes are implemented |
| C5 | Prevent | Admin privilege hardening processes |
| C6 | Detect | Audit logging is enabled on critical applications |
| C7 | Prevent | Network segmentation is implemented |
| C8 | Detect | Regular vulnerability scanning |
| C9 | Detect | Database logging is enabled for all critical applications |
| C10 | Recover | Backup testing process implemented |
| Control ID | Audit result (0 or 1) |
|---|---|
| C1 | 1 |
| C2 | 1 |
| C3 | 1 |
| C4 | 1 |
| C5 | 1 |
| C6 | 1 |
| C7 | 1 |
| C8 | 1 |
| C9 | 0 |
| C10 | 1 |
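As a quick cross-check of the binary compliance figure reported in the comparison below, the conventional score is simply the mean of the binary indicators in the table above (9 of 10 controls passed):

```python
# Mean of the binary audit results above: 9 of the 10 controls passed, giving 0.9 (90%),
# which matches the binary compliance score reported in the comparison table.
audit_results = {f"C{i}": 1 for i in range(1, 11)}
audit_results["C9"] = 0  # database logging not enabled for all critical applications

binary_compliance_score = sum(audit_results.values()) / len(audit_results)
print(binary_compliance_score)  # 0.9
```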
| CVE ID | Exploit source | EPSS | Code search count | Issues/PRs count | Total GitHub mentions | Normalized M score |
|---|---|---|---|---|---|---|
| CVE-2024-23692 | GitHub | 0.9430 | 338 | 30 | 368 | 0.80 |
| CVE-2014-6287 | GitHub | 0.94316 | 780 | 8 | 788 | 1.00 |
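The exact scaling used to produce the normalized M scores above is defined by the model and not detailed in this section; as a generic illustration only, the sketch below applies max-normalization to the mention counts, which is one common choice but does not necessarily reproduce the published 0.80 value.

```python
# Illustration only: max-normalization of GitHub mention counts into an M-style score.
# The published table may use a different scaling (it reports 0.80 for 368 mentions),
# so this sketch is a generic example rather than the model's actual normalization.

mentions = {"CVE-2024-23692": 368, "CVE-2014-6287": 788}
max_count = max(mentions.values())
m_scores = {cve: round(count / max_count, 2) for cve, count in mentions.items()}
print(m_scores)  # {'CVE-2024-23692': 0.47, 'CVE-2014-6287': 1.0}
```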
| Assessment dimension | Binary compliance model | Hybrid Anti-Sherif Model |
|---|---|---|
| Scoring logic | Pass/fail (0 or 1) | Overall inherent risk |
| Inputs considered | Control presence only | Risk-based approach |
| Computation method | Mean of binary indicators | Combination of binary score, AI integration, and human judgment |
| Final score | 90%, indicating that the tested controls were highly effective | 87%, indicating that the inherent risks of the assessed cybersecurity controls are very high |
| Interpretation | Controls largely compliant | High residual risk remains |
| Sensitivity to threat landscape | None | High |
| Control maturity awareness | Ignored | Explicitly included |
| Risk differentiation | Low (coarse-grained) | High (fine-grained) |
| Decision accuracy | Superficial compliance | Risk-informed decision-making |
