Submitted:
10 November 2025
Posted:
11 November 2025
Abstract
Keywords:
1. Introduction
- Model Strategic Decisions: Capture the objectives and constraints of both attackers and defenders [13].
- Conduct Risk Analysis: Elucidate payoffs and tactics to identify critical vulnerabilities and optimal defensive strategies [14].
- Enable Adaptive Defense: Capture the dynamic nature of cyber threats, including those augmented by AI, to inform adaptive countermeasures [15].
- Optimize Resource Allocation: Evaluate strategy effectiveness to guide efficient investment of limited defensive resources [16].
- A novel game-theoretic model for DFR that quantifies strategic attacker-defender interactions.
- The integration of MITRE ATT&CK and D3FEND with AHP-weighted metrics to ground utilities in real-world tactics and techniques.
- An equilibrium analysis that yields actionable resource allocation guidance for SMBs/SMEs.
- An evaluation demonstrating the framework’s efficacy in reducing attacker success rates, even in complex, multi-vector APT scenarios influenced by modern AI-powered tools.
2. Related Works
2.1. Game Theory in Digital Forensics
2.2. Digital Forensics Readiness and Techniques
2.3. Advancement in Cybersecurity Modeling
2.4. Innovative Tools and Methodologies
2.5. Digital Forensics in Emerging Domains
2.6. Advanced Persistent Threats and Cybercrime
2.7. Novelty
3. Materials and Methods
3.1. Problem Statement
- A: Strategies available to attackers, corresponding to MITRE ATT&CK tactics (e.g., Reconnaissance, Resource Development, Initial Access, Execution, Persistence, etc.).
- D: Strategies available to defenders, corresponding to MITRE D3FEND countermeasures (e.g., Model, Detect, Harden, Isolate, Deceive, etc.).
- P: Parameters influencing game models, such as attack severity, defense effectiveness, and forensic capability.
- $U_A(a, d)$: Utility function for attackers, representing the payoff based on their strategy $a \in A$ and the defenders' strategy $d \in D$.
- $U_D(d, a)$: Utility function for defenders, representing the payoff based on their strategy $d \in D$ and the attackers' strategy $a \in A$.
- Model Construction: Construct game models to represent the interactions between A and D.
- Equilibrium Analysis: Identify Nash equilibria $(a^*, d^*)$ such that $U_A(a^*, d^*) \geq U_A(a, d^*)$ for all $a \in A$ and $U_D(d^*, a^*) \geq U_D(d, a^*)$ for all $d \in D$; that is, neither player can improve its payoff by deviating unilaterally.
3.2. Methodology
3.2.1. Notation and Symbols
3.3. Game Theory Background
3.3.1. Players and Actions
3.3.2. Payoff Functions and Utility
3.3.3. Scenario Analysis
3.3.4. Formalizing the Game
- Players: Defender (D), Attacker (A)
- Actions:
  - Defender: the set of the defender's investment choices
  - Attacker: the set of the attacker's attack choices
- Payoff Functions:
  - Defender's Payoff Function: maps a combination of the defender's investment (D) and the attacker's attack (A) to a real number representing the defender's utility
  - Attacker's Payoff Function: maps a combination of the defender's investment (D) and the attacker's attack (A) to a real number representing the attacker's utility
3.3.5. Payoff Analysis
- Defender's Payoffs:
  - HI FT: High investment in forensic tools leads to high readiness for a sophisticated attack (SA), resulting in low losses (high utility) for the defender. However, if the attacker chooses a simpler attack (SI), the high investment might be unnecessary, leading to very low losses (moderate utility) but potentially wasted resources.
  - LI FT: Low investment translates to lower readiness, making the defender more vulnerable to a sophisticated attack (SA), resulting in high losses (low utility). While sufficient for a simpler attack (SI), it might not provide a complete picture for forensic analysis, leading to moderate losses (moderate utility).
- Attacker's Payoffs:
  - SA: A sophisticated attack offers the potential for higher gains (data exfiltration) but requires more effort and resources to bypass advanced forensic tools (HI FT) implemented by the defender. If the defender has low investment (LI FT), the attack is easier to conduct, resulting in higher gains (higher utility).
  - SI: A simple attack requires less effort but might yield lower gains (lower utility). If the defender has high investment (HI FT), the attacker might face challenges in extracting data, resulting in very low gains (low utility).
3.3.6. Advanced Persistent Threats (APTs) and Equilibrium Concepts
3.4. Proposed Approach
- Players:
  - Attacker: 14 strategies ($s_1, \dots, s_{14}$, corresponding to the ATT&CK tactics in Table 4)
  - Defender: 6 strategies ($t_1, \dots, t_6$, corresponding to the D3FEND control families in Table 4)
- Rationality: Both players are presumed rational, seeking to maximize their individual payoffs given knowledge of the opponent’s strategy. The game is simultaneous and non-zero-sum.
3.4.1. PNE Analysis
3.4.2. MNE Analysis
Main equilibrium (non-zero-sum).

3.4.3. Payoff Construction from ATT&CK→D3FEND Coverage
3.4.4. Payoff Matrices
Rows correspond to attacker strategies $s_1$–$s_{14}$ (ATT&CK tactics) and columns to defender strategies $t_1$–$t_6$ (D3FEND control families).

| | t1 | t2 | t3 | t4 | t5 | t6 |
|---|---|---|---|---|---|---|
| s1 | 5 | 6 | 7 | 8 | 9 | 10 |
| s2 | 0 | 0 | 1 | 2 | 3 | 4 |
| s3 | 14 | 13 | 12 | 11 | 0 | 0 |
| s4 | 16 | 17 | 18 | 18 | 0 | 0 |
| s5 | 19 | 20 | 20 | 18 | 0 | 0 |
| s6 | 23 | 22 | 21 | 7 | 6 | 5 |
| s7 | 24 | 25 | 26 | 24 | 25 | 26 |
| s8 | 32 | 28 | 29 | 30 | 31 | 27 |
| s9 | 33 | 34 | 35 | 30 | 33 | 32 |
| s10 | 32 | 35 | 36 | 6 | 7 | 5 |
| s11 | 36 | 37 | 38 | 6 | 35 | 30 |
| s12 | 37 | 38 | 39 | 39 | 0 | 0 |
| s13 | 38 | 39 | 40 | 0 | 0 | 0 |
| s14 | 39 | 40 | 41 | 0 | 0 | 0 |
| | t1 | t2 | t3 | t4 | t5 | t6 |
|---|---|---|---|---|---|---|
| s1 | 5 | 7 | 1 | 1 | 7 | 5 |
| s2 | 6 | 8 | 10 | 2 | 6 | 6 |
| s3 | 7 | 9 | 11 | 5 | 8 | 11 |
| s4 | 8 | 10 | 25 | 25 | 9 | 12 |
| s5 | 9 | 11 | 24 | 8 | 10 | 13 |
| s6 | 10 | 12 | 24 | 8 | 11 | 10 |
| s7 | 11 | 21 | 20 | 10 | 12 | 7 |
| s8 | 18 | 14 | 25 | 9 | 5 | 25 |
| s9 | 13 | 15 | 23 | 12 | 4 | 8 |
| s10 | 14 | 16 | 22 | 11 | 14 | 9 |
| s11 | 15 | 17 | 20 | 12 | 13 | 14 |
| s12 | 16 | 18 | 21 | 13 | 15 | 25 |
| s13 | 17 | 20 | 20 | 10 | 16 | 17 |
| s14 | 12 | 19 | 29 | 16 | 17 | 16 |
3.4.5. Mixed Nash Equilibrium Computation
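The references cite the Nashpy library [52] for equilibrium computation. As a hedged sketch of how mixed equilibria of a non-zero-sum bimatrix game can be computed with it, using small illustrative 2×2 matrices rather than the 14×6 payoff matrices of Section 3.4.4 (which are provided in the repository):

```python
# Minimal sketch of mixed Nash equilibrium computation with Nashpy [52].
# The 2x2 matrices are illustrative placeholders, not the paper's payoff matrices.
import numpy as np
import nashpy as nash

A = np.array([[3, 1],
              [0, 2]])   # attacker payoffs (rows = attacker strategies s_i)
D = np.array([[2, 1],
              [0, 3]])   # defender payoffs (columns = defender strategies t_j)

game = nash.Game(A, D)   # non-zero-sum bimatrix game
for x_star, y_star in game.support_enumeration():
    # x_star: attacker mixed strategy; y_star: defender mixed strategy
    print(x_star, y_star, x_star @ A @ y_star, x_star @ D @ y_star)
```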
3.4.6. Dynamics Illustration (Zero-sum Variant)
Methodological transparency statement.
3.5. Utility Function
3.5.1. Attacker Utility Function
| Metric | Description |
|---|---|
| Attack Success Rate (ASR) | Likelihood of successful attack execution |
| Resource Efficiency (RE) | Ratio of attack payoff to resource expenditure |
| Stealthiness (ST) | Ability to avoid detection and attribution |
| Data Exfiltration Effectiveness (DEE) | Success rate of data exfiltration attempts |
| Time-to-Exploit (TTE) | Speed of vulnerability exploitation before patching |
| Evasion of Countermeasures (EC) | Ability to bypass defensive measures |
| Attribution Resistance (AR) | Difficulty in identifying the attacker |
| Reusability of Attack Techniques (RT) | Extent to which attack techniques can be reused |
| Impact of Attacks (IA) | Magnitude of disruption or loss caused |
| Persistence (P) | Ability to maintain control over compromised systems |
| Adaptability (AD) | Capacity to adjust strategies in response to defenses |
| Deniability (DN) | Ability to deny involvement in attacks |
| Longevity (LG) | Duration of operations before disruption |
| Collaboration (CB) | Extent of collaboration with other attackers |
| Financial Gain (FG) | Monetary profit from attacks |
| Reputation and Prestige (RP) | Enhancement of attacker reputation |
3.5.2. Defender Utility Function
| Metric | Description |
|---|---|
| Logging and Audit Trail Capabilities (L) | Extent of logging and audit trail coverage |
| Integrity and Preservation of Digital Evidence (I) | Ability to preserve evidence integrity and backups |
| Documentation and Compliance with Digital Forensic Standards (D) | Adherence to forensic standards and documentation quality |
| Volatile Data Capture Capabilities (VDCC) | Effectiveness of volatile data capture |
| Encryption and Decryption Capabilities (E) | Strength of encryption/decryption capabilities |
| Incident Response Preparedness (IR) | Quality of incident response plans and team readiness |
| Data Recovery Capabilities (DR) | Effectiveness of data recovery tools and processes |
| Network Forensics Capabilities (NF) | Sophistication of network forensic analysis |
| Staff Training and Expertise (STd) | Level of staff training and certifications |
| Legal & Regulatory Compliance (LR) | Compliance with legal and regulatory requirements |
| Accuracy (A) | Consistency and correctness of forensic analysis |
| Completeness (C) | Extent of comprehensive data collection and analysis |
| Timeliness (T) | Speed and efficiency of forensic investigation process |
| Reliability (R) | Consistency and repeatability of forensic techniques |
| Validity (V) | Adherence to legal and scientific standards |
| Preservation (Pd) | Effectiveness of evidence preservation procedures |
3.5.3. Expert-Driven Weight Calculation
- Identify relevant security experts with domain-specific ATT&CK knowledge.
- Analyze the threat landscape and associated TTPs.
- Establish weighting criteria such as Likelihood, Impact, Detectability, and Effort.
- Present tactics and criteria simultaneously to experts for independent evaluation.
- Aggregate weights (average or weighted average depending on expertise level).
- Normalize aggregated weights to ensure comparability.
- Output a set of normalized tactic weights representing collective expert judgment.
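A minimal sketch of the aggregation and normalization steps above, assuming hypothetical expert scores and simple averaging (a weighted average by expertise level would follow the same pattern):

```python
# Illustrative aggregation of expert tactic weights (aggregation and normalization steps above).
import numpy as np

# rows = experts, columns = ATT&CK tactics (hypothetical raw scores)
expert_scores = np.array([
    [0.8, 0.6, 0.9, 0.4],
    [0.7, 0.5, 0.8, 0.5],
    [0.9, 0.6, 0.7, 0.3],
])

aggregated = expert_scores.mean(axis=0)     # average across experts
normalized = aggregated / aggregated.sum()  # normalized tactic weights sum to 1
print(normalized)
```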

3.5.4. Utility Calculation Algorithms
| Algorithm 1 Computing the Utility Function |
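Since the full listing of Algorithm 1 is not reproduced here, the sketch below illustrates the underlying computation, an AHP-weighted sum of normalized metric scores (cf. Appendix B.1), using a truncated excerpt of the Table 9 weights and hypothetical scores; it is not the paper's exact algorithm:

```python
# Illustrative AHP-weighted utility (weighted sum of normalized metric scores).
# Weights are excerpted from Table 9 (truncated, so they do not sum to 1 here);
# the metric scores are hypothetical values in [0, 1].
attacker_weights = {"ASR": 0.1094, "ST": 0.0921, "IA": 0.0921, "DEE": 0.0887}
attacker_scores  = {"ASR": 0.6,    "ST": 0.4,    "IA": 0.5,    "DEE": 0.3}

def weighted_utility(weights, scores):
    """Return the weighted sum of metric scores over the metrics provided."""
    return sum(weights[m] * scores[m] for m in weights)

print(round(weighted_utility(attacker_weights, attacker_scores), 4))
```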
| Algorithm 2 Analyzing Utility Outcomes |
Note: Algorithm 3 should be invoked for detailed metric review when .
3.6. Identify Areas of Improvement
| Algorithm 3 Identify Areas of Improvement |
3.7. Prioritizing DFR Improvements
3.7.1. AHP Methodology for Weight Determination
| Algorithm 4 AHP Weight Determination via Eigenvector Method |
- Expert Pairwise Judgments: Ten domain experts completed two pairwise comparison matrices (PCMs), one each for attacker and defender metrics. Entries were scored on the Saaty scale (1/9–9), with reciprocity enforced via $a_{ji} = 1/a_{ij}$. Element-wise geometric means across all expert inputs were computed to form the consensus matrices: $\bar{a}_{ij} = \big(\prod_{k=1}^{10} a_{ij}^{(k)}\big)^{1/10}$.
- Eigenvector-Based Weight Derivation: For each consensus matrix, we solved the principal eigenvalue problem $P\,\mathbf{w} = \lambda_{\max}\,\mathbf{w}$ and normalized $\mathbf{w}$ such that $\sum_i w_i = 1$. These normalized weights are visualized in Figure 4.
- Weight Consolidation: Consensus weights were tabulated in Table 9 to integrate directly into the utility functions.
- Consistency Validation: We calculated the Consistency Index (CI) and Consistency Ratio (CR) using $CI = (\lambda_{\max} - n)/(n - 1)$ and $CR = CI/RI$ with $n = 16$ and $RI = 1.59$ (standard AHP Random Index for $n = 16$ [55]). Both attacker and defender PCMs achieved $CR < 0.10$:
  - Attacker PCM: $\lambda_{\max} = 16.4528$, $CI = 0.0302$, $CR = 0.0190$
  - Defender PCM: $\lambda_{\max} = 16.3157$, $CI = 0.0210$, $CR = 0.0132$
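A minimal numeric sketch of the eigenvector derivation and consistency check, using a small illustrative 3×3 pairwise comparison matrix rather than the paper's 16×16 consensus matrices (which are provided in the repository):

```python
# Eigenvector-based AHP weights with CI/CR consistency check (illustrative 3x3 PCM).
import numpy as np

P = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]], dtype=float)  # reciprocal Saaty-scale judgments

eigvals, eigvecs = np.linalg.eig(P)
k = int(np.argmax(eigvals.real))            # index of principal eigenvalue
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                             # normalize so the weights sum to 1

n = P.shape[0]
RI = {3: 0.58, 15: 1.59, 16: 1.59}[n]       # Saaty Random Index lookup
CI = (lam_max - n) / (n - 1)
CR = CI / RI
print(w, lam_max, CI, CR)                   # accept the matrix if CR < 0.10
```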
Expert panel procedures and transparency.
Reporting precision and repeated weights.
Plausibility of small and similar CR values.
Additional AHP diagnostics and robustness.
| Metric (Attacker) | Weight | Metric (Defender) | Weight | |
|---|---|---|---|---|
| ASR | 0.1094 | L | 0.0881 | |
| RE | 0.0476 | I | 0.0881 | |
| ST | 0.0921 | D | 0.0423 | |
| DEE | 0.0887 | VDCC | 0.0642 | |
| TTE | 0.0476 | E | 0.0461 | |
| EC | 0.0887 | IR | 0.0881 | |
| AR | 0.0814 | DR | 0.0481 | |
| RT | 0.0476 | NF | 0.0819 | |
| IA | 0.0921 | STd | 0.0819 | |
| P | 0.0814 | LR | 0.0481 | |
| AD | 0.0571 | A | 0.0557 | |
| DN | 0.0264 | C | 0.0460 | |
| LG | 0.0433 | T | 0.0693 | |
| CB | 0.0262 | R | 0.0531 | |
| FG | 0.0210 | V | 0.0423 | |
| RP | 0.0487 | Pd | 0.0557 |
3.7.2. Prioritization Process
- Identify metrics with high weight but low scores.
- Assess potential readiness gains from targeted improvement.
- Develop tailored enhancement strategies considering cost, time, and resource constraints.
- Implement, monitor, and iteratively refine improvements.
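As a sketch of the first step (metrics with high weight but low scores), one simple way to rank candidate improvements is a priority index of weight × (1 − score); this index is an illustrative assumption, not the paper's Algorithm 5, and it uses Table 9 weights with hypothetical current scores:

```python
# Illustrative prioritization: rank defender metrics by weight * (1 - score).
# Weights come from Table 9; the current metric scores are hypothetical.
weights = {"L": 0.0881, "I": 0.0881, "IR": 0.0881, "NF": 0.0819, "STd": 0.0819}
scores  = {"L": 0.5,    "I": 0.6,    "IR": 0.2,    "NF": 0.5,    "STd": 0.2}

priority = sorted(weights, key=lambda m: weights[m] * (1 - scores[m]), reverse=True)
for m in priority:
    print(f"{m}: priority = {weights[m] * (1 - scores[m]):.4f}")
```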
3.7.3. DFR Improvement Algorithm
| Algorithm 5 DFR Improvement Plan |
3.8. Reevaluating the DFR
4. Results
4.1. Data Collection and Methodology
Extraction and mapping objects.
4.2. Analysis of Tactics and Techniques
Named metrics.
How figures are computed.
Notes and limitations.


4.3. DFR Metrics Overview and Impact Quantification
4.3.1. Methods: Calibration-Based Synthetic Attacker Profiles
Notation disambiguation.
Limitations.
Connection to Sun Tzu’s Strategic Wisdom.
4.4. Attackers vs. Defenders: A Comparative Study
4.5. Game Dynamics and Strategy Analysis
Non-zero-sum (A, D).
Zero-sum (A, −D).
4.6. Synthetic, Calibration-Based Case Profiles
4.7. Sensitivity Analysis
4.7.1. Local Perturbation Sensitivity
4.7.2. Monte Carlo Simulation
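A hedged sketch of one way such a Monte Carlo sensitivity check can be run, perturbing payoff entries with multiplicative noise and recomputing equilibria; the noise model, magnitude, and 2×2 matrices are illustrative assumptions rather than the paper's exact settings:

```python
# Monte Carlo sensitivity sketch: perturb payoffs and recompute equilibria.
import numpy as np
import nashpy as nash

rng = np.random.default_rng(42)
A = np.array([[3.0, 1.0], [0.0, 2.0]])   # illustrative attacker payoffs
D = np.array([[2.0, 1.0], [0.0, 3.0]])   # illustrative defender payoffs

values = []
for _ in range(1000):
    A_p = A * rng.normal(1.0, 0.05, A.shape)   # +/-5% multiplicative noise
    D_p = D * rng.normal(1.0, 0.05, D.shape)
    x, y = next(iter(nash.Game(A_p, D_p).support_enumeration()))
    values.append(x @ A_p @ y)                 # attacker value at the first equilibrium found

print(np.mean(values), np.percentile(values, [2.5, 97.5]))
```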
4.8. Distribution of Readiness Score
- Central Peak at 0.0: A high frequency around 0.0 indicates balanced readiness in most systems.
- Symmetrical Spread: Even tapering on both sides suggests system stability across environments.
- Low-Frequency Extremes: Outliers at the tails (−0.3 and +0.3) denote rare but critical deviations requiring targeted intervention.
5. Discussion
- Implementation Challenges: Real-world adoption may encounter barriers such as limited resources, integration costs, and the need for game theory expertise. Organizational resistance to change and adaptation to new analytical frameworks are additional challenges.
- Integration with Existing Tools: The framework can align synergistically with existing platforms such as threat intelligence systems, SIEM, and EDR tools. These integrations can enhance decision-making and optimize forensic investigation response times.
- Decision Support Systems: Game-theoretic models can augment decision support processes by helping security teams prioritize investments, allocate resources, and optimize incident response based on adaptive risk modeling.
- Training and Awareness Programs: Building internal capability is crucial. Training programs integrating game-theoretic principles into cybersecurity curricula can strengthen decision-making under adversarial uncertainty.
- Collaborative Defense Strategies: The framework supports collective defense through shared intelligence and coordinated responses. Collaborative action can improve deterrence and resilience against complex, multi-organizational threats.
- Policy Implications: Incorporating game theory into cybersecurity has policy ramifications, including regulatory alignment, responsible behavior standards, and ethical considerations regarding autonomous or strategic decision models.
- Case Studies and Use Cases: Documented implementations of game-theoretic approaches demonstrate measurable improvements in risk response and forensic readiness. Future research can expand these to varied industry sectors.
- Future Directions: Continued innovation in game model development, integration with AI-driven threat analysis, and tackling emerging cyber challenges remain promising directions.
5.1. Forensicability and Non-Forensicability
5.2. Evolutionary Game Theory Analysis
- Evolutionary Dynamics: Attackers and defenders co-adapt in continuous feedback cycles; the success of one influences the next strategic shift in the other.
- Replication and Mutation: Successful tactics replicate, while mutations introduce strategic diversity critical for both exploration and adaptation.
- Equilibrium and Stability: Evolutionary Stable Strategies (ESS) represent steady states where neither party benefits from deviation.
- Co-evolutionary Context: The model exposes the perpetual nature of cyber escalation, showing that proactive defense and continuous readiness optimization are essential to remain resilient.
5.3. Attack Impact on Readiness and Investigation Phases
5.4. Readiness and Training Level of the Defender
5.5. Attack Success and Evidence Collection Rates
5.6. Comparative Analysis in SMB and SME Organizations
5.6.1. Irrational Attacker Behavior Analysis
5.7. Limitations and Future Work
- Model Complexity: Real-world human elements and deep organizational dynamics may extend beyond current model parameters.
- Data Availability: Reliance on open-source ATT&CK and D3FEND datasets limits coverage of emerging threat behaviors.
- Computational Needs: Evolutionary modeling and large-scale simulations require high-performance computing resources.
- Expert Bias: AHP-based weighting depends on expert judgment, introducing potential subjective bias despite structured controls.
- Real-time Adaptive Models: Integrating continuous learning to instantly adapt to threat changes.
- AI/ML Integration: Employing predictive modeling for attacker intent recognition and defense automation.
- Cross-Organizational Collaboration: Expanding to cooperative game structures for shared threat response.
- Empirical Validation: Conducting large-scale quantitative studies to reinforce and generalize model applicability.
6. Conclusion
6.1. Limitations
6.2. Future Research Directions
- Extended Environmental Applications: Adapting the framework to cloud-native, IoT, and blockchain ecosystems where architectural differences create distinct forensic challenges.
- Dynamic Threat Intelligence Integration: Employing real-time data feeds and AI-based analytics to enable adaptive recalibration of utilities and strategy distributions.
- Standardized Readiness Benchmarks: Developing comparative industry baselines for forensic maturity that support cross-organizational evaluation and improvement.
- Automated Response Coupling: Integrating automated incident response and orchestration tools to bridge the gap between detection and remediation.
- Enhanced Evolutionary Models: Expanding evolutionary game formulations to capture longer-term strategic co-adaptations between attackers and defenders.
- Large-Scale Empirical Validation: Conducting multi-sector, empirical measurement campaigns to statistically validate and refine equilibrium predictions.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| AHP | Analytic Hierarchy Process |
| APT | Advanced Persistent Threat |
| ATT&CK | MITRE Adversarial Tactics, Techniques, and Common Knowledge |
| CASE | Cyber-investigation Analysis Standard Expression |
| CIA | Confidentiality, Integrity, Availability (triad) |
| CSIRT | Computer Security Incident Response Team |
| CVSS | Common Vulnerability Scoring System |
| D3FEND | MITRE Defensive Countermeasures Knowledge Graph |
| DFIR | Digital Forensics and Incident Response |
| DFR | Digital Forensic Readiness |
| DDoS | Distributed Denial of Service |
| EDR | Endpoint Detection and Response |
| EGT | Evolutionary Game Theory |
| ESS | Evolutionarily Stable Strategy |
| IDPS | Intrusion Detection and Prevention System |
| JCP | Journal of Cybersecurity and Privacy |
| MCDA | Multi-Criteria Decision Analysis |
| MNE | Mixed Nash Equilibrium |
| NDR | Network Detection and Response |
| NE | Nash Equilibrium |
| PNE | Pure Nash Equilibrium |
| SIEM | Security Information and Event Management |
| SMB | Small and Medium Business |
| SME | Small and Medium Enterprise |
| SQLi | Structured Query Language injection |
| TTP | Tactics, Techniques, and Procedures |
| UCO | Unified Cyber Ontology |
| XDR | Extended Detection and Response |
Appendix A. Simulation Model and Settings
Readiness components.
Outcome probabilities.
Sampling and maturity regimes.
- Junior+Mid+Senior: , ;
- Mid+Senior: , ;
- Senior: , .
Experiment size and uncertainty.
Appendix B. Supplementary Materials
Appendix B.1. Notation Table
| Symbol | Type | Description |
|---|---|---|
| Game Structure | ||
| Set | Attacker strategy set: where ATT&CK tactics (see Table 4) | |
| Set | Defender strategy set: where D3FEND control families (see Table 4) | |
| s | Element | Pure attacker strategy: (e.g., , ) |
| t | Element | Pure defender strategy: (e.g., , ) |
| Matrix | Attacker payoff matrix: , where entry is the attacker’s utility for strategy pair | |
| Matrix | Defender payoff matrix: , where entry is the defender’s utility for strategy pair | |
| Scalar | Attacker scalar payoff: (unitless utility, higher is better for attacker) | |
| Scalar | Defender scalar payoff: (unitless utility, higher is better for defender) | |
| Pair | Pure Nash equilibrium: strategy profile where and are mutual best responses | |
| Mixed Strategies and Equilibria | ||
| Vector | Attacker mixed strategy: where (probability distribution over ) | |
| Vector | Defender mixed strategy: where (probability distribution over ) | |
| Vector | Attacker equilibrium mixed strategy: (optimal probability vector) | |
| Vector | Defender equilibrium mixed strategy: (optimal probability vector) | |
| Scalar | i-th component of : (probability that attacker plays strategy ) | |
| Scalar | j-th component of : (probability that defender plays strategy ) | |
| Set | Probability simplex: | |
| Set | Support of : (indices of strategies played with positive probability) | |
| Set | Support of : | |
| Scalar | Attacker expected payoff: (expected utility when attacker plays and defender plays ) | |
| Scalar | Defender expected payoff: (expected utility when defender plays and attacker plays ) | |
| Scalar | Attacker equilibrium value: for all (constant expected payoff at equilibrium) | |
| Scalar | Defender equilibrium value: for all (constant expected payoff at equilibrium) | |
| DFR Metrics and Utilities | ||
| Scalar | Normalized DFR metric i: for strategy pair (see Section 3.4.3) | |
| Scalar | AHP weight for attacker metric i: , (see Table 9) | |
| Scalar | AHP weight for defender metric j: , (see Table 9) | |
| Scalar | Normalized attacker utility: | |
| Scalar | Normalized defender utility: | |
| Scalar | Defender metric k value: (see Section 4.3.1) | |
| Scalar | Attacker metric ℓ value: (see Section 4.3.1) | |
| Matrix | Coupling matrix: linking defender capabilities to attacker metric suppression (see Section 4.3.1) | |
- vs. : is a scalar (single entry in the matrix), while is the entire matrix (bold uppercase). Similarly, is a scalar, is the matrix.
- x, y vs. , : Lowercase x, y denote individual elements or indices; bold , denote mixed-strategy vectors (probability distributions over pure strategies).
- Strategy indices: Pure strategies and are elements (not vectors); mixed strategies and are probability vectors over these sets.
Appendix B.2. Repository and Data Access
- Main repository (reproducibility package): https://github.com/Mehrn0ush/gtDFR. This repository contains all Python scripts, configuration files (config/), processed datasets (data/), analysis scripts for equilibrium computation and profile generation (scripts/), and supplementary artefacts (supplementary/, including AHP weight tables, expert consistency reports, equilibrium exports, and manifest files). The repository is versioned and a tagged release matching this manuscript is provided.
- Historical mapping snapshots (archival only): https://gist.github.com/M3hrnoush. These gists archive earlier CSV/JSON exports used during initial ATT&CK/D3FEND data exploration and MITRE STIX parsing. They are not required to reproduce the main results but are provided for transparency.
Framework versions and snapshots.
Appendix B.3. ATT&CK/D3FEND Snapshot Comparison Report (Robustness Check)
Purpose.
Pipeline.
Versions compared.
| Snapshot | ATT&CK Version | Bundle Modified (UTC) |
|---|---|---|
| Baseline (closest to analysis window) | v13.1 | 2023-05-09T14:00:00.188Z |
| Intermediate | v14.1 | 2023-11-14T14:00:00.188Z |
| Latest | v18.0 | 2025-10-28T14:00:00.188Z |
Executive summary.
Per-APT deltas (v13.1 → v18.0).
| APT Group | v13.1 | v14.1 | v18.0 | Added | Removed |
|---|---|---|---|---|---|
| APT33 | 32 | 32 | 31 | 1 | 2 |
| APT39 | 52 | 52 | 53 | 2 | 1 |
| AjaxSecurityTeam | 6 | 6 | 6 | 0 | 0 |
| Cleaver | 5 | 5 | 5 | 0 | 0 |
| CopyKittens | 8 | 8 | 8 | 0 | 0 |
| LeafMiner | 17 | 17 | 17 | 0 | 0 |
| MosesStaff | 12 | 12 | 12 | 1 | 1 |
| MuddyWater | 59 | 59 | 58 | 1 | 2 |
| OilRig | 56 | 56 | 76 | 21 | 1 |
| SilentLibrarian | 13 | 13 | 13 | 0 | 0 |
Provenance (console excerpt).
ATT&CK/D3FEND Robustness Check — Three-Version Comparison: v13.1 → v14.1 → v18.0
Loaded March 2023 bundle: v13.1 (Modified: 2023-05-09T14:00:00.188Z)
Loaded v14.1 bundle (Modified: 2023-11-14T14:00:00.188Z)
Loaded latest bundle (Modified: 2025-10-28T14:00:00.188Z)
D3FEND CSV not found (coverage deltas omitted this run).
Per-APT counts (v13.1, v14.1, v18.0): APT33 32,32,31 | APT39 52,52,53 | AjaxSecurityTeam 6,6,6 | Cleaver 5,5,5 | CopyKittens 8,8,8 | LeafMiner 17,17,17 | MosesStaff 12,12,12 | MuddyWater 59,59,58 | OilRig 56,56,76 | SilentLibrarian 13,13,13
Summary: groups with any change (v13.1→v18.0): 5/10; largest change: OilRig (+21, -1)
Totals (v13.1→v18.0): Added 26, Removed 7
Note on D3FEND coverage in this run.
Reproducibility.
Appendix B.4. STIX Data Extraction Methodology
- STIX object types: intrusion-set (APT groups), relationship (with relationship_type="uses"), attack-pattern (techniques)
- Scope: Enterprise ATT&CK only (excluded Mobile, ICS, and PRE-ATT&CK, the last of which is historical and no longer maintained)
- Relationship path: intrusion-set.id → attack-pattern.id, using only direct relationships (not transitive via malware/tools); we extracted external_references with source_name="mitre-attack" to obtain technique IDs
- Filtering: excluded all objects and relationships with revoked==true or x_mitre_deprecated==true
- Normalization: ATT&CK technique IDs normalized to uppercase (e.g., t1110.003→T1110.003)
- Sub-technique handling: counted exact sub-techniques (e.g., T1027.013) as distinct from parent techniques (e.g., T1027); no rollup performed
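A minimal sketch of this extraction, reading a local copy of the Enterprise ATT&CK STIX 2.1 bundle [56]; the file name is a hypothetical local path, and only the standard json module is assumed:

```python
# Map intrusion-set -> ATT&CK technique IDs via direct "uses" relationships,
# skipping revoked/deprecated objects (sketch of the extraction described above).
import json
from collections import defaultdict

with open("enterprise-attack.json") as f:      # hypothetical local bundle path
    objects = json.load(f)["objects"]

def active(o):
    return not o.get("revoked", False) and not o.get("x_mitre_deprecated", False)

groups = {o["id"]: o["name"] for o in objects if o["type"] == "intrusion-set" and active(o)}

techniques = {}
for o in objects:
    if o["type"] == "attack-pattern" and active(o):
        ext_id = next((r["external_id"] for r in o.get("external_references", [])
                       if r.get("source_name") == "mitre-attack"), None)
        if ext_id:
            techniques[o["id"]] = ext_id.upper()   # normalize technique IDs to uppercase

uses = defaultdict(set)
for o in objects:
    if (o["type"] == "relationship" and o.get("relationship_type") == "uses"
            and active(o) and o["source_ref"] in groups and o["target_ref"] in techniques):
        uses[groups[o["source_ref"]]].add(techniques[o["target_ref"]])

for name, ids in sorted(uses.items()):
    print(name, len(ids))   # per-APT technique counts (cf. Appendix B.3)
```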
Appendix B.5. Synthetic Profile Generation for Case Studies
Configuration files.
- Metric definitions configuration: Canonical definitions of all 32 DFR metrics (16 defender + 16 attacker), including metric IDs, names, descriptions, and framework targeting flags. This configuration serves as the single source of truth for metric ordering and naming across all scripts, tables, and visualizations (available in the repository, Section B).
- Coupling matrix configuration: Sparse coupling matrix (16×16) encoding ATT&CK↔D3FEND linkages. Nonzero entries (coefficients typically 0.05–0.20) specify which defender metrics suppress which attacker utilities, operationalizing the game-theoretic framework (available in the repository, Section B).
Generator script.
- Defender before profiles: Beta-distributed samples with mild correlation structure (L↔D↔LR, IR↔STd↔NF, I↔Pd).
- Defender after profiles: For metrics with targeted_by_framework: true, add uplift in [0.10, 0.30]; for others, allow small drift [–0.02, +0.05].
- Attacker before profiles: Weakly correlated Beta priors with semantic blocks (ASR–IA, ST–AR–DN, RE–RT–FG–RP).
- Attacker after profiles: Coupled update via Eq. 9 with case-wise scaling and additive noise (a minimal sketch follows this list).
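A hedged sketch of the generator logic described above; the Beta parameters, targeting flags, coupling-matrix sparsity and magnitudes, and the coupled update itself are illustrative assumptions, since the actual values live in the repository's config/ files:

```python
# Sketch of synthetic before/after profile generation with a sparse coupling matrix.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_def, n_att = 10, 16, 16

def_before = rng.beta(2.0, 2.0, size=(n_cases, n_def))            # "before" defender profiles
targeted = rng.random(n_def) < 0.6                                 # targeted_by_framework flags (assumed)
uplift = np.where(targeted, rng.uniform(0.10, 0.30, n_def),        # uplift for targeted metrics
                  rng.uniform(-0.02, 0.05, n_def))                 # small drift otherwise
def_after = np.clip(def_before + uplift, 0.0, 1.0)

att_before = rng.beta(2.0, 2.0, size=(n_cases, n_att))             # "before" attacker profiles
C = np.zeros((n_att, n_def))
mask = rng.random((n_att, n_def)) < 0.10                           # sparse nonzero pattern
C[mask] = rng.uniform(0.05, 0.20, mask.sum())                      # coupling coefficients
suppression = (def_after - def_before) @ C.T                       # assumed coupled attacker update
att_after = np.clip(att_before - suppression + rng.normal(0, 0.01, att_before.shape), 0.0, 1.0)

print(def_after.mean(), att_after.mean())
```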
Output files.
- RNG seed and hyperparameters
- Nonzero pattern and values of
- SHA-256 hashes of all output files
- QC metrics: mean Defender, Attacker, Readiness with 95% confidence intervals; fraction of unchanged attacker metrics per case; median decrease for linked metrics; and verification that attacker mean does not collapse to zero.
Visualizations.
- Coupling heatmap (Figure A1): 16×16 heatmap of showing nonzero entries (attacker rows × defender columns). Darker cells denote stronger suppression coefficients.
- Before/after ridges (Figure A2): Density distributions for selected attacker metrics (, DEE, AR as linked; RE, CB as controls) across C cases, comparing before vs. after profiles. Linked metrics shift modestly downward; controls remain stable.
- Bipartite linkage graph (Figure A3): Bipartite graph with attacker metrics (left) and defender metrics (right); edges are drawn for nonzero coupling entries, with widths proportional to the corresponding coefficients. This visualizes the structural prior behind the calibration.




Appendix B.6. Optional Fuzzy Robustness (Not Used for Tables 4–5)
Scope.
Specification
Appendix B.7. Pure Nash Equilibrium Verification
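As a minimal sketch, pure Nash equilibria of a bimatrix game can be verified by exhaustive best-response enumeration; the 2×2 matrices below are illustrative placeholders for the paper's payoff matrices:

```python
# Pure Nash equilibrium check: keep strategy pairs where neither player
# can improve its payoff by deviating unilaterally.
import numpy as np

A = np.array([[3, 1], [0, 2]])   # attacker payoffs (row player)
D = np.array([[2, 1], [0, 3]])   # defender payoffs (column player)

pne = [
    (i, j)
    for i in range(A.shape[0])
    for j in range(A.shape[1])
    if A[i, j] >= A[:, j].max() and D[i, j] >= D[i, :].max()
]
print(pne)   # [(0, 0), (1, 1)] for these illustrative matrices
```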
Appendix B.8. Equilibrium Computation Details
Sign convention.
Main results (non-zero-sum).
Zero-sum variant.
Non-degeneracy and ε-perturbation.
Appendix B.9. Zero-sum Sensitivity Variant (A,-D)

| MNE | Attacker Strategy | Defender Strategy |
|---|---|---|
| MNE 1 | Command and Control (0.5714) | Model (0.9512) |
| (mixed) | Impact (0.4286) | Isolate (0.0488) |
| MNE 2 | Impact (1.0000) | Model (1.0000) |
| (pure) | | |
| MNE 3 | Discovery (0.2000) | Isolate (0.7857) |
| (mixed) | Command and Control (0.8000) | Deceive (0.2143) |
| MNE 4 | Command and Control (1.0000) | Isolate (1.0000) |
| (pure) | | |
| MNE 5 | Discovery (0.1111) | Isolate (0.0769) |
| (mixed) | Collection (0.8889) | Deceive (0.9231) |



| Matrix | n | λmax | CI | CR | GCI | Koczkodaj |
|---|---|---|---|---|---|---|
| Attacker | 16 | 16.4528 | 0.0302 | 0.0190 | 0.0666 | 0.8750 |
| Defender | 16 | 16.3157 | 0.0210 | 0.0132 | 0.0477 | 0.5000 |


| ATT&CK Tactic | Model | Harden | Detect | Isolate | Deceive | Evict |
|---|---|---|---|---|---|---|
| Collection | 31 | 21 | 28 | 15 | 18 | 18 |
| Command and Control | 41 | 15 | 41 | 15 | 15 | 41 |
| Credential Access | 58 | 52 | 58 | 43 | 49 | 49 |
| Defense Evasion | 47 | 42 | 45 | 33 | 33 | 40 |
| Discovery | 27 | 18 | 24 | 13 | 15 | 18 |
| Execution | 30 | 22 | 27 | 18 | 19 | 21 |
| Exfiltration | 23 | 15 | 20 | 12 | 13 | 16 |
| Initial Access | 35 | 26 | 32 | 21 | 23 | 26 |
| Lateral Movement | 36 | 28 | 33 | 22 | 24 | 27 |
| Persistence | 38 | 29 | 35 | 24 | 26 | 28 |
| Privilege Escalation | 39 | 30 | 36 | 25 | 27 | 29 |
| Reconnaissance | 0 | 0 | 0 | 0 | 0 | 0 |
| Resource Development | 0 | 0 | 0 | 0 | 0 | 0 |
| Impact | 29 | 22 | 26 | 18 | 19 | 21 |
| Total | 371 | 274 | 360 | 253 | 228 | 288 |
| ATT&CK Tactic | Frequency |
|---|---|
| Collection | 8 |
| Command and Control | 9 |
| Credential Access | 10 |
| Defense Evasion | 10 |
| Discovery | 8 |
| Execution | 9 |
| Exfiltration | 7 |
| Initial Access | 9 |
| Lateral Movement | 9 |
| Persistence | 9 |
| Privilege Escalation | 9 |
| Reconnaissance | 0 |
| Resource Development | 0 |
| Impact | 8 |
| Total | 104 |
| D3FEND Control Family | Frequency |
|---|---|
| Model | 371 |
| Harden | 274 |
| Detect | 360 |
| Isolate | 253 |
| Deceive | 228 |
| Evict | 288 |
| Total | 1,774 |
Appendix B.10. Attacker Utility Metrics and Scoring Preferences (Full Details)
| Metric | Description | Score |
|---|---|---|
| Attack Success Rate (ASR) | Attack success rate is nearly nonexistent | 0 |
| Attacks are occasionally successful | 0.1–0.3 | |
| Attacks are successful about half of the time | 0.4–0.6 | |
| Attacks are usually successful | 0.7–0.9 | |
| Attacks are always successful | 1 | |
| Resource Efficiency (RE) | Attacks require considerable resources with low payoff | 0 |
| Attacks require significant resources but have a moderate payoff | 0.1–0.3 | |
| Attacks are somewhat resource efficient | 0.4–0.6 | |
| Attacks are quite resource efficient | 0.7–0.9 | |
| Attacks are exceptionally resource efficient | 1 | |
| Stealthiness (ST) | Attacks are always detected and attributed | 0 |
| Attacks are usually detected and often attributed | 0.1–0.3 | |
| Attacks are sometimes detected and occasionally attributed | 0.4–0.6 | |
| Attacks are seldom detected and rarely attributed | 0.7–0.9 | |
| Attacks are never detected nor attributed | 1 | |
| Data Exfiltration Effectiveness (DEE) | Data exfiltration attempts always fail | 0 |
| Data exfiltration attempts succeed only occasionally | 0.1–0.3 | |
| Data exfiltration attempts often succeed | 0.4–0.6 | |
| Data exfiltration attempts usually succeed | 0.7–0.9 | |
| Data exfiltration attempts always succeed | 1 | |
| Time-to-Exploit (TTE) | Vulnerabilities are never successfully exploited before patching | 0 |
| Vulnerabilities are exploited before patching only occasionally | 0.1–0.3 | |
| Vulnerabilities are often exploited before patching | 0.4–0.6 | |
| Vulnerabilities are usually exploited before patching | 0.7–0.9 | |
| Vulnerabilities are always exploited before patching | 1 | |
| Evasion of Countermeasures (EC) | Countermeasures always successfully thwart attacks | 0 |
| Countermeasures often successfully thwart attacks | 0.1–0.3 | |
| Countermeasures sometimes fail to thwart attacks | 0.4–0.6 | |
| Countermeasures often fail to thwart attacks | 0.7–0.9 | |
| Countermeasures never successfully thwart attacks | 1 | |
| Attribution Resistance (AR) | The attacker is always accurately identified | 0 |
| The attacker is often accurately identified | 0.1–0.3 | |
| The attacker is sometimes accurately identified | 0.4–0.6 | |
| The attacker is seldom accurately identified | 0.7–0.9 | |
| The attacker is never accurately identified | 1 | |
| Reusability of Attack Techniques (RT) | Attack techniques are always one-off, never reusable | 0 |
| Attack techniques are occasionally reusable | 0.1–0.3 | |
| Attack techniques are often reusable | 0.4–0.6 | |
| Attack techniques are usually reusable | 0.7–0.9 | |
| Attack techniques are always reusable | 1 | |
| Impact of Attacks (IA) | Attacks cause no notable disruption or loss | 0 |
| Attacks cause minor disruption or loss | 0.1–0.3 | |
| Attacks cause moderate disruption or loss | 0.4–0.6 | |
| Attacks cause major disruption or loss | 0.7–0.9 | |
| Attacks cause catastrophic disruption or loss | 1 | |
| Persistence (P) | The attacker cannot maintain control over compromised systems | 0 |
| The attacker occasionally maintains control over compromised systems | 0.1–0.3 | |
| The attacker often maintains control over compromised systems | 0.4–0.6 | |
| The attacker usually maintains control over compromised systems | 0.7–0.9 | |
| The attacker always maintains control over compromised systems | 1 | |
| Adaptability (AD) | The attacker is unable to adjust strategies in response to changing defenses | 0 |
| The attacker occasionally adjusts strategies in response to changing defenses | 0.1–0.3 | |
| The attacker often adjusts strategies in response to changing defenses | 0.4–0.6 | |
| The attacker usually adjusts strategies in response to changing defenses | 0.7–0.9 | |
| The attacker always adjusts strategies in response to changing defenses | 1 | |
| Deniability (DN) | The attacker cannot deny involvement in attacks | 0 |
| The attacker can occasionally deny involvement in attacks | 0.1–0.3 | |
| The attacker can often deny involvement in attacks | 0.4–0.6 | |
| The attacker can usually deny involvement in attacks | 0.7–0.9 | |
| The attacker can always deny involvement in attacks | 1 | |
| Longevity (LG) | The attacker’s operations are quickly disrupted | 0 |
| The attacker’s operations are often disrupted | 0.1–0.3 | |
| The attacker’s operations are occasionally disrupted | 0.4–0.6 | |
| The attacker’s operations are rarely disrupted | 0.7–0.9 | |
| The attacker’s operations are never disrupted | 1 | |
| Collaboration (CB) | The attacker never collaborates with others | 0 |
| The attacker occasionally collaborates with others | 0.1–0.3 | |
| The attacker often collaborates with others | 0.4–0.6 | |
| The attacker usually collaborates with others | 0.7–0.9 | |
| The attacker always collaborates with others | 1 | |
| Financial Gain (FG) | The attacker never profits from attacks | 0 |
| The attacker occasionally profits from attacks | 0.1–0.3 | |
| The attacker often profits from attacks | 0.4–0.6 | |
| The attacker usually profits from attacks | 0.7–0.9 | |
| The attacker always profits from attacks | 1 | |
| Reputation and Prestige (RP) | The attacker gains no reputation or prestige from attacks | 0 |
| The attacker gains little reputation or prestige from attacks | 0.1–0.3 | |
| The attacker gains some reputation or prestige from attacks | 0.4–0.6 | |
| The attacker gains considerable reputation or prestige from attacks | 0.7–0.9 | |
| The attacker’s reputation or prestige is greatly enhanced by each attack | 1 |
Appendix B.11. Defender Utility Metrics and Scoring Preferences (Full Details)
| Metric | Description | Score |
|---|---|---|
| Logging and Audit Trail Capabilities (L) | No logging or audit trail capabilities | 0 |
| Minimal or ineffective logging and audit trail capabilities | 0.1–0.3 | |
| Moderate logging and audit trail capabilities | 0.4–0.6 | |
| Robust logging and audit trail capabilities with some limitations | 0.7–0.9 | |
| Comprehensive and highly effective logging and audit trail capabilities | 1 | |
| Integrity and Preservation of Digital Evidence (I) | Complete loss of all digital evidence, including backups | 0 |
| Severe damage or compromised backups with limited recoverability | 0.1–0.3 | |
| Partial loss of digital evidence, with some recoverable data | 0.4–0.6 | |
| Reasonable integrity and preservation of digital evidence, with recoverable backups | 0.7–0.9 | |
| Full integrity and preservation of all digital evidence, including secure and accessible backups | 1 | |
| Documentation and Compliance with Digital Forensic Standards (D) | No documentation or non-compliance with digital forensic standards | 0 |
| Incomplete or inadequate documentation and limited adherence to digital forensic standards | 0.1–0.3 | |
| Basic documentation and partial compliance with digital forensic standards | 0.4–0.6 | |
| Well-documented processes and good adherence to digital forensic standards | 0.7–0.9 | |
| Comprehensive documentation and strict compliance with recognized digital forensic standards | 1 | |
| Volatile Data Capture Capabilities (VDCC) | No volatile data capture capabilities | 0 |
| Limited or unreliable volatile data capture capabilities | 0.1–0.3 | |
| Moderate volatile data capture capabilities | 0.4–0.6 | |
| Effective volatile data capture capabilities with some limitations | 0.7–0.9 | |
| Robust and reliable volatile data capture capabilities | 1 | |
| Encryption and Decryption Capabilities (E) | No encryption or decryption capabilities | 0 |
| Weak or limited encryption and decryption capabilities | 0.1–0.3 | |
| Moderate encryption and decryption capabilities | 0.4–0.6 | |
| Strong encryption and decryption capabilities with some limitations | 0.7–0.9 | |
| Highly secure encryption and decryption capabilities | 1 | |
| Incident Response Preparedness (IR) | No incident response plan or team in place | 0 |
| Initial incident response plan, not regularly tested or updated, with limited team capability | 0.1–0.3 | |
| Developed incident response plan, periodically tested, with trained team | 0.4–0.6 | |
| Comprehensive incident response plan, regularly tested and updated, with a well-coordinated team | 0.7–0.9 | |
| Advanced incident response plan, continuously tested and optimized, with a dedicated, experienced team | 1 | |
| Data Recovery Capabilities (DR) | No data recovery processes or tools in place | 0 |
| Basic data recovery tools, with limited effectiveness | 0.1–0.3 | |
| Advanced data recovery tools, with some limitations in terms of capabilities | 0.4–0.6 | |
| Sophisticated data recovery tools, with high success rates | 0.7–0.9 | |
| Comprehensive data recovery tools and processes, with excellent success rates | 1 | |
| Network Forensics Capabilities (NF) | No network forensic capabilities | 0 |
| Basic network forensic capabilities, limited to capturing packets or logs | 0.1–0.3 | |
| Developed network forensic capabilities, with ability to analyze traffic and detect anomalies | 0.4–0.6 | |
| Advanced network forensic capabilities, with proactive threat detection | 0.7–0.9 | |
| Comprehensive network forensic capabilities, with full spectrum threat detection and automated responses | 1 | |
| Staff Training and Expertise (STd) | No trained staff or expertise in digital forensics | 0 |
| Few staff members with basic training in digital forensics | 0.1–0.3 | |
| Several staff members with intermediate-level training, some with certifications | 0.4–0.6 | |
| Most staff members with advanced-level training, many with certifications | 0.7–0.9 | |
| All staff members are experts in digital forensics, with relevant certifications | 1 | |
| Legal & Regulatory Compliance (LR) | Non-compliance with applicable legal and regulatory requirements | 0 |
| Partial compliance with significant shortcomings | 0.1–0.3 | |
| Compliance with most requirements, some minor issues | 0.4–0.6 | |
| High compliance with only minor issues | 0.7–0.9 | |
| Full compliance with all relevant legal and regulatory requirements | 1 | |
| Accuracy (A) | No consistency in results, many errors and inaccuracies in digital forensic analysis | 0 |
| Frequent errors in analysis, high level of inaccuracy | 0.1–0.3 | |
| Some inaccuracies in results, needs further improvement | 0.4–0.6 | |
| High level of accuracy, few inconsistencies or errors | 0.7–0.9 | |
| Extremely accurate, consistent results with virtually no errors | 1 | |
| Completeness (C) | Significant data overlooked, very incomplete analysis | 0 |
| Some relevant data collected, but analysis remains substantially incomplete | 0.1–0.3 | |
| Most of the relevant data collected and analyzed, but some gaps remain | 0.4–0.6 | |
| High degree of completeness in data collection and analysis, minor gaps | 0.7–0.9 | |
| Comprehensive data collection and analysis, virtually no information overlooked | 1 | |
| Timeliness (T) | Extensive delays in digital forensic investigation process, no urgency | 0 |
| Frequent delays, slow response time | 0.1–0.3 | |
| Reasonable response time, occasional delays | 0.4–0.6 | |
| Quick response time, infrequent delays | 0.7–0.9 | |
| Immediate response, efficient process, no delays | 1 | |
| Reliability (R) | Unreliable techniques, inconsistent and unrepeatable results | 0 |
| Some reliability in techniques, but results are often inconsistent | 0.1–0.3 | |
| Mostly reliable techniques, occasional inconsistencies in results | 0.4–0.6 | |
| High reliability in techniques, few inconsistencies | 0.7–0.9 | |
| Highly reliable and consistent techniques, results are dependable and repeatable | 1 | |
| Validity (V) | No adherence to standards, methods not legally or scientifically acceptable | 0 |
| Minimal adherence to standards, many methods not acceptable | 0.1–0.3 | |
| Moderate adherence to standards, some methods not acceptable | 0.4–0.6 | |
| High adherence to standards, majority of methods are acceptable | 0.7–0.9 | |
| Strict adherence to standards, all methods used are legally and scientifically acceptable | 1 | |
| Preservation (Pd) | No procedures in place for evidence preservation, evidence frequently damaged or lost | 0 |
| Minimal preservation procedures, evidence sometimes damaged or lost | 0.1–0.3 | |
| Moderate preservation procedures, occasional evidence damage or loss | 0.4–0.6 | |
| Robust preservation procedures, rare instances of evidence damage or loss | 0.7–0.9 | |
| Comprehensive preservation procedures, virtually no damage or loss of evidence | 1 |
References
- Chen, P.; Desmet, L.; Huygens, C. A study on advanced persistent threats. In Proceedings of Communications and Multimedia Security: 15th IFIP TC 6/TC 11 International Conference (CMS 2014), Aveiro, Portugal, 25–26 September 2014; Springer, 2014; pp. 63–72.
- Scott, J.S.R. Advanced Persistent Threats: Recognizing the Danger and Arming Your Organization. IT Professional 2015, 17. [Google Scholar]
- Rowlingson, R. A ten step process for forensic readiness. International Journal of Digital Evidence 2004, 2, 1–28. [Google Scholar]
- IBM. Cost of a Data Breach Report 2025: The AI Oversight Gap, 2025. Based on IBM analysis of research data independently compiled by Ponemon Institute.
- Bonderud, D. Cost of a data breach 2025: Financial industry, 2025.
- Johnson, R. 60 percent of small companies close within 6 months of being hacked, 2019.
- Baker, P. The SolarWinds hack timeline: Who knew what, and when?, 2021.
- Batool, A.; Zowghi, D.; Bano, M. AI governance: a systematic literature review. AI and Ethics, 2025; 1–15. [Google Scholar]
- Wrightson, T. Advanced persistent threat hacking: the art and science of hacking any organization; McGraw-Hill Education Group, 2014.
- Årnes, A. Digital forensics; John Wiley & Sons, 2017.
- Griffith, S.B. Sun Tzu: The art of war; Vol. 39, Oxford University Press London, 1963.
- Myerson, R.B. Game theory; Harvard university press, 2013.
- Belton, V.; Stewart, T. Multiple criteria decision analysis: an integrated approach; Springer Science & Business Media, 2002.
- Lye, K.w.; Wing, J.M. Game strategies in network security. International Journal of Information Security 2005, 4, 71–86. [Google Scholar] [CrossRef]
- Roy, S.; Ellis, C.; Shiva, S.; Dasgupta, D.; Shandilya, V.; Wu, Q. A survey of game theory as applied to network security. In Proceedings of the 2010 43rd Hawaii International Conference on System Sciences. IEEE; 2010; pp. 1–10. [Google Scholar]
- Zhu, Q.; Basar, T. Game-theoretic methods for robustness, security, and resilience of cyberphysical control systems: games-in-games principle for optimal cross-layer resilient control systems. IEEE Control Systems Magazine 2015, 35, 46–65. [Google Scholar]
- Kent, K.; Chevalier, S.; Grance, T. Guide to Integrating Forensic Techniques into Incident Response; NIST Special Publication 800-86, 2006.
- Alpcan, T.; Başar, T. Network security: A decision and game-theoretic approach. Cambridge University Press 2010. [Google Scholar]
- Casey, E. Digital evidence and computer crime: Forensic science, computers, and the internet; Academic press, 2011.
- Manshaei, M.H.; Zhu, Q.; Alpcan, T.; Bacşar, T.; Hubaux, J.P. Game theory meets network security and privacy. Acm Computing Surveys (Csur) 2013, 45, 1–39. [Google Scholar] [CrossRef]
- Nisioti, A.; Loukas, G.; Rass, S.; Panaousis, E. Game-theoretic decision support for cyber forensic investigations. Sensors 2021, 21, 5300. [Google Scholar] [CrossRef] [PubMed]
- Hasanabadi, S.S.; Lashkari, A.H.; Ghorbani, A.A. A game-theoretic defensive approach for forensic investigators against rootkits. Forensic Science International: Digital Investigation 2020, 33, 200909. [Google Scholar] [CrossRef]
- Karabiyik, U.; Karabiyik, T. A game theoretic approach for digital forensic tool selection. Mathematics 2020, 8, 774. [Google Scholar] [CrossRef]
- Hasanabadi, S.S.; Lashkari, A.H.; Ghorbani, A.A. A memory-based game-theoretic defensive approach for digital forensic investigators. Forensic Science International: Digital Investigation 2021, 38, 301214. [Google Scholar] [CrossRef]
- Caporusso, N.; Chea, S.; Abukhaled, R. A game-theoretical model of ransomware. In Proceedings of the Advances in Human Factors in Cybersecurity: Proceedings of the AHFE 2018 International Conference on Human Factors in Cybersecurity, July 21-25, 2018, Loews Sapphire Falls Resort at Universal Studios, Orlando, Florida, USA 9. Springer, 2018, pp. 69–78.
- Kebande, V.R.; Venter, H.S. Novel digital forensic readiness technique in the cloud environment. Australian Journal of Forensic Sciences 2018, 50, 552–591. [Google Scholar] [CrossRef]
- Kebande, V.R.; Karie, N.M.; Choo, K.R.; Alawadi, S. Digital forensic readiness intelligence crime repository. Security and Privacy 2021, 4, e151. [Google Scholar] [CrossRef]
- Englbrecht, L.; Meier, S.; Pernul, G. Towards a capability maturity model for digital forensic readiness. Wireless Networks 2020, 26, 4895–4907. [Google Scholar] [CrossRef]
- Reddy, K.; Venter, H.S. The architecture of a digital forensic readiness management system. Computers & security 2013, 32, 73–89. [Google Scholar]
- Grobler, C.P.; Louwrens, C. Digital forensic readiness as a component of information security best practice. In Proceedings of the IFIP International Information Security Conference. Springer; 2007; pp. 13–24. [Google Scholar]
- Lakhdhar, Y.; Rekhis, S.; Sabir, E. A Game Theoretic Approach For Deploying Forensic Ready Systems. In Proceedings of the 2020 International Conference on Software, Telecommunications and Computer Networks (SoftCOM). IEEE; 2020; pp. 1–6. [Google Scholar]
- Elyas, M.; Ahmad, A.; Maynard, S.B.; Lonie, A. Digital forensic readiness: Expert perspectives on a theoretical framework. Computers & Security 2015, 52, 70–89. [Google Scholar] [CrossRef]
- Baiquni, I.Z.; Amiruddin, A. A case study of digital forensic readiness level measurement using DiFRI model. In Proceedings of the 2022 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS). IEEE. 2022; pp. 184–189. [Google Scholar]
- Rawindaran, N.; Jayal, A.; Prakash, E. Cybersecurity Framework: Addressing Resiliency in Welsh SMEs for Digital Transformation and Industry 5.0. Journal of Cybersecurity and Privacy 2025, 5, 17. [Google Scholar] [CrossRef]
- Trenwith, P.M.; Venter, H.S. Digital forensic readiness in the cloud. In Proceedings of the 2013 Information Security for South Africa. IEEE; 2013; pp. 1–5. [Google Scholar]
- Monteiro, D.; Yu, Y.; Zisman, A.; Nuseibeh, B. Adaptive Observability for Forensic-Ready Microservice Systems. IEEE Transactions on Services Computing 2023. [Google Scholar] [CrossRef]
- Xiong, W.; Legrand, E.; Åberg, O.; Lagerström, R. Cyber security threat modeling based on the MITRE Enterprise ATT&CK Matrix. Software and Systems Modeling 2022, 21, 157–177. [Google Scholar]
- Wang, J.; Neil, M. A Bayesian-network-based cybersecurity adversarial risk analysis framework with numerical examples. arXiv preprint arXiv:2106.00471, 2021. [Google Scholar]
- Usman, N.; Usman, S.; Khan, F.; Jan, M.A.; Sajid, A.; Alazab, M.; Watters, P. Intelligent dynamic malware detection using machine learning in IP reputation for forensics data analytics. Future Generation Computer Systems 2021, 118, 124–141. [Google Scholar] [CrossRef]
- Li, M.; Lal, C.; Conti, M.; Hu, D. LEChain: A blockchain-based lawful evidence management scheme for digital forensics. Future Generation Computer Systems 2021, 115, 406–420. [Google Scholar] [CrossRef]
- Soltani, S.; Seno, S.A.H. Detecting the software usage on a compromised system: A triage solution for digital forensics. Forensic Science International: Digital Investigation 2023, 44, 301484. [Google Scholar] [CrossRef]
- Rother, C.; Chen, B. Reversing File Access Control Using Disk Forensics on Low-Level Flash Memory. Journal of Cybersecurity and Privacy 2024, 4, 805–822. [Google Scholar] [CrossRef]
- Nikkel, B. Registration Data Access Protocol (RDAP) for digital forensic investigators. Digital Investigation 2017, 22, 133–141. [Google Scholar] [CrossRef]
- Nikkel, B. Fintech forensics: Criminal investigation and digital evidence in financial technologies. Forensic Science International: Digital Investigation 2020, 33, 200908. [Google Scholar] [CrossRef]
- Seo, S.; Seok, B.; Lee, C. Digital forensic investigation framework for the metaverse. The Journal of Supercomputing 2023, 79, 9467–9485. [Google Scholar] [CrossRef]
- Malhotra, S. Digital forensics meets ai: A game-changer for the 4th industrial revolution. In Artificial Intelligence and Blockchain in Digital Forensics; River Publishers, 2023; pp. 1–20.
- Tok, Y.C.; Chattopadhyay, S. Identifying threats, cybercrime and digital forensic opportunities in Smart City Infrastructure via threat modeling. Forensic Science International: Digital Investigation 2023, 45, 301540. [Google Scholar] [CrossRef]
- Han, K.; Choi, J.H.; Choi, Y.; Lee, G.M.; Whinston, A.B. Security defense against long-term and stealthy cyberattacks. Decision Support Systems 2023, 166, 113912. [Google Scholar] [CrossRef]
- Chandra, A.; Snowe, M.J. A taxonomy of cybercrime: Theory and design. International Journal of Accounting Information Systems 2020, 38, 100467. [Google Scholar] [CrossRef]
- Casey, E.; Barnum, S.; Griffith, R.; Snyder, J.; van Beek, H.; Nelson, A. Advancing coordinated cyber-investigations and tool interoperability using a community developed specification language. Digital investigation 2017, 22, 14–45. [Google Scholar] [CrossRef]
- Boyd, S.; Vandenberghe, L. Convex optimization; Cambridge university press, 2004.
- Knight, V.; Campbell, J. Nashpy: A Python library for the computation of Nash equilibria. Journal of Open Source Software 2018, 3, 904. [Google Scholar] [CrossRef]
- Zopounidis, C.; Pardalos, P.M. Handbook of multicriteria analysis; Vol. 103, Springer Science & Business Media, 2010.
- BIPM; IFCC; ISO; IUPAP; OIML. Evaluation of Measurement Data: Supplement 1 to the Guide to the Expression of Uncertainty in Measurement, 2008. [Google Scholar]
- Saaty, T.L. Analytic hierarchy process. In Encyclopedia of operations research and management science; Springer, 2013; pp. 52–64.
- The MITRE Corporation. MITRE ATT&CK STIX Data, 2024. Structured Threat Information Expression (STIX 2.1) datasets for Enterprise, Mobile, and ICS ATT&CK.
















| Dimension | Our Approach | Nisioti et al. [21] | Karabiyik et al. [23] | Lakhdhar et al. [31] | Wang et al. [38] | Monteiro et al. [36] |
|---|---|---|---|---|---|---|
| Game Model | Non-zero-sum Bimatrix (MNE/PNE) | Bayesian (BNE) | 2×2 Normal-Form | Non-cooperative | ARA (Influence Diagram) | Bayesian (BNE) |
| ATT&CK | ✓Explicit (14 Tactics) | ✓Explicit | × | × | × | × |
| D3FEND | ✓Explicit (6 Families) | × | × | × | × | × |
| Knowledge Coupling | ✓ATT&CK↔D3FEND | Δ ATT&CK + CVSS | Δ Empirical (ForGe) | Δ Internal (CSLib) | Δ Probabilistic (HBN) | Δ CVSS + OpenTelemetry |
| Weighting Method | ✓AHP (10 Experts) | Δ CVSS + SME | Δ Rule-based | Δ Parametric () | Δ Implicit | Δ Scalar Parameters |
| Quantitative Utilities | ✓32 AHP Metrics | ✓Payoff Functions | ✓Payoff Matrix | ✓Parametric | ✓Utility Nodes | ✓Closed-Form |
| Equilibrium | ✓PNE & MNE | ✓BNE | ✓Pure/Mixed NE | ✓Pure/Mixed NE | × ARA | ✓BNE |
| DFR Focus | ✓DFR | Δ Post-mortem | Δ Investigation Efficiency | ✓Forensic Readiness | × Cyber Risk | ✓Forensic Readiness |
| SME/SMB | ✓Explicitly Targeted | Δ Domain-Agnostic | Δ Potential | Δ Applicable | Δ Feasible | Δ Implicit |
| Standardization | Δ ATT&CK, D3FEND, STIX | Δ ATT&CK, STIX, CVSS | Δ Open-Source | Δ CVE/US-CERT | Δ Self-Contained | Δ CVSS, OpenTelemetry |
| Reproducibility | ✓Code, Data, Seeds | Δ Public Inputs, No Code | Δ Code on Request | Δ No Public Code/Data | Δ No Code, Commercial | ✓Benchmark, Repo |
| Key Differentiator | Integrated DFR | Bayesian Anti-Forensics | Tool Selection | Provability Taxonomy | Adversarial Risk Analysis | Microservice Observability |
Legend: ✓ = Fully Addressed; Δ = Partially Addressed; × = Not Addressed. Abbreviations: AHP (Analytic Hierarchy Process), MNE (Mixed Nash Equilibrium), PNE (Pure Nash Equilibrium), BNE (Bayesian Nash Equilibrium), ARA (Adversarial Risk Analysis), HBN (Hybrid Bayesian Network). Our framework uniquely integrates ATT&CK–D3FEND knowledge with AHP-weighted utilities and explicit SME/SMB targeting, a combination not found in prior work.
| Symbol | Description |
|---|---|
| Game Structure | |
| Attacker strategy set: , ATT&CK tactics | |
| Defender strategy set: , D3FEND control families | |
| Attacker payoff matrix: , entry | |
| Defender payoff matrix: , entry | |
| Strategies | |
| Attacker pure strategy (ATT&CK tactic) | |
| Defender pure strategy (D3FEND control family) | |
| Attacker mixed strategy: , probability vector over | |
| Defender mixed strategy: , probability vector over | |
| , | Nash equilibrium mixed strategies |
| Utilities and Metrics | |
| Normalized attacker utility: | |
| Normalized defender utility: | |
| AHP weight for attacker metric i: , | |
| AHP weight for defender metric j: , | |
| Attacker DFR metric i value: | |
| Defender DFR metric j value: | |
| Notation Disambiguation | |
| vs. | is a scalar (single entry); is the entire matrix |
| x, y vs. , | x, y are elements/indices; , are mixed-strategy vectors |
| File No. | L | I | D | VDCC | E | IR | DR | NF | STd | LR | A | C | T | R | V | Pd |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| case1 | 0.5 | 0.6 | 0.3 | 0.4 | 0.5 | 0.6 | 0.2 | 0.5 | 0.2 | 0.6 | 0.7 | 0.2 | 0.6 | 0.1 | 0.2 | 0.4 |
| case2 | 0.1 | 0.2 | 0.7 | 0.6 | 0.1 | 0.2 | 0.6 | 0.1 | 0.6 | 0.4 | 0.2 | 0.6 | 0.2 | 0.1 | 0.6 | 0.5 |
| case3 | 0.6 | 0.1 | 0.6 | 0.5 | 0.6 | 0.4 | 0.2 | 0.2 | 0.6 | 0.1 | 0.6 | 0.1 | 0.2 | 0.6 | 0.1 | 0.6 |
| case4 | 0.7 | 0.2 | 0.2 | 0.7 | 0.2 | 0.6 | 0.4 | 0.6 | 0.2 | 0.1 | 0.2 | 0.6 | 0.1 | 0.2 | 0.6 | 0.2 |
| case5 | 0.7 | 0.6 | 0.3 | 0.5 | 0.6 | 0.7 | 0.4 | 0.2 | 0.6 | 0.3 | 0.6 | 0.2 | 0.1 | 0.6 | 0.2 | 0.3 |
| case6 | 0.5 | 0.7 | 0.5 | 0.7 | 0.5 | 0.4 | 0.6 | 0.6 | 0.3 | 0.2 | 0.6 | 0.1 | 0.6 | 0.2 | 0.4 | 0.6 |
| case7 | 0.4 | 0.6 | 0.3 | 0.6 | 0.7 | 0.6 | 0.2 | 0.2 | 0.7 | 0.6 | 0.2 | 0.7 | 0.6 | 0.2 | 0.5 | 0.4 |
| case8 | 0.1 | 0.2 | 0.6 | 0.5 | 0.6 | 0.2 | 0.5 | 0.4 | 0.2 | 0.6 | 0.1 | 0.2 | 0.6 | 0.7 | 0.6 | 0.2 |
| case9 | 0.6 | 0.3 | 0.2 | 0.6 | 0.2 | 0.3 | 0.6 | 0.6 | 0.4 | 0.2 | 0.6 | 0.3 | 0.2 | 0.6 | 0.2 | 0.5 |
| case10 | 0.5 | 0.6 | 0.3 | 0.2 | 0.6 | 0.2 | 0.7 | 0.2 | 0.5 | 0.6 | 0.2 | 0.4 | 0.2 | 0.6 | 0.5 | 0.2 |
| File No. | L | I | D | VDCC | E | IR | DR | NF | STd | LR | A | C | T | R | V | Pd |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| case1 | 0.8 | 0.8 | 0.7 | 0.9 | 0.8 | 0.8 | 0.7 | 0.9 | 0.7 | 0.6 | 0.8 | 0.7 | 0.8 | 0.7 | 0.7 | 0.7 |
| case2 | 0.9 | 0.8 | 0.9 | 0.8 | 0.7 | 0.9 | 0.7 | 0.8 | 0.6 | 0.7 | 0.7 | 0.8 | 0.7 | 0.6 | 0.6 | 0.8 |
| case3 | 0.8 | 0.7 | 0.8 | 0.9 | 0.8 | 0.9 | 0.8 | 0.9 | 0.7 | 0.8 | 0.8 | 0.7 | 0.7 | 0.7 | 0.8 | 0.7 |
| case4 | 0.8 | 0.9 | 0.9 | 0.8 | 0.7 | 0.9 | 0.9 | 0.8 | 0.7 | 0.7 | 0.7 | 0.8 | 0.7 | 0.7 | 0.6 | 0.8 |
| case5 | 0.7 | 0.7 | 0.9 | 0.7 | 0.8 | 0.9 | 0.7 | 0.9 | 0.8 | 0.8 | 0.7 | 0.7 | 0.6 | 0.8 | 0.7 | 0.7 |
| case6 | 0.7 | 0.8 | 0.8 | 0.9 | 0.7 | 0.8 | 0.6 | 0.9 | 0.6 | 0.7 | 0.6 | 0.8 | 0.7 | 0.9 | 0.7 | 0.7 |
| case7 | 0.8 | 0.7 | 0.9 | 0.7 | 0.6 | 0.9 | 0.8 | 0.9 | 0.7 | 0.8 | 0.7 | 0.7 | 0.8 | 0.7 | 0.8 | 0.8 |
| case8 | 0.7 | 0.6 | 0.9 | 0.8 | 0.8 | 0.9 | 0.8 | 0.8 | 0.8 | 0.7 | 0.7 | 0.8 | 0.7 | 0.6 | 0.8 | 0.7 |
| case9 | 0.9 | 0.7 | 0.8 | 0.7 | 0.7 | 0.9 | 0.7 | 0.8 | 0.7 | 0.8 | 0.8 | 0.7 | 0.6 | 0.7 | 0.7 | 0.7 |
| case10 | 0.8 | 0.8 | 0.9 | 0.7 | 0.7 | 0.9 | 0.8 | 0.7 | 0.7 | 0.8 | 0.7 | 0.7 | 0.8 | 0.8 | 0.6 | 0.8 |
| Resources | Defenders | Attackers | Scenario | Final Value | Avg. Attacker Strategy | Avg. Defender Strategy | Avg. Readiness |
|---|---|---|---|---|---|---|---|
| 1 | 10 | 5 | a | 0.56 | 0.84 | – | 0.00 |
| 1 | 15 | 5 | b | 0.52 | 0.94 | – | 0.00 |
| 1 | 25 | 5 | c | 0.61 | 0.69 | – | 0.00 |
| 3 | 10 | 5 | d | 0.96 | 0.58 | – | 0.00 |
| 3 | 25 | 5 | f | 1.00 | 1.00 | – | 0.00 |
| 5 | 15 | 5 | h | 0.91 | 0.75 | – | 0.03 |
| Low | Medium | High | |
|---|---|---|---|
| Attack success rate | 0.25 ± 0.0038 | 0.53 ± 0.0044 | 0.75 ± 0.0038 |
| Evidence collection rate | 0.93 ± 0.0022 | 0.96 ± 0.0017 | 0.94 ± 0.0021 |
| ID | SME | SMB | Impact metrics | |||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Type | Malic. | Str. | Impact | CVSS | Type | Malic. | Str. | Impact | CVSS | Workload | Avail. | Conf. | Integ. | |
| 0 | DDoS | 0.75 | 1.12 | High | 7 | DDoS | 0.75 | 1.12 | High | 7 | 1.125 | 0.8 | 0 | 0 |
| 1 | SQLI | 0.75 | 1.12 | High | 9 | SQLI | 0.75 | 1.12 | High | 9 | 2.7 | 2.58 | 7.2 | 7.2 |
| 2 | DDoS | 0.75 | 1.12 | Med | 0 | DDoS | 0.75 | 1.12 | Med | 0 | 1.125 | 0.96 | 0 | 0 |
| 3 | SQLI | 0.75 | 1.12 | High | 9 | SQLI | 0.75 | 1.12 | High | 9 | 1.125 | 1.005 | 7.2 | 7.2 |
| 4 | DDoS | 0.75 | 1.12 | Low | 0 | DDoS | 0.75 | 1.12 | Low | 0 | 1.125 | 0.96 | 0 | 0 |
| 5 | SQLI | 0.75 | 1.12 | Med | 7 | SQLI | 0.75 | 1.12 | Med | 7 | 2.7 | 2.58 | 2.8 | 2.8 |
| ID | SME | SMB | Impact metrics | |||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Type | Malic. | Str. | Impact | CVSS | Type | Malic. | Str. | Impact | CVSS | Workload | Avail. | Conf. | Integ. | |
| 0 | SQLI | 0.49 | 0.73 | Med | 7 | SQLI | 0.49 | 0.73 | Med | 7 | 0.73 | 0.61 | 2.8 | 2.8 |
| 1 | DDoS | 0.75 | 1.12 | High | 7 | DDoS | 0.75 | 1.12 | High | 7 | 1.12 | 0.80 | 0 | 0 |
| 2 | DDoS | 0.80 | 1.21 | High | 7 | DDoS | 0.80 | 1.21 | High | 7 | 1.21 | 0.80 | 0 | 0 |
| 3 | SQLI | 0.16 | 0.24 | High | 9 | SQLI | 0.16 | 0.24 | High | 9 | 0.24 | 0.12 | 7.2 | 7.2 |
| 4 | SQLI | 0.58 | 0.87 | High | 9 | SQLI | 0.58 | 0.87 | High | 9 | 2.45 | 2.33 | 7.2 | 7.2 |
| 5 | DDoS | 0.84 | 1.26 | High | 7 | DDoS | 0.84 | 1.26 | High | 7 | 2.84 | 0.80 | 0 | 0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
