Submitted: 01 February 2026
Posted: 02 February 2026
Abstract
Keywords:
1. Introduction
1.1. Novelty Statement
1.2. Contributions
1. A comprehensive threat model that categorizes adversaries, attack surfaces, and failure modes specific to AI systems operating in Zero Trust environments, with explicit traceability to authoritative taxonomies (Section 3).
2. A lifecycle-aligned control mapping that assigns Zero Trust principles—identity, continuous verification, least privilege, micro-segmentation, and policy enforcement—to AI-specific trust subjects (data, models, pipelines, inference), not generic IT assets (Section 4).
3. A four-layer architecture with evidence specifications, comprising Data Trust, Model Supply Chain Trust, Pipeline Trust, and Inference Trust layers, each with defined controls, Policy Enforcement Points, and the specific evidence artifacts required for audit (Section 5).
4. An AI-tailored assurance evidence framework and maturity model that establishes metrics (DAI, MIS), evidence types, compliance crosswalks, and maturity levels (L0–L3) defined by AI-specific characteristics rather than generic Zero Trust capabilities (Section 6).
1.3. Compliance Scope Note
1.4. Novelty vs. Prior Work
1.5. Paper Organization
2. Background and Related Work
2.1. Zero Trust Architecture
2.2. AI System Lifecycle
2.3. AI Security and Assurance Literature
2.4. Gaps in Current Approaches
2.5. Related Work and Gap Analysis
2.6. Methodology: Constructing the Threat-to-Control Mapping
1. IDENTIFY threat T from taxonomy (NIST AI 100-2, OWASP LLM Top 10, MITRE ATLAS).
2. SELECT control C from ZT principles (NIST SP 800-207 tenets T1–T7).
3. LOCATE enforcement point PEP (Data | Model | Pipeline | Inference Trust Layer).
4. DEFINE policy inputs for PDP (identity claims, asset attributes, context signals).
5. SPECIFY evidence artifact E (signature, attestation, log entry, CBOM field).
6. MAP to compliance framework (DoD ZT pillar, NIST AI RMF function, ISO 42001 clause).
7. RECORD in evidence package (Appendix template).
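To make the procedure concrete, the sketch below walks one threat through all seven steps as a plain Python record. It is illustrative only: the class and field names are ours rather than a normative schema, and the sample values are drawn from the prompt-injection row of the threat-to-control table in Section 3 and the Inference Trust row of the compliance mapping in Section 6.3.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ThreatControlMapping:
    """One row of the threat-to-control mapping (methodology steps 1-7).
    Field names are illustrative, not a normative schema."""
    threat_id: str                 # Step 1: taxonomy IDs (NIST AI 100-2, OWASP, ATLAS)
    zt_controls: str               # Step 2: NIST SP 800-207 tenets
    pep_layer: str                 # Step 3: Data | Model | Pipeline | Inference
    pdp_inputs: list[str]          # Step 4: identity claims, asset attributes, context
    evidence_artifacts: list[str]  # Step 5: signatures, attestations, logs, CBOM fields
    compliance: dict[str, str] = field(default_factory=dict)  # Step 6

# Steps 1-6 applied to prompt injection (values taken from Sections 3 and 6.3)
row = ThreatControlMapping(
    threat_id="AML.T0051; OWASP LLM01",
    zt_controls="T3, T5, T7",
    pep_layer="Inference",
    pdp_inputs=["user identity claims", "input classification", "session context"],
    evidence_artifacts=["validation logs", "injection alerts", "tool auth logs"],
    compliance={"DoD ZT pillar": "Application & Workload",
                "NIST AI RMF": "Measure 3.2",
                "ISO 42001": "Clause 9.2"},
)

# Step 7: record the row in a machine-readable evidence package (Appendix A)
print(json.dumps(asdict(row), indent=2))
```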
3. Threat Model for AI Systems in Zero Trust Environments
3.1. Adversary Classes
3.2. Attack Surfaces Across the AI Lifecycle
3.3. Zero Trust-Relevant Failure Modes
| Threat Class | Taxonomy IDs | Primary PEP | Zero Trust Controls | Evidence Artifacts |
|---|---|---|---|---|
| External Attackers | AML.T0040; OWASP LLM01, LLM10 | Inference | Inference PEP, API auth, rate limiting, anomaly detection | Auth logs, rate metrics, anomaly alerts |
| Insider Threats | AML.T0035; AML.T0007 | All layers | Micro-segmentation, least privilege, behavioral monitoring, ABAC | Access logs, behavioral baselines, privilege reviews |
| Supply Chain Adversaries | AML.T0010; OWASP LLM03 | Model | Model signing, CBOM validation, registry PEP, artifact isolation | Signature attestations, CBOM manifests, provenance chains |
| Nation-State Actors | AML.T0000; AML.T0010 | All layers | Defense-in-depth, TEE attestation, continuous integrity, SOC integration | TEE attestations, integrity logs, threat intel correlation |
| Data Poisoning | AML.T0020; OWASP LLM04 | Data | Data lineage, ingestion PEP, source auth, integrity hashing | Provenance chains, hash logs, source attestations |
| Model Tampering | AML.T0018; OWASP LLM04 | Model + Inference | Runtime integrity monitoring, model identity binding, HSM signing | Runtime logs, signature attestations, HSM trails |
| Prompt Injection | AML.T0051; OWASP LLM01 | Inference | Input validation, output filtering, prompt sanitization, tool-call auth | Validation logs, injection alerts, tool auth logs |
4. Zero Trust Principles Applied to AI Systems
4.1. Identity for AI Components
4.2. Continuous Verification
4.3. Least Privilege for AI Workflows
4.4. Micro-Segmentation for AI Pipelines
4.5. Policy Enforcement Points for AI
5. Proposed Zero Trust Architecture for AI Systems
5.1. Architectural Overview
5.2. Model Supply Chain Trust Layer
5.2.1. Bridging CBOM and SBOM: Toward Unified AI Supply Chain Transparency
5.3. Data Trust Layer
5.4. Pipeline Trust Layer
5.5. Inference Trust Layer
5.6. Agentic AI Security Considerations
5.7. Cross-Cutting Controls
6. Assurance Evidence Framework
6.1. Assurance Objectives
6.2. Metrics and Evidence Types
6.3. Compliance Mapping
| Trust Layer | NIST 800-207 Tenets | DoD ZT Pillars / NSA Activities | NIST AI RMF / ISO 42001 | Mapping Type | Example Evidence |
|---|---|---|---|---|---|
| Data Trust | T1, T5, T7 | Data Pillar; NSA 5.4.3, 5.4.4 | Map 2.3; ISO 42001 §6.1.2, §8.4 | Direct (ZT), Partial (AI) | Lineage record, hash manifest |
| Model Supply Chain | T4, T5, T6 | App/Workload Pillar; NSA 1.9.1, 3.1.2 | Govern 1.1; ISO 42001 §7.5, §8.2 | Partial (ZT), Direct (AI) | Signature, SLSA provenance |
| Pipeline Trust | T2, T3, T6 | Network Pillar; NSA 3.2.3, 4.1.1 | Govern 1.2; ISO 42001 §8.1, §9.1 | Direct (ZT), Enabling (AI) | CI/CD logs, attestation |
| Inference Trust | T3, T5, T7 | App/Workload Pillar; NSA 3.1.1, 6.1.1 | Measure 3.2; ISO 42001 §9.2, §9.3 | Direct (ZT), Direct (AI) | PDP logs, drift telemetry |
6.4. Maturity Model
6.5. Worked Example: Evidence Package for a RAG-Based Mission Assistant
- Model Identity Binding: Does the deployed model artifact hash match the hash recorded in the signed CBOM/ML-BOM? Can the signing certificate chain be validated to a trusted root? (A minimal verification sketch follows this list.)
- Provenance Continuity: Does the SLSA/in-toto provenance attestation link the deployed artifact to a verified build pipeline? Are all intermediate transformations (fine-tuning, quantization, containerization) documented with signed attestations?
- Policy Decision Log Integrity: Are PDP/PEP decision logs cryptographically protected (signed or stored in an append-only log)? Do log entries contain sufficient context (timestamp, identity, resource, decision, rationale) for decision reconstruction?
- Attestation Verification Outcomes: Were TEE attestation quotes validated against expected measurements? Are runtime integrity check results recorded with pass/fail status and remediation actions for failures?
- Evidence Completeness: Does the package include all required artifacts per the applicable tier (Table 4)? Are optional fields documented as N/A with a rationale rather than silently omitted?
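A minimal sketch of the first check, assuming a manifest whose signature has already been verified: the Python below streams the deployed artifact through SHA-256 and compares the digest against the hash recorded in the CBOM/ML-BOM. The file names and the `model_artifact_hash` key are hypothetical; production code would also validate the manifest's certificate chain (e.g., with Sigstore tooling) before trusting the recorded value.

```python
import hashlib
import json
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the artifact through SHA-256 to avoid loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_identity(artifact: Path, manifest_path: Path) -> bool:
    """Compare the deployed artifact's digest against the hash recorded in
    the (already signature-verified) CBOM/ML-BOM manifest. The
    'model_artifact_hash' key is a hypothetical field name."""
    manifest = json.loads(manifest_path.read_text())
    expected = manifest["model_artifact_hash"].removeprefix("sha256:")
    actual = sha256_digest(artifact)
    if actual != expected:
        # In a ZT deployment, a mismatch should cause the Inference PEP to
        # deny serving and emit an integrity-failure evidence record.
        print(f"FAIL: digest {actual} != recorded {expected}")
        return False
    print("PASS: model identity binding verified")
    return True

if __name__ == "__main__":
    verify_model_identity(Path("model.safetensors"), Path("mlbom.json"))
```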
6.6. Qualitative Evaluation Approach
6.7. Evidence Artifact Tiers
7. Scenario-Based Demonstration
7.1. Scenario Description
7.2. Applying the Architecture
7.3. Expected Outcomes
8. Discussion
8.1. Benefits
8.2. Limitations
8.3. Future Work
9. Conclusions
Abbreviations
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Minimum Evidence Package Template
Appendix A.1. Machine-Readable Template (Illustrative JSON Skeleton)
Listing 1: Evidence Package JSON Schema (Illustrative)
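The published listing is not reproduced here; the skeleton below is our reconstruction from the required fields in Appendices A.2–A.6, so the key names are illustrative rather than the paper's exact schema. Angle-bracketed values are placeholders to be filled per deployment.

```json
{
  "evidence_package_version": "1.0",
  "model_identity": {
    "artifact_hash": "sha256:<hash>",
    "signing_key_id": "<key_id>",
    "signature_algorithm": "<algorithm_id>",
    "signature_value": "<base64>",
    "cbom_uri": "<uri>",
    "provenance_uri": "<uri, REQUIRED for L2+ maturity>"
  },
  "cbom": {
    "format": "CycloneDX v1.6+ JSON",
    "signing_algorithms": ["<algorithm_id per IANA registry>"],
    "hash_algorithms": ["<algorithm_id>"],
    "key_sizes_bits": ["<bits>"],
    "certificate_chain_uris": ["<uri>"],
    "pqc_migration_status": "<not-started | planning | hybrid-deployed | pqc-only>",
    "crypto_libraries": [{"name": "<name>", "version": "<version>", "cve_status": "<status>"}]
  },
  "deployment_attestations": {
    "slsa_provenance_level": "<0-4, minimum 2 for Target-level ZT>",
    "pipeline_id": "<pipeline_id>",
    "build_timestamp": "<ISO 8601>",
    "container_image_digest": "<digest>",
    "sbom_uri": "<uri>",
    "tee_attestation": null,
    "policy_as_code_approval": null
  },
  "runtime_evidence": {
    "pdp_log_uri": "<uri>",
    "log_retention_days": 90,
    "inference_pep_log_uri": "<uri>",
    "integrity_event_log_uri": "<uri, REQUIRED for L2+ maturity>",
    "anomaly_alert_log_uri": "<uri, REQUIRED for L3 maturity>"
  },
  "compliance_mapping": {
    "nist_ai_rmf_functions": ["Govern", "Map", "Measure", "Manage"],
    "dod_zt_pillars": [],
    "nsa_zig_activities": [],
    "iso_42001_clauses": [],
    "assessment_date": "<date>",
    "assessor": "<identity>",
    "next_review_date": "<date>"
  }
}
```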
Appendix A.2. Model Identity and Provenance — REQUIRED FIELDS
- Model artifact hash (SHA-256 or SHA-3): [hash] (REQUIRED)
- Signing key identifier: [key_id] (REQUIRED)
- Signature algorithm: [algorithm_id] (REQUIRED)
- Signature value: [base64] (REQUIRED)
- CBOM reference URI: [uri] (REQUIRED)
- Provenance document URI (SLSA): [uri] (REQUIRED for L2+ maturity)
- Optional: Base model identifier (if fine-tuned): [model_id]
- Optional: Training dataset reference: [dataset_id with classification]
Appendix A.3. Cryptographic Bill of Materials (CBOM) — REQUIRED FIELDS
- Format: CycloneDX v1.6+ JSON (REQUIRED)
- Signing algorithms in use: [algorithm_id per IANA registry] (REQUIRED)
- Hash algorithms in use: [algorithm_id] (REQUIRED)
- Key sizes: [bits] (REQUIRED)
- Certificate chain references: [uri] (REQUIRED)
- PQC migration status: [not-started | planning | hybrid-deployed | pqc-only] (REQUIRED)
- Cryptographic library versions: [name, version, CVE status] (REQUIRED)
- Optional: HSM binding reference, key ceremony documentation
Appendix A.4. Deployment Attestations — REQUIRED FIELDS
- SLSA provenance level: [0-4] (REQUIRED, minimum L2 for Target-level ZT)
- CI/CD pipeline identifier: [pipeline_id] (REQUIRED)
- Build timestamp: [ISO 8601] (REQUIRED)
- Container image digest: [digest] (REQUIRED)
- SBOM reference: [uri] (REQUIRED)
- Conditional: TEE attestation type and quote (REQUIRED if confidential computing deployed)
- Conditional: Policy-as-code approval record: [reference] (REQUIRED for L3 maturity)
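The REQUIRED/Conditional rules above lend themselves to mechanical checking. The sketch below, which reuses the illustrative key names from the Appendix A.1 skeleton, encodes the three conditions as a simple completeness check; it is a sketch under those naming assumptions, not a normative validator.

```python
def check_deployment_attestations(att: dict, maturity: str,
                                  confidential_computing: bool) -> list[str]:
    """Return findings for the Appendix A.4 conditional rules.
    Key names follow the illustrative skeleton in Appendix A.1."""
    findings = []
    # Minimum SLSA provenance level 2 for Target-level ZT deployments
    if att.get("slsa_provenance_level", 0) < 2:
        findings.append("SLSA provenance below L2 (required for Target-level ZT)")
    # TEE attestation is REQUIRED only when confidential computing is deployed
    if confidential_computing and not att.get("tee_attestation"):
        findings.append("Confidential computing deployed but no TEE attestation quote")
    # Policy-as-code approval record is REQUIRED at L3 maturity
    if maturity == "L3" and not att.get("policy_as_code_approval"):
        findings.append("L3 maturity claimed but no policy-as-code approval record")
    return findings

# Example: an L3 package on confidential-computing infrastructure
print(check_deployment_attestations(
    {"slsa_provenance_level": 3, "tee_attestation": {"type": "SEV-SNP"}},
    maturity="L3", confidential_computing=True))
# -> ['L3 maturity claimed but no policy-as-code approval record']
```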
Appendix A.5. Runtime Evidence (Retained Logs) — REQUIRED FIELDS
- Policy decision log location: [uri] (REQUIRED)
- Log retention period: [days, minimum 90 for compliance] (REQUIRED)
- Required fields per PDP entry: {timestamp, request_id, subject_dn, resource_id, action, decision, policy_version} (ALL REQUIRED; example entries follow this list)
- Inference PEP log location: [uri] (REQUIRED for AI workloads)
- Required fields per inference entry: {timestamp, request_id, user_identity, input_classification, decision, output_classification} (ALL REQUIRED)
- Optional: prompt_injection_score, response_time_ms, model_version_hash
- Integrity verification event log: [uri] (REQUIRED for L2+ maturity)
- Anomaly detection alert log: [uri] (REQUIRED for L3 maturity)
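For concreteness, one PDP entry and one inference PEP entry carrying exactly the required fields above (plus one optional field) might look as follows; all values are hypothetical.

```json
{
  "pdp_entry": {
    "timestamp": "2026-02-01T14:32:07Z",
    "request_id": "req-7f3a2c",
    "subject_dn": "CN=rag-assistant,OU=ai-workloads,O=example",
    "resource_id": "vector-store/mission-docs",
    "action": "read",
    "decision": "permit",
    "policy_version": "zt-policy-v42"
  },
  "inference_entry": {
    "timestamp": "2026-02-01T14:32:08Z",
    "request_id": "req-7f3a2c",
    "user_identity": "jdoe@example.mil",
    "input_classification": "CUI",
    "decision": "permit",
    "output_classification": "CUI",
    "prompt_injection_score": 0.03
  }
}
```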
Appendix A.6. Compliance Mapping
- NIST AI RMF functions addressed: [Govern | Map | Measure | Manage]
- DoD ZT pillars addressed: [list]
- NSA ZIG activities satisfied: [activity_ids]
- ISO/IEC 42001 clauses addressed: [clause_ids]
- Assessment date: [date]
- Assessor: [identity]
- Next review date: [date]
References
- National Security Commission on Artificial Intelligence. Final Report. NSCAI: Washington, DC, USA, March 2021. Available online: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf (accessed on 29 January 2026).
- Executive Office of the President. Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Federal Register 2023, Vol. 88(No. 210), 75191–75226. Available online: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (accessed on 29 January 2026).
- U.S. Government Accountability Office. Artificial Intelligence: Status of Developing and Acquiring Capabilities for Weapon Systems. GAO-22-104765, 2022. Available online: https://www.gao.gov/assets/gao-22-104765.pdf (accessed on 29 January 2026).
- Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index 2025 Annual Report; Stanford University: Stanford, CA, USA, 2025; Available online: https://hai.stanford.edu/ai-index/2025-ai-index-report (accessed on 29 January 2026).
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Sculley, D.; Holt, G.; Golovin, D.; Davydov, E.; Phillips, T.; Ebner, D.; Chaudhary, V.; Young, M.; Crespo, J.-F.; Dennison, D. Hidden Technical Debt in Machine Learning Systems. In Proceedings of the Advances in Neural Information Processing Systems 28 (NeurIPS 2015), Montreal, QC, Canada, 7–12 December 2015; pp. 2503–2511. [Google Scholar]
- Rose, S.; Borchert, O.; Mitchell, S.; Connelly, S. Zero Trust Architecture; NIST Special Publication 800-207; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2020. [Google Scholar] [CrossRef]
- Kindervag, J. Build Security Into Your Network’s DNA: The Zero Trust Network Architecture; Forrester Research: Cambridge, MA, USA, 2010. [Google Scholar]
- Executive Office of the President. Executive Order 14028: Improving the Nation’s Cybersecurity. Federal Register 2021, Vol. 86(No. 93), 26633–26647. [Google Scholar]
- Department of Defense. DoD Zero Trust Strategy; DoD CIO: Washington, DC, USA, November 2022; Available online: https://dodcio.defense.gov/Portals/0/Documents/Library/DoD-ZTStrategy.pdf (accessed on 29 January 2026).
- Department of Defense Chief Information Officer. Department of Defense Zero Trust Reference Architecture, Version 2.0; DoD CIO: Washington, DC, USA, September 2022. [Google Scholar]
- Department of Defense. Zero Trust Capability Execution Roadmap, Version 1.1; DoD CIO: Washington, DC, USA, November 2024. [Google Scholar]
- Chandramouli, R. A Zero Trust Architecture Model for Access Control in Cloud-Native Applications in Multi-Cloud Environments. In NIST SP 800-207A; National Institute of Standards and Technology: Gaithersburg, MD, USA, April 2023. [Google Scholar] [CrossRef]
- Biggio, B.; Roli, F. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. Pattern Recognit. 2018, 84, 317–331. [Google Scholar] [CrossRef]
- Goldblum, M.; Tsipras, D.; Xie, C.; Chen, X.; Schwarzschild, A.; Song, D.; Madry, A.; Li, B.; Goldstein, T. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1563–1580. [Google Scholar] [CrossRef] [PubMed]
- Carlini, N.; Tramèr, F.; Wallace, E.; Jagielski, M.; Herbert-Voss, A.; Lee, K.; Roberts, A.; Brown, T.; Song, D.; Erlingsson, Ú.; et al. Extracting Training Data from Large Language Models. In Proceedings of the 30th USENIX Security Symposium, Virtual, 11–13 August 2021; pp. 2633–2650. [Google Scholar]
- Vassilev, A.; Oprea, A.; Fordyce, A.; Anderson, H. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. In NIST AI 100-2e2025; NIST: Gaithersburg, MD, USA, March 2025. [Google Scholar] [CrossRef]
- National Institute of Standards and Technology. AI Risk Management Framework (AI RMF 1.0). In NIST AI 100-1; NIST: Gaithersburg, MD, USA, January 2023. [Google Scholar] [CrossRef]
- National Institute of Standards and Technology. NIST AI 600-1; Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. NIST: Gaithersburg, MD, USA, July 2024.
- Cunningham, C. Zero Trust Architecture; O’Reilly Media: Sebastopol, CA, USA, 2021. [Google Scholar]
- Cybersecurity and Infrastructure Security Agency. Zero Trust Maturity Model, Version 2.0. CISA: Washington, DC, USA, April 2023. Available online: https://www.cisa.gov/zero-trust-maturity-model (accessed on 29 January 2026).
- National Institute of Standards and Technology (NIST). NIST Special Publication (SP) 1800-35; Implementing a Zero Trust Architecture. NIST: Gaithersburg, MD, USA, 2025.
- Amershi, S.; Begel, A.; Bird, C.; DeLine, R.; Gall, H.; Kamar, E.; Nagappan, N.; Nushi, B.; Zimmermann, T. Software Engineering for Machine Learning: A Case Study. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), Montreal, QC, Canada, 25–31 May 2019; pp. 291–300. [Google Scholar]
- Paleyes, A.; Urma, R.-G.; Lawrence, N.D. Challenges in Deploying Machine Learning: A Survey of Case Studies. ACM Comput. Surv. 2022, 55, 114. [Google Scholar] [CrossRef]
- National Institute of Standards and Technology (NIST). TrojAI: Artificial Intelligence Security (Model Trojans and ML Supply-Chain Risk); NIST: Gaithersburg, MD, USA, 2024–2025. Available online: https://pages.nist.gov/trojai/docs/about.html (accessed on 29 January 2026).
- Australian Cyber Security Centre. Artificial Intelligence and Machine Learning Pose New Cyber Security Risks to Supply Chains; ACSC: Canberra, Australia, 2025. [Google Scholar]
- Hugging Face. Hugging Face Reaches 1 Million Models (Platform Milestone Announcement). 2024. Available online: https://huggingface.co/posts/fdaudens/300554611911292 (accessed on 29 January 2026).
- Jiang, W.; Synovic, N.; Sethi, R.; Indarapu, A.; Hyatt, M.; Schorlemmer, T.R.; Thiruvathukal, G.K.; Davis, J.C. An Empirical Study of Artifacts and Security Risks in the Pre-trained Model Supply Chain. In Proceedings of the 2022 ACM Workshop on Software Supply Chain Offensive Research and Ecosystem Defenses (SCORED), Los Angeles, CA, USA, 11 November 2022. [Google Scholar]
- Australian Cyber Security Centre. Engaging with Artificial Intelligence; ACSC: Canberra, Australia, 2024. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- MITRE Corporation. ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems). MITRE. 2025. Available online: https://atlas.mitre.org/ (accessed on 31 January 2026).
- Oprea, A.; Vassilev, A. Poisoning Attacks Against Machine Learning; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 2024. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. In Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Cohen, J.; Rosenfeld, E.; Kolter, Z. Certified Adversarial Robustness via Randomized Smoothing. In Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019; pp. 1310–1320. [Google Scholar]
- Kumar, R.S.S.; Nyström, M.; Lambert, J.; Marshall, A.; Goertzel, M.; Comber, A.; Swann, M.; Xia, S. Adversarial Machine Learning—Industry Perspectives. In Proceedings of the 2020 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 21 May 2020; pp. 69–75. [Google Scholar]
- Linux Foundation. SPDX Specification v3.0.1 (including AI and Dataset profiles). SPDX Workgroup, 2024. Available online: https://spdx.dev/wp-content/uploads/sites/31/2024/12/SPDX-3.0.1-1.pdf (accessed on 29 January 2026).
- OWASP Foundation. CycloneDX Bill of Materials Standard, Version 1.6. OWASP CycloneDX Project, 2024. Available online: https://cyclonedx.org/docs/1.6/json/ (accessed on 29 January 2026).
- SPDX Workgroup. SPDX Specification v3.0.1—AI Profile Compliance Point (Conformance); SPDX, 2024. [Google Scholar]
- CycloneDX. Specification Overview (CycloneDX supports describing machine learning models as components); CycloneDX, 2025. [Google Scholar]
- German Federal Office for Information Security (BSI). A Shared G7 Vision on Software Bill of Materials for AI; BSI: Bonn, Germany, 2025. [Google Scholar]
- International Organization for Standardization. ISO/IEC 42001:2023; Information Technology—Artificial Intelligence—Management System. ISO: Geneva, Switzerland, 2023.
- Verizon. 2024 Data Breach Investigations Report; Verizon Enterprise: New York, NY, USA, 2024. [Google Scholar]
- IBM Security. Cost of a Data Breach Report 2024; IBM: Armonk, NY, USA, 2024. [Google Scholar]
- European Union Agency for Cybersecurity (ENISA). ENISA Threat Landscape 2024; ENISA, 2024. [Google Scholar]
- OWASP Foundation. OWASP Top 10 for Large Language Model Applications, Version 2025. OWASP LLM AI Security Project, 2025. Available online: https://genai.owasp.org/llm-top-10/ (accessed on 29 January 2026).
- OpenAI. Disrupting malicious uses of AI: October 2025. OpenAI Threat Intelligence Report. 2025. [Google Scholar]
- Chen, X.; Liu, C.; Li, B.; Lu, K.; Song, D. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv 2017, arXiv:1712.05526. [Google Scholar] [CrossRef]
- CERT Coordination Center. VU#534320: XZ Utils data compression library contains a backdoor affecting downstream software supply chains. CERT/CC, 2024–2025. [Google Scholar]
- National Telecommunications and Information Administration (NTIA). The Minimum Elements for a Software Bill of Materials (SBOM). NTIA, 2021. [Google Scholar]
- Cybersecurity and Infrastructure Security Agency (CISA). 2025 Minimum Elements for a Software Bill of Materials (SBOM): Draft for Comment; CISA: Washington, DC, USA, 2025. [Google Scholar]
- Department of Defense Chief Information Officer (DoD CIO). Directive-Type Memorandum (DTM) 25-003, Implementing the DoD Zero Trust Strategy; DoD: Washington, DC, USA, 2025. [Google Scholar]
- Executive Office of the President. Executive Order 14148: Initial Rescissions of Harmful Executive Orders and Actions; Federal Register, 2025. [Google Scholar]
- Sonatype. 10th Annual State of the Software Supply Chain Report. Sonatype, 2024. [Google Scholar]
- Kellas, A.D.; Christou, N.; Jiang, W.; Li, P.; Simon, L.; David, Y.; Kemerlis, V.P.; Davis, J.C.; Yang, J. PickleBall: Secure Deserialization of Pickle-based Machine Learning Models. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2025. [Google Scholar]
- National Institute of Standards and Technology. Secure Software Development Framework (SSDF) Version 1.1. NIST Special Publication 800-218, 2022. [Google Scholar]
- National Security Agency (NSA) Cybersecurity Directorate. Zero Trust Implementation Guidelines (document set: Primer; Discovery; Phase One; Phase Two); NSA: Fort Meade, MD, USA, January 2026. [Google Scholar]
- SPIFFE Project. SPIFFE Specifications (Secure Production Identity Framework for Everyone): Standards and Rendered Specification Documents. Available online: https://spiffe.io/docs/latest/spiffe-specs/ (accessed on 29 January 2026).
- SLSA. SLSA Specification v1.0; Supply-chain Levels for Software Artifacts. Available online: https://slsa.dev/spec/v1.0/ (accessed on 31 January 2026).
- Sigstore. cosign Documentation; Sigstore Project. Available online: https://docs.sigstore.dev/cosign/ (accessed on 31 January 2026).
- SPDX Workgroup. SPDX Specifications (ISO/IEC 5962:2021) and Current Versions. Available online: https://spdx.dev/use/specifications/ (accessed on 31 January 2026).
- OWASP Foundation. CycloneDX Specification Overview (ECMA-424). Available online: https://cyclonedx.org/specification/overview/ (accessed on 31 January 2026).
- European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). OJ L 2024/1689, 12 July 2024. [Google Scholar]
- Warshavski, D. Zero Trust Discovery Challenges: Why Many Programs Stall Before Implementation. CSO Online. January 2026.
- Anthropic. Building Effective Agents: Security Considerations for Autonomous AI Systems. Anthropic Research, December 2024. [Google Scholar]
- U.S. Government Accountability Office. IT Asset Management: Agencies Need to Improve Implementation of Leading Practices; GAO-25-106348; GAO: Washington, DC, USA, December 2024.




| Category | Representative Works | AI Trust Decomposition | Evidence Specification | Compliance Mapping |
|---|---|---|---|---|
| ZTA Guidance (Enterprise IT) | [7,8,10,11,21,22] | No | Generic | Partial |
| AI Threat Taxonomies | [14,15,16,17,30,31,35,45] | Implicit | No | No |
| Secure MLOps | [6,23,24,28,36,37,58,59] | Partial | Point solutions | No |
| This Work | — | Yes (4 layers) | Yes (per layer) | Yes (4 frameworks) |
| Tier | Evidence Artifacts | Use Case | Maturity |
|---|---|---|---|
| Discovery | AI asset inventory; trust boundary map; data flow documentation; assurance debt assessment; Shadow AI catalog | Prerequisite for all implementations; organizations with unknown AI footprint; post-merger integration | Phase 0 |
| Minimum Viable | Model signature + hash; basic SBOM; access logs; manual provenance documentation | Initial compliance; limited-scope pilots; resource-constrained environments | L1 (Initial) |
| Recommended | SLSA L2+ provenance; full CBOM; ML-BOM; policy decision logs; automated integrity checks | Production deployments; federal compliance; enterprise AI governance | L2 (Advanced) |
| Optimal | SLSA L3+; TEE attestation; runtime integrity events; anomaly detection logs; PQC-ready signatures; full audit chain | High-assurance missions; classified environments; adversarial threat contexts | L3 (Optimal) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

