Preprint
Article

This version is not peer-reviewed.

Calibrated Trust in AI for Security Operations: A Conceptual Framework for Analyst–AI Collaboration

Submitted: 23 December 2025

Posted: 23 December 2025


Abstract
Artificial intelligence (AI) is increasingly integrated into security operations to support threat detection, alert triage, and incident response. However, miscalibrated trust in AI systems—manifesting as either over-reliance or undue skepticism—can undermine both operational effectiveness and human oversight. This paper presents a conceptual framework for calibrated trust in AI-driven security operations, emphasizing analyst–AI collaboration rather than fully autonomous decision-making. The framework synthesizes key dimensions including transparency, uncertainty communication, explainability, and human-in-the-loop controls to support informed analyst judgment. We discuss how calibrated trust can mitigate automation bias, reduce operational risk, and enhance analyst confidence across common security workflows. The proposed framework is intended to guide the design, deployment, and evaluation of trustworthy AI systems in security operations and to serve as a foundation for future empirical validation.
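As a minimal illustration of the human-in-the-loop and uncertainty-communication dimensions discussed above, the sketch below shows how an alert-triage pipeline might defer to an analyst when calibrated confidence is ambiguous. It is a hypothetical Python sketch, not a component of the framework: all names, thresholds, and the calibration map are illustrative assumptions.

# Illustrative sketch: routing AI alert-triage decisions through a
# human-in-the-loop gate based on calibrated confidence.
# All names, thresholds, and the calibration map are hypothetical.

from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    raw_score: float  # uncalibrated model score in [0, 1]

def calibrate(raw_score: float) -> float:
    """Map a raw model score to an empirical precision estimate.

    A real system would fit this map on held-out data (e.g. isotonic
    regression or Platt scaling); the piecewise values here are
    placeholders for illustration only.
    """
    if raw_score >= 0.9:
        return 0.75  # e.g. 75% of past alerts scored >= 0.9 were true positives
    if raw_score >= 0.5:
        return 0.40
    return 0.10

def triage(alert: Alert, review_threshold: float = 0.6) -> str:
    """Auto-escalate, auto-close, or defer to an analyst.

    Deferring mid-confidence alerts to a human keeps the analyst in
    the loop exactly where the model is least reliable.
    """
    confidence = calibrate(alert.raw_score)
    if confidence >= review_threshold:
        return "auto-escalate"   # high calibrated confidence
    if confidence <= 0.15:
        return "auto-close"      # confidently benign
    return "analyst-review"      # uncertain: human judgment required

if __name__ == "__main__":
    for a in [Alert("A-1", 0.95), Alert("A-2", 0.7), Alert("A-3", 0.2)]:
        print(a.alert_id, triage(a))

Deferring only the mid-confidence band concentrates analyst attention where the model is least reliable, which is one concrete way a system could operationalize calibrated trust rather than fully autonomous decision-making.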
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.