Artificial intelligence (AI) is increasingly integrated into security operations to support threat detection, alert triage, and incident response. However, miscalibrated trust in AI systems, whether over-reliance or undue skepticism, can undermine both operational effectiveness and human oversight. This paper presents a conceptual framework for calibrated trust in AI-driven security operations, emphasizing analyst–AI collaboration rather than fully autonomous decision-making. The framework synthesizes key dimensions of trustworthy AI, including transparency, uncertainty communication, explainability, and human-in-the-loop controls, to support informed analyst judgment. We discuss how calibrated trust can mitigate automation bias, reduce operational risk, and enhance analyst confidence across common security workflows. The proposed framework is intended to guide the design, deployment, and evaluation of trustworthy AI systems in security operations and to serve as a foundation for future empirical validation.