Preprint
Concept Paper

This version is not peer-reviewed.

BAI (Believable AI Imagery): The Verification Problem of Low-Salience Synthetic Images

Submitted: 04 May 2026

Posted: 05 May 2026


Abstract
This conceptual short paper introduces Believable AI Imagery (BAI) as an operational concept for fully synthetic images that are visually ordinary, context-compatible, and unlikely to be escalated for verification. Existing concepts and frameworks, including deepfakes, cheapfakes, false context, manipulated content, provenance, and detector accuracy, remain essential, but they do not fully capture mundane documentary, workplace, administrative, or evidentiary-looking images that may pass before detection is even considered. BAI names a verification problem rather than a new generation technique: a fully synthetic image with no underlying photographic record may still be accepted as an ordinary record, reference photograph, or supporting document. A preliminary observation using more than 100 fully AI-generated, low-salience images illustrates how ordinary verification interactions can produce mixed and unstable outcomes. Some images are classified as AI-generated, while others are treated as likely real photographs even after AI generation is explicitly raised. The core issue is therefore not only detector accuracy but suspicion, triage, and verification economics: whether an image will be selected for review before it is accepted as an ordinary record.
Keywords: 
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

