Preprint Article

This version is not peer-reviewed.

AI as a Credence Good: Quality Competition, Limited Verification, and the Industrial Organization of Disclosure

Submitted: 10 March 2026
Posted: 11 March 2026

Abstract
We study competition between AI providers when users cannot fully verify model quality. Many AI services are not well described as standard search goods, nor are they pure experience goods: price, interface quality, and latency are usually observable, but reliability, hallucination risk, and the downstream cost of error remain imperfectly observable even after use. Building on the emerging view that algorithmic advice can exhibit credence-good features, we embed that insight in a static industrial-organization model of vertical quality differentiation, costly certification, and limited user comprehension. Two firms choose whether to offer a low-quality or a high-quality model. High-quality AI reduces error risk but is slower and costlier. A high-quality firm can purchase credible certification or disclosure, yet only a subset of users can interpret it. We characterize the pure-strategy equilibrium set, show how low-quality pooling can arise even when superior technology exists, and identify a quality-trap region in which the unique market equilibrium is low-quality pooling even though an allocation with one high-quality provider is welfare superior. We then analyze policy. Standardized certification works through the demand side by increasing the fraction of users who can reward quality; minimum quality standards work directly but bluntly; liability shifts firms' cost incentives and weakly shrinks the region in which a low-quality industry outcome can be sustained. Contrary to a common rhetorical move in AI policy debates, these instruments are not interchangeable. The analysis also clarifies that the framework is a certification model rather than a full Grossman-Milgrom unraveling game: the key distortion comes from limited user comprehension of costly, truthful quality communication.
The results offer a tractable industrial-organization foundation for current debates over hallucinations, model evaluation, AI documentation, and governance.
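To make the quality-trap mechanism concrete, the following sketch enumerates the pure-strategy equilibria of a minimal two-firm version of the game the abstract describes. It is a toy instance under assumptions of our own, not the paper's: the price is fixed, informed users (mass alpha) buy from a certified high-quality firm when exactly one firm certifies, uninformed users split evenly, and all parameter values are chosen purely for illustration.

```python
from itertools import product

# Illustrative parameters (all assumed for this sketch, not taken from the paper):
p = 1.0               # fixed price per user
c_L, c_H = 0.2, 0.5   # unit costs of low- and high-quality provision
k = 0.15              # certification cost paid by a high-quality firm
alpha = 0.3           # fraction of users who can interpret certification
e_L, e_H = 0.8, 0.1   # expected downstream error cost per user

def demands(q1, q2):
    """Informed users (mass alpha) buy from the certified high-quality firm
    when exactly one firm offers high quality; everyone else splits 50/50."""
    if q1 == q2:
        return 0.5, 0.5
    hi = alpha + (1 - alpha) / 2
    return (hi, 1 - hi) if q1 == "H" else (1 - hi, hi)

def profit(d, q):
    c = c_H if q == "H" else c_L
    return d * (p - c) - (k if q == "H" else 0.0)

def profits(q1, q2):
    d1, d2 = demands(q1, q2)
    return profit(d1, q1), profit(d2, q2)

def is_nash(q1, q2):
    pi1, pi2 = profits(q1, q2)
    dev1 = profits("H" if q1 == "L" else "L", q2)[0]  # firm 1 deviates
    dev2 = profits(q1, "H" if q2 == "L" else "L")[1]  # firm 2 deviates
    return pi1 >= dev1 and pi2 >= dev2

equilibria = [s for s in product("LH", repeat=2) if is_nash(*s)]

def welfare(q1, q2):
    """Total surplus net of production, error, and certification costs;
    the constant gross user value is omitted (identical across profiles)."""
    total = 0.0
    for d, q in zip(demands(q1, q2), (q1, q2)):
        c, e = (c_H, e_H) if q == "H" else (c_L, e_L)
        total -= d * (c + e) + (k if q == "H" else 0.0)
    return total

print("pure-strategy equilibria:", equilibria)
print("welfare (L,L):", round(welfare("L", "L"), 3))
print("welfare (H,L):", round(welfare("H", "L"), 3))
```

With these numbers, low-quality pooling (L, L) is the unique pure-strategy equilibrium even though the asymmetric profile with one high-quality provider yields strictly higher welfare: too few users can reward certified quality for the deviation to high quality to pay, which is exactly the quality-trap region described above.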
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
