Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Intermediate Judgments and Trust in Artificial Intelligence-Supported Decision-Making

Version 1 : Received: 15 April 2024 / Approved: 17 April 2024 / Online: 18 April 2024 (02:43:02 CEST)

How to cite: Humr, S.; Canan, M. Intermediate Judgments and Trust in Artificial Intelligence-Supported Decision-Making. Preprints 2024, 2024041107. https://doi.org/10.20944/preprints202404.1107.v1

Abstract

Human decision-making is increasingly supported by artificial intelligence (AI) systems. From medical imaging analysis to self-driving vehicles, AI systems are becoming organically embedded in a host of different technologies. However, incorporating such advice into decision-making requires humans to rationalize AI outputs in order to support beneficial outcomes. Recent research suggests that intermediate judgments in the first stage of a decision process can interfere with decisions in subsequent stages. For this reason, we extend this research to AI-supported decision-making to investigate how intermediate judgments on AI-provided advice may influence subsequent decisions. In an online experiment (N=192), we found a consistent bolstering effect in trust for those who made intermediate judgments over those who did not. Furthermore, violations of the law of total probability were observed at all timing intervals throughout the study. We further analyzed the results by demonstrating how quantum probability theory can model these types of behaviors in human-AI decision-making and improve the understanding of the interaction dynamics at the confluence of human factors and information features.
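As a brief numerical sketch of the effect the abstract alludes to (all values are hypothetical, not taken from the study): classically, marginalizing a final decision over an intermediate judgment must satisfy the law of total probability, whereas in quantum probability theory the final probability arises from summed amplitudes and can acquire an interference term that violates it.

```python
import numpy as np

# Hypothetical values (for illustration only, not the paper's data).
p_j = 0.6             # P(intermediate judgment: accept AI advice)
p_d_given_j = 0.8     # P(final decision | advice accepted)
p_d_given_not_j = 0.3 # P(final decision | advice rejected)

# Classical law of total probability: marginalize over the judgment.
p_classical = p_j * p_d_given_j + (1 - p_j) * p_d_given_not_j

# Quantum-style account: each "path" contributes an amplitude, and the
# final probability includes a cross (interference) term set by a
# relative phase theta between the two paths.
a_j = np.sqrt(p_j * p_d_given_j)               # amplitude via judgment path
a_not_j = np.sqrt((1 - p_j) * p_d_given_not_j) # amplitude via other path
theta = np.pi / 3                              # assumed relative phase
interference = 2 * a_j * a_not_j * np.cos(theta)
p_quantum = a_j**2 + a_not_j**2 + interference

print(p_classical)  # 0.6  (classical marginal)
print(p_quantum)    # 0.84 (violates total probability when theta != pi/2)
```

When the phase is pi/2 the interference term vanishes and the two accounts coincide; any other phase produces exactly the kind of total-probability violation the abstract reports.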

Keywords

artificial intelligence; decision-making; trust; quantum decision theory; quantum open systems modeling

Subject

Computer Science and Mathematics, Information Systems
