Submitted: 20 April 2026
Posted: 21 April 2026
Abstract

Keywords:
1. Introduction
2. The Architecture of the Mirror: What LLMs Are and What They Are Not
3. The Human Side of the Interface: Cognitive Biases and the Psychology of Interacting with AI
4. The Feedback Loop: How Users Train Their Mirror
5. The Information Paradox: Why More Data Makes Us Worse
5.1. More Information, Worse Judgment: The Selective-Processing Failure
5.2. Less Information, Worse Judgment: The Aversion Failure
5.3. The Informational-Relevance Framework
6. Artificial Confidence
6.1. Beyond Calibration: The Difference Between Artificial Confidence and Overconfidence
6.2. The Erosion of Epistemic Vigilance
6.3. The Convergence of User, Developer, and System on the Premise of Near-Perfection
6.4. From Episodic Error to Dispositional Erosion
6.5. From Individual Disposition to Institutional Practice
7. Conclusions
7.1. Limitations and Future Directions of Research
7.2. Summary
Funding
Data Availability Statement
Conflicts of Interest
References
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.