Submitted: 30 April 2025
Posted: 02 May 2025
Abstract
Keywords:
1. Introduction
2. The Difficulty of Speaker-Independent Claim Validity Assessments
2.1. Source Credibility and Speaker-Dependent Epistemology
2.2. Speaker-Independence and Cultural Cognition
1. The Confirmation Bias and Tribal Credentialing
2. Affective Polarization
3. Motivated Numeracy
2.3. Existing Models for Debiasing and Assessing Claim Validity
3. The Adversarial Claim Resilience Diagnostics (ACRD) Framework
3.1. A Three-Phase Diagnostic Tool: Mixing Game Theory and AI
- Bayesian belief updating (Corcoran et al., 2023), where adversarial challenges function as likelihood/posterior-probability adjustments.
- Popperian falsification (Popper, 1963), which treats survival under counterfactual attribution as resilience and thus robustness.
- Game-theoretic equilibrium (Nash, 1950; Myerson, 1991), where validation (with the possibility of achieving truth-convergence) emerges as the stable point between opposing evaluators.
- Baseline phase – A statement P is spoken by a non-neutral speaker (here associated with group 1). Each group receives information about a scientific/expert consensus (made as ‘objective’ as possible) and assigns it a level of trust. (This expert baseline neutrality can of course be challenged, and any such challenge is reflected in the trust levels. To approximate neutrality, Cook et al. (2016) define domain experts as scientists who have published peer-reviewed research in that domain. Qu et al. (2025) propose a robust model for achieving maximum expert consensus in group decision-making (GDM) under uncertainty, integrating a dynamic feedback mechanism to improve reliability and adaptability.) Each group’s prior validity assessment of P reflects the degree of tie it has to its ideology, its trust in the expert’s assessment, and the expert’s validity score itself. Each group then produces its own evaluation as the outcome of a strategic game exhibiting a Nash equilibrium.
- Reframing phase – Each group is presented with counterfactuals. Claim P is either framed as originating from an adversarial source, or the reverse proposition ~P is assumed spoken by the original source in a “what if” thought experiment (in that case, it is important to decide at the outset whether the protocol tests P or ~P, based on best experimental-design considerations), or the Devil’s Advocate approach is used (Vrij et al., 2023). Claims are adjusted via adversarial collaboration (Ceci et al., 2024). (Ideological camps may also jointly formulate statements to minimize inherent framing biases; this process can follow Schulz-Hardt et al.’s (2006) model of structured dissent, requiring consensus on claim wording before testing.) New evaluations are then proposed under the updated beliefs; these are again solutions to the same strategic game, now under posterior beliefs. Field studies can operationalize this phase with dynamic calibration that adjusts adversarial intensity based, for instance, on response latency (<500 ms indicates affective rejection; Lodge & Taber, 2013). Semantic similarity scores (detecting recognition of in-group rhetoric) can also be deployed here.
- AI and dynamic calibration phase – When deployed in field studies, AI-driven adjustments (e.g., GPT-4-generated counterfactuals; BERT-based semantic perturbations) test the boundary conditions where claims fracture, using the index developed below. These AI aids can implement neural noise injections (e.g., minor phrasing variations) to disrupt affective tagging. They can also integrate intersubjective agreement gradients and longitudinal stability checks, correcting both for temporary reactance and for consistency across repeated exposures.
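The Bayesian component above can be made concrete with a minimal sketch: an adversarial challenge is treated as evidence whose likelihood differs for valid and invalid claims, so surviving the challenge shifts the posterior credence upward. The survival probabilities below are illustrative assumptions, not estimates from the text.

```python
def bayes_update(prior: float, p_survive_if_true: float, p_survive_if_false: float) -> float:
    """Posterior credence that a claim is valid after it survives an
    adversarial challenge, treated as a likelihood adjustment.
    All inputs are probabilities in [0, 1]."""
    evidence = p_survive_if_true * prior + p_survive_if_false * (1.0 - prior)
    return (p_survive_if_true * prior) / evidence

# A claim held at 0.5 survives a challenge that valid claims survive 80%
# of the time but invalid claims only 30% of the time (hypothetical rates):
posterior = bayes_update(prior=0.5, p_survive_if_true=0.8, p_survive_if_false=0.3)
# posterior ≈ 0.727
```

An uninformative challenge (equal survival rates) leaves the prior unchanged, which is the sanity check one would expect of a likelihood adjustment.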
3.2. The Claim Robustness Index
- Initial judgments Jᵢ* for i = 1, 2: each player’s baseline assessment, the outcome of optimized strategic behavior under partisan bias. A value of 1 means that statement P is accepted as 100% valid.
- Post-reframing judgments Jᵢ** for i = 1, 2: stress-tested beliefs and re-evaluations of these beliefs as the outcome of the strategic behavior identified in the Nash equilibrium.
- Expert signal D: grounds claim validity.
- Agreement level: rewards post-reframing consensus building.
- Expert alignment: rewards final proximity to the expert consensus.
- Updating process (UP): rewards movement of revised evaluations due to adversarial collaboration.
- d* = |J₁* − J₂*|: initial disagreement.
- ΔJᵢ = |Jᵢ** − Jᵢ*|: belief update for Player i.
- d** = |J₁** − J₂**|: post-collaboration disagreement.
- A weight in [0, 1] assigned to Player 1 (reflecting the salience of the original speaker tied to Player 1 and the relative distance from expert consensus, proxying for initial bias).
| Variable | Value | Description |
| --- | --- | --- |
| J₁* | 0.6 | Initial judgment of Player 1 |
| J₂* | 0.4 | Initial judgment of Player 2 |
| J₁** | 0.64 | Post-reframing judgment of Player 1 |
| J₂** | 0.44 | Post-reframing judgment of Player 2 |
| D | 0.8 | Expert signal |
|  | 0.5 | Bias adjustment parameter |
| Metric | Value | Interpretation |
| --- | --- | --- |
| UP | 1.142 | Moderate belief updating (rewarded but not extreme) |
| CRI | 0.76 | Suboptimal robustness (remaining disagreement and imperfect expert alignment) |
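As a sanity check on the worked example, the explicitly defined quantities (d*, ΔJᵢ, d**, plus the averaged-distance expert-alignment measure applied in Section 4.3) can be computed directly; this sketch covers only those components, not the full UP/CRI aggregation.

```python
def diagnostics(j1_0, j2_0, j1_1, j2_1, d_expert):
    """Directly defined ACRD quantities: initial disagreement d*,
    per-player belief updates ΔJi, post-collaboration disagreement d**,
    and the averaged-distance expert-alignment measure of Section 4.3."""
    d_star = abs(j1_0 - j2_0)
    deltas = (abs(j1_1 - j1_0), abs(j2_1 - j2_0))
    d_dstar = abs(j1_1 - j2_1)
    alignment = 1 - (abs(j1_1 - d_expert) + abs(j2_1 - d_expert)) / 2
    return d_star, deltas, d_dstar, alignment

# Worked example from the tables above (J1*=0.6, J2*=0.4, J1**=0.64, J2**=0.44, D=0.8):
d_star, deltas, d_dstar, alignment = diagnostics(0.6, 0.4, 0.64, 0.44, 0.8)
# d* = 0.2, ΔJ ≈ (0.04, 0.04), d** = 0.2, expert alignment ≈ 0.74
```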
4. Modeling Strategic Interactions: ACRD as a Claim Validation Game
4.1. The Game Setup
| Scenario | Expected effect on Jᵢ | Proximity to expert signal (D) |
| --- | --- | --- |
| High TRUSTᵢ, low TIEᵢ | Jᵢ ≈ D | Strong |
| Low TRUSTᵢ, high TIEᵢ | Jᵢ ≈ TIEᵢ | Weak |
| Moderate TIEⱼ | Convergence to consensus | Moderate |
| High b (dissent cost) | Jᵢ ≈ Xᵢ | Depends on TRUSTᵢ |
4.2. The Bayesian-Nash Equilibrium Solution
J₁* = [a·TIE₂²(1−TIE₂)·X₂ + b·TIE₁(a·TIE₁(1−TIE₁) + b·TIE₂)·X₁] / [a·TIE₂²(1−TIE₂) + a·TIE₁²(1−TIE₁) + b·TIE₁·TIE₂]

J₂* = [a·TIE₁²(1−TIE₁)·X₁ + b·TIE₂(a·TIE₂(1−TIE₂) + b·TIE₁)·X₂] / [a·TIE₁²(1−TIE₁) + a·TIE₂²(1−TIE₂) + b·TIE₁·TIE₂]
| Case | Condition | Equilibrium |
| --- | --- | --- |
| No collaboration | a = 0 | Jᵢ* = Xᵢ |
| No dissent cost | b = 0 | Jᵢ* = weighted average of the Xⱼ |
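The closed-form solutions can be transcribed directly as printed. The Xᵢ inputs below are hypothetical, since the equilibrium is stated in terms of a, b, TIEᵢ, and Xᵢ but the numerical section only lists the first three.

```python
def equilibrium(a, b, tie1, tie2, x1, x2):
    """Closed-form Bayesian-Nash judgments (J1*, J2*) as printed above:
    a = collaboration weight, b = dissent-cost weight, TIEi = ideological
    ties, Xi = player i's privately preferred assessment (assumed values)."""
    den = a * tie2**2 * (1 - tie2) + a * tie1**2 * (1 - tie1) + b * tie1 * tie2
    j1 = (a * tie2**2 * (1 - tie2) * x2
          + b * tie1 * (a * tie1 * (1 - tie1) + b * tie2) * x1) / den
    j2 = (a * tie1**2 * (1 - tie1) * x1
          + b * tie2 * (a * tie2 * (1 - tie2) + b * tie1) * x2) / den
    return j1, j2

# Illustrative run with the table's a, b, TIE values and assumed X1, X2:
j1_star, j2_star = equilibrium(a=0.8, b=0.2, tie1=0.6, tie2=0.4, x1=0.7, x2=0.3)
```

Relabeling the players (swapping TIE₁ ↔ TIE₂ and X₁ ↔ X₂) swaps J₁* and J₂*, which gives a quick consistency check on the transcription.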
| Parameter | Description | Value |
| --- | --- | --- |
| a | Collaboration weight | 0.8 |
| b | Dissent-cost weight | 0.2 |
| TIE₁ | Player 1’s ideological tie | 0.6 |
| TIE₂ | Player 2’s ideological tie | 0.4 |
| TRUST₁ = TRUST₂ | Trust in experts | 0.5 |
| Judgment | Equation |
| --- | --- |
| J₁** | 0.24 + 0.45D |
| J₂** | 0.16 + 0.55D |

4.3. Application of the CRI Index
| Variable | Value |
| --- | --- |
| J₁* (TIE₁) | 0.8 |
| J₂* (TIE₂) | 0.2 |
| J₁** | 0.62 |
| J₂** | 0.38 |
| D | 0.5 |
| β | 0.6 |
- i. Expert Alignment: 1 − (|0.62 − 0.5| + |0.38 − 0.5|)/2 = 0.88
- ii. Updating Process:
- iii. Temporal Stability: assumed ICC = 0.9
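In a field deployment, the assumed ICC could instead be computed from repeated exposures. A minimal sketch of the one-way random-effects ICC(1,1) of Shrout and Fleiss (1979), on hypothetical repeated judgments:

```python
def icc1(ratings):
    """One-way random-effects ICC(1,1) (Shrout & Fleiss, 1979) as a
    temporal-stability check. `ratings` is a list of per-respondent lists,
    one judgment per repeated exposure."""
    n = len(ratings)         # respondents
    k = len(ratings[0])      # repeated exposures per respondent
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    ss_between = k * sum((m - grand) ** 2 for m in row_means)
    ss_within = sum((x - m) ** 2 for r, m in zip(ratings, row_means) for x in r)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Three respondents judged the same claim on three occasions (illustrative data):
stability = icc1([[0.62, 0.60, 0.64], [0.38, 0.40, 0.41], [0.50, 0.52, 0.49]])
# near-identical repeated judgments yield an ICC close to 1
```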
5. Comparative Analysis: ACRD vs. GBM and BTS Frameworks
6. AI-Augmented Adversarial Testing: Computational Implementation of ACRD
6.1. Large Language Models (LLMs) as Adversarial Simulators
1. Automated Speaker Swapping
- Generates adversarial framings to test, for example, how acceptance would change if [claim] were attributed to [opposing ideologue].
- Uses prompt engineering to maximize ideological tension (e.g., attributing climate claims to oil lobbyists vs. environmentalists, and vice versa).
2. Semantic Shift Detection
- Quantifies framing effects via:
- Embedding similarity (e.g., cosine distance in BERT/RoBERTa spaces) to detect rhetorical recognition; for instance, a cosine distance > 0.85 triggers a CRI adjustment.
- Sentiment polarity shifts (e.g., via VADER or LIWC lexicons) to measure affective bias; for instance, polarity shifts > 1.5 SD indicate affective bias.
- Neural noise injection (Storek et al., 2023) to disrupt patterned responses and test claim stability under minor phrasing perturbations such as “usually increases” vs. “always increases”.
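As a rough illustration of the embedding-similarity step, the sketch below uses bag-of-words cosine distance as a lightweight stand-in for BERT/RoBERTa embedding distance; the 0.85 trigger threshold comes from the text, while the sentences and the naive whitespace tokenizer are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine_distance(a: str, b: str) -> float:
    """Cosine distance between naive bag-of-words vectors — a lightweight
    stand-in for distance in a BERT/RoBERTa embedding space."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return 1.0 - dot / norm

original = "carbon taxes usually reduce emissions"
perturbed = "carbon taxes always reduce emissions"   # neural-noise style rewording
shift = cosine_distance(original, perturbed)
# shift ≈ 0.2: a minor perturbation, well below the 0.85 trigger from the text
flagged = shift > 0.85
```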
3. Resilience Profiling
- Flags high-CRI claims (hypothetical example: “Vaccines reduce mortality” maintains CRI > 0.9 across attributions).
- Identifies fragile claims (hypothetical example: “Tax cuts raise revenues” shows CRI < 0.5 under progressive attribution).
- The approach faces some limitations: LLM-generated attributions may inherit cultural biases (Mergen et al., 2025), which necessitates:
- Demographic calibration. For example, Levay et al. (2016) control for skew in simulated responses. As Callegaro et al. (2014) explain, those who use non-probability samples (e.g., opt-in samples) “argue that the bias in samples . . . can be reduced through the use of auxiliary variables that make the results representative. These adjustments can be made with . . . [w]eight adjustments [using] a set of variables that have been measured in the survey.” (quoted in Levay et al., 2016: 13).
- Human-in-the-loop validation for politically sensitive claims.
6.2. Mitigating Epistemic Risks in AI-Assisted ACRD
| Risk | ACRD Safeguard | Technical Implementation |
| --- | --- | --- |
| Training data bias | Adversarial debiasing (Zhang et al., 2018; González-Sendino et al., 2024) | Fine-tuning on counterfactual Q&A datasets |
| Oversimplified ideological models | Adversarial nets (Goodfellow et al., 2014) | Multi-LLM consensus (GPT-4 + Claude + Mistral) |
| Semantic fragility | Neural noise injection (Storek et al., 2023) | Paraphrase generation via T5 |
6.3. Future Directions: Toward a Human-AI ACRD Partnership
1. Dynamic Adversarial Calibration
- Real-time adjustment of speaker-attribution intensity based on respondent latency: rejections under 500 ms, for instance, signal knee-jerk ideological dismissal rather than considered evaluation (Lodge & Taber, 2013).
2. Cross-Platform Deployment
- Browser plugins tagging social media posts with CRI scores (e.g., “This claim shows moderate resilience (CRI of 0.7) across partisan framings”).
3. Deliberative Democracy Integration
- Citizens’ assemblies can use ACRD to pre-test policy claims (e.g., “Universal basic income reduces poverty” under progressive vs. conservative framings).
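The dynamic-calibration idea in point 1 above can be sketched as a simple latency-gated rule. The 500 ms cutoff is taken from the text (Lodge & Taber, 2013); the step sizes and the intensity floor are assumptions for illustration, not part of the framework.

```python
def calibrate_intensity(current: float, latency_ms: float, floor: float = 0.2) -> float:
    """Hypothetical real-time calibration of attribution intensity (0-1).
    Rejections faster than 500 ms are read as affective reactance, so the
    adversarial framing is softened; slower, considered responses let the
    intensity ramp back up."""
    if latency_ms < 500:
        return max(floor, current - 0.2)   # soften after a knee-jerk dismissal
    return min(1.0, current + 0.1)         # deliberate response: safe to intensify

# A 320 ms rejection at intensity 0.8 softens the framing to 0.6:
new_intensity = calibrate_intensity(0.8, latency_ms=320)
```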
7. Empirical Validation and Discourse Epistemology: Testing ACRD in Real-World Discourse
7.1. ACRD vs. Traditional Fact-Checking: A Comparative Analysis
| Failure Mode | Fact-Checking Approach | ACRD Solution |
| --- | --- | --- |
| Backfire effects (Nyhan & Reifler, 2010) | Direct correction | Adversarial reframing (e.g., presenting a climate claim as if coming from an oil lobbyist) |
| False consensus (Kahan et al., 2012) | Assumes neutral arbiters exist | Measures divergence under adversarial attribution |
| Confirmation bias (Nickerson, 1998) | Relies on authority cues | Strips speaker identity, forcing content-based evaluation |
- Forcing adversarial engagement: by attributing claims to maximally oppositional sources, ACRD mimics the “veil of ignorance” (Rawls, 1971), disrupting tribal cognition as far as possible.
- Dynamic calibration: Real-time adjustment of speaker intensity (e.g., downgrading adversarial framing if response latency suggests reactance).
| Challenge | ACRD Solution | Theoretical Basis |
| --- | --- | --- |
| False consensus | Expert-weighted CRI | Kahan et al. (2012) |
| Speaker salience overhang | Neural noise injection | Storek et al. (2023) |
| Nuance collapse | Likert scale with written rationale | Sieck & Yates (1997) |
| Adversarial fatigue | Real-time calibration of attribution intensity | Nyhan & Reifler (2010) |
7.2. Limitations and Future Directions
Conclusions
Appendix A
| Scenario | Agreement | Expert Alignment | UP Range | CRI Range | Behavior |
| --- | --- | --- | --- | --- | --- |
| Polarized Stubbornness | 0.4–0.5 | 0.3–0.4 | 1.0–1.1 | 0.12–0.20 | Players maintain entrenched positions despite expert evidence (e.g., J₁**=0.1, J₂**=0.9, D=0.5). High initial disagreement (d*=0.8) persists (d**≈0.7), with minimal updates (ΔJᵢ<0.1). |
| Partial Expert Misalignment | 0.7–0.8 | 0.5–0.6 | 1.1–1.2 | 0.35–0.50 | Moderate consensus (J₁**=0.6, J₂**=0.7) but systematic deviation from expert signal (D=0.5). Updates (ΔJᵢ≈0.2) show partial responsiveness to evidence. |
| Temporary Alignment | 0.8–0.9 | 0.6–0.7 | 1.0–1.1 | 0.45–0.60 | Surface-level agreement (J₁**=J₂**=0.7) with expert misalignment (D=0.5). High agreement masks instability (TStability=0.7) from non-evidence-driven updates. |
| Overcorrection Without Consensus | 0.5–0.6 | 0.4–0.5 | 1.3–1.4 | 0.25–0.35 | Aggressive updates (ΔJ₁=0.4, ΔJ₂=0.3) push players past expert consensus (J₁**=0.7, J₂**=0.3, D=0.5). High UP reflects reactive adjustments. |
| Expert-Driven but Divided | 0.3–0.4 | 0.7–0.8 | 1.2–1.3 | 0.25–0.35 | Strong individual alignment (J₁**=0.6, J₂**=0.4, D=0.5) but persistent disagreement (d**=0.2). Updates (ΔJᵢ≈0.3) reflect evidence adoption without reconciliation. |
| Biased Collaboration | 0.6–0.7 | 0.4–0.5 | 1.1–1.2 | 0.25–0.40 | Consensus forms around one player’s biased view (J₁**=0.6, J₂**=0.55, D=0.5) due to asymmetric α weighting (α≈0.8). Imbalanced influence in updates (ΔJ₁≈0.1, ΔJ₂≈0.3). |
| Fragile Consensus | 0.7–0.8 | 0.5–0.6 | 1.0–1.1 | 0.35–0.50 | Nominal consensus (J₁**=0.65, J₂**=0.7, D=0.5) with weak expert alignment. Low stability (TStability=0.6) makes outcomes vulnerable to minor changes. |
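For convenience, the bands in the table above can be transcribed into a small matcher that labels an observed (Agreement, Expert Alignment, UP, CRI) reading. The matcher itself is an illustrative utility, not part of the framework; since the bands overlap, more than one scenario can match a reading.

```python
# Range bands transcribed from the table above, in the order
# (Agreement, Expert Alignment, UP, CRI).
SCENARIOS = {
    "Polarized Stubbornness":           ((0.4, 0.5), (0.3, 0.4), (1.0, 1.1), (0.12, 0.20)),
    "Partial Expert Misalignment":      ((0.7, 0.8), (0.5, 0.6), (1.1, 1.2), (0.35, 0.50)),
    "Temporary Alignment":              ((0.8, 0.9), (0.6, 0.7), (1.0, 1.1), (0.45, 0.60)),
    "Overcorrection Without Consensus": ((0.5, 0.6), (0.4, 0.5), (1.3, 1.4), (0.25, 0.35)),
    "Expert-Driven but Divided":        ((0.3, 0.4), (0.7, 0.8), (1.2, 1.3), (0.25, 0.35)),
    "Biased Collaboration":             ((0.6, 0.7), (0.4, 0.5), (1.1, 1.2), (0.25, 0.40)),
    "Fragile Consensus":                ((0.7, 0.8), (0.5, 0.6), (1.0, 1.1), (0.35, 0.50)),
}

def match_scenarios(agreement, alignment, up, cri):
    """Return every scenario whose bands contain all four observed metrics."""
    obs = (agreement, alignment, up, cri)
    return [name for name, bands in SCENARIOS.items()
            if all(lo <= v <= hi for v, (lo, hi) in zip(obs, bands))]
```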
Appendix B
| −2k₁ − 2d₁ | 2k₁ |
| 2k₂ | −2k₂ − 2d₂ |
References
- Adams, M.; Niker, F. Harnessing the epistemic value of crises for just ends. In Political philosophy in a pandemic: Routes to a more just future; Niker, F., Bhattacharya, A., Eds.; Bloomsbury Academic, 2021; pp. 219–232. [Google Scholar]
- Altenmüller, M. S.; Wingen, T.; Schulte, A. Explaining polarized trust in scientists: A political stereotype-approach. Science Communication 2024, 46(1), 92–115. [Google Scholar] [CrossRef]
- Argyle, L. P.; Busby, E. C.; Fulda, N.; Gubler, J. R.; Rytting, C.; Wingate, D. Out of one, many: Using language models to simulate human samples. Political Analysis 2023, 31(3), 337–351. [Google Scholar] [CrossRef]
- Austin, J. L. How to do things with words; Oxford University Press, 1962. [Google Scholar]
- Beaver, D.; Stanley, J. Neutrality. Philosophical Topics 2021, 49(1), 165–186. [Google Scholar] [CrossRef]
- Berger, P. L.; Luckmann, T. The social construction of reality: A treatise in the sociology of knowledge; Anchor Books, 1966. [Google Scholar]
- Bogaard, G.; Colwell, K.; Crans, S. Using the Reality Interview improves the accuracy of the Criteria-Based Content Analysis and Reality Monitoring. Applied Cognitive Psychology 2019, 33(6), 1018–1031. [Google Scholar] [CrossRef]
- Bohr, J. Public views on the dangers and importance of climate change: Predicting climate change beliefs in the United States through income moderated by party identification. Climatic Change 2014, 126(1-2), 217–227. [Google Scholar] [CrossRef]
- Brady, W. J.; Crockett, M. J.; Van Bavel, J. J. The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. Perspectives on Psychological Science 2021, 16(4), 978–1010. [Google Scholar] [CrossRef]
- Brenan, M. Americans’ trust in media remains near record low. Gallup. 18 October 2022. Available online: https://news.gallup.com/poll/403166/americans-trust-media-remains-near-record-low.aspx.
- Bugden, D.; Evensen, D.; Stedman, R. A drill by any other name: Social representations, framing, and legacies of natural resource extraction in the fracking industry. Energy Research & Social Science 2017, 29, 62–71. [Google Scholar] [CrossRef]
- Callegaro, M.; Baker, R.; Bethlehem, J.; Göritz, A. S.; Krosnick, J. A.; Lavrakas, P. J. Online panel research: History, concepts, applications, and a look at the future. In Online panel research: A data quality perspective; Callegaro, M., Baker, R., Bethlehem, J., Göritz, A. S., Krosnick, J. A., Lavrakas, P. J., Eds.; Wiley, 2014; pp. 1–22. [Google Scholar]
- Calvillo, D. P.; Ross, B. J.; Garcia, R. J. B.; Smelter, T. J.; Rutchick, A. M. Political ideology predicts perceptions of the threat of COVID-19. Social Psychological and Personality Science 2020, 11(8), 1119–1128. [Google Scholar] [CrossRef]
- Campbell, T. H.; Kay, A. C. Solution aversion: On the relation between ideology and motivated disbelief. Journal of Personality and Social Psychology 2014, 107(5), 809–824. [Google Scholar] [CrossRef]
- Ceci, S. J.; Clark, C. J.; Jussim, L.; Williams, W. M. Adversarial collaboration: An undervalued approach in behavioral science. American Psychologist 2024, advance online publication. [Google Scholar] [CrossRef]
- Chaiken, S.; Trope, Y. (Eds.) Dual-process theories in social psychology; Guilford Press, 1999. [Google Scholar]
- Cialdini, R. B.; Kallgren, C. A.; Reno, R. R. A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. Advances in Experimental Social Psychology 1991, 24, 201–234. [Google Scholar] [CrossRef]
- Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 2013, 36(3), 181–204. [Google Scholar] [CrossRef]
- Clarke, C. E.; Hart, P. S.; Schuldt, J. P.; Evensen, D. T. N.; Boudet, H. S.; Jacquet, J. B.; Stedman, R. C. Public opinion on energy development: The interplay of issue framing, top-of-mind associations, and political ideology. Energy Policy 2015, 81, 131–140. [Google Scholar] [CrossRef]
- Cohen, G. L. Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology 2003, 85(5), 808–822. [Google Scholar] [CrossRef]
- Cook, J.; Oreskes, N.; Doran, P. T.; Anderegg, W. R.; Verheggen, B.; Maibach, E. W.; Carlton, J. S.; Lewandowsky, S.; Skuce, A. G.; Green, S. A.; Nuccitelli, D.; Jacobs, P.; Richardson, M.; Winkler, B.; Painting, R.; Rice, K. Consensus on consensus: A synthesis of consensus estimates on human-caused global warming. Environmental Research Letters 2016, 11(4), 048002. [Google Scholar] [CrossRef]
- Corcoran, A. W.; Hohwy, J.; Friston, K. J. Accelerating scientific progress through Bayesian adversarial collaboration. Neuron 2023, 111(22), 3505–3516. [Google Scholar] [CrossRef]
- Darke, P. R.; Chaiken, S.; Bohner, G.; Einwiller, S.; Erb, H. P.; Hazlewood, J. D. Accuracy motivation, consensus information, and the law of large numbers: Effects on attitude judgment in the absence of argumentation. Personality and Social Psychology Bulletin 1998, 24(11), 1205–1215. [Google Scholar] [CrossRef]
- Davidson, D. Truth and meaning. Synthese 1967, 17(3), 304–323. [Google Scholar] [CrossRef]
- De Martino, B.; Kumaran, D.; Seymour, B.; Dolan, R. J. Frames, biases, and rational decision-making in the human brain. Science 2006, 313(5787), 684–687. [Google Scholar] [CrossRef]
- Drummond, C.; Fischhoff, B. Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proceedings of the National Academy of Sciences 2017, 114(36), 9587–9592. [Google Scholar] [CrossRef]
- Druckman, J. N. The implications of framing effects for citizen competence. Political Behavior 2001, 23(3), 225–256. [Google Scholar] [CrossRef]
- Druckman, J. N.; Lupia, A. Preference change in competitive political environments. Annual Review of Political Science 2016, 19, 13–31. [Google Scholar] [CrossRef]
- Feygina, I.; Jost, J. T.; Goldsmith, R. E. System justification, the denial of global warming, and the possibility of ‘system-sanctioned change’. Personality and Social Psychology Bulletin 2010, 36(3), 326–338. [Google Scholar] [CrossRef]
- Fricker, M. Epistemic injustice: Power and the ethics of knowing; Oxford University Press, 2007. [Google Scholar]
- Gier, N. R.; Krampe, C.; Kenning, P. Why it is good to communicate the bad: Understanding the influence of message framing in persuasive communication on consumer decision-making processes. Frontiers in Human Neuroscience 2023, 17, 1085810. [Google Scholar] [CrossRef]
- Goldstein, J. Record-high engagement with deceptive sites in 2020; German Marshall Fund, 2021. [Google Scholar]
- González-Sendino, R.; Serrano, E.; Bajo, J. Mitigating bias in artificial intelligence: Fair data generation via causal models for transparent and explainable decision-making. Future Generation Computer Systems 2024, 155, 384–401. [Google Scholar] [CrossRef]
- Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Advances in Neural Information Processing Systems 2014, 27, 2672–2680. [Google Scholar]
- Granhag, P. A.; Hartwig, M. The Strategic Use of Evidence (SUE) technique: A conceptual overview. In Deception detection: Current challenges and new approaches; Granhag, P. A., Vrij, A., Verschuere, B., Eds.; Wiley, 2015; pp. 231–251. [Google Scholar]
- Grice, H. P. Logic and conversation. In Syntax and semantics 3: Speech acts; Cole, P., Morgan, J. L., Eds.; Academic Press, 1975; pp. 41–58. [Google Scholar]
- Grundmann, T. The possibility of epistemic nudging. Social Epistemology 2021, 37(2), 208–218. [Google Scholar] [CrossRef]
- Habermas, J. The theory of communicative action, Vol. 1: Reason and the rationalization of society; McCarthy, T., Translator; Beacon Press, 1984. [Google Scholar]
- Hartwig, M.; Granhag, P. A.; Luke, T. Strategic use of evidence during investigative interviews: The state of the science. In Credibility assessment: Scientific research and applications; Raskin, D. C., Honts, C. R., Kircher, J. C., Eds.; Academic Press, 2014; pp. 1–36. [Google Scholar]
- Hazboun, S. O.; Howe, P. D.; Layne Coppock, D.; Givens, J. E. The politics of decarbonization: Examining conservative partisanship and differential support for climate change science and renewable energy in Utah. Energy Research & Social Science 2020, 70, 101769. [Google Scholar] [CrossRef]
- Hintikka, J. Knowledge and belief: An introduction to the logic of the two notions; Cornell University Press, 1962. [Google Scholar]
- Huszár, F.; Ktena, S. I.; O’Brien, C.; Belli, L.; Schlaikjer, A.; Hardt, M. Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences 2022, 119(1), e2025334119. [Google Scholar] [CrossRef]
- Iyengar, S.; Lelkes, Y.; Levendusky, M.; Malhotra, N.; Westwood, S. J. The origins and consequences of affective polarization. Annual Review of Political Science 2019, 22, 129–146. [Google Scholar] [CrossRef]
- Kahan, D. M. Misconceptions, misinformation, and the logic of identity-protective cognition. Cultural Cognition Project Working Paper Series 2017, 164. [Google Scholar] [CrossRef]
- Kahan, D. M.; Peters, E.; Wittlin, M.; Slovic, P.; Ouellette, L. L.; Braman, D.; Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2012, 2(10), 732–735. [Google Scholar] [CrossRef]
- Knight Foundation. American views 2022: Trust, media and democracy, 2023.
- Levay, K. E.; Freese, J.; Druckman, J. N. The demographic and political composition of Mechanical Turk samples. SAGE Open 2016, 6(1). [Google Scholar] [CrossRef]
- Lewandowsky, S.; Ecker, U. K.; Seifert, C. M.; Schwarz, N.; Cook, J. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 2012, 13(3), 106–131. [Google Scholar] [CrossRef]
- Lewandowsky, S.; Gignac, G. E.; Oberauer, K. The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS ONE 2013a, 8(9), e75637. [Google Scholar] [CrossRef]
- Lewandowsky, S.; Gignac, G. E.; Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change 2013b, 3(4), 399–404. [Google Scholar] [CrossRef]
- Liu, X.; Qi, L.; Wang, L.; Metzger, M. J. Checking the fact-checkers: The role of source type, perceived credibility, and individual differences in fact-checking effectiveness. Communication Research 2023. [Google Scholar] [CrossRef]
- Lodge, M.; Taber, C. S. The rationalizing voter; Cambridge University Press, 2013. [Google Scholar] [CrossRef]
- Lutzke, L.; Drummond, C.; Slovic, P.; Árvai, J. Priming critical thinking: Simple interventions limit the influence of fake news about climate change on Facebook. Global Environmental Change 2019, 58, 101964. [Google Scholar] [CrossRef]
- Mayer, A. National energy transition, local partisanship? Elite cues, community identity, and support for clean power in the United States. Energy Research & Social Science 2019, 50, 143–150. [Google Scholar] [CrossRef]
- McCright, A. M.; Marquart-Pyatt, S. T.; Shwom, R. L.; Brechin, S. R.; Allen, S. Ideology, capitalism, and climate: Explaining public views about climate change in the United States. Energy Research & Social Science 2016, 21, 180–189. [Google Scholar] [CrossRef]
- McDonald, J. Unreliable news sites saw surge in engagement in 2020. NewsGuard. 2021.
- Mellers, B.; Hertwig, R.; Kahneman, D. Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychological Science 2001, 12(4), 269–275. [Google Scholar] [CrossRef]
- Mercier, H.; Sperber, D. The enigma of reason; Harvard University Press, 2017. [Google Scholar]
- Mergen, A.; Çetin-Kılıç, N.; Özbilgin, M. F. Artificial intelligence and bias towards marginalised groups: Theoretical roots and challenges. In AI and diversity in a datafied world of work: Will the future of work be inclusive?; Vassilopoulou, J., Kyriakidou, O., Eds.; Emerald Publishing, 2025; pp. 17–38. [Google Scholar] [CrossRef]
- Mitchell, A.; Gottfried, J.; Stocking, G.; Walker, M.; Fedeli, S. Many Americans say made-up news is a critical problem that needs to be fixed. Pew Research Center. 5 June 2019. Available online: https://www.pewresearch.org/journalism/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/.
- Miyazono, K. Epistemic libertarian paternalism. Erkenntnis 2023, advance online publication. [Google Scholar] [CrossRef]
- Mutz, D. C. Impersonal influence: How perceptions of mass collectives affect political attitudes; Cambridge University Press, 1998. [Google Scholar]
- Myerson, R. B. Game theory: Analysis of conflict; Harvard University Press, 1991. [Google Scholar]
- Nahari, G. Verifiability approach: Applications in different judgmental settings. In The Palgrave handbook of deceptive communication; Docan-Morgan, T., Ed.; Palgrave Macmillan, 2019; pp. 213–225. [Google Scholar] [CrossRef]
- Nash, J. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences 1950, 36(1), 48–49. [Google Scholar] [CrossRef]
- Nguyen, C. T. Echo chambers and epistemic bubbles. Episteme 2020, 17(2), 141–161. [Google Scholar] [CrossRef]
- Nickerson, R. S. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology 1998, 2(2), 175–220. [Google Scholar] [CrossRef]
- Nyhan, B.; Reifler, J. When corrections fail: The persistence of political misperceptions. Political Behavior 2010, 32(2), 303–330. [Google Scholar] [CrossRef]
- Palena, N.; Caso, L.; Vrij, A.; Nahari, G. The Verifiability Approach: A meta-analysis. Journal of Applied Research in Memory and Cognition 2021, 10(1), 155–166. [Google Scholar] [CrossRef]
- Panagopoulos, C.; Harrison, B. Consensus cues, issue salience and policy preferences: An experimental investigation. North American Journal of Psychology 2016, 18(2), 405–417. [Google Scholar]
- Pennycook, G.; Rand, D. G. Lazy, not biased: Susceptibility to partisan fake news. Cognition 2019, 188, 39–50. [Google Scholar] [CrossRef]
- Peters, B.; Blohm, G.; Haefner, R.; Isik, L.; Kriegeskorte, N.; Lieberman, J. S.; Ponce, C. R.; Roig, G.; Peters, M. A. K. Generative adversarial collaborations: A new model of scientific discourse. Trends in Cognitive Sciences 2025, 29(1), 1–4. [Google Scholar] [CrossRef]
- Popper, K. Conjectures and refutations: The growth of scientific knowledge; Routledge, 1963. [Google Scholar]
- Prelec, D. A Bayesian truth serum for subjective data. Science 2004, 306(5695), 462–466. [Google Scholar] [CrossRef]
- Priest, G. The logic of paradox. Journal of Philosophical Logic 1979, 8(1), 219–241. [Google Scholar] [CrossRef]
- Qu, S.; Zhou, Y.; Ji, Y.; Dai, Z.; Wang, Z. Robust maximum expert consensus modeling with dynamic feedback mechanism under uncertain environments. Journal of Industrial and Management Optimization 2025, 21(1), 524–552. [Google Scholar] [CrossRef]
- Rawls, J. A theory of justice; Harvard University Press, 1971. [Google Scholar]
- Roozenbeek, J.; van der Linden, S. Fake news game confers psychological resistance against online misinformation. Palgrave Communications 2019, 5(1), 65. [Google Scholar] [CrossRef]
- Schulz-Hardt, S.; Brodbeck, F. C.; Mojzisch, A.; Kerschreiter, R.; Frey, D. Group decision making in hidden profile situations: Dissent as a facilitator for decision quality. Journal of Personality and Social Psychology 2006, 91(6), 1080–1093. [Google Scholar] [CrossRef] [PubMed]
- Shrout, P. E.; Fleiss, J. L. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin 1979, 86(2), 420–428. [Google Scholar] [CrossRef] [PubMed]
- Sieck, W.; Yates, J. F. Exposition effects on decision making: Choice and confidence in choice. Organizational Behavior and Human Decision Processes 1997, 70(2), 207–219. [Google Scholar] [CrossRef]
- Srđević, B. Evaluating the Societal Impact of AI: A Comparative Analysis of Human and AI Platforms Using the Analytic Hierarchy Process. AI 2025, 6(4), 86. [Google Scholar] [CrossRef]
- Storek, A.; Subbiah, M.; McKeown, K. Unsupervised selective rationalization with noise injection. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics 2023, 1, 12647–12659. [Google Scholar] [CrossRef]
- Sunstein, C. R. Nudging: A very short guide. Journal of Consumer Policy 2014, 37(4), 583–588. [Google Scholar] [CrossRef]
- Sunstein, C. R. #Republic: Divided democracy in the age of social media; Princeton University Press, 2017. [Google Scholar]
- Tarski, A. The semantic conception of truth and the foundations of semantics. Philosophy and Phenomenological Research 1944, 4(3), 341–376. [Google Scholar] [CrossRef]
- Thomsen, K. AI and We in the Future in the Light of the Ouroboros Model: A Plea for Plurality. AI 2022, 3(4), 778–788. [Google Scholar] [CrossRef]
- van der Linden, S.; Leiserowitz, A.; Maibach, E. The gateway belief model: A large-scale replication. Journal of Environmental Psychology 2019, 62, 49–58. [Google Scholar] [CrossRef]
- van Prooijen, J. W. Why education predicts decreased belief in conspiracy theories. Applied Cognitive Psychology 2017, 31(1), 50–58. [Google Scholar] [CrossRef]
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359(6380), 1146–1151. [Google Scholar] [CrossRef]
- Vrij, A.; Fisher, R.; Blank, H. A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology 2017, 22(1), 1–21. [Google Scholar] [CrossRef]
- Vrij, A.; Leal, S.; Fisher, R. P. Interviewing to detect lies about opinions: The Devil’s Advocate approach. Advances in Social Sciences Research Journal 2023, 10(12), 245–252. [Google Scholar] [CrossRef]
- Vrij, A.; Mann, S.; Leal, S.; Fisher, R. P. Combining verbal veracity assessment techniques to distinguish truth tellers from lie tellers. European Journal of Psychology Applied to Legal Context 2021, 13(1), 9–19. [Google Scholar] [CrossRef]
- Waldrop, M. M. The genuine problem of fake news. Proceedings of the National Academy of Sciences 2017, 114(48), 12631–12634. [Google Scholar] [CrossRef]
- Westen, D.; Blagov, P. S.; Harenski, K.; Kilts, C.; Hamann, S. Neural bases of motivated reasoning: An fMRI study of emotional constraints on partisan political judgment in the 2004 U.S. presidential election. Journal of Cognitive Neuroscience 2006, 18(11), 1947–1958. [Google Scholar] [CrossRef]
- Zhang, B. H.; Lemoine, B.; Mitchell, M. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society; 2018; pp. 335–340. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).