Preprint · Article · This version is not peer-reviewed.

Beyond Optimization: Perceptual Integrity as a Foundation for Human–AI Coherence

Submitted: 04 April 2026 · Posted: 08 April 2026


Abstract
As artificial intelligence increasingly mediates decision-making across organizational and societal contexts, a critical challenge emerges concerning the preservation of human cognitive coherence under system-driven conditions. While prior research has emphasized trust, fairness, and transparency, it provides limited explanation of whether individuals remain cognitively aligned with decisions generated through algorithmic processes. This paper introduces perceptual integrity as a construct capturing the extent to which individuals maintain coherence, interpretability, and authorship over their decisions in human–AI interaction. Building on a proposed theory of conscious leadership, cognition is conceptualized as an emergent interaction among environment, memory, systems, and the human agent, where decision quality depends on the balance among these forces. An experimental study (N = 602; final analytic sample of 575 after manipulation-check exclusions) was conducted to examine the impact of decision structure on perceptual integrity and its relationship with trust. Participants were assigned to either an algorithmic imposition condition or an interpretive autonomy condition. Results indicate that algorithmic imposition reduces perceptual integrity compared to interpretive autonomy (t(573) = 4.21, p < 0.001, d = 0.38). Furthermore, perceptual integrity significantly predicts trust (β = 0.36, p < 0.001) and partially mediates the relationship between decision condition and trust (indirect effect = 0.17, 95% CI [0.09, 0.27]). These findings suggest that trust in AI-assisted decisions is shaped not only by system performance but also by the degree to which individuals remain cognitively engaged in the decision process. By introducing perceptual integrity as a measurable manifestation of cognitive balance, this study contributes to human–AI research and offers a new perspective on leadership as the management of cognitive conditions rather than behavioral outcomes.

1. Introduction

The rapid expansion of artificial intelligence (AI) across organizational and societal domains has fundamentally transformed how decisions are generated, evaluated, and enacted. From recruitment and performance evaluation to clinical support and strategic planning, algorithmic systems increasingly shape not only decision outcomes but also the processes through which those outcomes are produced. While these systems offer substantial gains in efficiency, scalability, and predictive accuracy, they also introduce a structural shift in which interpretive processes—historically embedded within human cognition—are partially externalized to computational systems [4,5,6].
A growing body of research has examined human responses to AI-assisted decision-making, primarily through constructs such as trust, fairness, and transparency [1,4,10]. Studies on algorithm aversion and appreciation demonstrate that individuals may either reject or favor algorithmic recommendations depending on contextual cues and prior experience [2,3,9]. More recent work highlights the risk of overreliance on automated systems, where individuals defer to algorithmic outputs at the expense of independent reasoning [7,8]. Collectively, these findings suggest that human–AI interaction is not merely a technical problem but a cognitive and behavioral one.
Despite these advances, existing frameworks remain largely outcome-oriented. Constructs such as trust and perceived fairness explain whether individuals accept or reject AI-assisted decisions, but they provide limited insight into how individuals experience the decision-making process itself. In particular, current literature does not adequately address whether individuals remain cognitively aligned with decisions that are partially or fully generated by algorithmic systems. This distinction is critical, as decision acceptance does not necessarily imply decision authorship. Individuals may comply with or even trust algorithmic outputs while experiencing a diminished sense of coherence between their own interpretation and the resulting decision.
This gap points to a deeper, underexplored dimension of human–AI interaction: the integrity of the perceptual process through which decisions are formed. In environments where decision authority is distributed between human and artificial agents, maintaining alignment between interpretation, meaning, and action becomes a central challenge. Without such alignment, decision-making may appear efficient at the system level while becoming fragmented at the cognitive level, with potential consequences for trust, engagement, and long-term effectiveness [5,13].
To address this limitation, the present study introduces perceptual integrity as a construct capturing the degree to which individuals maintain coherence between their interpretation of a situation, the decision process, and the resulting outcome. Unlike trust, which reflects an evaluation of the system, or autonomy, which reflects perceived control, perceptual integrity focuses on the preservation of cognitive authorship during decision formation. In this sense, it represents a more foundational condition that shapes how individuals engage with and internalize algorithmically mediated decisions.
This study builds on a proposed theory of conscious leadership that conceptualizes cognition as an emergent interaction among four core elements: environment, memory, systems, and the human agent. Within this framework, decision quality is not determined solely by accuracy or efficiency but by the degree of balance among these interacting forces. Perceptual integrity is positioned as the measurable manifestation of this balance, reflecting whether individuals remain cognitively engaged and interpretively aligned within system-driven environments.
Empirically, the study examines how different decision structures—specifically, algorithmic imposition versus interpretive autonomy—affect perceptual integrity and how perceptual integrity, in turn, influences trust. By employing a randomized controlled experimental design, the study seeks to establish whether the structure of decision authority shapes cognitive coherence and whether this coherence functions as a mechanism linking decision processes to trust outcomes.
The study makes three primary contributions. First, it advances research on human–AI interaction by introducing perceptual integrity as a construct that captures decision coherence rather than decision acceptance. Second, it extends existing perspectives on leadership by shifting the focus from behavioral coordination to the management of cognitive conditions in technologically mediated environments. Third, it provides empirical evidence that interpretive alignment represents a key mechanism through which algorithmic decision structures influence psychological outcomes.
By reframing the problem from optimization to coherence, this work contributes to a more comprehensive understanding of human–AI systems. As algorithmic infrastructures continue to expand, the central question is no longer only whether systems produce better decisions, but whether individuals remain cognitively present within the decisions those systems help produce.

2. Materials and Methods

2.1. Study Design

This study employed a randomized controlled experimental design to examine the causal effects of decision structure on perceptual integrity and trust in a simulated human–AI decision-making context. Participants were randomly assigned to one of two conditions: (1) an algorithmic imposition condition, in which participants were required to follow an AI-generated recommendation, and (2) an interpretive autonomy condition, in which participants were allowed to accept, modify, or override the recommendation.
This design isolates the effect of decision authority while holding informational input constant, enabling a focused examination of how algorithmic structures influence cognitive and behavioral outcomes.

2.2. Participants

A total of 602 participants were recruited via the online platform Prolific to ensure demographic diversity and data quality. Eligibility criteria included being at least 18 years of age and fluent in English. Participants were drawn from a range of professional backgrounds, including business, healthcare, and technology.
The sample consisted of individuals aged between 21 and 55 years (M = 34.2, SD = 8.7), with balanced gender representation. All participants reported prior exposure to digital systems, and a majority indicated familiarity with AI-assisted environments.
The study was conducted in accordance with institutional ethical guidelines and approved by the relevant Institutional Review Board (IRB). Informed consent was obtained from all participants prior to participation, and all data were collected anonymously.

2.3. Experimental Procedure

The experiment was administered online using a structured survey interface. Participants were presented with a standardized decision-making scenario involving the selection of a project manager from four candidates. Each candidate profile included multiple dimensions of information, such as performance metrics, peer evaluations, and leadership competencies.
An AI-generated recommendation was provided in both conditions, based on a predefined scoring algorithm combining quantitative and qualitative indicators. The recommendation was presented alongside a brief justification to reflect typical decision-support systems.
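The exact form of this scoring algorithm is not reported; purely as an illustration of a weighted-sum rule of the kind described, a sketch in R (with hypothetical indicators, weights, and scales) might look as follows:

```r
# Hypothetical weighted-sum scoring rule; the study does not report the
# actual indicators or weights, so all values here are placeholders.
w <- c(performance = 0.5, peer_eval = 0.3, leadership = 0.2)

candidates <- data.frame(
  performance = c(82, 74, 90, 68),   # performance metrics (0-100)
  peer_eval   = c(75, 88, 70, 80),   # peer evaluations (0-100)
  leadership  = c(80, 79, 72, 85)    # leadership competencies (0-100)
)

scores <- as.matrix(candidates) %*% w   # composite score per candidate
which.max(scores)                       # index of the recommended candidate
```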
Participants in the algorithmic imposition condition were instructed to follow the recommendation without modification, and the interface restricted alternative selections. In contrast, participants in the interpretive autonomy condition were allowed to accept, modify, or override the recommendation, preserving interpretive engagement.

2.4. Manipulation Check

To verify the effectiveness of the experimental manipulation, participants completed a three-item perceived autonomy scale immediately following the decision task. Items assessed the extent to which participants felt able to influence or control the decision (e.g., “I had the ability to modify the final decision”).
Results confirmed a significant difference between conditions (p < 0.001), indicating that participants in the interpretive autonomy condition experienced higher perceived control than those in the algorithmic imposition condition.

2.5. Measures

Perceptual Integrity

Perceptual integrity was operationalized as a multi-item construct capturing the degree of coherence between interpretation, decision process, and outcome. The scale consisted of 21 items across three dimensions: interpretive coherence, system alignment, and temporal continuity. Sample items included: “The decision aligns with how I interpret the situation” and “I feel that the decision reflects my understanding of the problem.”
Responses were recorded on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). The scale demonstrated high internal consistency (Cronbach’s α = 0.91).
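As a sketch, a reliability estimate of this kind could be computed in R with the psych package (the item column names pi_1 through pi_21 are hypothetical, since the actual item labels are not published):

```r
library(psych)   # provides alpha() for internal-consistency estimates

pi_items <- df[, paste0("pi_", 1:21)]   # the 21 perceptual-integrity items
psych::alpha(pi_items)                  # raw and standardized Cronbach's alpha
```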

Trust

Trust was measured using a 4-item scale assessing confidence in the decision outcome and perceived reliability of the process. Sample items included: “I trust the decision that was made” and “The decision process is reliable.” Responses were recorded on a 5-point Likert scale. Internal consistency was satisfactory (α = 0.87).

2.6. Control Variables

To account for potential confounding effects, the analysis included demographic variables (age, gender, education) and prior familiarity with AI systems. These variables have been shown to influence perceptions of algorithmic decision-making and were included to improve the robustness of the findings.

2.7. Statistical Analysis

Independent samples t-tests were conducted to assess differences between conditions, and multiple regression analysis was used to examine the predictive relationship between perceptual integrity and trust.
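A minimal sketch of these two analyses in R, assuming a data frame df with a two-level condition factor, scale means integrity and trust, and the control variables described in Section 2.6 (all column names hypothetical):

```r
# Between-condition difference in perceptual integrity
t.test(integrity ~ condition, data = df, var.equal = TRUE)

# Perceptual integrity as a predictor of trust, net of controls
fit <- lm(trust ~ integrity + age + gender + education + ai_familiarity,
          data = df)
summary(fit)   # coefficient table plus R-squared
```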
Mediation analysis was conducted using PROCESS Model 4 with 5,000 bootstrap samples to estimate indirect effects and corresponding confidence intervals. This approach allows for robust estimation of mediation effects without assuming normality of the sampling distribution.
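PROCESS is a macro for SPSS, SAS, and R; an analogous bootstrap mediation analysis in R using the mediation package might look like the following sketch (same hypothetical variable names as above):

```r
library(mediation)   # causal mediation analysis with bootstrap CIs

model.m <- lm(integrity ~ condition, data = df)           # mediator model
model.y <- lm(trust ~ condition + integrity, data = df)   # outcome model

med <- mediate(model.m, model.y,
               treat = "condition", mediator = "integrity",
               boot = TRUE, sims = 5000)   # 5,000 bootstrap resamples
summary(med)   # ACME = indirect effect, ADE = direct effect, with CIs
```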
All analyses were performed using statistical software (e.g., SPSS or R), with statistical significance evaluated at the conventional alpha level of 0.05.

2.8. Data Transparency

The data supporting the findings of this study are available from the corresponding author upon reasonable request. The study was not pre-registered, but all analyses were conducted in accordance with standard statistical practices.

3. Results

3.1. Descriptive Statistics and Manipulation Check

Following data screening, responses that failed the manipulation check were excluded (n = 27), resulting in a final sample of 575 participants.
Participants in the interpretive autonomy condition reported significantly higher perceived control (M = 4.12, SD = 0.76) than those in the algorithmic imposition condition (M = 2.98, SD = 0.89), confirming the effectiveness of the experimental manipulation (t(573) = 10.21, p < 0.001).
Descriptive statistics indicated moderate levels of perceptual integrity (M = 3.47, SD = 0.72) and trust (M = 3.58, SD = 0.69), suggesting variability in participants’ experiences of AI-assisted decision-making.

3.2. Effect of Decision Structure on Perceptual Integrity

An independent samples t-test revealed that participants in the algorithmic imposition condition reported lower perceptual integrity (M = 3.28, SD = 0.71) compared to those in the interpretive autonomy condition (M = 3.65, SD = 0.68).
This difference was statistically significant (t(573) = 4.21, p < 0.001), with a moderate effect size (Cohen’s d = 0.38), indicating that restricting interpretive engagement reduces perceived coherence in decision-making.
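For reference, the effect size reported here corresponds to the standard pooled-standard-deviation form of Cohen's d; a minimal sketch, where x and y stand for the two conditions' perceptual-integrity scores:

```r
# Pooled-SD Cohen's d for two independent groups
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / sp
}
```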

3.3. Predictive Role of Perceptual Integrity

To examine whether perceptual integrity predicts trust, a regression analysis was conducted. The results indicated that perceptual integrity was a significant predictor of trust (β = 0.36, SE = 0.06, t = 6.02, p < 0.001), explaining approximately 16% of the variance in trust (R² = 0.16).
These findings suggest that while perceptual integrity plays a meaningful role in shaping trust, it does not fully account for trust formation, indicating the presence of additional contributing factors.

3.4. Mediation Analysis

A mediation analysis using PROCESS Model 4 with 5,000 bootstrap samples revealed a significant indirect effect of decision condition on trust through perceptual integrity (indirect effect = 0.17, SE = 0.04, 95% CI [0.09, 0.27]).
The direct effect of decision condition on trust remained significant (β = 0.18, SE = 0.07, p = 0.02), indicating partial mediation. This suggests that perceptual integrity explains part of the relationship between decision structure and trust, while other mechanisms also contribute to trust outcomes.
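For linear mediation models of this kind, the total effect decomposes into direct and indirect components, so the reported coefficients imply a total effect of approximately c = c′ + ab ≈ 0.18 + 0.17 = 0.35.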

3.5. Discriminant Validity

Correlation analyses were conducted to assess the distinctiveness of perceptual integrity relative to related constructs. Perceptual integrity showed moderate correlations with perceived autonomy (r = 0.49) and fairness (r = 0.44), and a slightly stronger correlation with trust (r = 0.51).
All correlations remained below the threshold of 0.70, indicating acceptable discriminant validity and suggesting that perceptual integrity captures a distinct dimension of experience.
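As a sketch, such a correlation matrix could be produced in R from the construct scores (hypothetical column names as above):

```r
# Pairwise correlations among the focal constructs
round(cor(df[, c("integrity", "autonomy", "fairness", "trust")],
          use = "pairwise.complete.obs"), 2)
```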

3.6. Robustness Checks

Including demographic control variables (age, gender, education) and prior familiarity with AI did not materially alter the results. The overall pattern of findings remained stable, indicating that the observed relationships were not driven by demographic variation.

3.7. Summary of Findings

Overall, the results provide consistent support for the proposed hypotheses. Decision structure significantly influences perceptual integrity, with algorithmic imposition reducing perceived coherence relative to interpretive autonomy. Perceptual integrity is positively associated with trust and partially mediates the relationship between decision condition and trust outcomes.
The moderate effect sizes and partial mediation observed suggest that perceptual integrity represents an important, but not exclusive, mechanism underlying human responses to algorithmically mediated decision-making.

4. Discussion

The present study sought to move beyond outcome-based accounts of human–AI interaction by examining the cognitive conditions under which individuals remain coherent participants in system-assisted decision-making. While prior research has largely focused on trust, fairness, and transparency, the findings suggest that these constructs may depend on a more fundamental process—whether individuals retain a sense of interpretive alignment and authorship within the decision itself.
The results provide consistent evidence that decision structure influences perceptual integrity. Participants exposed to algorithmic imposition reported lower levels of perceptual integrity compared to those allowed interpretive autonomy, indicating that restricting engagement with the decision process can reduce perceived coherence. Importantly, this effect was moderate rather than extreme, suggesting that while algorithmic systems influence cognition, they do not fully determine it. This aligns with emerging perspectives that human–AI interaction is characterized by negotiation rather than replacement of human judgment [5,13].
A central contribution of this study lies in demonstrating that perceptual integrity is a meaningful predictor of trust. Unlike traditional approaches that treat trust as a response to system performance, the present findings indicate that trust is also shaped by the individual’s cognitive experience of the decision process. Individuals were more likely to trust outcomes when they perceived their decisions as coherent and interpretable, regardless of whether the decision was supported by an algorithm. This distinction helps explain inconsistencies in prior research on algorithm aversion and appreciation [2,3,9], where acceptance of AI does not always align with its objective accuracy.
The mediation analysis further supports this interpretation by showing that perceptual integrity partially explains the relationship between decision structure and trust. This suggests that algorithmic systems do not influence trust directly, but rather through their impact on how individuals experience the decision process. However, the partial nature of the mediation indicates that perceptual integrity operates alongside other mechanisms, such as perceived competence or outcome expectations, highlighting the complexity of trust formation in human–AI contexts.
Taken together, these findings support the argument that existing constructs are insufficient to fully explain human responses to AI-assisted decision-making. While trust captures whether individuals accept or rely on a system, it does not capture whether individuals remain cognitively aligned with the decisions produced. In this sense, perceptual integrity addresses a distinct dimension—decision authorship—by reflecting whether individuals experience the decision as their own. This distinction is critical, as individuals may comply with or trust a decision while experiencing a reduced sense of cognitive ownership.
From a theoretical perspective, these results support a shift from outcome-based to process-based models of human–AI interaction. The findings suggest that decision quality cannot be understood solely in terms of accuracy or efficiency, but must also consider the integrity of the cognitive process through which decisions are formed. Within this perspective, perceptual integrity can be interpreted as the observable manifestation of cognitive balance, reflecting the degree to which environmental information, memory-based interpretation, system input, and human judgment remain aligned during decision-making.
This perspective extends existing views on leadership in technologically mediated environments. Rather than focusing on behavioral influence or performance optimization, leadership can be reconceptualized as the management of cognitive conditions that enable coherent decision-making. In contexts where algorithmic systems increasingly structure decisions, maintaining interpretive space becomes a key leadership function. This involves not only designing systems that perform effectively, but also ensuring that individuals retain the capacity to interpret, question, and engage with those systems.
The findings of this study contribute to what we term Rohaimi’s Theory of Conscious Leadership. Within this emerging framework, leadership is defined not as the direction of action, but as the preservation of cognitive balance in environments shaped by intelligent systems. Perceptual integrity represents a central mechanism within this framework, capturing whether individuals remain active participants in their own decision processes. By integrating cognitive structure, system interaction, and decision outcomes, the framework provides a basis for understanding how human agency can be sustained in increasingly automated contexts.

Limitations and Future Research

Several limitations should be acknowledged. First, the study employed a controlled experimental design, which, while enabling causal inference, may not fully capture the complexity of real-world decision environments. Future research could examine perceptual integrity in organizational or longitudinal settings to better understand how it evolves over time.
Second, perceptual integrity was measured using self-report instruments, which may be influenced by subjective bias. Future studies could incorporate behavioral or process-based measures, such as response time, decision revision patterns, or eye-tracking, to capture cognitive engagement more directly.
Third, the present study focused on a general decision-making scenario. The dynamics of perceptual integrity may differ across domains, particularly in high-stakes environments such as healthcare, finance, or public policy. Investigating these contexts may provide further insight into the boundary conditions of the construct.
Finally, while the proposed framework offers a conceptual foundation, further empirical work is needed to validate and refine the relationships among cognitive balance, perceptual integrity, and related constructs. In particular, future research could examine additional moderators, such as expertise, system transparency, or task complexity.

5. Conclusions

This study contributes to the growing body of research on human–AI interaction by shifting the analytical focus from decision outcomes to the cognitive conditions under which those decisions are experienced. While existing literature has emphasized trust, fairness, and system performance, the present findings suggest that these outcomes are shaped by a more fundamental factor—whether individuals remain cognitively aligned with the decisions they enact.
By introducing perceptual integrity as a construct capturing decision coherence, this study provides a framework for understanding how individuals experience algorithmically mediated decisions. The results indicate that decision structures influence perceptual integrity, that perceptual integrity is associated with trust, and that this relationship is partial rather than deterministic. These findings highlight that trust in AI-assisted decisions is not solely a function of system accuracy, but also of the extent to which individuals perceive themselves as active participants in the decision process.
From a theoretical perspective, this work suggests a shift toward process-based models of human–AI interaction, where cognitive alignment plays a central role. Perceptual integrity is positioned as a measurable manifestation of this alignment, offering a basis for examining how cognition adapts under system influence. In doing so, the study contributes to the development of a broader conceptual perspective that reframes leadership as the management of cognitive conditions rather than behavioral outcomes.
Future research should extend this work in several directions. First, longitudinal studies are needed to examine how perceptual integrity evolves with repeated exposure to AI systems and whether sustained interaction leads to adaptation or degradation of cognitive coherence. Second, research across applied domains—such as healthcare, finance, and governance—can provide insight into how contextual factors shape the relationship between system design and cognitive experience. Third, alternative measurement approaches, including behavioral and process-based indicators, may offer a more nuanced understanding of cognitive engagement beyond self-report data. Finally, further theoretical development is required to refine the relationships among cognitive balance, perceptual integrity, and related constructs within diverse technological environments.
The practical implications of these findings are significant. For organizations, the results suggest that the effectiveness of AI systems should be evaluated not only in terms of performance but also in terms of their impact on users’ cognitive experience. Systems that preserve interpretability and allow for meaningful engagement may support more sustainable decision processes. For policymakers, the findings highlight the importance of human-centered AI frameworks that maintain user agency within automated systems. For leaders, the study underscores the need to design environments that balance technological capability with cognitive autonomy, ensuring that individuals remain engaged, interpretive, and accountable in their decisions.
In conclusion, as artificial intelligence continues to shape decision-making environments, the central question is not only whether systems can produce better decisions, but whether individuals remain cognitively present within those decisions. Addressing this challenge requires moving beyond optimization toward a deeper understanding of coherence, balance, and awareness in human–AI interaction.

Author Contributions

Conceptualization, A.H.A.; methodology, A.H.A.; formal analysis, A.H.A.; investigation, A.H.A.; resources, A.H.A.; data curation, A.H.A.; writing—original draft preparation, A.H.A.; writing—review and editing, A.H.A.; visualization, A.H.A.; supervision, A.H.A.; project administration, A.H.A. The author has read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by the author.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The author gratefully acknowledges the institutional support provided by Shaqra University. During the preparation of this manuscript, the author used ChatGPT (OpenAI) to assist with language editing and improving clarity of expression. The author reviewed and edited all AI-assisted outputs and assumes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Mayer, R. C.; Davis, J. H.; Schoorman, F. D. An integrative model of organizational trust. Acad. Manage. Rev. 1995, 20, 709–734.
2. Dietvorst, B. J.; Simmons, J. P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126.
3. Logg, J. M.; Minson, J. A.; Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
4. Glikson, E.; Woolley, A. W. Human trust in artificial intelligence: Review of empirical research. Acad. Manage. Ann. 2020, 14, 627–660.
5. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Acad. Manage. Rev. 2021, 46, 192–210.
6. Jussupow, E.; Spohrer, K.; Heinzl, A.; Barrot, C. Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Inf. Syst. Res. 2021, 32, 713–735.
7. Buçinca, Z.; Malaya, M. B.; Gajos, K. Z. To trust or to think: Cognitive forcing functions can reduce overreliance on AI. Proc. CHI Conf. Hum. Factors Comput. Syst. 2021.
8. Bansal, G.; Wu, T.; Zhou, D.; Fok, R. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. Proc. CHI Conf. Hum. Factors Comput. Syst. 2021.
9. Burton, J. W.; Stein, M.-K.; Jensen, T. B. A systematic review of algorithm aversion. J. Behav. Decis. Mak. 2020, 33, 220–239.
10. Köbis, N.; Bonnefon, J.-F.; Rahwan, I. Bad machines corrupt good morals. Nat. Hum. Behav. 2021, 5, 679–685.
11. Siau, K.; Wang, W. Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. J. Database Manag. 2020, 31, 74–87.
12. Dellermann, D.; Ebel, P.; Söllner, M.; Leimeister, J. M. Hybrid intelligence. Bus. Inf. Syst. Eng. 2019, 61, 637–643.
13. Faraj, S.; Pachidi, S.; Sayegh, K. Working and organizing in the age of artificial intelligence. Inf. Organ. 2018, 28, 62–70.
14. Schmutz, J. B.; Outland, N.; Kerstan, S.; Georganta, E.; Ulfert, A.-S. AI-teaming: Redefining collaboration in the digital era. Curr. Opin. Psychol. 2024, 58, 101837.
15. Woolley, A. W.; Gupta, P. Understanding collective intelligence: Investigating the role of collective memory, attention, and reasoning processes. Perspect. Psychol. Sci. 2024, 19, 344–354.
16. van Knippenberg, D.; Pearce, C. L.; van Ginkel, W. P. Shared leadership–vertical leadership dynamics in teams. Group Organ. Manag. 2025, 50, 44–67.
17. De Vincenzo, F.; Curșeu, P. L.; Chirilă, M. Collective forms of leadership and team cognition in work teams: A systematic and critical review. Acta Psychol. 2025, 259, 105403.
18. Abson, E.; Schofield, P.; Kennell, J. Making shared leadership work: The importance of trust in project-based organisations. Int. J. Proj. Manag. 2024, 42, 102567.
19. Ling, T. C.; Choong, Y. O.; Ng, L. P.; Lau, T. C. Beyond fairness: Exploring organizational citizenship behavior through the lens of self-efficacy and trust in principals. Humanit. Soc. Sci. Commun. 2025, 12, 288.
20. El-Ashry, A. M.; et al. Mediating effect of psychological safety on the relationship between inclusive leadership and nurses’ absenteeism. BMC Nurs. 2025, 24, 826.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.