Preprint Article (This version is not peer-reviewed.)

Gradual Privacy Paradox in AI-Enabled Fitness: An AI Ethics Interpretation of Privacy Satisficing Under Bounded Rationality

Submitted: 02 March 2026. Posted: 03 March 2026.

Abstract
AI-enabled fitness services rely on continuous collection of activity, physiological, and location data to support monitoring and personalized feedback, which raises persistent privacy and security concerns and ethical tensions regarding data use and user autonomy. Nevertheless, sustained engagement with these services remains common, indicating a divergence between privacy concern and continued use. Using online survey data from 596 adults aged 18 years and above, this study examines AI fitness use from an AI ethics perspective grounded in bounded rationality. A Deviation index is constructed as the standardized difference between privacy concern and risk acceptance. High willingness to use AI fitness services is analyzed using a parsimonious probability-based approach. Logistic regression models examine how the likelihood of high use varies across the Deviation range, while accounting for perceived transparency and safety, measured as Information Control Level, and stated privacy trade-off attitudes. The results show that continued use varies systematically across the Deviation spectrum. Higher Deviation values are not associated with a collapse in use probability. Instead, predicted probabilities change gradually across the observed range. Privacy concern and continued AI fitness use therefore coexist within this adult user sample. This pattern supports a descriptive AI ethics interpretation of privacy satisficing under bounded rationality rather than a binary privacy paradox.

1. Introduction

AI-enabled fitness services collect activity, physiological, and location data for monitoring and personalized guidance, which raises persistent privacy and security concerns. Nevertheless, many users continue to rely on such services while simultaneously reporting privacy worry, a pattern commonly described as the privacy paradox [1,2]. This coexistence of concern and use is not merely a behavioral inconsistency. Rather, it reflects an ethical and governance-related tension inherent in data-intensive AI services.
Prior research shows that the privacy paradox does not manifest uniformly across application domains. Instead, it varies with contextual constraints and service characteristics rather than reflecting a simple inconsistency between attitudes and behavior [3,4]. Studies on security awareness further indicate that increased knowledge of vulnerabilities does not necessarily lead to reduced use. In high-benefit settings, perceived usefulness, convenience, and habitual reliance often sustain continued engagement despite acknowledged risks [8].
Compared with general social media platforms, AI fitness services involve more sensitive health-related data while providing recurring and tangible benefits. Empirical research on health club and leisure sport participants suggests that perceived service attributes, motivation, and commitment are closely associated with continued use, satisfaction, and loyalty rather than with service discontinuation [18,23]. As a result, privacy-related decisions in this domain carry greater weight in everyday practice [5,6,7].
Because data disclosure in AI fitness services is continuous rather than episodic, treating privacy concern and service use as a binary opposition is analytically limited. A spectrum-based perspective is better suited to capturing gradual variation in user positioning. This view is consistent with bounded rationality, under which individuals tolerate a certain level of perceived risk as long as basic safeguards and functional benefits remain sufficient for routine use [10,14,15]. Privacy outcomes are therefore not continuously optimized; instead, acceptable configurations emerge through everyday judgment under constraint. Review studies across AI application domains similarly suggest that the coexistence of concern and continued use reflects structural dependence on service benefits and institutionalized usage conditions rather than temporary attitudinal inconsistency [21].
From a governance perspective, security education and awareness initiatives may enhance risk recognition and protective intentions [8]. At the platform level, however, research on public service systems shows that misaligned information flows and accountability structures can suppress usage even when services are technically available [9]. AI fitness represents a contrasting configuration: despite persistent privacy and security concerns, adoption remains widespread. This contrast raises a descriptive question regarding how users manage perceived risk and responsibility when withdrawal from AI-enabled services is impractical.
Building on the Satisficing Equilibrium perspective, which emphasizes good-enough configurations under bounded rationality rather than theoretical optima [10], this study examines AI fitness as a user-side, single-domain case of privacy satisficing. Using survey data from 596 adults aged 18 years and above, a Deviation index is constructed as standardized privacy concern minus standardized risk acceptance. The fitted probability of high AI fitness use is examined across this index. Perceived transparency and safety, measured as Information Control Level, and stated privacy trade-off attitudes are included as contextual factors. The contribution of this study is descriptive: it documents how the likelihood of continued high use varies gradually across the worry–risk configuration and discusses this pattern in relation to privacy satisficing and governance-oriented discussions of security education and platform conditions [2,3,4,5,8,9,10].
Figure 1. Conceptual illustration of the privacy satisficing perspective in AI fitness. Note. Deviation denotes standardized privacy concern minus standardized risk acceptance. Information Control Level and privacy trade-off attitudes are treated as contextual factors. The figure is conceptual; empirical results are reported in Figure 2.

2. Theoretical Background

The privacy paradox refers to the observed gap between stated privacy concern and continued data disclosure or tracking acceptance [1,2]. In digital health and AI fitness contexts, this tension is particularly salient: service operation requires continuous collection of health-related data, while recurring benefits such as feedback and self-management support are delivered simultaneously. Empirical studies on fitness trackers, wearable devices, and leisure sport participation consistently report that privacy concern or dissatisfaction may coexist with sustained use rather than leading to service withdrawal [5,6,7,11,12,18,23].
To interpret this pattern, this study adopts a privacy satisficing perspective grounded in bounded rationality and Satisficing Equilibrium reasoning. Under this view, privacy outcomes are not continuously optimized; instead, users tolerate a certain level of perceived risk when expected benefits and basic safeguards remain acceptable for everyday practice [10,14,15]. Related risk-oriented perspectives similarly describe behavior as adjusting around a subjectively acceptable level of risk rather than seeking full risk elimination [16].
Operationally, this study defines a Deviation index as the standardized difference between privacy concern and risk acceptance. Lower or negative Deviation values indicate lower concern or greater willingness to accept risk, whereas higher values reflect a concern-dominant configuration. Treating Deviation as a continuous positioning measure allows examination of whether the fitted probability of high service use varies gradually across the worry–risk configuration, rather than framing the privacy paradox as a binary condition. In the empirical analysis, perceived transparency and safety, measured as Information Control Level, and stated privacy trade-off attitudes are incorporated as contextual factors [5,6,7,10].
This analytical framing is consistent with security awareness research showing that increased risk awareness does not necessarily lead to reduced use in high-benefit settings. In such contexts, convenience, habitual reliance, and situational constraints often sustain continued engagement [8,10,13,17]. At the platform level, studies on platform disconnection demonstrate that misaligned information flows and responsibility structures can suppress usage even when services remain available [9]. AI fitness represents a contrasting configuration: despite persistent privacy and security concerns, adoption remains widespread. Accordingly, this study treats AI fitness privacy as a case of continued use under acknowledged risk and examines how Deviation, perceived transparency, and privacy trade-off attitudes relate to observed use patterns, without assuming a single optimal level of privacy behavior [5,6,7,8,9,10,13,17].

3. Methods

3.1. Data and Sample

This study uses data from an online survey on AI-enabled fitness services administered via Tencent Questionnaire, a widely used survey platform operated by Tencent Technology Co., Ltd. The survey page recorded 645 views and 633 submissions. After data screening, 596 valid responses were retained. Responses were excluded if completion time was unrealistically short, defined as below 35 seconds, if duplicate response patterns were identified across accounts or devices, or if the respondent was under 18 years of age. The final dataset therefore consists exclusively of adult participants.
Tencent Questionnaire requires account-based login through QQ, WeChat, email, or a mainland China mobile number, and restricts each account to a single submission. This design helps reduce duplicate responses. Most responses originated from mainland China, with additional responses from the Republic of Korea and a small number from other regions. The questionnaire included items on AI fitness use, perceived transparency and safety, willingness to use AI fitness services, privacy attitudes, and basic demographic characteristics. Perceptual items were measured using five-point Likert scales.
Participants were informed at the beginning of the survey that the study targeted adults aged 18 years and above, that participation was voluntary, and that no personally identifiable or sensitive information would be collected. Responses were anonymous, could be discontinued at any time, and were analyzed only in aggregated form. The study was designed as an anonymous adult-only questionnaire without intervention or physical procedures and falls within the scope of minimal risk social science research. Research procedures followed generally accepted ethical principles for human subject research and were approved by the Institutional Review Board of Youngsan University (Approval No. YSUIRB-202601-HR-198-02).
The analytical design adopts a parsimonious and descriptive approach, focusing on observed association patterns rather than causal inference or scale development.

3.2. Variables and Analytical Strategy

The dependent variable is high willingness to use AI fitness services. The original five-point item measuring willingness to use AI fitness applications or devices is recoded into a binary indicator: respondents at or above the sample median are coded as high use, and all others as low use. This approach follows prior studies that contrast relatively higher versus lower intention rather than relying on the full ordinal scale [5,6,11,12].
The main explanatory variable is the Deviation index, defined as the standardized difference between privacy concern and risk acceptance. Both components are z-scored prior to subtraction. Higher Deviation values indicate that privacy concern exceeds stated risk acceptance, whereas lower values indicate a more risk-tolerant configuration. Deviation is treated as a descriptive positioning measure within the sample and is not interpreted as an objective indicator of compliance or risk [2,11,12].
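As a minimal sketch of these two construction steps, the recoding can be expressed as follows. The item responses below are hypothetical and purely illustrative; the actual survey items are not reproduced here.

```python
from statistics import mean, median, pstdev

def zscore(values):
    """Standardize scores to mean 0 and SD 1 (population SD)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Hypothetical five-point responses for six respondents (illustrative only)
concern = [5, 4, 3, 5, 2, 4]   # privacy concern
accept  = [2, 3, 4, 2, 5, 3]   # risk acceptance

# Deviation = z(privacy concern) - z(risk acceptance)
deviation = [c - a for c, a in zip(zscore(concern), zscore(accept))]

# Binary high-use indicator via a median split of the willingness item
willingness = [4, 5, 3, 4, 2, 5]
cut = median(willingness)
high_use = [1 if w >= cut else 0 for w in willingness]
```

By construction, the Deviation values average to zero in-sample, and larger positive values mark a concern-dominant configuration.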
Two additional predictors are included to capture perceived service context. Perceived transparency and safety are summarized as Information Control Level, constructed as the mean of three perceptual items measuring information transparency, accountability expectation, and perceived reliability and safety. The Information Control Level index shows a Cronbach's α of 0.72, which is acceptable for exploratory research using a multidimensional perceptual instrument. Privacy trade-off is measured using a single item capturing willingness to exchange personal data for convenience or improved service; internal consistency reliability is therefore not applicable for this measure. Both variables are treated as continuous indices derived from Likert-scale items, following established guidance for composite indicator construction [7,10]. Gender, age group, and prior app use experience are included as control variables in extended specifications. Statistical calculations are conducted using standard spreadsheet-based procedures.
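The reliability check for a multi-item index such as Information Control Level follows the standard formula α = k/(k−1) · (1 − Σ var(item) / var(total)). A minimal sketch with made-up item scores (the real item data are not reproduced here):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(x) for x in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical scores on the three ICL items (transparency, accountability,
# reliability/safety) for five respondents -- illustrative only
transparency   = [1, 2, 3, 4, 5]
accountability = [2, 2, 3, 5, 4]
reliability    = [1, 3, 3, 4, 5]
alpha = cronbach_alpha([transparency, accountability, reliability])
```

With perfectly correlated items the formula returns exactly 1; weakly related items push it toward 0, which is why a value such as 0.72 is read as acceptable for an exploratory multidimensional index.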
Before model estimation, variance inflation factors are calculated to assess potential multicollinearity among the main explanatory variables. As reported in Table 1, the Deviation index exhibits low variance inflation, while the perception-based indices show higher values. This pattern reflects shared attitudinal structure rather than statistical redundancy or dominance of any single predictor. Given the descriptive focus of the analysis and the emphasis on fitted probability patterns rather than precise marginal effects, these correlated indices are retained.
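The variance inflation factors in Table 1 follow the usual definition VIF_j = 1/(1 − R²_j), where R²_j comes from regressing predictor j on the remaining predictors. The sketch below runs this computation on simulated data (not the survey data) whose correlation structure mimics the Table 1 pattern: two strongly overlapping perception indices and one nearly independent index.

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X (n x k predictor
    matrix without an intercept). For each column j, regress it on the
    remaining columns plus an intercept and return 1 / (1 - R^2_j)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        factors.append(1.0 / (1.0 - r2))
    return factors

# Simulated example: trade shares most of its variance with icl, while dev
# is generated independently of both
rng = np.random.default_rng(42)
icl = rng.normal(3.4, 0.8, 596)
trade = icl + rng.normal(0.0, 0.2, 596)
dev = rng.normal(0.0, 1.3, 596)
factors = vif(np.column_stack([dev, icl, trade]))
```

Under these simulation settings the independent predictor sits near 1 while the overlapping pair inflates sharply, illustrating how shared attitudinal structure alone can produce large VIF values.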
The analysis proceeds in two stages. First, descriptive statistics are reported. Second, logistic regression models are estimated with high-use membership as the dependent variable and Deviation, Information Control Level, and privacy trade-off as the main predictors, with optional demographic controls. Logistic regression is used to model a binary outcome and to present results in terms of fitted probabilities [19,20]. The analytical goal is to examine whether the probability of high AI fitness use varies smoothly across Deviation after accounting for perceived transparency and privacy trade-off attitudes. The results are interpreted as descriptive patterns consistent with a privacy satisficing perspective rather than as causal effects [10].
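The estimation step can be sketched with a plain Newton–Raphson (iteratively reweighted least squares) logistic fit. This is an illustrative implementation run on simulated data with an assumed coefficient structure, not the authors' code or the survey data.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson.
    X: n x k design matrix including an intercept column; y: 0/1 outcomes."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        w = p * (1.0 - p)                     # IRLS weights
        hessian = X.T @ (X * w[:, None])
        beta = beta + np.linalg.solve(hessian, X.T @ (y - p))
    return beta

# Simulated outcome with an assumed positive coefficient on both predictors
rng = np.random.default_rng(0)
n = 596
dev = rng.normal(0.0, 1.3, n)
icl = rng.normal(3.4, 0.8, n)
true_logit = -2.0 + 0.2 * dev + 0.8 * icl
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
X = np.column_stack([np.ones(n), dev, icl])
beta_hat = fit_logit(X, y)
```

The recovered coefficients are on the log-odds scale; converting them to fitted probabilities, as done in the results below, is a matter of applying the inverse-logit transform at chosen predictor values.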

4. Results

4.1. Descriptive Patterns and User Configuration

Table 2 summarizes the basic characteristics of the adult sample. The dataset includes 596 respondents, with a higher proportion of male participants. Most respondents are aged between 18 and 34 years. The majority of responses originate from mainland China, with a smaller share from the Republic of Korea and other regions. Overall willingness to use AI fitness services is moderate to high. Based on a median split of the five-point willingness measure, a majority of respondents are classified into the high-use group, as reported in Table 3. Privacy concern is generally reported at a medium to high level, while risk acceptance shows greater dispersion, indicating heterogeneous tolerance for perceived risk alongside continued interest in AI fitness services [2,5,6,7,11,12].
Table 3 reports descriptive statistics for the main variables. The Deviation index has a mean of zero by construction and displays substantial dispersion, reflecting variation in the configuration of privacy concern and risk acceptance across individuals. Information Control Level is above the scale midpoint on average, suggesting moderate perceived transparency and reliability of AI fitness platforms. Privacy trade-off attitudes are relatively high, indicating a general willingness to exchange personal data for service convenience.
For descriptive purposes, respondents are grouped into low, medium, and high Deviation bands. The share of high-use respondents varies across these bands, and willingness to continue using AI fitness services does not collapse at higher levels of privacy concern. This pattern motivates the probability-based analysis reported in the following section [5,6,7,10].

4.2. Logistic Regression and the Gradual Privacy Paradox

Table 4 reports logistic regression results with high willingness to use AI fitness services as the dependent variable. Deviation, Information Control Level, and privacy trade off attitudes are included as the main predictors. The estimated coefficient for Deviation is positive, indicating that higher privacy concern relative to risk acceptance is not associated with a reduction in the likelihood of high use. Continued engagement therefore persists across the observed Deviation range.
Information Control Level and privacy trade-off attitudes are also positively associated with high use, indicating higher fitted probabilities of continued engagement under greater perceived transparency and stronger acceptance of the privacy and convenience exchange. Figure 2 visualizes the fitted probability of high use across the Deviation index while holding Information Control Level and privacy trade-off at their sample means. The fitted curve shows a smooth and continuous pattern, indicating that the likelihood of high AI fitness use varies gradually across the worry–risk configuration rather than changing discretely at specific thresholds [19,20]. An extended model including gender, age group, and app use experience yields substantively similar results. The associations between Deviation, Information Control Level, and high use remain stable, indicating that the observed gradual pattern is robust to basic demographic controls.
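The fitted curve described above can be reconstructed directly from the reported numbers: the log-odds coefficients in Table 4 and the sample means of Information Control Level (3.37) and privacy trade-off (3.86) from Table 3.

```python
import math

# Log-odds coefficients from Table 4 and sample means from Table 3
B0, B_DEV, B_ICL, B_TRADE = -3.87, 0.18, 0.78, 0.47
ICL_MEAN, TRADE_MEAN = 3.37, 3.86

def p_high_use(deviation):
    """Fitted probability of high use at a given Deviation value, holding
    ICL and privacy trade-off at their sample means (inverse-logit)."""
    logit = B0 + B_DEV * deviation + B_ICL * ICL_MEAN + B_TRADE * TRADE_MEAN
    return 1.0 / (1.0 + math.exp(-logit))

# Evaluate across a grid of Deviation values
probs = [round(p_high_use(d), 3) for d in (-2, -1, 0, 1, 2)]
```

At Deviation = 0 the fitted probability is roughly 0.64, close to the 0.63 share of high-use respondents in Table 3, and the change across the grid is modest and monotone, consistent with the gradual pattern shown in Figure 2.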
Figure 2. Gradual privacy paradox in AI fitness: predicted probability of high use across the Deviation index (adult-only sample). Note. Fitted probabilities are plotted across the Deviation index with Information Control Level and privacy trade-off held at their sample means. The shaded area represents the 95 percent confidence band. Side panels display the distribution of Deviation values and Pearson residuals.

5. Discussion and Conclusion

5.1. Gradual Privacy Paradox as Privacy Satisficing

This study documents a gradual pattern of privacy concern and continued use in AI fitness services. Across the regression results and visual summaries, the fitted probability of high AI fitness use varies systematically with the Deviation index. Within this adult user sample, higher Deviation values, indicating greater privacy concern relative to risk acceptance, are not associated with a lower likelihood of continued use. Instead, the probability of high use changes gradually across the observed Deviation range, as shown in Figure 2. Privacy concern and sustained use therefore coexist, rather than forming a strict binary opposition.
This pattern extends privacy satisficing perspectives grounded in bounded rationality to the AI fitness domain. Continued engagement with high-benefit services occurs while a nonzero level of concern is tolerated, provided that perceived benefits and basic safeguards remain acceptable for everyday practice [10,14,15,16]. Optimization of privacy outcomes does not appear to guide behavior; rather, configurations that are subjectively sufficient in balancing concern, convenience, and perceived control tend to be maintained. From a governance perspective, this configuration contrasts with platform disconnection contexts, in which services remain technically available but are underused due to misaligned information flows and accountability structures [9]. It is also consistent with security awareness research showing that increased awareness of risk does not necessarily translate into avoidance when convenience, habit, and perceived benefit sustain continued engagement [10,13,17]. The contribution of this analysis is descriptive: a gradual privacy paradox in AI fitness use is documented as a domain-specific pattern, without attributing the observed configuration to a single causal mechanism [2,5,6,7,10,14,15,16].

5.2. Implications for Governance and Design

One practical implication of these findings is that awareness-raising alone is unlikely to substantially alter usage behavior in high-benefit service contexts. Governance and design interventions may therefore be more effective when they emphasize clear communication of data practices and provide privacy controls that are visible, usable, and auditable. Such approaches are consistent with privacy-by-design principles and established findings in security awareness research, which emphasize the role of usable and transparent safeguards in sustaining appropriate user reliance [17]. Under a satisficing framing, attention shifts away from encouraging full acceptance or complete rejection; instead, emphasis is placed on maintaining transparent and manageable conditions under which continued use can be sustained, given everyday constraints on user attention and decision capacity [7,10,14,15,22]. These implications are offered as design-oriented considerations rather than as direct policy prescriptions.

5.3. Limitations and Future Research

Several limitations should be acknowledged.
First, the data are cross-sectional and self-reported, and the observed associations should not be interpreted as causal. The analysis is designed to document patterns of coexistence between privacy concern and continued use rather than to test competing causal mechanisms. Second, the sample consists of adult users with interest in or exposure to AI fitness services. The findings may not generalize to non-users, clinical populations, or contexts in which AI fitness services are used under different institutional or regulatory conditions. In addition, the sample is dominated by respondents from China, and caution is required when extending the results across regions with different data governance frameworks.
Future research may apply longitudinal designs or incorporate behavioral usage data to examine whether similar Deviation-related patterns emerge over time or across other AI-enabled services. Comparative studies across regulatory settings may further clarify the domain specificity of privacy satisficing dynamics and the conditions under which gradual privacy paradox patterns persist or change.

References

  1. Wu, P. F. The privacy paradox in the context of online social networking: A self-identity perspective. J. Assoc. Inf. Sci. Technol. 2019, 70(3), 207–217.
  2. Dinev, T.; Hart, P. An extended privacy calculus model for e-commerce transactions. Inf. Syst. Res. 2006, 17(1), 61–80.
  3. Hirschprung, R. S. Is the privacy paradox a domain-specific phenomenon? Computers 2023, 12, 156.
  4. Arzoglou, E.; Kortesniemi, Y.; Ruutu, S.; Elo, T. The role of privacy obstacles in privacy paradox: A system dynamics analysis. Systems 2023, 11(4), 205.
  5. Abdelhamid, M. Fitness tracker information and privacy management: Empirical study. J. Med. Internet Res. 2021, 23(11), e23059.
  6. Kang, H.; Jung, E. H. The smart wearables privacy paradox: A cluster analysis of smartwatch users. Behav. Inf. Technol. 2021, 40(16), 1755–1768.
  7. Zhang, P.; Boulos, M. N. K. Privacy by design environments for large-scale health research and federated learning from data. Int. J. Environ. Res. Public Health 2022, 19, 11876.
  8. So, G. A study on scenario-based web application security education method. Int. J. Internet Broadcasting Commun. 2023, 15(3), 149–159.
  9. Su, H.; So, G. Platform disconnection in rural revitalization: A multi-level analysis with reference to East Asia. Int. J. Internet Broadcasting Commun. 2025, 17(3), 183–196.
  10. Su, H.; Liao, J.; So, G. Satisficing equilibrium and multi-actor trust in smart tourism: Evidence from AI governance. Preprints 2025.
  11. Cho, J. Y.; Ko, D.; Lee, B. G. Strategic approach to privacy calculus of wearable device user regarding information disclosure and continuance intention. KSII Trans. Internet Inf. Syst. 2018, 12(7), 3356–3374.
  12. Reith, R.; Buck, C.; Lis, B.; Eymann, T. Integrating privacy concerns into the unified theory of acceptance and use of technology to explain the adoption of fitness trackers. Int. J. Innov. Technol. Manag. 2020, 17(7), 2050049.
  13. Lebek, B.; Uffen, J.; Neumann, M.; Hohler, B.; Breitner, M. H. Information security awareness and behavior: A theory-based literature review. Manag. Res. Rev. 2014, 37(12), 1049–1092.
  14. Simon, H. A. A behavioral model of rational choice. Q. J. Econ. 1955, 69(1), 99–118.
  15. Lilly, G. Bounded rationality: A Simon-like explication. J. Econ. Dyn. Control 1994, 18(1), 205–230.
  16. Wilde, G. J. S. The theory of risk homeostasis: Implications for safety and health. Risk Anal. 1982, 2(4), 209–225.
  17. Bu, F.; Ji, L. Research on privacy by design behavioural decision-making of information engineers considering perceived work risk. Systems 2024, 12, 250.
  18. Tak, E.; Park, S. The effect of exercise commitment on the quality of life according to motivation for participation in leisure sports. Int. J. Adv. Smart Converg. 2021, 10(1), 125–133.
  19. Hosmer, D. W.; Lemeshow, S.; Sturdivant, R. X. Applied Logistic Regression, 3rd ed.; Wiley: Hoboken, NJ, USA, 2013.
  20. Mize, T. D. Best practices for estimating, interpreting, and presenting average marginal effects. Sociol. Sci. 2019, 6, 81–117.
  21. Yun, J.-R. AI journalism for humans: The possibilities of collaboration and coexistence. J. Converg. Cult. Technol. 2025, 11(3), 189–198.
  22. Aquilino, L.; Di Dio, C.; Manzi, F.; Massaro, D.; Bisconti, P.; Marchetti, A. Decoding trust in artificial intelligence: A systematic review of quantitative measures and related variables. Informatics 2025, 12, 70.
  23. Chung, Y.; Park, S. A study on the relationship between health club users' perception of service quality and use satisfaction and loyalty. Int. J. Adv. Cult. Technol. 2021, 9(4), 145–153.
Table 1. Variance inflation factors of key explanatory variables (N = 596).
Variable VIF
Deviation 1.11
Information Control Level 17.13
Privacy trade off 17.25
Note. Variance inflation factors are reported for the main explanatory variables included in the logistic regression models. Higher variance inflation for the perception-based indices reflects shared attitudinal structure rather than statistical redundancy. These indices are retained for descriptive analysis.
Table 2. Sample characteristics (N = 596).
Variable Category n %
Gender Male 405 67.9
Female 191 32.1
Age group 18–24 201 33.7
25–34 280 47.0
35–44 93 15.6
45 and above 22 3.7
Country or region China 537 90.1
Republic of Korea 58 9.7
Other 1 0.2
Note. Percentages may not sum to 100 due to rounding.
Table 3. Descriptive statistics of key variables (N = 596).
Variable Mean SD
High willingness to use AI fitness (1 = high-use) 0.63 0.48
Deviation (z-privacy concern − z-risk acceptance) 0.00 1.34
ICL (perceived transparency index, 1–5) 3.37 0.79
Privacy trade-off (1–5) 3.86 0.79
Gender (1 = female) 0.32 0.47
Age group (ordinal, 1–4) 1.89 0.79
App-use experience (1 = experienced user) 0.93 0.26
Note. Variables are defined and coded as described in Section 3. High willingness to use AI fitness is a binary indicator based on a median split of the original five point item.
Table 4. Logistic regression results for high willingness to use AI fitness services (N = 596).
Variable Coefficient Std. Error
Deviation 0.18 0.08
ICL (perceived transparency) 0.78 0.13
Privacy trade-off 0.47 0.14
Constant –3.87 0.61
Note. Coefficients are reported in log odds. Deviation is defined as the standardized difference between privacy concern and risk acceptance. Information Control Level summarizes perceived transparency, accountability expectation, and perceived reliability and safety. Effects are interpreted using fitted probabilities rather than marginal changes in log odds [19,20].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.