Preprint · Article · This version is not peer-reviewed.

Good-Enough Privacy in Platform Governance: Evidence from Fujian and Busan–Gyeongnam

Submitted: 18 December 2025 · Posted: 22 December 2025


Abstract
Artificial intelligence (AI) platforms in East Asia often elicit privacy concern yet sustain user participation. This study interprets the pattern as bounded compliance—a satisficing equilibrium in which engagement persists once minimum transparency and reliability thresholds are perceived in platform governance. A symmetric adult survey in Fujian, China (N = 185) and Busan–Gyeongnam, Korea (N = 187) examines how accountability visibility and privacy concern jointly shape platform trust and use. Heat-map diagnostics and logit marginal effects show consistently high willingness (≥0.70) across conditions, with stronger accountability sensitivity in Korea and stronger continuity assurance in China. Under high concern, willingness converges to a “good-enough” zone where participation endures despite discomfort. The findings highlight governance thresholds as practical levers for trustworthy AI: enhancing feedback visibility (e.g., case tracking, resolution proofs) and maintaining institutional continuity (e.g., O&M capacity, incident-response coverage) can sustain public confidence in AI-enabled public-service platforms.
Subject: Social Sciences – Area Studies

1. Introduction

Artificial intelligence (AI) platforms have diffused rapidly into public-service delivery across East Asia, bringing efficiency gains alongside persistent privacy anxiety [1,2,3]. Users acknowledge risk yet continue to engage—an apparent “privacy paradox” reported across domains [4,5,21,28,29]. Individual-level explanations (e.g., cognitive bias, risk–benefit miscalibration, habituation) clarify micro-level tolerance [6,21,28,29], but they understate how platform-governance arrangements set the boundaries within which people balance caution and compliance on digital services [20,23,30].
In highly institutionalized settings such as China and Korea, privacy choices are mediated by governance norms—visibility of explanations and accountability channels, and the reliability of operating procedures—that shape what counts as “good-enough” protection [20,23,30]. Building on bounded rationality and satisficing, we frame continued use as bounded compliance: engagement persists once minimum transparency and reliability thresholds are perceived, even when concern remains [14,17,21].
This study conducts a symmetric, subnational survey in Fujian, China (N = 185) and Busan–Gyeongnam, Korea (N = 187) to examine how privacy concern and accountability visibility jointly sustain participation in AI-enabled public-service platforms. Using cross-regional heat maps and binary-logit average marginal effects, we show that willingness to use remains consistently high (≥ 0.70) across most conditions, while sensitivity patterns differ in ways that reflect institutional pathways of trust—with accountability visibility and institutional continuity acting as practical levers [8,20,23].
Contributions
(1) We reframe the privacy paradox as an institutional satisficing problem in platform governance rather than an individual inconsistency [14,17,21].
(2) We operationalize a symmetric diagnostic survey that isolates accountability and concern and discloses sparse subcells (n < 10) for transparency, while focusing inference on adequately populated cells [11,15].
(3) We identify a “good-enough” compliance zone, showing how accountability visibility (resolution proofs, traceable updates) and institutional continuity (reliable operations, O&M capacity) stabilize trust thresholds on AI-enabled public-service platforms [20,23,30].
The remainder of the paper presents the theoretical grounding, data and measures, empirical results, and policy implications for trustworthy AI platforms [1,2,3,27].
Figure 1. (a) Conceptual Framework. Notes. This framework links platform governance to empirical diagnostics and policy interpretation. It outlines: (i) institutional thresholds of transparency and reliability; (ii) symmetric surveys in Fujian (N = 185) and Busan–Gyeongnam (N = 187); (iii) diagnostic analyses using heat maps and logistic regression; and (iv) policy interpretation focused on feedback visibility and institutional continuity. Subcells with n < 10 are shown for transparency but excluded from statistical inference [11,15]. (b) Bounded Compliance Map: Institutional Pathways and Behavioral Convergence. Notes. This concept map illustrates relationships among key factors sustaining participation on AI-enabled public-service platforms. Three antecedents—privacy concern (Q6, blue), accountability visibility (Q7, green), and institutional continuity (green)—jointly shape perceived adequacy (amber), representing the “good-enough” threshold of transparency and reliability. When this threshold is reached, users enter the bounded-compliance zone (navy), a satisficing equilibrium where engagement persists despite concern, leading to continued participation (Q8, red; willingness ≥ 0.70). Regional pathways (purple) differ—Korea shows accountability sensitivity (+12.5 pp), China shows continuity assurance—yet both converge toward similar behavioral outcomes.

2. Literature Review and Theoretical Grounding

Privacy paradox beyond psychology.
A large body of research explains why people keep using data-intensive services despite perceived risk—via cognitive dissonance, risk–benefit miscalibration, trust heuristics, or habitual normalization [4,5,6,21,28,29]. These accounts illuminate micro-level tolerance, yet they often understate the institutional settings that delimit what users regard as acceptable exposure and reassurance in digital environments [20,23,30].
Platform governance as the missing layer.
In highly institutionalized contexts, platform governance—norms and routines around transparency, accountability, and operational reliability—mediates privacy decisions [1,2,20,23]. Transparency and feedback visibility can signal procedural fairness and responsiveness, bolstering trust in government/platform institutions [8,20]. Accountability channels (complaint handling, audit trails, explanation mechanisms) help users perceive protections as “good enough,” shifting the problem from a psychological paradox to an institutional assurance challenge [23,30]. Hence, the privacy paradox should be read as a function of embedded governance mechanisms rather than individual inconsistency alone [27,30].
Satisficing and bounded compliance.
Building on bounded rationality [14,17], continued use is conceptualized as satisficing: users settle for adequate outcomes under informational and institutional constraints. Where accountability signals are visible and responsive, the perceived adequacy threshold is higher; where reassurance stems from stable but less dialogic routines, engagement persists once minimal reliability is perceived [6,14,17,20]. We frame this as bounded compliance—behavior as pragmatic adaptation rather than inconsistency—consistent with findings in automation trust and risk-adjusted choice models [8,9,23]. Crucially, adequacy operates here as a moderating condition (governance cues → adequacy threshold → sustained participation), not as a mediating construct that explains variance ex post.
Relation to prior satisficing-equilibrium work.
This study extends the satisficing-equilibrium logic developed in prior work on multi-actor trust in AI-enabled smart tourism by the same authors [31]. Whereas that framework theorizes the satisficing equilibrium (SE) at a broader systems level, the present analysis narrows to subnational, platform-governance thresholds (accountability visibility; institutional continuity) and examines how they sustain bounded compliance under privacy concern. The two studies are complementary: here we operationalize platform-level adequacy and test symmetric, diagnostic expectations without causal claims.
Comparative implication for East-Asian platforms.
Applied to AI-enabled public-service platforms in East Asia, the framework anticipates similar behavioral outcomes (continued use amid concern) realized through two institutional pathways [1,3,10,20,23]:
  • Accountability-visibility pathway: clear explanations, traceable resolution proofs, participatory feedback;
  • Institutional-continuity pathway: reliable operations, predictable O&M routines, service stability.
Prior comparative work in digital governance and smart services supports these divergent yet convergent logics of trust formation [19,20,23,27]. Our design therefore examines how concern and accountability jointly condition willingness to use, without presuming a universal psychological mechanism [4,5,21].
Resulting analytical model.
Synthesizing the above, we propose a bounded-compliance model of platform governance in which privacy perceptions (concern) translate into behavioral participation (use) conditional on institutional cues—particularly accountability visibility and system reliability [14,17,20,23,30]. The model yields diagnostic expectations for threshold-based trust on AI platforms and motivates the symmetric China–Korea survey that follows [11,15].
Analytical expectations (a minimal mechanical check against the reported cell probabilities is sketched after this list).
  • E1. High baseline willingness (≥ 0.70) is expected to persist across most conditions when either accountability visibility or reliability reaches perceived adequacy.
  • E2. Korea is expected to show stronger sensitivity to accountability visibility, while China tends to rely more on institutional continuity.
  • E3. Under high concern, willingness converges to a “good-enough” zone once at least one governance lever signals adequacy.
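To make these expectations auditable, the following minimal sketch encodes E1–E3 as threshold checks. The probability values are transcribed from Table 2; the names (`cells`, `THRESHOLD`, the region keys "FJ"/"BG") and the check logic are our illustrative conventions, not code from the study.

```python
# Minimal sketch: checking expectations E1-E3 against cell probabilities.
THRESHOLD = 0.70  # E1's "high baseline willingness" cutoff

# (region, Q6 concern, Q7 accountability) -> Pr(HighUse = 1), from Table 2
cells = {
    ("FJ", 0, 0): 0.552, ("FJ", 0, 1): 0.875,
    ("FJ", 1, 0): 0.880, ("FJ", 1, 1): 0.891,
    ("BG", 0, 0): 0.512, ("BG", 0, 1): 1.000,  # n = 5 cell: diagnostic only
    ("BG", 1, 0): 0.852, ("BG", 1, 1): 0.889,
}

# E1 (proxy): willingness stays >= 0.70 wherever accountability is visible (Q7 = 1).
e1 = all(p >= THRESHOLD for (_, _, q7), p in cells.items() if q7 == 1)

# E2: Korea's accountability sensitivity, read off the low-concern cells.
e2_gap = cells[("BG", 0, 1)] - cells[("FJ", 0, 1)]  # the paper's +12.5 pp

# E3: under high concern (Q6 = 1), both regions sit in the good-enough zone.
e3 = all(p >= THRESHOLD for (_, q6, _), p in cells.items() if q6 == 1)

print(f"E1: {e1} | E2 gap: {e2_gap * 100:+.1f} pp | E3: {e3}")
```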

3. Methods and Data

Design.
We implement a symmetric, diagnostic survey comparing Fujian, China (N = 185) and Busan–Gyeongnam, Korea (N = 187) to examine how privacy concern (Q6) and accountability/explanation (Q7) jointly shape willingness to use AI services (Q8) on public-service platforms. The design emphasizes cross-regional parity, nonparametric visualization, and transparent handling of sparse subcells, following best practices in comparative governance research [12,19,20]. Subcells with n < 10 are disclosed for transparency and treated as diagnostic rather than inferential, in line with ethical small-sample reporting [11,15].
Measures and coding.
We analyze three binary indicators (coded to {0,1}):
  • Q6 – Privacy concern: 0 = low, 1 = high.
  • Q7 – Accountability/explanation: 0 = absent, 1 = present.
  • Q8 – Willingness to use AI services: 0 = low, 1 = high.
For each region, we compute the 2×2 conditional probability table of Pr(HighUse = 1 | Q6, Q7) and a difference panel to visualize cross-regional contrasts. Δ is defined cell-by-cell as BG minus FJ (in percentage points). These summaries support diagnostic comparability and bounded-compliance checks across subsamples [11,15,17]. Descriptive statistics appear in Table 1.
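As a concrete illustration of this construction, the sketch below builds each region's 2×2 table and the Δ panel with pandas. The dataframe is synthetic and the column names are assumptions for exposition; the actual microdata are in the supplementary CSV files.

```python
# Sketch of the 2x2 conditional-probability tables and the Delta panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({  # synthetic stand-in for the N = 372 survey
    "region": rng.choice(["FJ", "BG"], size=372),  # Fujian / Busan-Gyeongnam
    "Q6": rng.integers(0, 2, size=372),            # privacy concern (0/1)
    "Q7": rng.integers(0, 2, size=372),            # accountability (0/1)
    "Q8": rng.integers(0, 2, size=372),            # willingness to use (0/1)
})

def cell_table(data: pd.DataFrame, region: str) -> pd.DataFrame:
    """Pr(HighUse = 1 | Q6, Q7) for one region, in percent."""
    sub = data[data["region"] == region]
    tab = sub.pivot_table(index="Q6", columns="Q7", values="Q8", aggfunc="mean")
    return (tab * 100).round(1)

fj, bg = cell_table(df, "FJ"), cell_table(df, "BG")
delta = bg - fj  # Delta panel: BG minus FJ, cell-by-cell, in percentage points
print(delta)
```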
Visualization and estimation.
We present dual-color heat maps for each region and a Δ panel (Figure 2). For proportions and small cells, we report 95% confidence intervals using Wilson’s method, which is preferable to normal approximations in this setting [11,15]. Where subcell counts are sparse (e.g., n < 10), estimates are labeled diagnostic and excluded from formal inference.
Notes to Figure 2. Measures. Q6 (concern), Q7 (accountability), Q8 (use). Cells display Pr(HighUse = 1 | Q6, Q7) × 100; the right panel shows Δ (percentage points). Estimation. Ninety-five percent CIs via Wilson’s method; scales harmonized for legibility.
Reading. Both regions sustain high probabilities (≥ 0.70). The largest Δ (+12.5 pp) appears in the low-concern + high-accountability cell, indicating stronger accountability sensitivity in Korea, while Fujian patterns are consistent with implicit institutional assurance—a motif noted in the East-Asian digital-trust literature [1,3,20,23]. Sparse subcells (e.g., BG: Concern = 0 and Account = 1, n = 5) are reported for transparency only [11,15].
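The Wilson intervals and the n < 10 diagnostic flag can be reproduced along the following lines; this sketch continues the synthetic `df` above and relies on statsmodels' implementation of the Wilson score interval.

```python
# Wilson 95% intervals per Concern x Accountability subcell, with the
# paper's n < 10 diagnostic flag (continues the synthetic `df` above).
from statsmodels.stats.proportion import proportion_confint

for (region, q6, q7), g in df.groupby(["region", "Q6", "Q7"]):
    k, n = int(g["Q8"].sum()), len(g)
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    flag = "  [diagnostic: n < 10]" if n < 10 else ""
    print(f"{region} Q6={q6} Q7={q7}: {k / n:.1%} ({lo:.1%}, {hi:.1%}), n={n}{flag}")
```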
Robustness checks.
We assess stability through diagnostic resampling and sensitivity checks for each region’s 2×2 table and the Δ panel. Figure 3 overlays Fujian (left), Busan–Gyeongnam (middle), and Δ = BG − FJ (right) on common scales. Across these checks, cell probabilities remain high (typically ≥ 0.70), and the sign of Δ aligns with the main heat maps, confirming stability and directional robustness. Full descriptive summaries and probability ranges appear in Table 2 and the supplementary CSV files.
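A minimal version of this resampling check might look as follows; B = 2000 replicates and simple case resampling are our assumptions, as the text does not report the exact scheme.

```python
# Diagnostic bootstrap of the regional cell probabilities (Figure 3 analogue).
# Continues the synthetic `df` (and pandas/numpy imports) from above.
def bootstrap_cells(sub: pd.DataFrame, B: int = 2000, seed: int = 1) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(B):
        idx = rng.integers(0, len(sub), size=len(sub))  # resample cases
        boot = sub.iloc[idx]
        reps.append(boot.pivot_table(index="Q6", columns="Q7",
                                     values="Q8", aggfunc="mean"))
    # Bootstrap mean of Pr(HighUse = 1 | Q6, Q7), in percent
    return (pd.concat(reps).groupby(level=0).mean() * 100).round(1)

fj_boot = bootstrap_cells(df[df["region"] == "FJ"])
bg_boot = bootstrap_cells(df[df["region"] == "BG"])
delta_boot = bg_boot - fj_boot  # check Delta's sign against the main panel
print(delta_boot)
```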
Bivariate tests and regression (for completeness).
We also report (i) region–variable chi-square tests (Table 3A) and (ii) a logistic specification with Q6, Q7, their interaction, and Region as covariates (Table 3B). These outputs are read as pattern-consistent diagnostics, not hypothesis tests, and are included solely to align with common reporting practice in socio-behavioral diagnostics [15].
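For readers who want to mirror the Table 3B specification, a diagnostic logit with the Q6 × Q7 interaction and a region dummy can be fit as sketched below (again on the synthetic `df`; outputs are read directionally only, per the paper's caveat).

```python
# Diagnostic logit mirroring Table 3B, plus average marginal effects as
# referenced in the abstract. Sketch only; not the authors' estimation code.
import statsmodels.formula.api as smf

df["BG"] = (df["region"] == "BG").astype(int)  # Region dummy (BG = 1)
fit = smf.logit("Q8 ~ Q6 + Q7 + Q6:Q7 + BG", data=df).fit(disp=False)
print(fit.params)                   # compare coefficient signs with Table 3B
print(fit.get_margeff().summary())  # average marginal effects
```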
Ethics. This study analyzes anonymized adult survey data (N = 372) approved under protocol YSUIRB-202511-HR-194-02 at Youngsan University. The dataset represents an adult-only subset of previously authorized minimal-risk social-behavioral research and involves no intervention or collection of personally identifiable information [12,19]. No new data collection was conducted for this analysis.
Data availability. De-identified cell matrices and Δ values (CSV) are provided as Supplementary Materials, together with variable labels and coding keys to support replication. The governance-cue coding (Q5–Q7) follows the Information Control Level (ICL) structure validated in the related satisficing-equilibrium (SE) study [31]: each item is measured on a 0–3 ordinal scale reflecting ascending adequacy of transparency, concern (reverse-coded), and accountability. Because the composite is formative rather than reflective, internal-consistency coefficients are not used as validity diagnostics (Cronbach’s α ≈ 0.29 [31]). Multicollinearity was assessed in the SE dataset [31], where all variance inflation factors (VIF ≈ 1.03–1.10) indicated distinct, non-interchangeable governance cues among Q5–Q7; since the present study draws on the same measurement structure with a reduced diagnostic specification, further VIF tests were unnecessary. The full bilingual wording and detailed coding scheme are available upon request.

4. Discussion

To situate our findings, Table 4 contrasts this study with influential work on the privacy paradox, trust, and governance. Psychological and calculus-based explanations emphasize individual trade-offs under risk and uncertainty [21,28], while governance-centered studies show that transparency can lift institutional trust yet rarely specify platform-level accountability and reliability levers [20]. Human–automation/HCI research clarifies when reliance is “appropriate” but typically abstracts from subnational governance variation and public-service platforms [8,23].
Our symmetric China–Korea survey adds three elements. First, we recast continued use as bounded compliance—a satisficing equilibrium that arises once accountability visibility (clear explanations, traceable resolution) and operational continuity (predictable O&M) cross “good-enough” adequacy thresholds. We treat adequacy as a moderating condition, not a mediating construct. Second, we operationalize these platform thresholds with diagnostic cell probabilities and bootstrap checks, aligning measurement with governance cues rather than individual psychometrics. Third, we show pathway differences—Korea is more accountability-sensitive, whereas China relies more on implicit institutional assurance—yet both converge to sustained participation even under high concern. This shifts the emphasis from a paradox located in the individual toward threshold governance on platforms.
Policy implications. For AI-enabled public-service platforms, two levers are actionable:
  • Feedback visibility. Publish explanation artifacts and closure proofs (e.g., ticket IDs, time-to-resolution, status traces) at the user interface to make accountability verifiable.
  • Continuity capacity. Harden O&M routines (uptime SLAs, incident-drill coverage, recovery-time targets) and communicate them plainly.
Calibrating these levers to local expectations can raise perceived adequacy and sustain engagement even under elevated concern, without over-promising universal psychological change.
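As a purely hypothetical illustration of how these two levers could be tracked operationally, the sketch below computes toy indicators; every field name, record, and threshold is invented for exposition and is not drawn from the paper.

```python
# Hypothetical dashboard indicators for the two governance levers.
from datetime import timedelta

tickets = [  # invented closure records a platform might publish as proof
    {"id": "T-101", "resolved": True,  "time_to_resolution": timedelta(hours=6)},
    {"id": "T-102", "resolved": True,  "time_to_resolution": timedelta(hours=30)},
    {"id": "T-103", "resolved": False, "time_to_resolution": None},
]
uptime_pct, drill_coverage_pct = 99.7, 80.0  # invented O&M continuity figures

closure_rate = sum(t["resolved"] for t in tickets) / len(tickets)
resolved = [t["time_to_resolution"] for t in tickets if t["resolved"]]
mean_ttr_h = sum(d.total_seconds() for d in resolved) / len(resolved) / 3600

print(f"Accountability visibility: closure {closure_rate:.0%}, mean TTR {mean_ttr_h:.1f} h")
print(f"Continuity capacity: uptime {uptime_pct}%, drill coverage {drill_coverage_pct}%")
```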

5. Interpretive Synthesis and Conclusions

The symmetric survey comparison between Fujian (N = 185) and Busan–Gyeongnam (N = 187) reveals a consistent behavioral mechanism on AI-enabled public-service platforms: users continue to participate despite privacy concern once minimal transparency and reliability thresholds are perceived [20,23]. Heat-map and bootstrap diagnostics (Figure 2 and Figure 3) indicate high willingness to use (≥ 0.70) across concern–accountability conditions, with stronger accountability sensitivity in Korea and stronger institutional assurance in China. We interpret this pattern as bounded compliance—a satisficing equilibrium structured by institutional cues rather than a contradiction in individual psychology [14,17,21,28].
Regional pathways.
Fujian. Engagement aligns with institutional continuity (predictable procedures, stable O&M), which reassures even under concern—consistent with evidence that reliability and continuity calibrate reliance on automated or platformized services [8,23].
Busan–Gyeongnam. Engagement aligns with accountability visibility (explanations, feedback closure), which raises the perceived “good-enough” protection threshold, echoing findings that transparency cues lift institutional trust [20].
Despite these pathway differences, both regions converge on sustained participation under bounded conditions—consistent with satisficing under bounded rationality [14,17] and with privacy-paradox regularities observed in prior work [21,28,29].
Conclusions.
(1) Privacy satisficing. Continued participation stems not from optimizing privacy but from achieving a good-enough trust threshold once safeguards are perceived as credible [14,17,21].
(2) Governance-shaped thresholds. Where visibility and redress channels are salient, users expect higher procedural openness; where continuity and reliability dominate, concern is tolerated if operations remain predictable [8,20,23].
(3) Diagnostic value of symmetry. Treating sparse subcells as diagnostic and maintaining regional parity enables transparent comparison of platform trust without overstated inference, aligning with best practice for small-cell analysis [11,15].
Policy implications.
China (Fujian). Reinforce feedback visibility—publish closure evidence and service-level commitments—while sustaining multi-year O&M continuity to stabilize satisficing thresholds [8,20].
Korea (Busan–Gyeongnam). Ensure institutional continuity within accountability systems as AI services scale; procedural volatility could erode trust [20,23].
Both contexts. Develop platform-trust dashboards tracking two levers—accountability visibility and operational continuity—as leading indicators of bounded compliance [20,23].
Limitations and future research. The design is diagnostic: some cross-tab cells are sparse and reported for transparency rather than inference [11,15]. The analysis is cross-sectional and limited to two subnational cases. Future work should extend to additional regions, employ longitudinal tracking, and experimentally vary visibility and continuity to observe shifts in satisficing thresholds [8,20,23].
Overall synthesis. Viewing the privacy paradox as governance-dependent bounded compliance offers a grounded explanation for why caution and participation coexist on public-service platforms. Institutions that calibrate transparency and reliability—rather than attempting to eliminate concern—can sustain trust and foster responsible AI adoption across diverse governance contexts [14,17,20,21,23,28,29].

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org.

Author Contributions

Conceptualization, methodology, formal analysis, visualization, and writing—original draft preparation: S.H.; Data curation and survey coordination: S.C.; Supervision, project administration, and writing—review and editing: G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study analyzes a fully anonymized, adult-only subset of survey data reviewed and approved by the Youngsan University Institutional Review Board (IRB No. YSUIRB-202511-HR-194-02; review type: expedited; approval/notice date: 28 November 2025). The approved protocol classifies the project as minimal-risk social-science research involving voluntary participation, no intervention, and no collection of personally identifiable information. All analyses reported in this paper exclude responses from minors; only adult participants (aged 18 years and above) who provided electronic informed consent were retained.

Informed Consent Statement

All participants provided electronic informed consent before participation and could withdraw at any time without penalty.

Data Availability Statement

Aggregated and anonymized regional datasets (Fujian and Busan–Gyeongnam subsamples) used to generate the descriptive tables and figures in this study are provided as supplementary CSV files. Due to ethical and institutional constraints, the full survey dataset and analysis scripts are not publicly released. Additional materials may be made available upon reasonable request.

Acknowledgments

The authors thank the participating respondents and local administrative offices in Fujian and Busan–Gyeongnam for their cooperation and logistical support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Suanpang, P.; Pothipassa, P. Integrating Generative AI and IoT for Sustainable Smart Tourism Destinations. Sustainability 2024, 16, 7435.
  2. Florido-Benítez, L.; del Alcázar Martínez, B. How Artificial Intelligence (AI) Is Powering New Tourism Marketing and the Future Agenda for Smart Tourist Destinations. Electronics 2024, 13, 4151.
  3. Siddik, A.B.; Forid, M.S.; Yong, L.; Du, A.M.; Goodell, J.W. Artificial Intelligence as a Catalyst for Sustainable Tourism Growth and Economic Cycles. Technological Forecasting and Social Change 2025, 210, 123875.
  4. Hirschprung, R.S. Is the Privacy Paradox a Domain-Specific Phenomenon? Computers 2023, 12, 156.
  5. Arzoglou, E.; et al. The Role of Privacy Obstacles in the Privacy Paradox: A System Dynamics Perspective. Systems 2023, 11, 205.
  6. Wilde, G.J.S. The Theory of Risk Homeostasis: Implications for Safety and Health. Risk Analysis 1982, 2, 209–225.
  7. Gordon, L.A.; Loeb, M.P. The Economics of Information Security Investment. ACM Transactions on Information and System Security 2002, 5, 438–457.
  8. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Human Factors 2004, 46, 50–80.
  9. Firmino de Souza, D.; Sousa, S.; Kristjuhan-Ling, K.; Dunajeva, O.; Roosileht, M.; Pentel, A.; Mõttus, M.; Özdemir, M.C.; Gratšjova, Ž. Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review. Electronics 2025, 14, 1557.
  10. Jin, Z.; Wang, H.; Xu, Y. Artificial Intelligence as a Catalyst for Sustainable Tourism: A Case Study from China. Systems 2025, 13, 333.
  11. Hosmer, D.W.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression, 3rd ed.; Wiley: Hoboken, NJ, USA, 2013.
  12. Benlidayi, I.C.; Gupta, L. Translation and Cross-Cultural Adaptation: A Critical Step in Multi-National Survey Studies. Journal of Korean Medical Science 2024, 39(49), e336.
  13. Wei, W.; Zhang, L.; Law, R. Smart Tourism Destination: Developing and Validating a Measurement Scale. Current Issues in Tourism 2024.
  14. Lilly, G. Bounded Rationality: A Simon-Like Explication. Journal of Economic Dynamics and Control 1994, 18(1), 205–230.
  15. Mize, T.D. Best Practices for Estimating, Interpreting, and Presenting Average Marginal Effects. Sociological Science 2019, 6, 81–117.
  16. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.-I. From Local Explanations to Global Understanding with Explainable AI for Trees. Nature Machine Intelligence 2020, 2, 56–67.
  17. Simon, H.A. A Behavioral Model of Rational Choice. The Quarterly Journal of Economics 1955, 69, 99–118.
  18. Aquilino, L.; Di Dio, C.; Manzi, F.; Massaro, D.; Bisconti, P.; Marchetti, A. Decoding Trust in Artificial Intelligence: A Systematic Review of Quantitative Measures and Related Variables. Informatics 2025, 12, 70.
  19. Jeong, M.; Shin, H.H. Tourists’ Experiences with Smart Tourism Technology at Smart Destinations and Their Behavioral Intentions. Journal of Travel Research 2020, 59(8), 1467–1484.
  20. Grimmelikhuijsen, S.G.; Porumbescu, G.A.; Hong, B.; Im, T. The Effect of Transparency on Trust in Government: A Cross-National Comparative Experiment. Public Administration Review 2013, 73(4), 575–586.
  21. Acquisti, A.; Brandimarte, L.; Loewenstein, G. Privacy and Human Behavior in the Age of Information. Science 2015, 347(6221), 509–514.
  22. Arthur, W.B. Competing Technologies, Increasing Returns, and Lock-In by Historical Events. The Economic Journal 1989, 99(394), 116–131.
  23. Gulati, S.; McDonagh, J.; Sousa, S.; Lamas, D. Trust Models and Theories in Human–Computer Interaction: A Systematic Literature Review. Computers in Human Behavior Reports 2024, 16, 100495.
  24. Majid, G.M.; Ali, G.A.; Jermsittiparsert, K.; Rahman, M.; Salam, M.A. Intelligent Automation for Sustainable Tourism: A Systematic Review. Journal of Sustainable Tourism 2023.
  25. Pop, I.L.; Nițu, A.; Ionescu, A.; Gheorghe, I.G. AI-Enhanced Strategies to Ensure New Sustainable Destination Tourism Trends among the 27 European Union Member States. Sustainability 2024, 16(22), 9844.
  26. UNWTO. Tourism and the Sustainable Development Goals—Journey to 2030, 2nd ed.; United Nations World Tourism Organization: Madrid, Spain, 2022.
  27. Zeqiri, A.; Ben Youssef, A.; Maherzi Zahar, T. The Role of Digital Tourism Platforms in Advancing Sustainable Development Goals in the Industry 4.0 Era. Sustainability 2025, 17(8), 3482.
  28. Dinev, T.; Hart, P. An Extended Privacy Calculus Model for E-Commerce Transactions. Information Systems Research 2006, 17(1), 61–80.
  29. Lei, S.S.I.; Ye, S.; Law, R. Will Tourists Take Mobile Travel Advice? Examining the Personalization–Privacy Paradox and Self-Construal. Tourism Management Perspectives 2022, 41, 100949.
  30. Elshaer, I.A.; Alyahya, M.; Azazz, A.M.S.; Ali, M.A.S.; Fathy, E.A.; Fouad, A.M.; Soliman, S.A.E.M.; Fayyad, S. Building Digital Trust and Rapport in the Tourism Industry: A Bibliometric Analysis and Detailed Overview. Information 2024, 15, 598.
  31. Su, H.; Liao, J.; So, G. Satisficing Equilibrium and Multi-Actor Trust in AI-Enabled Smart Tourism: Nonlinear Evidence from Digital Governance Dynamics. Preprints 2025, 2025110778.
Figure 2. Cross-regional heat maps of Pr(HighUse = 1) and Δ (BG − FJ).
Figure 3. Bootstrap verification of regional probability matrices. Notes. Panels show Fujian (left), Busan–Gyeongnam (middle), and Δ (right). Each cell reports the bootstrap mean of Pr(HighUse = 1 | Q6, Q7) × 100; Δ is in percentage points. Resampling confirms high-use probabilities and consistent Δ directionality, in line with satisficing-behavior models [11,14,17].
Table 1. Descriptive statistics of survey samples (by region).
Region | N | Q6 Concern (mean) | Q7 Accountability (mean) | Q8 High Use (mean)
Fujian | 185 | 76.2% | 57.8% | 84.9%
Busan–Gyeongnam | 187 | 73.3% | 26.2% | 78.1%
Total | 372 | 74.7% | 41.9% | 81.5%
Notes. Q6 = privacy concern; Q7 = accountability/explanation; Q8 = willingness to use AI services. Means are reported as percentages.
Table 2. Cell probabilities and 95% confidence intervals of Pr(HighUse = 1) by region and Concern × Accountability.
Region | Concern | Account | Mean | Lo95 | Hi95 | N
Fujian | 0 | 0 | 55.2% | 30.8% | 77.3% | 20
Fujian | 0 | 1 | 87.5% | 71.9% | 100.0% | 24
Fujian | 1 | 0 | 88.0% | 79.0% | 95.6% | 58
Fujian | 1 | 1 | 89.1% | 81.9% | 95.3% | 83
Busan–Gyeongnam | 0 | 0 | 51.2% | 36.4% | 65.3% | 45
Busan–Gyeongnam | 0 | 1 | 100.0% | 100.0% | 100.0% | 5
Busan–Gyeongnam | 1 | 0 | 85.2% | 77.1% | 92.4% | 61
Busan–Gyeongnam | 1 | 1 | 88.9% | 81.8% | 94.9% | 76
Notes. Concern and Accountability are binary indicators (0 = low, 1 = high). Confidence intervals are reported for descriptive completeness. Subcells with n < 10 (e.g., Busan–Gyeongnam: Concern = 0 and Accountability = 1, n = 5) may yield boundary estimates (e.g., 100.0%) due to sparse observations; these cells are reported transparently but not interpreted inferentially.
Table 3. (A). Chi-square tests (region vs. Q6, Q7, Q8). (B). Logistic regression (diagnostic check; not used for inference).
(A)
Variable | χ² | df | p-value
Q6_CONCERN_LEVEL | 0.728 | 1 | 0.3939
Q7_EXPLAIN_LEVEL | 63.026 | 1 | 0.0000
Q8_WILLINGNESS | 2.548 | 1 | 0.1104
(B)
Variable | Coef | Std. Err | z | p-value
Intercept | 0.171 | 0.325 | 0.53 | 0.5995
Concern (Q6) | 1.723 | 0.343 | 5.02 | 0.0000
Accountability (Q7) | 2.009 | 0.676 | 2.97 | 0.0030
Q6 × Q7 | -1.774 | 0.758 | -2.34 | 0.0192
Region (BG = 1) | -0.113 | 0.302 | -0.37 | 0.7084
Note. This logistic model is reported solely as a diagnostic consistency check with the nonparametric heat maps. Coefficients are interpreted directionally only. Given sparse subcells and the study’s descriptive focus, no inferential or causal claims are drawn from this model.
Table 4. Prior studies versus this paper: mechanisms, findings, and contributions.
Study (year) | Domain/Context | Design/Data | Mechanism focus (key variables) | Main finding re paradox/continued use | What this paper adds (relative to the study)
Acquisti, Brandimarte & Loewenstein [21] | Privacy & human behavior (general) | Conceptual synthesis/evidence review | Cognitive biases, contextual framing, bounded rationality | Users often accept data practices despite concern; paradox rooted in psychology and context | Moves from individual paradox to platform-governance thresholds; quantifies bounded compliance via accountability visibility and continuity cues on public-service platforms
Dinev & Hart [28] | E-commerce privacy | Empirical model (privacy calculus) | Perceived risk vs. benefits, trust | Continued use/intention shaped by a risk–benefit calculus with trust as mediator | Shifts from dyadic calculus to institutional satisficing: shows good-enough participation once governance cues cross minimal thresholds, even at high concern
Grimmelikhuijsen et al. [20] | Government transparency & trust | Cross-national experiment | Transparency → trust in government | Visibility can increase trust, but platform-level instantiation is not specified | Specifies platform-level accountability visibility (explanations/feedback closure) and links it to sustained use; identifies regional pathway differences (Korea vs. China)
Lee & See [8] | Trust in automation | Conceptual/design guidance | Appropriate reliance, reliability, feedback | Proper cues calibrate reliance on automated systems | Applies “appropriate reliance” to AI public-service platforms; couples reliability with institutional continuity as a governance lever, evidenced by symmetric subnational data
Gulati et al. [23] | HCI trust models | Systematic literature review | Trust antecedents in interactive systems | Synthesizes models/antecedents; limited on public-sector platform governance | Bridges HCI trust models with the government platform context; provides heat-map diagnostics and bootstrap robustness for threshold-based participation
Synthesis. Prior work explains why people tolerate risk (psychology, calculus) or how cues shape reliance (transparency, automation design), but generally lacks a platform-governance threshold account with subnational symmetry. Our results indicate that once accountability visibility (clear explanations, feedback closure) and operational continuity (predictable O&M) reach “good-enough” levels, participation persists despite concern—consistent with bounded compliance. The China–Korea contrast suggests that trust formation is institutionally routed: in Korea, visible accountability raises willingness more sharply; in China, stable continuity underpins similar outcomes. This supports policy designs that tune visibility and continuity rather than assuming uniform psychological remediation [8,20,21,23,28].