1. Introduction
AI-enabled fitness services collect activity, physiological, and location data for monitoring and personalized guidance, which raises persistent privacy and security concerns. Nevertheless, many users continue to rely on such services while simultaneously reporting privacy concern, a pattern commonly described as a privacy paradox [1,2]. This coexistence of concern and use is not merely a behavioral inconsistency. Rather, it reflects an ethical and governance-related tension inherent in data-intensive AI services.
Prior research shows that the privacy paradox does not manifest uniformly across application domains. Instead, it varies with contextual constraints and service characteristics rather than reflecting a simple inconsistency between attitudes and behavior [3,4]. Studies of security awareness further indicate that increased knowledge of vulnerabilities does not necessarily reduce use. In high-benefit settings, perceived usefulness, convenience, and habitual reliance often sustain continued engagement despite acknowledged risks [8].
Compared with general social media platforms, AI fitness services involve more sensitive health-related data while providing recurring, tangible benefits. Empirical research on health club and leisure sport participants suggests that perceived service attributes, motivation, and commitment are closely associated with continued use, satisfaction, and loyalty rather than with service discontinuation [18,23]. As a result, privacy-related decisions in this domain carry greater weight in everyday practice [5,6,7].
Because data disclosure in AI fitness services is continuous rather than episodic, treating privacy concern and service use as a binary opposition is analytically limited. A spectrum-based perspective better captures gradual variation in user positioning. This view is consistent with bounded rationality, under which individuals tolerate a certain level of perceived risk as long as basic safeguards and functional benefits remain sufficient for routine use [10,14,15]. Privacy outcomes are therefore not continuously optimized; instead, acceptable configurations emerge through everyday judgment under constraint. Review studies across AI application domains similarly suggest that the coexistence of concern and continued use reflects structural dependence on service benefits and institutionalized usage conditions rather than temporary attitudinal inconsistency [21].
From a governance perspective, security education and awareness initiatives may enhance risk recognition and protective intentions [8]. At the platform level, however, research on public service systems shows that misaligned information flows and accountability structures can suppress usage even when services are technically available [9]. AI fitness represents a contrasting configuration: despite persistent privacy and security concerns, adoption remains widespread. This contrast raises a descriptive question about how users manage perceived risk and responsibility when withdrawal from AI-enabled services is impractical.
Building on the Satisficing Equilibrium perspective, which emphasizes good-enough configurations under bounded rationality rather than theoretical optima [10], this study examines AI fitness as a user-side, single-domain case of privacy satisficing. Using survey data from 596 adults aged 18 years and above, a Deviation index is constructed as standardized privacy concern minus standardized risk acceptance, and the fitted probability of high AI fitness use is examined across this index. Perceived transparency and safety, measured as Information Control Level, and stated privacy trade-off attitudes are included as contextual factors. The contribution of this study is descriptive: it documents how the likelihood of continued high use varies gradually across the worry-risk configuration and discusses this pattern in relation to privacy satisficing and governance-oriented discussions of security education and platform conditions [2,3,4,5,8,9,10].
Figure 1. Conceptual illustration of the privacy satisficing perspective in AI fitness. Note. Deviation denotes standardized privacy concern minus standardized risk acceptance. Information Control Level and privacy trade-off attitudes are treated as contextual factors. The figure is conceptual; empirical results are reported in Figure 2.
2. Theoretical Background
The privacy paradox refers to the observed gap between stated privacy concern and continued data disclosure or tracking acceptance [1,2]. In digital health and AI fitness contexts, this tension is particularly salient: service operation requires continuous collection of health-related data, while recurring benefits such as feedback and self-management support are delivered simultaneously. Empirical studies on fitness trackers, wearable devices, and leisure sport participation consistently report that privacy concern or dissatisfaction may coexist with sustained use rather than leading to service withdrawal [5,6,7,11,12,18,23].
To interpret this pattern, this study adopts a privacy satisficing perspective grounded in bounded rationality and Satisficing Equilibrium reasoning. Under this view, privacy outcomes are not continuously optimized; rather, users tolerate a certain level of perceived risk when expected benefits and basic safeguards remain acceptable for everyday practice [10,14,15]. Related risk-oriented perspectives similarly describe behavior as adjusting around a subjectively acceptable level of risk rather than seeking full risk elimination [16].
Operationally, this study defines a Deviation index as the standardized difference between privacy concern and risk acceptance. Lower or negative Deviation values indicate lower concern or greater willingness to accept risk, whereas higher values reflect a concern-dominant configuration. Treating Deviation as a continuous positioning measure allows examination of whether the fitted probability of high service use varies gradually across the worry-risk configuration, rather than framing the privacy paradox as a binary condition. In the empirical analysis, perceived transparency and safety, measured as Information Control Level, and stated privacy trade-off attitudes are incorporated as contextual factors [5,6,7,10].
This analytical framing is consistent with security awareness research showing that increased risk awareness does not necessarily reduce use in high-benefit settings. In such contexts, convenience, habitual reliance, and situational constraints often sustain continued engagement [8,10,13,17]. At the platform level, studies of platform disconnection demonstrate that misaligned information flows and responsibility structures can suppress usage even when services remain available [9]. AI fitness represents a contrasting configuration: despite persistent privacy and security concerns, adoption remains widespread. Accordingly, this study treats AI fitness privacy as a case of continued use under acknowledged risk and examines how Deviation, perceived transparency, and privacy trade-off attitudes relate to observed use patterns, without assuming a single optimal level of privacy behavior [5,6,7,8,9,10,13,17].
3. Methods
3.1. Data and Sample
This study uses data from an online survey on AI enabled fitness services administered via Tencent Questionnaire, a widely used survey platform operated by Tencent Technology Co., Ltd. The survey page recorded 645 views and 633 submissions. After data screening, 596 valid responses were retained. Responses were excluded if completion time was unrealistically short, defined as below 35 seconds, if duplicate response patterns were identified across accounts or devices, or if the respondent was under 18 years of age. The final dataset therefore consists exclusively of adult participants.
Tencent Questionnaire requires account-based login through QQ, WeChat, email, or a mainland China mobile number, and restricts each account to a single submission. This design helps reduce duplicate responses. Most responses originated from mainland China, with additional responses from the Republic of Korea and a small number from other regions. The questionnaire included items on AI fitness use, perceived transparency and safety, willingness to use AI fitness services, privacy attitudes, and basic demographic characteristics. Perceptual items were measured on five-point Likert scales.
Participants were informed at the beginning of the survey that the study targeted adults aged 18 years and above, that participation was voluntary, and that no personally identifiable or sensitive information would be collected. Responses were anonymous, could be discontinued at any time, and were analyzed only in aggregated form. The study was designed as an anonymous adult-only questionnaire without intervention or physical procedures and falls within the scope of minimal risk social science research. Research procedures followed generally accepted ethical principles for human subject research and were approved by the Institutional Review Board of Youngsan University (Approval No. YSUIRB-202601-HR-198-02).
The analytical design adopts a parsimonious and descriptive approach, focusing on observed association patterns rather than causal inference or scale development.
3.2. Variables and Analytical Strategy
The dependent variable is high willingness to use AI fitness services. The original five-point item measuring willingness to use AI fitness applications or devices is recoded into a binary indicator: respondents at or above the sample median are coded as high use, and others as low use. This approach follows prior studies that contrast relatively higher versus lower intention rather than relying on the full ordinal scale [5,6,11,12].
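The median-split recoding described above can be sketched as follows; the column name `willingness` and the toy responses are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch of the median-split recoding of the five-point
# willingness item; column name and values are illustrative only.
import pandas as pd

df = pd.DataFrame({"willingness": [2, 3, 5, 4, 1, 4, 3, 5]})

median = df["willingness"].median()  # sample median of the 5-point item
# At or above the median -> high use (1); otherwise -> low use (0).
df["high_use"] = (df["willingness"] >= median).astype(int)
# high_use: [0, 0, 1, 1, 0, 1, 0, 1]
```

With an even sample, `median()` may fall between two scale points (here 3.5), so the "at or above" rule still yields an unambiguous split.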
The main explanatory variable is the Deviation index, defined as the standardized difference between privacy concern and risk acceptance. Both components are z-scored prior to subtraction. Higher Deviation values indicate that privacy concern exceeds stated risk acceptance, whereas lower values indicate a more risk-tolerant configuration. Deviation is treated as a descriptive positioning measure within the sample and is not interpreted as an objective indicator of compliance or risk [2,11,12].
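A minimal sketch of the Deviation construction, assuming two five-point items whose names and toy values are chosen here for illustration:

```python
# Sketch of Deviation = z(privacy concern) - z(risk acceptance).
# Item names and values are assumptions for illustration.
import numpy as np

concern = np.array([4, 5, 3, 2, 4], dtype=float)     # privacy-concern item
acceptance = np.array([2, 1, 3, 4, 2], dtype=float)  # risk-acceptance item

def zscore(x):
    # Standardize within the sample before subtraction.
    return (x - x.mean()) / x.std()

deviation = zscore(concern) - zscore(acceptance)
# Positive values: concern-dominant; negative values: risk-tolerant.
```

Because both components are standardized within the same sample, Deviation is centered near zero by construction and describes relative positioning, not an absolute risk level.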
Statistical calculations are conducted using standard spreadsheet-based procedures. Two additional predictors are included to capture perceived service context. Perceived transparency and safety are summarized as Information Control Level, constructed as the mean of three perceptual items measuring information transparency, accountability expectation, and perceived reliability and safety. The Information Control Level index shows a Cronbach's α of 0.72, which is acceptable for exploratory research using a multidimensional perceptual instrument. Privacy trade-off is measured with a single item capturing willingness to exchange personal data for convenience or improved service; internal consistency reliability is therefore not applicable for this measure. Both variables are treated as continuous indices derived from Likert scale items, following established guidance for composite indicator construction [7,10]. Gender, age group, and prior app use experience are included as control variables in extended specifications.
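The composite construction and its reliability check can be illustrated as below. The item matrix is hypothetical; the formula is the standard Cronbach's alpha, which on the actual data yields the reported α of 0.72.

```python
# Sketch of the Information Control Level composite (mean of three
# perceptual items) and Cronbach's alpha; responses are simulated.
import numpy as np

# Rows = respondents; columns = transparency, accountability
# expectation, and perceived reliability/safety (5-point items).
items = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 4],
    [2, 3, 2],
    [4, 5, 4],
    [3, 2, 3],
], dtype=float)

icl = items.mean(axis=1)  # Information Control Level per respondent

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
```

The composite is simply the row mean, so a respondent answering 4, 4, 5 receives an index value of 4.33 on the original five-point metric.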
Before model estimation, variance inflation factors are calculated to assess potential multicollinearity among the main explanatory variables. As reported in Table 1, the Deviation index exhibits low variance inflation, while the perception-based indices show higher values. This pattern reflects shared attitudinal structure rather than statistical redundancy or dominance of any single predictor. Given the descriptive focus of the analysis and the emphasis on fitted probability patterns rather than precise marginal effects, these correlated indices are retained.
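For reference, the variance inflation factor of each predictor is 1/(1 − R²), where R² comes from regressing that predictor on the remaining ones. A sketch on simulated correlated indices (not the study's data):

```python
# VIF sketch via least squares; predictors are simulated so that the
# second and third indices share variance, mimicking correlated
# perception-based measures.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                    # e.g. a Deviation-like index
x2 = 0.6 * x1 + rng.normal(size=n)         # correlated perception index
x3 = 0.5 * x2 + rng.normal(size=n)         # another correlated index
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    # Regress predictor j on the others (with intercept); VIF = 1/(1-R^2).
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

Values near 1 indicate little shared variance; values well above 1 flag the kind of shared attitudinal structure noted for the perception-based indices.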
The analysis proceeds in two stages. First, descriptive statistics are reported. Second, logistic regression models are estimated with high use membership as the dependent variable and Deviation, Information Control Level, and privacy trade-off as the main predictors, with optional demographic controls. Logistic regression is used to model the binary outcome and to present results in terms of fitted probabilities [19,20]. The analytical goal is to examine whether the probability of high AI fitness use varies smoothly across Deviation after accounting for perceived transparency and privacy trade-off attitudes. The results are interpreted as descriptive patterns consistent with a privacy satisficing perspective rather than as causal effects [10].
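As a minimal illustration (not the authors' estimation code), the logistic model can be sketched on simulated data, with fitted probabilities then evaluated across a grid of Deviation values while the other predictors are held at their standardized mean of zero:

```python
# Logistic regression sketch: binary high-use outcome on Deviation,
# Information Control Level, and privacy trade-off. Data are simulated;
# coefficients are fit by plain gradient ascent on the log-likelihood.
import numpy as np

rng = np.random.default_rng(1)
n = 596  # matching the study's sample size, data simulated
deviation = rng.normal(size=n)
icl = rng.normal(size=n)          # Information Control Level (standardized)
tradeoff = rng.normal(size=n)     # privacy trade-off attitude (standardized)
X = np.column_stack([np.ones(n), deviation, icl, tradeoff])

# Assumed data-generating coefficients for the simulation only.
true_beta = np.array([0.2, -0.8, 0.5, 0.4])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

beta = np.zeros(X.shape[1])
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ beta))       # current fitted probabilities
    beta += 0.1 * X.T @ (y - p) / n       # average log-likelihood gradient

# Fitted probability of high use across the Deviation range, holding the
# other (standardized) predictors at zero.
grid = np.linspace(-2, 2, 5)
fitted = 1 / (1 + np.exp(-(beta[0] + beta[1] * grid)))
```

Plotting `fitted` against `grid` yields the kind of smooth probability curve across the worry-risk configuration that the descriptive analysis inspects.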