Preprint
Article

This version is not peer-reviewed.

Improving Self-Efficacy and Patient Activation in Cancer Survivors through Digital Health Interventions: Single-Case Experimental Prospective Study

A peer-reviewed article of this preprint also exists.

Submitted: 05 February 2025

Posted: 06 February 2025


Abstract
Cancer survivors face numerous challenges, and digital health interventions can empower them by enhancing self-efficacy and patient activation. This prospective study assessed the impact of an mHealth app on self-efficacy and patient activation in 166 breast and colorectal cancer survivors. Participants received a smart bracelet and used the app to access personalized care plans. Data were collected at baseline and at follow-ups, including patient-reported outcomes and clinician feedback. The study demonstrated positive impacts on self-efficacy and patient activation. The overall trial retention rate was 75.3%. Participants reported high levels of activation (PAM levels 1-2: P=1.0; level 3: P=.49; level 4: P=.65) and expressed a willingness to stay informed about their disease (CASE-Cancer factor 1: P=.98; factor 2: P=.66; factor 3: P=.25). Usability of the app improved, with the proportion of participants rating the system as having excellent usability increasing from 14.82% to 22.22%. Additional qualitative analysis revealed positive experiences from both patients and clinicians. This paper contributes to cancer survivorship care by providing personalized care plans tailored to individual needs. The PERSIST platform shows promise in improving patient outcomes and enhancing self-management abilities in cancer survivors. Further research with larger and more diverse populations is needed to establish its effectiveness.
Subject: Engineering - Other

1. Introduction

Cancer survivorship is a transformative experience that encompasses both physical and psychological challenges [1]. As individuals navigate the complexities of managing their health post-treatment, fostering self-efficacy and patient activation is paramount to achieving optimal health outcomes. Digital health interventions (DHIs), owing to their capacity to personalize, engage, and empower patients, offer considerable promise in addressing these critical aspects of cancer survivorship [2,3].
Self-efficacy and patient activation are crucial components in cancer survivorship [4]. Self-efficacy refers to an individual’s belief in their capacity to perform a specific task or behavior, whereas patient activation pertains to an individual’s knowledge, skills, and confidence in managing their health [5]. Mazanec et al. [6] demonstrated that higher levels of self-efficacy and patient activation correlate with improved outcomes for cancer survivors, including enhanced quality of life, reduced symptomatology, and the adoption of healthier lifestyles.
Self-efficacy is a key determinant of health outcomes [7]. Cancer survivors with high self-efficacy are more likely to adhere to treatment regimens, engage in health-promoting behaviors, and effectively manage symptoms [8]. Activated patients assume responsibility for their care, collaborate with healthcare providers, and actively seek out necessary information and support [9].
Traditionally, interventions aimed at promoting self-efficacy and patient activation have relied on in-person counseling and education [10]. While these approaches offer value, they often lack the scalability and accessibility required to accommodate the growing global population of cancer survivors. DHIs provide a personalized, accessible, and continuous approach to enhancing self-efficacy and patient activation among cancer survivors [11]. These interventions have been shown to improve adherence to treatment regimens, increase physical activity, and reduce symptom burden. Furthermore, DHIs can assist survivors in managing stress and anxiety, thereby fostering resilience and promoting overall well-being [12,13,14].
Several studies have examined the impact of DHIs on self-efficacy and patient activation. Powley et al. [15] evaluated the effectiveness of a digital health coaching program for patients undergoing surgery. Participants received personalized guidance via a mobile application, focusing on enhancing self-efficacy and lifestyle factors. The results demonstrated significant improvements in both areas, with patients reporting increased confidence in managing their health and adopting healthier behaviors. Van Der Hout et al. [16] investigated the efficacy of Oncokompas, a web-based self-management platform, in improving health-related quality of life and reducing symptoms among cancer survivors. The findings indicated that individuals with lower self-efficacy and higher levels of personal control or health literacy exhibited greater improvements. These results suggest that tailoring interventions to address specific needs can enhance their effectiveness in supporting the well-being of cancer survivors.
A growing body of evidence indicates that the adoption of healthy lifestyle practices, such as regular physical exercise [17], increased consumption of fruits and vegetables [18], maintaining a healthy weight and body composition [18], smoking cessation [19], and engagement in cognitive behavioral therapy [20], can positively influence cancer prognosis. However, many cancer survivors face significant challenges in adhering fully to these recommendations [19]. Previous studies have predominantly focused on either self-efficacy [16,21,22] or patient activation [12,23], rather than integrating both factors within a unified framework.
This study distinguishes itself from prior research by adopting a personalized approach to cancer survivorship care. It utilizes a mobile health (mHealth) application to deliver tailored interventions that address the unique needs of each patient. Moreover, this study conducts a multifaceted evaluation of the mHealth application's impact on various dimensions of patient well-being, including self-efficacy, patient activation, satisfaction with care, and physical activity levels. Notably, this study also examines the real-world feasibility and acceptability of implementing digital health interventions in clinical settings, addressing critical challenges such as user engagement, data integration, and technical issues.
The hypothesis of this clinical trial posits that “A comparison of self-efficacy levels at the beginning and end of the intervention will demonstrate a significant increase in self-efficacy among participants who receive the personalized intervention supported by the mHealth application.” The primary objective of this study is to assess the acceptability, usability, and impact of the mHealth application on survivors' perceived self-efficacy and satisfaction with their care. The secondary objective includes measuring patient activation and evaluating the acceptance of the mHealth application, as well as exploring survivors' experiences during their use of the application. Ultimately, the aim of this study is to assess the effectiveness of a personalized mHealth intervention in enhancing self-efficacy, patient activation, and satisfaction with care among cancer survivors.

2. Materials and Methods

2.1. PERSIST Project

This study was conducted as part of a European project titled PERSIST [24], which aims to enhance health outcomes and quality of life, while simultaneously reducing stress among cancer survivors through a patient-centered care plan that leverages big data and artificial intelligence. The PERSIST consortium seeks to develop an open, interconnected system to optimize the care provided to cancer survivors. The primary objective of the PERSIST project was to evaluate whether and how self-efficacy and patient activation can be addressed as a collaborative effort between patients and clinical professionals. The secondary objective focused on assessing patient engagement, willingness to use the mHealth application, and gathering feedback from both patients and clinicians regarding their experiences with the PERSIST solution (see study protocol [25]).
An overview of the PERSIST platform is depicted in Figure 1. Following the collection of real-world data from patients through mobile applications and smart bracelets, cutting-edge technologies were employed to extract multimodal features using the Multimodal Risk Assessment and Symptom Tracking (MRAST) framework. In addition to physical markers, subjective data were collected through questionnaires. All objective markers (vital signs) and subjective markers (Patient-Reported Experience Measures [PREMs], Patient-Reported Outcome Measures [PROMs], and linguistic/vocal/face cues) were gathered via the mHealth application. The fused data were processed using various tools (Cohort and Trajectory Analysis, Information Retrieval Tool, Alerts, and Motivation Mechanisms), and subsequently integrated into the Clinical Decision Support System (CDSS) and the mClinician application.
Figure 1. The main data flow through the PERSIST platform.

2.1.1. Digital Interventions

To facilitate data collection and analysis, the study utilized several digital interventions. The mHealth application enabled patients to self-report health parameters and vital signs, while the mClinician application allowed healthcare providers to input and manage patient data. The MRAST platform employed advanced artificial intelligence (AI) techniques, including automatic speech recognition and natural language processing, to extract valuable insights from patient interactions. These digital tools collectively contributed to the efficient and accurate collection of patient data, enabling the platform to deliver personalized care recommendations and support through the Clinical Decision Support System (CDSS).
The mHealth application served as the central component for populating the Big Data Platform with diverse types of patient data, thereby enabling other services to process and analyze this information. The mHealth application allows individuals to track their health parameters and vital signs (Figure 2). Furthermore, the application facilitates the flow of data between users and healthcare providers. The mobile applications were developed using the Flutter framework and the Dart programming language. Interoperability of patient data was ensured using Fast Healthcare Interoperability Resources (FHIR) standards, which enabled data sharing among project partners [26].
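Interoperability here rests on exchanging patient data as FHIR resources. As a minimal illustration (not the project's actual payload; the patient identifier, value, and timestamp below are hypothetical), a heart-rate reading from the smart bracelet could be represented as a FHIR R4 Observation using the standard LOINC code 8867-4:

```python
import json

def heart_rate_observation(patient_id: str, bpm: float, timestamp: str) -> dict:
    """Build a minimal FHIR R4 Observation for a heart-rate reading.

    Sketch only: field values are illustrative, not the PERSIST schema.
    LOINC 8867-4 = heart rate; UCUM "/min" = beats per minute.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{
            "coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/observation-category",
                "code": "vital-signs",
            }]
        }],
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "8867-4",
                "display": "Heart rate",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": timestamp,
        "valueQuantity": {
            "value": bpm,
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",
        },
    }

obs = heart_rate_observation("example-123", 72.0, "2022-09-01T10:30:00Z")
print(json.dumps(obs)[:60])
```

Serializing such resources as JSON is what allows the mHealth app, the Big Data Platform, and the clinical partners to share the same records without format conversion.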
Figure 2. mHealth App screenshots: (a) Summary of vital parameters; (b) Login screen.
The mClinician application served as a data ingestion tool within the PERSIST project (Figure 3). It was primarily developed for use by clinicians to assist in collecting patient data and providing an overview of the acquired information. The mClinician application displays concepts from Symptoma’s API [27] to generate structured patient data in FHIR format. Clinicians determined the most relevant data to be collected from patients’ electronic health records.
Figure 3. mClinician app screenshots: (a) Patient trends; (b) Patient Summary.
The MRAST platform incorporates multimodal analysis of patient video recordings [28] (Figure 4). It primarily consists of automatic speech recognition (SPREAD [29]), natural language processing, and facial landmark detection to extract linguistic, speech, and visual features. MRAST also includes disease-centric discourse through the extraction of symptoms from free-text data, which is carried out using an information retrieval tool [30] developed based on Symptoma’s core technology.
Figure 4. Feature extraction in the MRAST framework.
Figure 5 illustrates the Clinical Decision Support System (CDSS), which leverages patient-specific information and a comprehensive knowledge repository to generate evidence-based medical decisions or recommendations. Healthcare providers can initiate requests for decision support, which are processed by the inference engine to produce tailored clinical guidance. Additionally, the CDSS is integrated with cohort and trajectory analysis in multi-agent support systems [31], a breast cancer survival analysis with high-risk marker detection [32], an AI service predicting five-year relapse recurrence for breast and colorectal cancer survivors, and a model for automatic circulating tumor cell (CTC) detection in liquid biopsy samples [33].
Figure 5. Overview of CDSS structure.

2.2. Clinical Trial

2.2.1. Trial Design

The clinical trial was designed using a single-case experimental design (SCED) methodology [34] as a prospective study to validate the PERSIST platform. SCED provides a robust approach for studying individual participants in depth and offers valuable insights into the effects of interventions over an extended period. Its flexibility, repeated measures, and potential for strong internal validity render it a valuable tool for this trial. Cancer types with relatively high incidence and survival rates were selected to ensure the identification of a sufficiently large survivor population amenable to enhanced follow-up.

2.2.2. Participants

Breast and colorectal cancer survivors who had completed curative treatment were included in this study. A survivor was defined as a patient who had remained recurrence-free for 3-24 months post-treatment (surgery, radiation therapy, and/or chemotherapy). For colorectal cancer survivors, two subgroups were considered: those who had received chemotherapy and those who had not. Each subgroup comprised at least 33% of the total colorectal cancer survivor group. For breast cancer survivors, both patients who underwent surgery and those who received chemotherapy were included.
The inclusion criteria encompassed individuals aged 18-75 years, in stable health, with a life expectancy exceeding two years. Participants were required to understand the study information, attend follow-up appointments, provide informed consent, and possess sufficient technological skills to use mobile devices and reliable internet access. Exclusion criteria included individuals with a life expectancy of less than one year, dementia or cognitive impairments, dependence on others for daily tasks, inability to make dietary decisions, current participation in other clinical studies, anticipated relocation, or major depression/psychiatric conditions that could interfere with daily activities.
Patients meeting the inclusion criteria and not meeting the exclusion criteria were assessed as outpatients, informed about the study, and personally invited to participate by clinicians at four clinical centers: Centre Hospitalier Universitaire de Liège (CHU) in Belgium, University Medical Centre Maribor (UKCM) in Slovenia, Complejo Hospitalario Universitario de Ourense (SERGAS) in Spain, and Riga East Clinical University Hospital (REUH) in Latvia, in collaboration with the University of Latvia (UL).

2.2.3. Sample Size Calculation

The sample size for this study was estimated based on the expected effectiveness of mobile device interventions in promoting healthy habits among cancer survivors, as demonstrated in previous research [35,36]. A power analysis using G*Power software indicated that 160 patients would be sufficient to detect significant differences between pre- and post-intervention measures of healthy habits, assuming a two-sided confidence level of 95%, a statistical power of 90%, and an effect size of 0.25.
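The reported figure was obtained with G*Power; as a rough cross-check, the underlying sample-size arithmetic for a two-sided paired t-test can be sketched with the usual normal approximation. This is an illustrative approximation, not the study's exact procedure, and its result differs slightly from the reported 160:

```python
from math import ceil
from statistics import NormalDist

def paired_t_sample_size(effect_size: float, alpha: float = 0.05,
                         power: float = 0.90) -> int:
    """Approximate n for a two-sided paired (one-sample) t-test.

    n ~ ((z_{1-alpha/2} + z_{power}) / d)^2, with a small +2 correction
    to offset the t-distribution's heavier tails; exact software such as
    G*Power may return a slightly different value.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(((z_alpha + z_beta) / effect_size) ** 2) + 2

n = paired_t_sample_size(0.25)  # effect size, alpha, power as in the study
print(n)  # 171
```

With d=0.25, alpha=.05 (two-sided), and 90% power, the approximation lands in the same range as the 160 patients reported, which the enrolled sample of 166 comfortably covers at baseline.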

2.2.4. Recruitment

The recruitment process spanned a four-month screening period and a four-month enrollment period (Figure 6). To ensure successful patient recruitment: (1) Clinical research staff at participating hospitals underwent comprehensive training on the PERSIST project (goals, criteria, timeline), devices (mobile phones and smart bracelets), and received recruitment materials (brochures, videos); (2) Trained clinicians then invited eligible post-treatment cancer patients; (3) Eligible patients were assessed in outpatient settings; and (4) Nurses or data managers assisted medical staff in explaining the study and obtaining informed consent from selected patients. The trial concluded after receiving all results from the participants within the allocated timeframe in the PERSIST project.

2.2.5. Data Collection

The data collection timeline is provided in the patient flow chart (Figure 6). Following a four-month screening period, eligible participants (n=166) were invited to sign informed consent documents and receive the study devices. During the enrollment process, participants were given detailed explanations of the study phases, and their medical history was recorded. Three questionnaires were administered at three distinct time points: baseline, first follow-up, and last follow-up.
Participants in this study received both a mobile phone with the mHealth app (Huawei Y6 2019) and a smart band (Naicom Company smart bracelet), which collected various types of data, including sociodemographic, clinical, and lifestyle information. The specific model of the smart band was selected by the technical partners based on data safety (GDPR compliance) and budget constraints. The device was capable of measuring steps/activity, sleep, heart rate, and blood pressure, and was compatible with a smartphone-based data access kit that avoids cloud storage from the device manufacturer.
Prior to patient recruitment, clinical and technical partners developed training materials for patients, which were translated into Spanish, French, Slovenian, Latvian, and Russian. During recruitment, training on the study protocol and the use of the mHealth app and smart band was initially conducted for all clinical partners and subsequently for all participants at each hospital by nurses or data managers supporting the medical doctors/physiotherapists. The app also provided personalized follow-up based on patterns identified from big data. Additionally, patients were able to input additional data through online questionnaires, which were prompted by notifications from the app. Selected questionnaires were collected automatically during phone calls or medical follow-ups as a validation tool, while others required patients to record video diaries discussing their daily lives.
The data collected through the mClinician tool allowed clinicians to access information about patients, including their demographics, cancer diagnosis, treatment history, and diagnostic performance. Data collected via the Clinical Decision Support System (CDSS) included results, scores, and the history of CDSS outputs presented to the clinician.
Clinical centers were selected based on the presence of survivor populations and the opportunity to implement the intervention across diverse regions in Europe, aligning with the PERSIST project’s goals. Although the last data collection occurred between September and October 2022, the clinical trial was extended to December 2022 to facilitate additional data collection and updates to the mClinician app. During this extension, a substantial number of patients voluntarily continued their participation, enabling a more comprehensive analysis of study outcomes.

2.2.6. Outcomes

To measure self-efficacy, usability, and patient activation, the Communication and Attitudinal Self-Efficacy scale for cancer (CASE-cancer) [37], the System Usability Scale (SUS) [38], and the Patient Activation Measure (PAM) [39] were used, respectively. All questionnaires were made available online within the mHealth app.
The CASE-Cancer scale, validated by Wolf et al. [37], is a 12-item instrument designed to assess the communication and attitudinal self-efficacy of cancer patients. The participants' perceived ability to engage in their care was measured across three domains: understanding and participating in care, maintaining a positive outlook, and actively seeking and obtaining relevant information. Higher scores on this scale indicate greater self-efficacy. This instrument has been used in various cancer cases in recent literature [40,41,42].
This study utilized the PAM-13 questionnaire as a secondary endpoint to assess patients' self-management knowledge, skills, and confidence. The PAM-13 is a validated short form of the PAM-22 [29] and has demonstrated comparable effectiveness [43]. PAM levels 1 and 2 indicate lower patient activation, while PAM levels 3 and 4 indicate higher patient activation. Ng et al. [44] demonstrated the cross-cultural reliability and validity of the PAM tool, highlighting its correlation with health outcomes relevant to patient-centered care, as evidenced by a comprehensive review of 39 studies.
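For readers implementing PAM-13 scoring, activation scores (0-100) are conventionally mapped to the four levels using published cut-points. The exact cut-offs used in this study are not reported, so the thresholds below (47.0, 55.1, 72.4, commonly cited for PAM-13) are an assumption:

```python
def pam_level(activation_score: float) -> int:
    """Map a 0-100 PAM activation score to levels 1-4.

    Cut-points (47.0, 55.1, 72.4) follow commonly published PAM-13
    thresholds; the study does not report the exact cut-offs it used.
    """
    if activation_score <= 47.0:
        return 1  # disengaged and overwhelmed
    if activation_score <= 55.1:
        return 2  # becoming aware, but still struggling
    if activation_score <= 72.4:
        return 3  # taking action
    return 4      # maintaining behaviors, pushing further

print([pam_level(s) for s in (40.0, 50.0, 60.0, 80.0)])  # [1, 2, 3, 4]
```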
The SUS is a ten-item Likert scale questionnaire designed to provide a global view of subjective assessments of system usability. Developed by John Brooke in 1986 [38], the SUS has been widely used to measure the usability of electronic office systems. It consists of five positively worded items and five negatively worded items, with responses ranging from strongly agree to strongly disagree. The SUS is a well-established tool for evaluating system usability and has been applied in various contexts [45].
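Brooke's standard SUS scoring can be sketched in a few lines: odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the sum is scaled by 2.5 to a 0-100 score. The score bands used later in this paper (50, 70, 85) are applied to that final score:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring for ten Likert items rated 1-5.

    Odd items (1, 3, 5, 7, 9) are positively worded: contribution = r - 1.
    Even items (2, 4, 6, 8, 10) are negatively worded: contribution = 5 - r.
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# A fully favorable response pattern yields the maximum score of 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```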

2.2.7. Statistical Analysis

Statistical analysis involved descriptive analyses of epidemiological and clinical characteristics, including means, standard deviations, percentiles, frequencies, and percentages. To evaluate changes in CASE-cancer, SUS, and PAM scores, paired t-tests, Wilcoxon signed-rank tests, and McNemar tests were employed. An intention-to-treat analysis and sensitivity analysis were conducted using R 3.4.2 and SPSS version 19. Results with p-values below 0.05 were considered statistically significant.
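Of these tests, McNemar's test (used to compare PAM level membership between time points) is the least standard to compute by hand; a self-contained sketch of its exact binomial form follows, with purely illustrative discordant counts, since the study's own counts are not reported:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test on the discordant pair counts b and c
    (participants who moved between the two categories in opposite
    directions). Under H0, the discordant changes follow Bin(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of change
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative counts only (e.g. 5 patients moved up a level, 3 moved down):
print(mcnemar_exact(5, 3))  # 0.7265625
```

Statistical packages (e.g. statsmodels' `mcnemar` with `exact=True`) implement the same computation from the full 2x2 contingency table.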

2.2.8. Ancillary Analyses

A mixed-methods approach was employed to gather comprehensive data on both patient and clinician experiences. Patient feedback was collected through a longitudinal, app-based survey administered at three time points, supplemented by narrative feedback obtained during follow-up interactions. This survey was divided into three distinct sections focusing on different aspects of patient experience. (1) Part A: Understanding the Patient's Study Experience aimed to capture the patient's perspective on their overall participation in the study. It focused on the clarity of study instructions and sought to identify the most valuable insights gained during the process. (2) Part B: Evaluating the mHealth App's User Experience concentrated on assessing the user experience of the mHealth app utilized in the study. It explored various aspects of the app's design and functionality to understand how patients interacted with and perceived the technology. (3) Part C: Assessing Device Usage and Experience consisted of two questions designed to gather feedback on the experience of using the devices provided: the smart bracelet and the mobile phone.
Narrative feedback from follow-up interactions was transcribed verbatim and analyzed using thematic analysis. Two independent coders identified recurring themes and patterns. Discrepancies between coders were resolved through discussion to ensure inter-rater reliability.
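Inter-rater reliability between two coders is commonly quantified with Cohen's kappa; the sketch below (with hypothetical theme labels) shows the standard computation. The paper does not state which reliability statistic was used, so this is illustrative only:

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Agreement between two coders beyond chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected from each coder's label frequencies.
    """
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must label the same non-empty set of excerpts")
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders agreeing on 3 of 4 theme assignments:
print(cohens_kappa(["A", "A", "B", "B"], ["A", "B", "B", "B"]))  # 0.5
```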
Clinician feedback was gathered using a standardized user acceptance questionnaire designed to assess the usability and effectiveness of the PERSIST system and mClinician app. Additionally, a generic 18-question questionnaire with mixed answer types, including free-text options, was developed and distributed across all participating hospitals.

2.2.9. Ethical Considerations

The study adhered to the highest ethical and legal standards, including the Charter of Fundamental Rights of the EU (2000/C 364/01), the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679), the European Code of Conduct for Research Integrity, and the OECD Council Recommendations on Health Data Governance. Furthermore, this clinical study was conducted in accordance with the laws and regulations in force in the participating countries (Slovenia, Spain, Latvia, and Belgium).
A study protocol, informed consent forms, and clinical data forms were developed and translated into the local languages of the participating countries. These documents were reviewed and approved by the Institutional Review Board and Ethical Committee of each clinical center (Institutional Ethics Committee of CHU de Liege, approval ref. no: 2020/248; Riga Eastern Clinical University Hospital Support Foundation Medical and Biomedical Research Ethics Committee, approval ref. no: 8-A/20; Slovenia National Ethics Committee, approval ref. no. 0120–352/2020/5; Spain Regional Institutional Review Board, approval ref. no. 2020/394.).
Once institutional approvals were obtained, pilot studies were initiated. The recruited patients received the necessary devices for the proper execution of the follow-up phase of the clinical study: a smartphone and a wearable device for quantifying physical activity and measuring health parameters (such as blood pressure, steps, and heart rate).
The personal data of patients were pseudonymized before leaving the clinical facilities, and a set of privacy metrics were calculated to assess the risk of reidentifying patient data. The data necessary for conducting the PERSIST project research activities related to the clinical studies were stored in a high-performance computer center provided by CESGA (Spanish public institution).

3. Results

3.1. Participant Flow

Figure 6 illustrates the participant flow throughout the study, from initial screening to final analysis. Only the SUS was administered at a later baseline time point, owing to a delay in the development of the mHealth app. While retention at baseline and the first follow-up was relatively high for the CASE-Cancer, PAM, and SUS questionnaires (75.9%, 77.1%, and 46.4%, respectively), substantial attrition occurred by the end of the study, reducing the sample available for the final analysis to CASE-Cancer (n=75), PAM (n=77), and SUS (n=27). A total of 41 out of 166 patients (25%) withdrew from the clinical study by the final follow-up; the majority of withdrawals (n=31 out of 41, 76%) were female, and withdrawals were more frequent in the breast cancer group (n=24 out of 41, 59%). Attrition rates for the CASE-Cancer, PAM, and SUS questionnaires were 40.48% (n=51 out of 126 patients), 39.84% (n=51 out of 128 patients), and 64.94% (n=50 out of 77 patients), respectively.
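The per-questionnaire attrition figures follow directly from the baseline and final completion counts; a trivial sketch of the arithmetic (minor rounding differences from the quoted percentages are possible):

```python
def rates(completed_final: int, completed_baseline: int) -> tuple[float, float]:
    """Retention and attrition (%) among those who completed baseline."""
    retention = completed_final / completed_baseline * 100
    return round(retention, 2), round(100 - retention, 2)

# Baseline and final counts reported for each questionnaire:
for name, baseline, final in [("CASE-Cancer", 126, 75),
                              ("PAM", 128, 77),
                              ("SUS", 77, 27)]:
    print(name, rates(final, baseline))
```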
Figure 6. Participant flow diagram.
A comprehensive analysis of the reasons for withdrawal (total n=65) revealed that these were primarily related to personal circumstances (n=11 out of 65, 17%), technical issues (n=10 out of 65, 15%), including smart-bracelet malfunctions and other technical problems, and time constraints associated with participation (n=9 out of 65, 14%).

3.2. Baseline Data

Patients diagnosed with breast cancer (ICD-10 C50) and colorectal cancer (ICD-10 C18/C19) were recruited for this study. A total of 166 patients were included, with 37 male (22%) and 129 female (78%) participants, and a mean age of 55 years. Table 1 presents a summary of the demographic characteristics of the patients enrolled in this study. UL recruited the highest number of patients (n=46, 28%), UKCM had the oldest cohort with a mean age of 56.3 years, and SERGAS had the highest proportion of male patients (12 out of 37, 32%).
Table 1. Characteristics of all participants at baseline.
Characteristic UL UKCM CHU SERGAS TOTAL
Recruited Patients 46 40 41 39 166
Mean Age (years) 54.17 56.3 54.92 54.85 55.03
Std. Dev. Age (years) 11.31 8.34 11.06 10.5 10.34
Breast Cancer Cases 24 20 21 20 85
Colorectal Cancer Cases 22 20 20 19 81
Male 7 11 7 12 37
Female 39 29 34 27 129

3.3. Outcomes and Estimation

3.3.1. Perceived Self-Efficacy of Patients by CASE-Cancer

A total of 75 questionnaires were analyzed, and descriptive statistics were computed for each score factor. No statistically significant differences were observed in scores between the recruitment phase and the last follow-up across any of the three factors, as assessed using the Wilcoxon signed-rank test (Table 2).
Table 2. Descriptive statistics of CASE-Cancer Results (C.I.: Confidence interval).
                    Factor 1: Understand &        Factor 2: Maintain            Factor 3: Seek &
                    participate in care           positive attitude             obtain information
                    Recruitment  Last follow-up   Recruitment  Last follow-up   Recruitment  Last follow-up
N                   75           75               75           75               75           75
Mean                13.73        13.75            13.28        13.17            13.81        13.55
Median              14           14               14           14               15           14
Std. deviation      1.9          2.01             2.3          2.44             2.31         2.21
Minimum             9            9                6            4                7            8
Maximum             16           16               16           16               16           16
25th percentile     12           12               12           12               12           12
50th percentile     14           14               14           14               15           14
75th percentile     16           15               15           15               16           16
P-value             .98                           .66                           .25
95% C.I.            [-0.99 to 0.50]               [-0.50 to 0.99]               [-1.00 to 1.31e-05]

3.3.2. Activation Levels of Patients by PAM

A total of 78 completed PAM questionnaires, incorporating all data collection points, were analyzed. As presented in Table 3, the majority of patients reported activation levels 3 or 4 at both recruitment (42.3% and 32.1%, respectively) and the final follow-up (35.9% and 35.9%, respectively), with a modest increase (3.8%) in the number of patients reporting level 4 activation at follow-up. No statistically significant differences were found in the distribution of activation levels between the recruitment and last follow-up.
Table 3. PAM Results: Comparison of the percentage of patients in each level at the recruitment vs at the last follow-up. (P values have been calculated with McNemar Test, C.I.: Confidence interval).
Level     Recruitment (N=78), n (%)            Last follow-up (N=78), n (%)         P-value
Level 1   5 (6.4)   [95% C.I.: 2.2 to 14.9]    6 (7.7)   [95% C.I.: 3.0 to 16.6]    1.0
Level 2   15 (19.2) [95% C.I.: 11.7 to 30.8]   16 (20.5) [95% C.I.: 12.7 to 32.3]   1.0
Level 3   33 (42.3) [95% C.I.: 32.6 to 55.9]   28 (35.9) [95% C.I.: 26.4 to 49.3]   .49
Level 4   25 (32.1) [95% C.I.: 22.9 to 45.2]   28 (35.9) [95% C.I.: 26.4 to 49.3]   .65

3.3.3. User Acceptance of mHealth App by SUS

The System Usability Scale (SUS) was the questionnaire most affected by participant attrition. While 77 questionnaires were completed at baseline, only 27 (35.06%) were completed by the final follow-up. Consequently, the SUS analysis was conducted on these 27 questionnaires, complete across all data collection points (Table 4).
At recruitment, equal proportions of participants (n=10 each out of 27, 37%) rated the system in the "usability issues" range (score 50-70) and the "acceptable to good" range (score 70-85). This may reflect participants' prior technological proficiency and their ability to adapt to the evolving mHealth app, which was still under development. Over the course of the study, the percentage of participants rating the system as having "excellent usability" (score >85) increased from 14.82% to 22.22%, an improvement likely attributable to the ongoing upgrades made to the mHealth app in collaboration with the technical partners.
At the final follow-up, the most frequently reported score group remained "usability issues" (score 50-70), which may be explained by negative patient feedback regarding the increasing complexity of the system.
Table 4. Frequencies of each score group for SUS results.
Time point | Score group | Frequency | Percent
Recruitment | <=50 | 3 | 11.11
Recruitment | 50-70 | 10 | 37.04
Recruitment | 70-85 | 10 | 37.04
Recruitment | >85 | 4 | 14.82
Last follow-up | <=50 | 5 | 18.52
Last follow-up | 50-70 | 10 | 37.04
Last follow-up | 70-85 | 6 | 22.22
Last follow-up | >85 | 6 | 22.22
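SUS scores such as those grouped in Table 4 are conventionally derived from ten 5-point Likert items: odd-numbered items contribute (response − 1), even-numbered items (5 − response), and the sum is scaled by 2.5 to yield a 0–100 score. A minimal sketch with a hypothetical response vector and the score bands used in Table 4:

```python
def sus_score(responses):
    """Standard SUS scoring for ten items rated 1-5: odd items
    contribute (r - 1), even items (5 - r); the sum is scaled by
    2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

def score_group(score):
    # Cut-offs used in Table 4: <=50, 50-70, 70-85, >85.
    if score <= 50:
        return "<=50"
    if score <= 70:
        return "50-70"
    if score <= 85:
        return "70-85"
    return ">85"

# Hypothetical response vector for illustration only:
example = [4, 2, 4, 2, 4, 2, 4, 2, 4, 2]
s = sus_score(example)
print(s, score_group(s))  # 75.0 falls in the "70-85" band
```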

3.4. Ancillary Analyses

3.4.1. General Feedback from Patients

A total of 32 participants were included in the analysis of Part A (6 participants, 19% from CHU; 8 participants, 25% from SERGAS; 14 participants, 44% from UKCM; and 4 participants, 12% from UL) (Table 5 and Table 6).
Table 5. Results of patient feedback – Part A (Means of all centres with standard deviation).
Question¹ | Time Point | Mean (SD) | Median
1st Question – How do you rate your experience with participation in the PERSIST project (in general)? | Initial | 7.41 (1.64) | 8
 | Middle | 7.75 (1.70) | 8
 | Final | 7.69 (1.53) | 8
2nd Question – Are the instructions and explanations about the project from personnel understandable? | Initial | 8.53 (1.67) | 9.5
 | Middle | 8.53 (1.16) | 8.5
 | Final | 8.47 (1.24) | 8
3rd Question – How does the participation in the PERSIST project make you feel? | Initial | 8.13 (1.86) | 8
 | Middle | 8.19 (1.55) | 8
 | Final | 8.06 (1.69) | 8
¹ Friedman's ANOVA: 1st Question, p = 0.58; 2nd Question, p = 0.83; 3rd Question, p = 0.50.
Table 6. Results of patient feedback – Part A (Post-Hoc Conover's Test p-values).
Question | Initial vs. Middle | Initial vs. Final | Middle vs. Final
1st Question | 0.39 | 0.35 | 0.93
2nd Question | 0.87 | 0.67 | 0.55
3rd Question | 0.55 | 0.24 | 0.55
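The repeated-measures comparisons reported in Tables 5–10 use Friedman's ANOVA across the three time points, with pairwise Conover post-hoc tests. A minimal sketch with hypothetical ratings (one row per participant, one column per time point); `friedmanchisquare` is SciPy's implementation, while the Conover post-hoc step would require an additional package such as scikit-posthocs:

```python
from scipy.stats import friedmanchisquare

# Hypothetical ratings from six participants at the three time points
# (the layout mirrors the questionnaires: one value per participant
# per time point; values are illustrative only).
initial = [8, 7, 9, 6, 8, 7]
middle  = [8, 8, 9, 7, 8, 7]
final   = [8, 8, 9, 6, 8, 8]

# Friedman's test ranks each participant's three ratings and asks
# whether the rank distributions differ across time points.
stat, p = friedmanchisquare(initial, middle, final)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.2f}")

# If p were significant, pairwise Conover post-hoc comparisons
# (e.g. scikit-posthocs' posthoc_conover_friedman) would identify
# which pairs of time points differ.
```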
Twenty participants responded to Part B at three different time points (4 from CHU, 4 from SERGAS, and 12 from UKCM) (Table 7 and Table 8).
Table 7. Results of patient feedback – Part B (Means of all centres with standard deviation).
Question¹ | Time Point | Mean (SD) | Median
1st Question – How do you rate the emotion wheel/detection in the app? From 1 (bad, confusing) to 10 (super, interesting) | Initial | 6.60 (2.40) | 7
 | Middle | 6.35 (2.68) | 7.5
 | Final | 6.85 (2.21) | 8
2nd Question – How do you rate your experience with questionnaires in the app? From 1 (bad) to 10 (excellent) | Initial | 7.60 (1.64) | 8
 | Middle | 7.25 (2.02) | 8
 | Final | 7.60 (1.79) | 8
3rd Question – How do you rate your experience with diary recording? From 1 (bad, confusing) to 10 (super, interesting) | Initial | 6.65 (2.46) | 7
 | Middle | 7.00 (2.75) | 8
 | Final | 7.00 (2.70) | 8
4th Question – How do you rate your experience with the mHealth app? From 1 (really bad) to 10 (excellent) | Initial | 7.60 (1.67) | 7.5
 | Middle | 7.35 (1.90) | 8
 | Final | 7.90 (1.55) | 8
5th Question – Are the instructions and explanations about mHealth app usage understandable? From 1 (completely confusing) to 10 (completely clear) | Initial | 8.60 (1.31) | 9
 | Middle | 8.60 (1.27) | 9
 | Final | 8.25 (1.33) | 8
6th Question – Do you follow up your gathered data in the mHealth app? From 1 (not at all) to 10 (all the time) | Initial | 7.35 (2.89) | 8
 | Middle | 6.80 (2.78) | 7.5
 | Final | 6.90 (2.53) | 8
7th Question – Does the mHealth app affect your behaviour? From 1 (not at all) to 10 (I modify my behaviour after looking at the data) | Initial | 5.50 (3.05) | 5
 | Middle | 5.75 (2.69) | 6
 | Final | 6.15 (2.98) | 6
¹ Friedman's ANOVA: 1st Question, p = 0.11; 2nd Question, p = 0.78; 3rd Question, p = 0.58; 4th Question, p = 0.28; 5th Question, p = 0.11; 6th Question, p = 0.39; 7th Question, p = 0.75.
Table 8. Results of patient feedback – Part B (Post-Hoc Conover's Test p-values).
Question | Initial vs. Middle | Initial vs. Final | Middle vs. Final
1st Question | >.99 | 0.23 | 0.23
2nd Question | 0.49 | 0.84 | 0.62
3rd Question | 0.30 | 0.51 | 0.71
4th Question | 0.89 | 0.14 | 0.18
5th Question | 0.91 | 0.08 | 0.06
6th Question | 0.70 | 0.19 | 0.34
7th Question | 0.71 | 0.45 | 0.71
In total, 15 participants completed the questions for Part C at all three time points (6 participants, 40% from CHU; 3 participants, 20% from SERGAS; 1 participant, 7% from UL; and 5 participants, 33% from UKCM) (Table 9 and Table 10).
Table 9. Results of patient feedback – Part C (Means of all centres with standard deviation).
Question¹ | Time Point | Mean (SD) | Median
1st Question – How do you rate your experience with smart bracelets? | Initial | 6.87 (2.23) | 7
 | Middle | 6.00 (2.10) | 6
 | Final | 6.93 (1.53) | 7
2nd Question – How do you rate your experience with mobile phone? | Initial | 6.80 (2.15) | 7
 | Middle | 7.33 (1.99) | 8
 | Final | 6.87 (2.10) | 7
¹ Friedman's ANOVA: 1st Question, p = 0.04; 2nd Question, p = 0.227.
Table 10. Results of patient feedback – Part C (Post-Hoc Conover's Test p-values).
Question | Initial vs. Middle | Initial vs. Final | Middle vs. Final
1st Question | 0.03 | >.99 | 0.035
2nd Question | 0.09 | 0.50 | 0.28
Across the three sections, the only statistically significant change between time points was in the smart-bracelet ratings (Part C, 1st Question), which dipped at the middle time point before recovering (Tables 9 and 10); no other comparisons reached significance. Additionally, narrative feedback provided valuable insights into patients' experiences with healthcare services, complementing the quantitative measures (Textbox 1). Overall, the narrative feedback suggested that the mHealth app holds potential as a valuable tool for cancer survivors, although its benefits must be balanced against its potential drawbacks.
Textbox 1. Examples of narrative feedback from patients.
‘It is interesting to record and monitor measurements. It’s good because it diverts your thoughts.’ (female survivor of breast cancer)
‘I would like to know more about the development itself and how this technology works.’ (female survivor of colorectal cancer)
‘I enjoyed the opportunity to monitor my features, and I think this may help other patients.’ (male survivor of colorectal cancer)
‘The project has encouraged some positive emotions. It helps me to follow my state of health in general; the opportunity to view the data stimulates the consciousness of the need to get moving.’ (female survivor of breast cancer)
‘The appreciation to the people who treated me motivated me to participate in the clinical study. Technology can help cancer patients and survivors; however, the constant thinking about oneself may prevent them moving on.’ (female survivor of colorectal cancer)

3.4.2. General Feedback from Clinicians

The user acceptance questionnaire was completed by 16 clinicians at recruitment and 17 clinicians at the last follow-up (see Table 11). According to the System Usability Scale (SUS) results, most responding clinicians considered the system "not easy to use" (score ≤50; n=7 at both time points) or reported experiencing usability issues (score 50–70; increasing from n=6 to n=7).
Table 11. SUS scores of mClinician at recruitment and last follow-up.
Time point | Score group | Frequency | Percent
Recruitment | <=50 | 7 | 43.75
Recruitment | 50-70 | 6 | 37.50
Recruitment | 70-85 | 2 | 12.50
Recruitment | >85 | 1 | 6.25
Last follow-up | <=50 | 7 | 41.18
Last follow-up | 50-70 | 7 | 41.18
Last follow-up | 70-85 | 2 | 11.76
Last follow-up | >85 | 1 | 5.88
The developed generic questionnaire was completed by 11 clinicians (4 clinicians, 37% from UL; 2 clinicians, 18% from SERGAS; 2 clinicians, 18% from CHU; and 3 clinicians, 27% from UKCM) (see supplementary file).
Clinician evaluations of the PERSIST system revealed generally favorable, albeit not uniformly enthusiastic, perceptions. The system received a mean overall rating of 6.27 out of 10, with usability assessed more positively, averaging 7. Precision in risk identification by the PERSIST system was rated with a mean score of 6.9. A substantial majority of clinicians (9 out of 10) expressed a desire to integrate the PERSIST system into their clinical practice; however, two clinicians raised concerns regarding system processing speed and alignment with oncology practice workflows. The most valued functionalities of the PERSIST system included the provision of patient feedback data, vital parameter tracking, subjective patient reports, statistical summaries, and risk factor identification. The system was most frequently associated with general practice, followed by psychology, infectious diseases, and inflammatory diseases. Among modifiable lifestyle factors, physical activity was considered the most salient, followed by blood pressure, heart rate, and depression indicators. The overarching added value of the PERSIST system was perceived to be its capacity for patient monitoring.
Regarding personalized care plans, clinician responses varied: some affirmed their contributions, while others expressed partial effectiveness or uncertainty. Suggested optimal strategies for implementing preventive measures based on individual patient trajectories included app automation, regular monitoring (weekly or bi-annually), and the incorporation of trained support staff.
Clinician evaluations of the mClinician web version were less favorable than those of the PERSIST system. The web interface received a mean overall rating of 6.1 out of 10. Most respondents (9 out of 10) expressed reluctance or unwillingness to integrate the web version into their clinical workflows: 5 clinicians indicated ambivalence, while 4 explicitly declined. The most valued components of the web version were mHealth data, followed by tests, general and medical history, diagnoses, symptoms, and cancer treatment information. However, clinicians recommended modifying or removing the tests, diagnostic, and therapeutic modules from the mClinician web version.
The mClinician mobile application, by contrast, was received more positively with respect to adoption. Although its mean overall rating was 6 out of 10, a substantial majority of clinicians (9 out of 10) expressed a willingness to use the app in their practice, with only one clinician dissenting. The most highly valued features of the mClinician app included alerts, patient overview, appointments, recurrence prediction, cardiovascular disease risk assessment, usage statistics, and patient trajectories. Recommendations for refining the mClinician app focused on removing the trajectories feature and reducing electronic health record (EHR) data redundancy.

4. Discussion

4.1. Principal Findings

This study partially supports the initial hypothesis [25], suggesting that mHealth applications can positively influence self-efficacy, patient activation, and satisfaction with care among colorectal and breast cancer survivors, although statistical significance was not consistently achieved. While the CASE-Cancer questionnaire results (Table 2) did not show significant changes over time, the baseline scores for Factor 1 indicated a high level of patient understanding and engagement in their care at recruitment, consistent with previous findings [37]. The stability of the PAM scores (Table 3) suggests that patients maintained good self-management skills throughout the study, indicating the potential of the PERSIST app to support these skills [9], even without demonstrable improvement.
The increase in high SUS ratings over the study (Table 4) suggests improved perceived usability for part of the sample, likely due to iterative app development based on user feedback. The final positive usability ratings, while not universally excellent, emphasize the importance of continuous user-centered design in mHealth development [46]. Positive patient feedback regarding study participation, clarity of instructions, and explanations (Table 5 and Table 6) indicates successful patient engagement. High patient ratings for the emotion wheel/detection feature, instructions, and questionnaires within the app (Table 7 and Table 8) suggest user acceptance of these specific functionalities. Furthermore, positive experiences with smart bracelets and mobile phones (Table 9 and Table 10), with no sustained decline in user satisfaction over time, further support the feasibility of this technology within this population.
However, clinician evaluations of the mClinician web version revealed significant challenges. Low SUS scores (Table 11) and predominantly negative or neutral responses regarding their integration into clinical practice highlight the need for substantial revisions to improve usability and align with clinical workflows. While the mClinician app fared better, with most clinicians willing to use it, addressing identified usability issues could enhance its adoption and effectiveness. Critically, the disparity in clinician acceptance between the PERSIST system and the mClinician web version suggests that system design and integration with existing clinical practices are crucial determinants of successful mHealth implementation. Positive clinician feedback on PERSIST, particularly regarding patient feedback integration, vital parameter tracking, and risk factor identification, points to specific areas of value in mHealth solutions for cancer survivors. Physical activity was identified as the most important potentially modifiable lifestyle factor for cancer survivors that the PERSIST system detects [47,48]. Future research should focus on identifying the factors contributing to successful mHealth implementation and tailoring interventions to meet the specific needs of both patients and clinicians.

4.2. Comparison to Prior Work

This study builds upon a substantial body of research emphasizing the importance of self-efficacy and patient activation in cancer survivorship [4,5,6]. Consistent with previous findings [6,7,8,9], our study reinforces the notion that these constructs are critical for positive health outcomes, including improved quality of life, symptom management, and adherence to healthy behaviors. Prior research has also demonstrated the potential of digital health interventions (DHIs) to enhance these outcomes [11,12,13,14], with studies showing positive effects on treatment adherence, physical activity, and symptom burden [15,16]. Our findings align with this trend, suggesting that an mHealth application can contribute to self-efficacy, patient activation, and satisfaction with care among colorectal and breast cancer survivors. Specifically, the observed high baseline levels of understanding and participation in care (Factor 1 of CASE-Cancer) and maintained levels of patient activation throughout the study period, despite no statistically significant changes, suggest that the mHealth app effectively supported existing self-management skills. This aligns with findings from Van Der Hout et al. [16], who demonstrated that DHIs can be particularly beneficial for individuals with already moderate to high baseline self-efficacy and health literacy by reinforcing and maintaining positive behaviors.
However, this study distinguishes itself from prior work in several key aspects. While previous studies have often focused on either self-efficacy or patient activation in isolation [12,16,21,22,23], our research adopted a more holistic approach by examining the impact of a personalized mHealth intervention on multiple dimensions of patient well-being, including self-efficacy (measured using CASE-Cancer), patient activation (measured using PAM), satisfaction with care, and physical activity levels. Furthermore, this study addressed a critical gap in the literature by focusing on the real-world feasibility and acceptability of implementing a DHI in clinical settings. The high ratings received for the app's instructions, explanations, and overall user experience, along with the positive feedback on the smart bracelets and the mHealth app's features, suggest that the PERSIST system was well-received by patients and demonstrates promise for real-world implementation. This emphasis on implementation and user experience aligns with the growing recognition of the importance of user-centered design in DHI development [46]. Although the attrition rate posed a significant limitation, particularly for the SUS analysis, the positive feedback from the remaining participants, including their expressed enthusiasm for future participation, underscores the potential of this approach and highlights areas for improvement in future iterations.
Moreover, while previous studies have demonstrated the potential of DHIs to improve health outcomes, this study offers valuable insights into the specific challenges and facilitators of implementing such interventions in clinical practice. The identified usability issues with the mClinician app highlight the importance of considering the needs of both patients and clinicians in DHI design and implementation. The clinicians' feedback regarding the most useful aspects of the PERSIST system (patient feedback, alerts, vital parameter data, and risk factors) provides crucial information for future development and refinement of similar systems. By addressing the practical challenges of implementation, this study offers valuable guidance for translating the potential benefits of DHIs into real-world clinical practice.

4.3. Strengths and Limitations

The PERSIST project effectively engaged cancer survivors and provided a positive experience. Participants maintained patient activation and self-efficacy throughout the study, and the project supported self-management among cancer patients. The majority of participants were satisfied with the mobile app and its usage. Despite high attrition rates for individual questionnaires, particularly the SUS, the overall trial retention rate remained moderate (75.3%). The PERSIST system shows potential applications in various medical fields, particularly in primary care settings, where it could improve patient outcomes and clinical decision-making. Patient engagement with the mHealth app was robust, as evidenced by consistent follow-up and active data monitoring. Additionally, the integration of mHealth apps and smart bracelets was associated with reduced depression and anxiety symptoms, suggesting a positive impact on patients' mental health. Clinicians also provided positive feedback, indicating that both the PERSIST system and the mClinician app have potential as useful tools in clinical practice; they highlighted their value in monitoring patient parameters, receiving timely alerts, and providing personalized care plans.
Several limitations inherent to this study should be acknowledged. Firstly, the substantially smaller sample size than initially projected by the power analysis is the primary limitation of this study, which directly impacts our ability to detect statistically significant effects. The observed effects in our study may not reach statistical significance due to the limited sample size, even if the intervention had a positive impact on healthy habits. This limitation is particularly relevant for the SUS questionnaire, which had the highest attrition rate and the smallest final sample size. While we report observed effect sizes, the lack of statistical power weakens the strength of our conclusions regarding the intervention’s effectiveness. This highlights the need for cautious interpretation of the findings and underscores the importance of future research with larger, more stable samples to confirm these preliminary observations. Factors contributing to the attrition included participant burden, fluctuating health conditions, technical difficulties, and a lack of perceived benefit or decreasing motivation. To mitigate these challenges in future studies, researchers should focus on minimizing participant burden, recruiting more participants than initially estimated, providing enhanced technical support, maximizing perceived benefit, and incentivizing participation.
The second limitation was the challenge of ensuring continuous data integration into the medical workflow, particularly regarding the timely transfer of lifestyle data, laboratory results, and other EHR data. Future research should prioritize the development of automated data entry systems to ensure that the warning system and prediction models remain up-to-date, fast, and accurate, ultimately optimizing clinical decision-making.
Thirdly, technical considerations limited participant recruitment. To avoid burdening participants with complex devices, simple and cost-effective options were chosen. However, this decision potentially impacted the richness of the collected data. In the future, careful device selection and testing will be essential to balance participant comfort with comprehensive data collection.
Fourthly, the absence of a traditional control or comparison group is a key limitation resulting from the SCED methodology employed in this study. While SCED offers strong internal validity by controlling for individual variability, it lacks the between-participant comparison provided by traditional randomized controlled trials or cohort studies. This means we cannot directly compare the outcomes of participants using the PERSIST platform with a separate group receiving standard care or no intervention. The rigorous requirements of SCED, particularly the need for complete data across all measurement points, led to the exclusion of participants with incomplete data. This attrition could potentially introduce selection bias. However, the strength of SCED lies in its focus on within-participant effects, allowing for strong inferences about individual responses to the intervention, even with smaller sample sizes. Consequently, while we can demonstrate changes within individuals over time, attributing these changes solely to the PERSIST platform and generalizing the findings to a broader population requires careful consideration.
Finally, interoperability across different data types and clinical centers remains a significant hurdle in healthcare research. This challenge requires further investigation to facilitate seamless data exchange and collaboration.

4.4. Future Directions

This study highlights several key implications for the design and conduct of future definitive trials. Firstly, the high attrition rate due to technological challenges underscores the importance of careful device selection and user-friendly interfaces. Rigorous pre-trial testing of devices is essential to ensure compatibility with a diverse participant population and minimize technical difficulties. Secondly, comprehensive training programs should be implemented to equip participants with in-depth knowledge of device usage and study procedures, including personalized support and troubleshooting assistance. Thirdly, an agile implementation approach can help mitigate participant burden and improve retention rates by gradually introducing study components. Finally, a robust system for ongoing monitoring of participant engagement and satisfaction should be established to allow for timely adjustments to the study design or procedures.
By incorporating these improvements, future definitive trials can build upon the lessons learned from this study, increasing the likelihood of successful outcomes. Additionally, the PERSIST system has the potential to significantly impact clinical routines by providing valuable insights into patient behavior, treatment adherence, and overall health outcomes. By integrating the PERSIST system into clinical practice, healthcare providers can make more informed decisions, tailor interventions to individual patient needs, and ultimately improve patient care. To facilitate the wider adoption of digital therapies in cancer survivor care, further research is essential to address usability issues and demonstrate their efficacy and generalizability through large-scale studies.

5. Conclusions

The PERSIST project’s tools represent a significant advancement in cancer survivorship care by delivering personalized and dynamic care plans tailored to the unique needs of individual survivors. This personalized approach holds the potential to improve patient outcomes and overall quality of life while reducing healthcare costs. The system is user-friendly and easy to navigate, with participants generally expressing a neutral to slightly positive attitude towards frequent use. The high adherence rates across nearly all hospitals suggest that patients found the app easy to use and manage on a daily basis.
The PERSIST project’s tools align with the goals of the Precision Medicine Initiative (PMI) [49], which seeks to tailor medical treatments and preventive strategies to an individual's unique genetic and environmental profile. This technology has the potential to revolutionize cancer survivorship care by enabling personalized and adaptive care plans, leading to improved patient outcomes, enhanced patient satisfaction, and significant cost savings for healthcare systems. By facilitating seamless coordination among healthcare providers and promoting healthy behaviors through mHealth apps and smart bracelets, this technology can help prevent chronic diseases and reduce long-term healthcare costs.
Overall, the PERSIST project’s approach represents a key opportunity to improve survivorship care for cancer patients and transition towards more personalized medicine strategies.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org.

Author Contributions

Conceptualization, Umut Arioz, Urška Smrke, Valentino Šafran, Simon Lin, Jama Nateqi, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Maja Ravnik, Matej Horvat, Vojko Flis, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Patrick Duflot, Valérie Bleret, Catherine Loly, Tunç Cerit, Kadir Uguducu and Izidor Mlakar; Data curation, Umut Arioz, Urška Smrke, Simon Lin, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Matej Horvat, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Patrick Duflot, Valérie Bleret, Catherine Loly, Tunç Cerit, Kadir Uguducu and Izidor Mlakar; Formal analysis, Urška Smrke, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Maja Ravnik, Matej Horvat, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Patrick Duflot, Valérie Bleret, Catherine Loly, Tunç Cerit and Izidor Mlakar; Funding acquisition, Izidor Mlakar; Investigation, Umut Arioz, Urška Smrke, Simon Lin, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Maja Ravnik, Matej Horvat, Vojko Flis, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Valérie Bleret, Catherine Loly and Izidor Mlakar; Methodology, Umut Arioz, Urška Smrke, Valentino Šafran, Simon Lin, Jama Nateqi, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Maja Ravnik, Matej Horvat, Vojko Flis, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Patrick Duflot, Valérie Bleret, Catherine Loly, Tunç Cerit, Kadir Uguducu and Izidor Mlakar; Project administration, Umut Arioz, Urška Smrke, Simon Lin, Jama Nateqi, Dina Bema, Vojko Flis, Marcela Chavez, Kadir Uguducu and Izidor Mlakar; Resources, Dina Bema, Inese Polaka, Krista Arcimovica, Maja 
Ravnik, Matej Horvat, Vojko Flis, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Valérie Bleret, Catherine Loly and Izidor Mlakar; Software, Umut Arioz, Valentino Šafran, Simon Lin, Jama Nateqi, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Patrick Duflot, Tunç Cerit, Kadir Uguducu and Izidor Mlakar; Supervision, Jama Nateqi, Vojko Flis and Izidor Mlakar; Validation, Urška Smrke, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Maja Ravnik, Matej Horvat, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Valérie Bleret and Catherine Loly; Visualization, Valentino Šafran, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Patrick Duflot and Tunç Cerit; Writing – original draft, Umut Arioz and Izidor Mlakar; Writing – review & editing, Umut Arioz, Urška Smrke, Valentino Šafran, Simon Lin, Jama Nateqi, Dina Bema, Inese Polaka, Krista Arcimovica, Gaetano Manzo, Yvan Pannatier, Shaila Calvo-Almeida, Maja Ravnik, Matej Horvat, Vojko Flis, Ariadna Mato Montero, Beatriz Calderón Cruz, José Aguayo Arjona, Marcela Chavez, Valérie Bleret, Catherine Loly, Tunç Cerit, Kadir Uguducu and Izidor Mlakar. All authors conceived of the idea for the intervention for cancer survivors and led the application for funding for the project. DB, IP, KA, AML, MM, SN, SM, AMM, BCC, JAA, MC, PD, VB, CL led the patient recruitment, data collection and the execution of clinical trials in four different clinical centers. UA, IM, US, VS developed the MRAST framework. TC, KU developed the mHealth application. UA, IM, US, VS, SL, JN, TC, KU, GM, YV, SCA contributed to the development of CDSS. Data analysis and results of the questionnaires were prepared by all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is partially funded by the European Union's Horizon 2020 research and innovation programme, project PERSIST (Grant Agreement No. 875406). The funding source approved the funding of project PERSIST, of which this clinical study is a part, and ensured the funds for its implementation. The funding source had no role in the design of this study and will not have a role beyond progress monitoring and evaluation of the quality of project PERSIST during its execution, the analyses and interpretation of the data of this clinical study, or the decision to submit results.

Institutional Review Board Statement

This study has been approved by relevant ethical committees in Belgium (Institutional Ethics Committee of CHU de Liege, approval ref. no: 2020/248), Latvia (Riga Eastern Clinical University Hospital Support Foundation Medical and Biomedical Research Ethics Committee, approval ref. no: 8-A/20), Slovenia (National Ethics Committee, approval ref. no. 0120–352/2020/5) and Spain (Regional Institutional Review Board, approval ref. no. 2020/394). A written informed consent was obtained from each participant.
Registration: Registration No: ISRCTN97617326. Clinical study to assess the outcomes of a patient-centred survivorship care plan enhanced with big data and artificial intelligence technologies. https://doi.org/10.1186/ISRCTN97617326.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The deidentified data presented in this study are available on request from the corresponding author, owing to ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ANOVA: Analysis of Variance
ASR: Automatic Speech Recognition
CASE-Cancer: Communication and Attitudinal Self-Efficacy Scale for Cancer
CDSS: Clinical Decision Support System
CHU: Centre Hospitalier Universitaire de Liege
CI: Confidence Interval
CTCs: Circulating Tumor Cells
DHI: Digital Health Intervention
HL7 FHIR: Health Level 7 Fast Healthcare Interoperability Resources
ICD: International Classification of Diseases
MRAST: Multimodal Risk Assessment and Symptom Tracking
PAM: Patient Activation Measure
PERSIST: Acronym of project ‘Patients-centered SurvivorShIp care plan after Cancer treatments based on Big Data and Artificial Intelligence technologies’
PMI: Precision Medicine Initiative
PREMs: Patient-Reported Experience Measures
PROMs: Patient-Reported Outcome Measures
REUH: Riga East Clinical University Hospital
SERGAS: Complejo Hospitalario Universitario de Ourense
SUS: System Usability Scale
UKCM: University Medical Centre Maribor
UL: University of Latvia

References

  1. Chan, R. J.; Crawford-Williams, F.; Crichton, M.; Joseph, R.; Hart, N. H.; Milley, K.; Druce, P.; Zhang, J.; Jefford, M.; Lisy, K.; et al. Effectiveness and Implementation of Models of Cancer Survivorship Care: An Overview of Systematic Reviews. J. Cancer Surviv. 2023, 17, 197–221. [Google Scholar] [CrossRef] [PubMed]
  2. Shaffer, K. M.; Turner, K. L.; Siwik, C.; Gonzalez, B. D.; Upasani, R.; Glazer, J. V.; Ferguson, R. J.; Joshua, C.; Low, C. A. Digital Health and Telehealth in Cancer Care: A Scoping Review of Reviews. Lancet Digit. Health 2023, 5, e316–e327. [Google Scholar] [CrossRef] [PubMed]
  3. Bradbury, K.; Steele, M.; Corbett, T.; Geraghty, A. W. A.; Krusche, A.; Heber, E.; Easton, S.; Cheetham-Blake, T.; Slodkowska-Barabasz, J.; Müller, A. M.; et al. Developing a Digital Intervention for Cancer Survivors: An Evidence-, Theory- and Person-Based Approach. Npj Digit. Med. 2019, 2. [Google Scholar] [CrossRef] [PubMed]
  4. Hübner, J.; Welter, S.; Ciarlo, G.; Käsmann, L.; Ahmadi, E.; Keinki, C. Patient Activation, Self-Efficacy and Usage of Complementary and Alternative Medicine in Cancer Patients. Med. Oncol. 2022, 39, 192. [Google Scholar] [CrossRef] [PubMed]
  5. Huang, Q.; Wu, F.; Zhang, W.; Stinson, J.; Yang, Y.; Yuan, C. Risk Factors for Low Self-care Self-efficacy in Cancer Survivors: Application of Latent Profile Analysis. Nurs. Open 2022, 9, 1805–1814. [Google Scholar] [CrossRef]
  6. Mazanec, S. R.; Sattar, A.; Delaney, C. P.; Daly, B. J. Activation for Health Management in Colorectal Cancer Survivors and Their Family Caregivers. West. J. Nurs. Res. 2016, 38, 325–344. [Google Scholar] [CrossRef]
  7. Albrecht, K.; Droll, H.; Giesler, J. M.; Nashan, D.; Meiss, F.; Reuter, K. Self-efficacy for Coping with Cancer in Melanoma Patients: Its Association with Physical Fatigue and Depression. Psychooncology. 2013, 22, 1972–1978. [Google Scholar] [CrossRef]
  8. Barlow, J. H.; Bancroft, G. V.; Turner, A. P. Self-Management Training for People with Chronic Disease: A Shared Learning Experience. J. Health Psychol. 2005, 10, 863–872. [Google Scholar] [CrossRef]
  9. Moradian, S.; Maguire, R.; Liu, G.; Krzyzanowska, M. K.; Butler, M.; Cheung, C.; Signorile, M.; Gregorio, N.; Ghasemi, S.; Howell, D. Promoting Self-Management and Patient Activation Through eHealth: Protocol for a Systematic Literature Review and Meta-Analysis. JMIR Res. Protoc. 2023, 12, e38758. [Google Scholar] [CrossRef]
  10. Hailey, V.; Rojas-Garcia, A.; Kassianos, A. P. A Systematic Review of Behaviour Change Techniques Used in Interventions to Increase Physical Activity among Breast Cancer Survivors. Breast Cancer 2022, 29, 193–208. [Google Scholar] [CrossRef]
  11. Aapro, M.; Bossi, P.; Dasari, A.; Fallowfield, L.; Gascón, P.; Geller, M.; Jordan, K.; Kim, J.; Martin, K.; Porzig, S. Digital Health for Optimal Supportive Care in Oncology: Benefits, Limits, and Future Perspectives. Support. Care Cancer 2020, 28, 4589–4612. [Google Scholar] [CrossRef] [PubMed]
  12. Elkefi, S.; Trapani, D.; Ryan, S. The Role of Digital Health in Supporting Cancer Patients’ Mental Health and Psychological Well-Being for a Better Quality of Life: A Systematic Literature Review. Int. J. Med. Inf. 2023, 176, 105065. [Google Scholar] [CrossRef] [PubMed]
  13. Marthick, M.; McGregor, D.; Alison, J.; Cheema, B.; Dhillon, H.; Shaw, T. Supportive Care Interventions for People With Cancer Assisted by Digital Technology: Systematic Review. J. Med. Internet Res. 2021, 23, e24722. [Google Scholar] [CrossRef] [PubMed]
  14. Burbury, K.; Wong, Z.; Yip, D.; Thomas, H.; Brooks, P.; Gilham, L.; Piper, A.; Solo, I.; Underhill, C. Telehealth in Cancer Care: During and beyond the COVID-19 Pandemic. Intern. Med. J. 2021, 51, 125–133. [Google Scholar] [CrossRef] [PubMed]
  15. Powley, N.; Nesbitt, A.; Carr, E.; Hackett, R.; Baker, P.; Beatty, M.; Huddleston, R.; Danjoux, G. Effect of Digital Health Coaching on Self-Efficacy and Lifestyle Change. BJA Open 2022, 4, 100067. [Google Scholar] [CrossRef]
  16. Van Der Hout, A.; Holtmaat, K.; Jansen, F.; Lissenberg-Witte, B. I.; Van Uden-Kraan, C. F.; Nieuwenhuijzen, G. A. P.; Hardillo, J. A.; Baatenburg De Jong, R. J.; Tiren-Verbeet, N. L.; Sommeijer, D. W.; et al. The eHealth Self-Management Application ‘Oncokompas’ That Supports Cancer Survivors to Improve Health-Related Quality of Life and Reduce Symptoms: Which Groups Benefit Most? Acta Oncol. 2021, 60, 403–411. [Google Scholar] [CrossRef]
  17. Courneya, K. S. Physical Activity and Cancer Survivorship: A Simple Framework for a Complex Field. Exerc. Sport Sci. Rev. 2014, 42, 102–109. [Google Scholar] [CrossRef]
  18. Bruggeman, A. R.; Kamal, A. H.; LeBlanc, T. W.; Ma, J. D.; Baracos, V. E.; Roeland, E. J. Cancer Cachexia: Beyond Weight Loss. J. Oncol. Pract. 2016, 12, 1163–1171. [Google Scholar] [CrossRef]
  19. Blanchard, C. M.; Courneya, K. S.; Stein, K. Cancer Survivors’ Adherence to Lifestyle Behavior Recommendations and Associations With Health-Related Quality of Life: Results From the American Cancer Society’s SCS-II. J. Clin. Oncol. 2008, 26, 2198–2204. [Google Scholar] [CrossRef]
  20. Henry-Amar, M.; Busson, R. Does Persistent Fatigue in Survivors Relate to Cancer? Lancet Oncol. 2016, 17, 1351–1352. [Google Scholar] [CrossRef]
  21. Soto-Ruiz, N.; Escalada-Hernández, P.; Martín-Rodríguez, L. S.; Ferraz-Torres, M.; García-Vivar, C. Web-Based Personalized Intervention to Improve Quality of Life and Self-Efficacy of Long-Term Breast Cancer Survivors: Study Protocol for a Randomized Controlled Trial. Int. J. Environ. Res. Public. Health 2022, 19, 12240. [Google Scholar] [CrossRef] [PubMed]
  22. Merluzzi, T. V.; Pustejovsky, J. E.; Philip, E. J.; Sohl, S. J.; Berendsen, M.; Salsman, J. M. Interventions to Enhance Self-efficacy in Cancer Patients: A Meta-analysis of Randomized Controlled Trials. Psychooncology. 2019, 28, 1781–1790. [Google Scholar] [CrossRef] [PubMed]
  23. Ekstedt, M.; Schildmeijer, K.; Wennerberg, C.; Nilsson, L.; Wannheden, C.; Hellström, A. Enhanced Patient Activation in Cancer Care Transitions: Protocol for a Randomized Controlled Trial of a Tailored Electronic Health Intervention for Men With Prostate Cancer. JMIR Res. Protoc. 2019, 8, e11625. [Google Scholar] [CrossRef] [PubMed]
  24. Patients-centered SurvivorShIp care plan after Cancer treatments based on Big Data and Artificial Intelligence technologies. https://cordis.europa.eu/project/id/875406 (accessed 2023-06-01).
  25. Mlakar, I.; Lin, S.; Aleksandraviča, I.; Arcimoviča, K.; Eglītis, J.; Leja, M.; Salgado Barreira, Á.; Gómez, J. G.; Salgado, M.; Mata, J. G.; et al. Patients-Centered SurvivorShIp Care Plan after Cancer Treatments Based on Big Data and Artificial Intelligence Technologies (PERSIST): A Multicenter Study Protocol to Evaluate Efficacy of Digital Tools Supporting Cancer Survivors. BMC Med. Inform. Decis. Mak. 2021, 21, 243. [Google Scholar] [CrossRef]
  26. Safran, V.; Arioz, U.; Mlakar, I. HL7 FHIR Healthcare Digital System for Patient Data Incorporation & Visualization; Greece, 2022.
  27. González-Castro, L.; Chávez, M.; Duflot, P.; Bleret, V.; Martin, A. G.; Zobel, M.; Nateqi, J.; Lin, S.; Pazos-Arias, J. J.; Del Fiol, G.; et al. Machine Learning Algorithms to Predict Breast Cancer Recurrence Using Structured and Unstructured Sources from Electronic Health Records. Cancers 2023, 15, 2741. [Google Scholar] [CrossRef]
  28. Arioz, U.; Smrke, U.; Plohl, N.; Mlakar, I. Scoping Review on the Multimodal Classification of Depression and Experimental Study on Existing Multimodal Models. Diagnostics 2022, 12, 2683. [Google Scholar] [CrossRef]
  29. Rojc, M.; Ariöz, U.; Šafran, V.; Mlakar, I. Multilingual Chatbots to Collect Patient-Reported Outcomes. In Chatbots - The AI-Driven Front-Line Services for Customers; Babulak, E., Ed.; IntechOpen, 2023. [CrossRef]
  30. Lin, S.; Nateqi, J.; Weingartner-Ortner, R.; Gruarin, S.; Marling, H.; Pilgram, V.; Lagler, F. B.; Aigner, E.; Martin, A. G. An Artificial Intelligence-Based Approach for Identifying Rare Disease Patients Using Retrospective Electronic Health Records Applied for Pompe Disease. Front. Neurol. 2023, 14, 1108222. [Google Scholar] [CrossRef]
  31. Manzo, G.; Calvaresi, D.; Jimenez-del-Toro, O.; Calbimonte, J.-P.; Schumacher, M. Cohort and Trajectory Analysis in Multi-Agent Support Systems for Cancer Survivors. J. Med. Syst. 2021, 45, 109. [Google Scholar] [CrossRef]
  32. Manzo, G.; Pannatier, Y.; Duflot, P.; Kolh, P.; Chavez, M.; Bleret, V.; Calvaresi, D.; Jimenez-del-Toro, O.; Schumacher, M.; Calbimonte, J.-P. Breast Cancer Survival Analysis Agents for Clinical Decision Support. Comput. Methods Programs Biomed. 2023, 231, 107373. [Google Scholar] [CrossRef]
  33. Calvo-Almeida, S.; Serrano-Llabrés, I.; Cal-González, V. M.; Piairo, P.; Pires, L. R.; Diéguez, L.; González-Castro, L. Multichannel Fluorescence Microscopy Images CTC Detection: A Deep Learning Approach; Virtual Conference, 2024; p 030007. [CrossRef]
  34. Krasny-Pacini, A.; Evans, J. Single-Case Experimental Designs to Assess Intervention Effectiveness in Rehabilitation: A Practical Guide. Ann. Phys. Rehabil. Med. 2018, 61, 164–179. [Google Scholar] [CrossRef]
  35. Pope, Z.; Lee, J. E.; Zeng, N.; Lee, H. Y.; Gao, Z. Feasibility of Smartphone Application and Social Media Intervention on Breast Cancer Survivors’ Health Outcomes. Transl. Behav. Med. 2019, 9, 11–22. [Google Scholar] [CrossRef] [PubMed]
  36. Quintiliani, L. M.; Mann, D. M.; Puputti, M.; Quinn, E.; Bowen, D. J. Pilot and Feasibility Test of a Mobile Health-Supported Behavioral Counseling Intervention for Weight Management Among Breast Cancer Survivors. JMIR Cancer 2016, 2. [Google Scholar] [CrossRef] [PubMed]
  37. Wolf, M. S.; Chang, C.-H.; Davis, T.; Makoul, G. Development and Validation of the Communication and Attitudinal Self-Efficacy Scale for Cancer (CASE-Cancer). Patient Educ. Couns. 2005, 57, 333–341. [Google Scholar] [CrossRef] [PubMed]
  38. Brooke, J. SUS: A “Quick and Dirty” Usability Scale. Usability Eval. Ind. 1996, 189–194. [Google Scholar]
  39. Hibbard, J. H.; Stockard, J.; Mahoney, E. R.; Tusler, M. Development of the Patient Activation Measure (PAM): Conceptualizing and Measuring Activation in Patients and Consumers. Health Serv. Res. 2004, 39, 1005–1026. [Google Scholar] [CrossRef] [PubMed]
  40. Alpert, J. M.; Amin, T. B.; Zhongyue, Z.; Markham, M. J.; Murphy, M.; Bylund, C. L. Evaluating the SEND eHealth Application to Improve Patients’ Secure Message Writing. J. Cancer Educ. 2024. [Google Scholar] [CrossRef]
  41. Pomey, M.-P.; Nelea, M. I.; Normandin, L.; Vialaron, C.; Bouchard, K.; Côté, M.-A.; Duarte, M. A. R.; Ghadiri, D. P.; Fortin, I.; Charpentier, D.; et al. An Exploratory Cross-Sectional Study of the Effects of Ongoing Relationships with Accompanying Patients on Cancer Care Experience, Self-Efficacy, and Psychological Distress. BMC Cancer 2023, 23, 369. [Google Scholar] [CrossRef]
  42. Baik, S. H.; Oswald, L. B.; Buscemi, J.; Buitrago, D.; Iacobelli, F.; Perez-Tamayo, A.; Guitelman, J.; Penedo, F. J.; Yanez, B. Patterns of Use of Smartphone-Based Interventions Among Latina Breast Cancer Survivors: Secondary Analysis of a Pilot Randomized Controlled Trial. JMIR Cancer 2020, 6, e17538. [Google Scholar] [CrossRef]
  43. Hibbard, J. H.; Mahoney, E. R.; Stockard, J.; Tusler, M. Development and Testing of a Short Form of the Patient Activation Measure. Health Serv. Res. 2005, 40, 1918–1930. [Google Scholar] [CrossRef]
  44. Ng, Q. X.; Liau, M. Y. Q.; Tan, Y. Y.; Tang, A. S. P.; Ong, C.; Thumboo, J.; Lee, C. E. A Systematic Review of the Reliability and Validity of the Patient Activation Measure Tool. Healthcare 2024, 12, 1079. [Google Scholar] [CrossRef]
  45. Lewis, J. R. The System Usability Scale: Past, Present, and Future. Int. J. Human–Computer Interact. 2018, 34, 577–590. [Google Scholar] [CrossRef]
  46. Bauer, A. M.; Iles-Shih, M.; Ghomi, R. H.; Rue, T.; Grover, T.; Kincler, N.; Miller, M.; Katon, W. J. Acceptability of mHealth Augmentation of Collaborative Care: A Mixed Methods Pilot Study. Gen. Hosp. Psychiatry 2018, 51, 22–29. [Google Scholar] [CrossRef] [PubMed]
  47. Clare, L.; Wu, Y.-T.; Teale, J. C.; MacLeod, C.; Matthews, F.; Brayne, C.; Woods, B.; CFAS-Wales study team. Potentially Modifiable Lifestyle Factors, Cognitive Reserve, and Cognitive Function in Later Life: A Cross-Sectional Study. PLOS Med. 2017, 14, e1002259. [Google Scholar] [CrossRef] [PubMed]
  48. Santiago, J. A.; Potashkin, J. A. Physical Activity and Lifestyle Modifications in the Treatment of Neurodegenerative Diseases. Front. Aging Neurosci. 2023, 15, 1185671. [Google Scholar] [CrossRef]
  49. Collins, F. S.; Varmus, H. A New Initiative on Precision Medicine. N. Engl. J. Med. 2015, 372, 793–795. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.