Preprint
Review

This version is not peer-reviewed.

The Ethical Double-Edged Sword: A Framework for Dignity-by-Design in Gerontological Assistive Technologies

Submitted: 29 December 2025

Posted: 30 December 2025


Abstract
The institutional drive to deploy digital assistive technologies—from IoT monitoring to AI companions—as a solution to the ageing care crisis functions as an ethical double-edged sword. This article argues that beyond isolated risks, these technologies introduce a systemic tension where gains in safety and efficiency often come at the cost of autonomy, human connection, and equity. We propose a critical framework that diagnoses four interconnected dimensions of this tension: (1) the erosion of privacy and autonomy through pervasive surveillance; (2) the risk of dehumanization in high-tech, low-touch interactions; (3) the "digital grey divide" as a social determinant of health; and (4) the perpetuation of "coded ageism" through algorithmic bias. To bridge the gap between ethical principle and practice, the framework translates this diagnosis into a practical roadmap for "Dignity-by-Design." It operationalizes person-centred care through three actionable shifts: moving from compliance to commitment, replacing static consent with dynamic engagement, and establishing the lived experience of older adults and caregivers as a core design standard via participatory action research. Ultimately, this work provides a critical tool for researchers, developers, and policymakers to guide the ethically-aligned implementation of technologies that truly enhance autonomy, foster trust, and uphold dignity in geriatric care.

1. Introduction

The global imperative to address demographic ageing has transcended public health policy to become a driver of industrial standardization. Governments and international bodies are aggressively promoting "gerontechnology" as the default solution to the care crisis, evidenced by the push for safety standards like ISO 13482:2014 for personal care robots and the strategic expansion of the "Silver Economy" in nations like China, following the State Council’s 2024 directives on developing the elderly care industry [1]. However, this institutional drive toward automation faces a critical paradox. As noted by Ferreira, Latorre, and Nieto-Escamez [2], the integration of these technologies functions as a "double-edged sword": the same systems designed to ensure physical safety can simultaneously erode personal autonomy and reshape the caregiver-recipient relationship. We are witnessing a tension where the "logic of efficiency" increasingly clashes with the "logic of care," creating environments where older adults may be physically safer but ethically compromised.
Given the pressing nature of this tension, the current academic literature remains insufficient to guide a truly ethical implementation. Existing reviews often suffer from thematic isolation, analyzing privacy risks [3] or algorithmic accuracy [4] in silos, without addressing how these factors reinforce one another. Furthermore, a significant "implementation gap" persists: while principles like "Ethics by Design" are theoretically lauded, they are rarely operationalized in real-world clinical environments, often resulting in mere "ethics washing" where checklists substitute for meaningful stakeholder engagement [5]. This technocentric view frequently ignores how structural biases—specifically ageism—are embedded into the design and deployment of digital health solutions. This leads to systems that categorize the decline of older adults as an inevitable biological fate rather than a condition that can be meaningfully managed [6,7].
This article addresses these deficiencies by proposing a Critical Framework for Ethically-Aligned Assistive Technologies. Unlike previous studies that catalogue risks individually, we analyze the systemic interconnection of four fundamental pillars: (1) Privacy & Autonomy, examining how the Internet of Medical Things (IoMT) surveillance redefines the concept of "home"; (2) Dehumanization, exploring the emotional cost of replacing human touch with automated interaction; (3) The Digital Grey Divide, arguing that unequal access to technology is now a determinant of health; and (4) Algorithmic Bias, demonstrating how data gaps perpetuate ageist healthcare outcomes. Consequently, our framework culminates in a practical roadmap that seeks to bridge the noted implementation gap, translating participatory principles into actionable guidance for developers and policymakers to restore human dignity to the center of the technological equation.

2. The Four Dimensions of the Ethical Double-Edged Sword

To understand the full impact of digitalization in geriatrics, we must move beyond isolated technical metrics. The following sections analyze four interconnected dimensions where the promise of efficiency frequently collides with the imperative of human dignity.

2.1. Privacy and Autonomy: The Surveillance Edge of the Care Double-Edged Sword

The first dimension of this conflict emerges from the very mechanism that promises safety. The Internet of Medical Things (IoMT) and Ambient Assisted Living (AAL) ecosystems represent the "efficient edge" of the sword, offering continuous protection through environmental sensors, wearables, and GPS tracking. However, the operational logic of these systems is inherently extractive: to ensure physical safety, they require the granular, non-stop capture of intimate behavioral data [8]. This creates a paradox where the technology designed to enable "ageing in place" simultaneously dismantles the spatial and psychological privacy that defines the home as a sanctuary, effectively transforming domestic life into a data-driven clinical environment [7].
Beneath this promise of safety lies an architecture of pervasive observation that fundamentally transforms the living space. The deployment of these technologies effectively creates what has been critically termed, following Holmes [9], a panoptic structure within care settings, where the structural possibility of constant observation compels self-regulation. Drawing on Foucault’s analysis of disciplinary surveillance, this paradigm establishes a profound power asymmetry: the older adult, aware of being watched but unable to see the observers, internalizes the "gaze" of the sensors. This results in a "chilling effect," where residents may feel discouraged from spontaneous behavior for fear of being flagged as abnormal by the continuous monitoring system, thus modifying their routines to conform to perceived expectations. This erosion of autonomy is intrinsically linked to pervasive privacy concerns—the foremost issue identified by older adults in smart home surveillance, which undermines their sense of security and control within their own homes [10].
Consequently, the pillar of autonomy is compromised, often under the guise of benevolence. This erosion manifests not just theoretically but in the lived experience of older adults. Recent qualitative research on fall detection cameras—a quintessential panoptic technology in the home—reveals that while older adults acknowledge their functional need for care, they simultaneously express significant anxiety about losing control over the surveillance apparatus itself. This anxiety, a direct psychological correlate of the disciplinary "gaze," frequently leads to active resistance against adoption [11]. The tension is epitomized by the "fiction of informed consent" in digital care. The paradigm of continuous ambient surveillance is fundamentally at odds with the reality of fluctuating decision-making capacity in older and cognitively impaired populations, making static, one-time consent processes ethically problematic [12]. Instead of safeguarding autonomy, these processes often degenerate into a procedural formality that serves institutional liability more than genuine patient agency [10]. Empirical accounts reveal this fiction clearly: individuals report signing documents "out of desperation" to secure access to care, demonstrating how the consent mechanism itself can become coercive [12]. Assent to monitoring is therefore often not a free choice but a coerced compromise enforced by a "Safety-Autonomy Grid," where the only alternative to being watched is being labeled a risk and potentially facing abandonment.
Ultimately, this dimension reveals that privacy violations in geriatric care are not merely technical accidents, but systemic features of the current digital model. By prioritizing biological survival through surveillance, we risk inflicting a subtle "moral injury," stripping the older adult of the autonomy essential for a dignified life. This erosion of autonomy within the private sphere paves the way for a second, equally profound ethical harm: the dehumanization that can occur when automated interaction is prioritized over human contact.

2.2. The High-Tech, Low-Touch Paradox: Dehumanization and Emotional Resonance

In response to the so-called "epidemic of loneliness" among older adults, the market has seen a surge in the development of Socially Assistive Robots (SARs) and AI companions. These devices frequently incorporate biomimetic (anthropomorphic or zoomorphic) features designed to elicit empathetic responses and trigger caregiving instincts. However, their therapeutic efficacy often relies on a mechanism of robotic deception. As analyzed in the specialized literature, these interactions can depend on the user—often cognitively vulnerable—attributing sentience or reciprocal emotional capacity to the machine, a process driven by anthropomorphism. This creates a reality disjuncture that poses a severe ethical dilemma: is it permissible to ground the emotional well-being of older adults in a "false reality," or does this therapeutic deception fundamentally undermine their right to authentic, unmediated social interaction [13]?
Even when overt deception is not the primary intent, these technologies are fundamentally limited by their capacity for "synthetic empathy." Scholars argue that while AI agents can perform sentiment analysis to simulate concern, they face an in-principle obstacle to achieving the emotional and motivational empathy that arises from a shared subjective experience, which is required for genuine reciprocity [14]. AI companions, lacking any lived experience, can only perform emotional responses rather than experience them, making authentic mutual exchange impossible [15]. This gap is empirically demonstrated: studies show individuals report significantly lower empathy for stories attributed to AI narrators compared to human ones, underscoring the tangible deficit in emotional resonance [16].
The widespread deployment of these substitutes risks creating the core "High-Tech, Low-Touch" paradox. By delegating the labor of companionship to machines, care systems may inadvertently increase social isolation, treating psychological needs as "problems to be managed" by automation. This instrumentalization complements the surveillance logic analyzed earlier: if the first dimension reduces the home to a data field, this second dimension reduces companionship to a behavioral management task. Ultimately, it validates efficiency at the cost of profound human disconnection, transforming the older adult from a subject of care into an object of maintenance.

2.3. The Digital Grey Divide as a Social Determinant of Health

The third dimension of the double-edged sword reveals how digitalization can institutionalize inequity. While the previous sections analyzed the risks faced by those within the digital system, the "Digital Grey Divide" functions as an active mechanism of exclusion that systematically denies access to the modern healthcare apparatus. In an era where essential services—from telemedicine appointments to electronic prescriptions—are "digital by default," the lack of technological access has metastasized from a mere inconvenience into a critical "super social determinant of health" [17].
This phenomenon exemplifies the 'digital inverse care law': the populations with the highest burden of disease and greatest need for care are precisely those least able to navigate the digital pathways now required to receive it. This structural mismatch is well-documented, particularly for telemedicine and digital health services [18]. As analyzed in a recent scoping review [19], digital exclusion in older adults is a prevalent, multi-causal phenomenon intrinsically linked to healthcare digital equity and one that predisposes individuals to broader social exclusion. This creates a compounding effect where older adults face a 'double burden of exclusion': they are simultaneously isolated from societal participation due to physical or social factors and barred from essential health resources due to the digitization of access points.
Crucially, this divide is not a passive state but an active generator of pathology. A multi-country longitudinal study published in Health Data Science indicates a robust association between digital exclusion and the onset of depressive symptoms in older adults. The findings suggest that for the unconnected elderly, the digital barrier acts as an amplifier of loneliness, independent of other socioeconomic factors [20]. By designing systems that require digital literacy as a prerequisite for care, institutions are effectively enacting a policy of "structural ageism," rendering the non-digital population invisible.
This dimension of exclusion fundamentally shapes the ethical landscape: it determines who is subjected to the surveillance and dehumanization risks analyzed previously, and who is omitted entirely from the datasets that train the AI models we examine next.

2.4. Coded Ageism: Algorithmic Bias as a Mirror of Exclusion

The final dimension of the ethical double-edged sword directly challenges the pervasive myth of technological neutrality. While Artificial Intelligence is often heralded as an objective arbiter in healthcare, evidence reveals it frequently functions as a "mirror of exclusion," encoding and amplifying structural ageism into clinical practice. This "Digital Ageism" arises when AI models are trained on datasets that systematically underrepresent the physiological, cognitive, and social complexity of older adults, creating significant "data deserts" around the geriatric experience [21]. These representational gaps are not neutral voids; they translate directly into errors in diagnosis and risk management when algorithms are deployed in real-world clinical settings.
The clinical consequences are stark. A recent scoping review of machine learning models applied to multidimensional geriatric assessment data—precisely the kind of tools used for complex care—found a critical methodological flaw: while these models show promise, not a single reviewed study had undergone external validation. This absence severely questions their accuracy and reliability when deployed in diverse, real-world clinical settings [22]. A key failure mechanism is the reliance on physiological baselines derived from younger, healthier cohorts. This leads algorithms to pathologize normal ageing or, conversely, to dismiss genuine pathology as "just old age." The result is a pattern of disparate outcomes: older adults may face higher rates of false negatives in fall prediction systems or receive inaccurate—and often more conservative—risk stratifications in critical care, directly impacting their treatment pathways [4].
Ultimately, this dimension completes a pernicious "loop of inequity" that crystallizes the systemic nature of the double-edged sword. The Digital Grey Divide (Section 2.3) prevents many older adults from contributing data to digital health ecosystems. In turn, the resulting biased models deliver suboptimal care, which can reinforce the false narrative that the elderly are "too complex" for precision medicine. Far from being a neutral tool, the algorithm thus becomes an agent of "Coded Ageism," automating and legitimizing the marginalization of the very demographic that stands to benefit most from technological innovation [21]. This closed loop demonstrates that the four dimensions analyzed are not isolated risks, but interconnected facets of a single, flawed paradigm.

3. Bridging the Implementation Gap: A Framework for Ethical Alignment

The diagnosis presented in this article reveals a systemic failure: despite the proliferation of ethical guidelines, the digital experience of older adults remains characterized by surveillance, exclusion, and bias. To bridge this "implementation gap," we must operationalize the WHO’s framework of Person-Centred Care within the engineering lifecycle itself. Ethical alignment cannot be a retrospective "patch"; it must be the foundational architecture of gerontechnology. We propose three strategic shifts to achieve this:

3.1. Beyond Checklists: From Compliance to Commitment

Current ethical audits often function as bureaucratic "ethics washing," where abstract principles like "fairness" serve as liability shields rather than genuine design goals. To move from compliance to commitment, developers must adopt human-centered ethical frameworks as non-negotiable standards. This shift is reflected in industry calls for technology to be a 'critical enabler' of care that truly 'amplifies, not replaces, compassionate care' [23]. In this model, ethical values—such as privacy or dignity—are treated as technical requirements (KPIs) equivalent to latency or battery life. For instance, leading gerontechnology research quantifies 'fault-tolerant' design—which respects user autonomy and error—as the highest-priority design factor, demonstrating how ethical values can be systematically integrated [24]. Consequently, if a fall-detection algorithm achieves high accuracy but requires pervasive surveillance that violates privacy, it must be considered a failed product, not a technical success with ethical side-effects.
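The commitment model described above can be made concrete in evaluation logic. The following is a minimal illustrative sketch, in which ethical values are scored as hard requirements alongside technical metrics; every KPI name and threshold here is a hypothetical placeholder, not a validated standard from the cited literature.

```python
# Sketch: ethical values as conjunctive KPIs, on a par with technical
# metrics. All names and thresholds are hypothetical illustrations.

TECHNICAL_KPIS = {"detection_accuracy": 0.95, "battery_hours": 24}
DIGNITY_KPIS = {"privacy_score": 0.8, "perceived_autonomy": 0.7}

def product_passes(metrics: dict) -> bool:
    """A product fails if ANY KPI, technical or ethical, is unmet.

    High accuracy cannot compensate for a privacy violation: the two
    dictionaries are joint requirements, not trade-offs.
    """
    thresholds = {**TECHNICAL_KPIS, **DIGNITY_KPIS}
    return all(metrics.get(name, 0) >= minimum
               for name, minimum in thresholds.items())

# A fall detector with excellent accuracy but invasive surveillance
# (low privacy score) counts as a failed product, not a technical
# success with ethical side-effects.
invasive_detector = {"detection_accuracy": 0.98, "battery_hours": 36,
                     "privacy_score": 0.4, "perceived_autonomy": 0.5}
assert not product_passes(invasive_detector)
```

The design point is that the dignity metrics gate the release decision itself, rather than appearing in a separate, advisory ethics report.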

3.2. From "Informed Consent" to "Dynamic Engagement"

The traditional model of informed consent, a one-time signature on a static document, represents a "legal fiction" that fails in digital health ecosystems, particularly for older adults (as argued in section 2.1). For users with fluctuating cognitive capacity, such as those with early-stage dementia, this static model is not just inadequate but ethically obsolete, as it denies individuals the opportunity to remain engaged agents. To bridge this implementation gap, we advocate for a paradigm shift towards Dynamic Consent (DC). DC is an interactive, digital model that transforms consent into a continuous, supported dialogue through flexible interfaces, allowing users to review, manage, and adjust their data-sharing preferences over time.
This model is directly validated by recent research and protocols. A 2024 study on a dynamic consent application found that most participants could successfully manage personalized options for their health data, highlighting the model's usability and potential to safeguard autonomy [25]. Furthermore, a longitudinal study demonstrated the successful implementation of DC over a decade, showing it fosters a trust-based relationship and accommodates participants' evolving needs [26]. As validated by these protocols, these systems can adapt to the user’s literacy level, transforming consent from a legal waiver into a continuous, supported dialogue that respects the user's changing agency. This is especially critical in dementia care, where health literacy can be compromised [27], and ethical engagement requires person-centered approaches that support the individual's holistic personhood throughout the research and care process [28].
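The contrast between a one-time signature and a continuous dialogue can be sketched as a data model. The following is an illustrative outline of a Dynamic Consent record under our own assumptions; the category names and interface are hypothetical and are not drawn from the cited protocols.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch: consent as a revisable, timestamped dialogue rather than a
# static waiver. All category names and methods are hypothetical.

@dataclass
class ConsentRecord:
    user_id: str
    # Current preference per data category, e.g. {"fall_sensor": True}
    preferences: dict = field(default_factory=dict)
    # Audit trail of every change, so evolving decisions remain visible
    # to the user and their supporters instead of being overwritten.
    history: list = field(default_factory=list)

    def update(self, category: str, granted: bool, note: str = "") -> None:
        self.preferences[category] = granted
        self.history.append(
            (datetime.now(timezone.utc).isoformat(), category, granted, note))

    def is_granted(self, category: str) -> bool:
        # Default-deny: data flows only after an explicit, current "yes".
        return self.preferences.get(category, False)

record = ConsentRecord("resident-42")
record.update("fall_sensor", True, "agreed after supported demonstration")
record.update("location_tracking", True)
record.update("location_tracking", False, "revoked at quarterly review")
assert record.is_granted("fall_sensor")
assert not record.is_granted("location_tracking")
```

The revocation in the example is the essential feature: consent given earlier can be withdrawn later without erasing the record of the earlier decision, which is what distinguishes a dialogue from a waiver.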

3.3. The "Lived Experience" as a Design Standard

To effectively counter the "High-Tech, Low-Touch" paradox and dismantle "Coded Ageism," the gerontechnology industry must undergo a fundamental paradigm shift: from designing for older adults to designing with them. This necessity is underscored by a critical gap between the proliferation of digital health tools and their meaningful impact. A systematic review of 171 studies on co-designing digital health interventions reveals a persistent, systemic challenge: while participatory co-design is widely acknowledged as a key advancement, it simultaneously remains a core obstacle, with fragmented approaches and a lack of consensus directly hindering the efficacy and long-term uptake of solutions [29]. This indicates that many technologies fail to resonate because their development processes are not sufficiently rooted in the lived realities of their intended users.
To bridge this implementation gap, Participatory Action Research (PAR) must become the non-negotiable industry standard for validation. A critical interpretive synthesis confirms that although PAR is recognized as vital for equitable engagement with older adults, its full potential is often unrealized, as they are seldom positioned as genuine, decision-making partners in the research process [30]. Moving from consultation to true partnership requires a deliberate methodology. A realist review elucidates that successful co-creation depends on actionable contexts such as facilitating strong relationships of trust, adopting flexible and creative research protocols, and ensuring a genuine balance of power among all stakeholders [31]. These mechanisms ensure that design is a collaborative act, not a tokenistic gesture.
This principled approach is operationalized in specific research. A 2025 case study on a digital intervention for loneliness demonstrated that iterative co-design—engaging older adults as active partners across all phases of development—is critical for ensuring usability, cultural relevance, and adoption [32]. This alignment with deep, person-centred methodology is now being codified in policy. Australia's Aged Care Data and Digital Strategy 2024–2029 establishes being "Person-centred" as its foundational principle, mandating that digital initiatives must be shaped by and directly benefit older people [33]. Therefore, technology can only be truly person-centred if the person has actively shaped the tool intended to serve them.

4. Discussion

4.1. Policy Implications: From "Safety Standards" to "Dignity Standards"

Current regulatory pathways for digital health, such as Europe’s Medical Device Regulation (MDR), prioritize clinical safety, data security, and functional efficacy. A pioneering attempt to go further is Germany’s Digital Health Act (Digitale-Versorgung-Gesetz or DVG) framework, which mandates proof of a "positive healthcare effect" (positive Versorgungseffekte) for reimbursement [34]. However, our analysis reveals this is still insufficient for cognitive-assistive technologies. Although the DVG framework recognizes 'patient sovereignty' as a potential structural benefit, it frames it as an optional pathway rather than a mandatory baseline. The assessment therefore prioritizes general medical or structural improvements—such as symptom reduction or care efficiency—without systematically requiring evidence that the technology upholds user autonomy or avoids the 'dehumanization' risks diagnosed in this article. Consequently, a device that effectively monitors an older adult while making them feel controlled or isolated could still qualify for public funding. The necessity of this shift is supported by emerging ethical frameworks. The EU's principles for Trustworthy AI, for instance, already include respect for human autonomy and the prevention of harm beyond the physical [35]. However, these principles have not yet been translated into binding funding or assessment criteria for medical devices, leaving a gap between ethics and policy.
To bridge this ethical implementation gap, we propose that policymakers evolve from funding functional efficiency to incentivizing "Dignity-by-Design." This requires expanding reimbursement criteria to include dignity-related key performance indicators (KPIs). Future regulations could make funding contingent on providing evidence of participatory co-design with older adults, the implementation of dynamic consent models for users with fluctuating capacity, and the demonstration of positive impacts on perceived autonomy. This shift would align economic incentives with ethical imperatives, motivating manufacturers to adopt the framework outlined in Section 3 as a condition of market access.
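The contingent-funding logic proposed above can be expressed as a simple gating check. This is a speculative sketch of how a reimbursement pipeline might enumerate missing dignity evidence; the criterion names are our own illustrative labels and do not correspond to any existing regulation.

```python
# Hypothetical sketch of "Dignity-by-Design" reimbursement gating:
# eligibility requires evidence for each dignity criterion in addition
# to, not instead of, clinical efficacy. Criterion names are illustrative.

DIGNITY_CRITERIA = (
    "participatory_codesign",   # older adults shaped the design
    "dynamic_consent",          # preferences adjustable over time
    "perceived_autonomy_gain",  # measured positive autonomy impact
)

def missing_evidence(submission: dict) -> list:
    """Return the dignity criteria a submission fails to evidence."""
    return [c for c in DIGNITY_CRITERIA if not submission.get(c)]

submission = {"clinical_efficacy": True,
              "participatory_codesign": True,
              "dynamic_consent": False}
gaps = missing_evidence(submission)
# Clinically effective but dignity-incomplete: the submission cannot
# be reimbursed until the listed evidence gaps are closed.
assert gaps == ["dynamic_consent", "perceived_autonomy_gain"]
```

Returning the list of gaps, rather than a bare pass/fail flag, mirrors the article's argument that the point of such criteria is to steer manufacturers toward remediation, not merely to reject applications.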

4.2. Industrial Implications: The "Trust Economy" and Technology Abandonment

For the technology industry, the ethical double-edged sword represents a significant commercial risk, manifested most clearly in technology abandonment. Historically, abandonment rates for assistive devices have consistently ranged between 30% and 40% [36,37]. This challenge persists in the digital era: a recent systematic review of assistive technology for dementia care identifies that adoption is critically hindered by 'ethical' and 'emotional' concerns—specifically privacy, stigma, and perceived loss of autonomy—affecting both patients and caregivers [38]. These domain-specific barriers exemplify the broader phenomenon our framework theorizes: 'ethical dissonance.' Consequently, users intuitively reject systems they perceive as intrusive (a threat to autonomy) or emotionally hollow (failing to address stigma or anxiety), leading to abandonment that is often misattributed merely to low digital literacy.
In the burgeoning global 'Silver Economy'—a market segment focused on aging populations projected to reach €5.7 trillion in the EU by 2025 [39]—trust is the ultimate currency. However, the persistent gap in co-design means many existing solutions fail to resonate, becoming tools of necessity rather than choice [29]. Adopting the roadmap in Section 3 is, therefore, not merely a compliance exercise but a core commercial strategy. Companies that proactively implement Value-Sensitive Design—a framework proven to align technology with what end-users truly value [40]—can differentiate themselves by creating products that older adults choose to use, rather than products they feel obligated to accept. This shift, which aligns with policy visions of technology enhancing person-centred care [33], turns the industrial paradigm from selling isolated monitoring hardware to offering integrated systems that demonstrably enhance well-being, transforming dignity from a 'cost center' into a key value proposition.

4.3. Limitations and Future Research

While the proposed framework provides a critical roadmap, its implementation faces practical hurdles. First, Dynamic Consent in cases of advanced dementia remains a legal and operational frontier. Future research must develop 'relational assent' protocols—building on the established ethical imperative to seek assent from adults with decisional incapacity [41]—that balance real-time behavioral cues (enabled by emerging monitoring technologies) with pre-defined care preferences, which are known to be both strong and often overlooked in people with dementia [42]. Such protocols would operationalize the shared decision-making processes that are essential yet challenging in dementia care [43].
Second, while Participatory Action Research (PAR) is ideal for deep engagement, its resource-intensive nature often clashes with the fast-paced reality of industrial environments. To bridge this gap, we propose a focused research agenda on "Ethics-in-Agile" methodologies. This agenda should develop scalable, lean co-design tools that embed stakeholder values into iterative development cycles without sacrificing authenticity. This approach is supported by emerging frameworks like Agile Worth-Oriented Systems Engineering (AWOSE), which provides a concrete process model for integrating ethical analysis, through instruments like Worth Maps, into agile sprints [44]. The inherent structures of agile development, such as flat hierarchies and iterative reflection, are in fact argued to be conducive to embedding continuous ethical deliberation [45]. Therefore, the goal is to translate the core commitment of PAR into practical 'reflection scaffolding' [46] that fits within the rhythm of industrial development.
Finally, to compel systemic change, the abstract harms of "coded ageism" must be quantitatively demonstrated. Emerging models explicitly identify ageism as a documented barrier to digital engagement [47], yet more granular empirical data are needed. Longitudinal studies are urgently required to correlate ethically aligned design with hard outcomes, such as sustained user retention (digital health studies already report a median completion rate of only 48% [48]) and trust metrics, a gap that recent systematic reviews also highlight [49]. This empirical substrate is essential for advocating the regulatory and reimbursement reforms suggested in this discussion.

5. Conclusions

This article has argued that the rapid deployment of cognitive-assistive technologies for older adults represents a profound ethical double-edged sword, creating a system where surveillance erodes autonomy, automated interaction risks dehumanization, access divides determine care, and algorithms codify bias. In response, we have proposed a critical framework that operationalizes ethical principles into a concrete roadmap for ethically-aligned innovation. By advocating for a shift from static consent to Dynamic Consent and from paternalistic design to Participatory Action Research (PAR), we provide a pathway to move beyond compliance toward a genuine 'Dignity-by-Design' paradigm—a paradigm realized through ethical commitment, dynamic engagement, and genuine co-design.
The imperative of this shift is both practical and profound. For policymakers, it necessitates redefining the metrics of success, making evidence of preserved autonomy and identity a prerequisite for public funding. For industry, it reveals that overcoming 'ethical dissonance' is the foundation of commercial sustainability in the Silver Economy, where user trust becomes the ultimate driver of adoption. While some may view this ethical alignment as a constraint, it is in fact the precondition for sustainable innovation in an aging world.
Ultimately, the future of gerontechnology hinges on designing systems that honor aging as a human experience. We call upon researchers, developers, and regulators to adopt this new standard of accountability, one measured not in data points alone, but in the sustained dignity and empowered agency of older adults. As we face a global demographic shift, integrating dignity into the digital fabric of care is no longer an optional ethical consideration—it is a political, economic, and moral necessity.

Declaration of Generative AI and AI-Assisted Technologies in the Writing Process

During the preparation of this work the authors used Gemini (Google) in order to refine the grammatical accuracy and readability of the manuscript. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Author Contributions

Conceptualization, F.N.-E. and C.F.; writing—review and editing, F.N.-E. and C.F.; project administration, F.N.-E.; funding acquisition, F.N.-E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the University of Almería through the Proyectos de Fortalecimiento de Centros de Investigación (Grant number P_FORT_CENTROS_2023/04) and UALtransfierE-2023, project number TRFE-SI-2023/008. Both grants are part of the Research and Transfer Plan of the University of Almeria, funded by “Consejería de Universidad, Investigación e Innovación de la Junta de Andalucía” within the program 54A “Scientific Research and Innovation” and by the ERDF Andalusia 2021-2027 Program, within the Specific Objective RSO1.1 "Developing and improving research and innovation capabilities and assimilating advanced technologies".

Acknowledgments

The authors would like to thank the University of Almería and the Federal Institute of Education, Science, and Technology of Rio Grande do Sul (IFRS) for the institutional support provided during this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAL Ambient Assisted Living
AI Artificial Intelligence
AWOSE Agile Worth-Oriented Systems Engineering
DC Dynamic Consent
DVG Digitale-Versorgung-Gesetz (Digital Health Act)
EU European Union
IoMT Internet of Medical Things
IoT Internet of Things
ISO International Organization for Standardization
KPI Key Performance Indicator
MDR Medical Device Regulation
PAR Participatory Action Research
SAR Socially Assistive Robot
WHO World Health Organization

References

  1. ISO 13482:2014 Robots and Robotic Devices – Safety Requirements for Personal Care Robots. Available online: https://www.iso.org/standard/53820.html (accessed on 16 December 2025).
  2. Ferreira, C.P.; Latorre, P.; Nieto-Escamez, F. Integration of Innovative Technologies in Geriatric Care: Balancing Efficiency and Human Dignity. Aging Advances 2025, 2, 121. [CrossRef]
  3. Nankya, M.; Mugisa, A.; Usman, Y.; Upadhyay, A.; Chataut, R. Security and Privacy in E-Health Systems: A Review of AI and Machine Learning Techniques. IEEE Access 2024, 12, 148796–148816. [CrossRef]
  4. Nazer, L.H.; Zatarah, R.; Waldrip, S.; Ke, J.X.C.; Moukheiber, M.; Khanna, A.K.; Hicklen, R.S.; Moukheiber, L.; Moukheiber, D.; Ma, H.; et al. Bias in Artificial Intelligence Algorithms and Recommendations for Mitigation. PLOS Digital Health 2023, 2, e0000278. [CrossRef]
  5. Mittelstadt, B. Principles Alone Cannot Guarantee Ethical AI. Nat Mach Intell 2019, 1, 501–507. [CrossRef]
  6. Rosales, A.; Fernández-Ardèvol, M. Ageism in the Era of Digital Platforms. Convergence 2020, 26, 1074–1087. [CrossRef]
  7. Peine, A.; Neven, L. The Co-Constitution of Ageing and Technology – a Model and Agenda. Ageing & Society 2021, 41, 2845–2866. [CrossRef]
  8. Mittelstadt, B. Ethics of the Health-Related Internet of Things: A Narrative Review. Ethics Inf Technol 2017, 19, 157–175. [CrossRef]
  9. Holmes, D. From Iron Gaze to Nursing Care: Mental Health Nursing in the Era of Panopticism. J Psychiatr Ment Health Nurs 2001, 8, 7–15. [CrossRef]
  10. Campbell, J.P.; Buchan, J.; Chu, C.H.; Bianchi, A.; Hoey, J.; Khan, S.S. User Perception of Smart Home Surveillance Among Adults Aged 50 Years and Older: Scoping Review. JMIR mHealth and uHealth 2024, 12. [CrossRef]
  11. Zhang, W.; Yin, J.; Chan, K.I.; Sun, T.; Jin, T.; Jeung, J.; Gong, J. Beyond Digital Privacy: Uncovering Deeper Attitudes toward Privacy in Cameras among Older Adults. International Journal of Human-Computer Studies 2024, 192, 103345. [CrossRef]
  12. Diaz, A.; Birck, C.; Bradshaw, A.; Georges, J.; Lamirel, D.; Moradi-Bachiller, S.; Gove, D. Informed Consent in Dementia Research: How Public Involvement Can Contribute to Addressing “Old” and “New” Challenges. Front. Dement. 2025, 4. [CrossRef]
  13. Boch, A.; Thomas, B.R. Human-Robot Dynamics: A Psychological Insight into the Ethics of Social Robotics. International Journal of Ethics and Systems 2024, 41, 101–141. [CrossRef]
  14. Montemayor, C.; Halpern, J.; Fairweather, A. In Principle Obstacles for Empathic AI: Why We Can’t Replace Human Empathy in Healthcare. AI & Soc 2022, 37, 1353–1359. [CrossRef]
  15. Mlonyeni, P.M.T. Personal AI, Deception, and the Problem of Emotional Bubbles. AI & Soc 2025, 40, 1927–1938. [CrossRef]
  16. Shen, J.; DiPaola, D.; Ali, S.; Sap, M.; Park, H.W.; Breazeal, C. Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study. JMIR Mental Health 2024, 11, e62679. [CrossRef]
  17. Hanebutt, R.; Mohyuddin, H. The Digital Domain: A “Super” Social Determinant of Health. Prim Care 2023, 50, 657–670. [CrossRef]
  18. Davies, A.R.; Honeyman, M.; Gann, B. Addressing the Digital Inverse Care Law in the Time of COVID-19: Potential for Digital Technology to Exacerbate or Mitigate Health Inequalities. J Med Internet Res 2021, 23, e21726. [CrossRef]
  19. Ge, H.; Li, J.; Hu, H.; Feng, T.; Wu, X. Digital Exclusion in Older Adults: A Scoping Review. International Journal of Nursing Studies 2025, 168, 105082. [CrossRef]
  20. Wang, J.; Lu, X.; Ngai, S.B.C.; Xie, L.; Liu, X.; Yao, Y.; Jin, Y. Digital Exclusion and Depressive Symptoms among Older People: Findings from Five Aging Cohort Studies across 24 Countries. Health Data Sci 2025, 5, 0218. [CrossRef]
  21. Chu, C.; Donato-Woodger, S.; Khan, S.S.; Shi, T.; Leslie, K.; Abbasgholizadeh-Rahimi, S.; Nyrup, R.; Grenier, A. Strategies to Mitigate Age-Related Bias in Machine Learning: Scoping Review. JMIR Aging 2024, 7, e53564. [CrossRef]
  22. Mangio, A.M.; Miller, C.; Jayan, L.; Ben-Dekhil, S.; Dao-Tran, T.-H.; Dendere, R. Machine Learning in Geriatric Care: A Scoping Review of Models Using Multidimensional Assessment Data. Int J Med Inform 2025, 207, 106181. [CrossRef]
  23. Montgomery, A. ‘We’re Back, Baby’: 6 Senior Living Leaders Predict Memory Care’s Future. Senior Housing News 2025.
  24. Zheng, W.-Q.; Cheung, S.-M.; Wang, X. Developing Age-Friendly Spaces through a Gerontechnological Lens: A Systemic Framework Based on FDM-DANP Analysis. Front Med (Lausanne) 2025, 12, 1681486. [CrossRef]
  25. Lee, A.R.; Koo, D.; Kim, I.K.; Lee, E.; Yoo, S.; Lee, H.-Y. Opportunities and Challenges of a Dynamic Consent-Based Application: Personalized Options for Personal Health Data Sharing and Utilization. BMC Med Ethics 2024, 25, 92. [CrossRef]
  26. Mascalzoni, D.; Melotti, R.; Pattaro, C.; Pramstaller, P.P.; Gögele, M.; De Grandi, A.; Biasiotto, R. Ten Years of Dynamic Consent in the CHRIS Study: Informed Consent as a Dynamic Process. Eur J Hum Genet 2022, 30, 1391–1397. [CrossRef]
  27. Lo, R.Y. Uncertainty and Health Literacy in Dementia Care. Tzu Chi Med J 2019, 32, 14–18. [CrossRef]
  28. Molony, S.L.; Fazio, S.; Sanchez, R.; Montminy, J.; Rulison, M.; McGuire, Rev.D.; Feinn, R.; Jeon, S.; Montesano, R.; Prophater, L.; et al. Applying Person-Centered Research Ethics in the Design of Dementia-Specific Measures. J Aging Stud 2023, 65, 101139. [CrossRef]
  29. Duffy, A.; Boroumandzad, N.; Sherman, A.L.; Christie, G.; Riadi, I.; Moreno, S. Examining Challenges to Co-Design Digital Health Interventions With End Users: Systematic Review. Journal of Medical Internet Research 2025, 27, e50178. [CrossRef]
  30. Corrado, A.M.; Benjamin-Thomas, T.E.; McGrath, C.; Hand, C.; Laliberte Rudman, D. Participatory Action Research With Older Adults: A Critical Interpretive Synthesis. Gerontologist 2020, 60, e413–e427. [CrossRef]
  31. Høiseth, M.; Nakrem, S. Co-Creation in Digital Health Services for Older People: A Realist Review of Enabling Factors and Barriers. Design for Health 2024, 8, 390–408. [CrossRef]
  32. Eklund, C.; Zander, V.; Gusdal, A.K.; Åkerlind, C.; Wågert, P. von H. Co-Designing a Digital Solution for Decreasing Loneliness and Social Isolation Among Older People in Sweden: Explorative Study. JMIR Formative Research 2025, 9, e78213. [CrossRef]
  33. Australian Government Department of Health, Disability and Ageing. About the Aged Care Data and Digital Strategy. Available online: https://www.health.gov.au/our-work/aged-care-data-and-digital-strategy/about-the-aged-care-data-and-digital-strategy (accessed on 21 December 2025).
  34. Gerke, S.; Stern, A.D.; Minssen, T. Germany’s Digital Health Reforms in the COVID-19 Era: Lessons and Opportunities for Other Countries. npj Digit. Med. 2020, 3, 94. [CrossRef]
  35. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019.
  36. Phillips, B.; Zhao, H. Predictors of Assistive Technology Abandonment. Assistive technology : the official journal of RESNA 1993, 5, 36–45. [CrossRef]
  37. Federici, S.; Meloni, F.; Borsci, S. The Abandonment of Assistive Technology in Italy: A Survey of Users of the National Health Service. European journal of physical and rehabilitation medicine 2016, 52, 516–526. [CrossRef]
  38. Boyle, L.D.; Husebo, B.S.; Vislapuu, M. Promotors and Barriers to the Implementation and Adoption of Assistive Technology and Telecare for People with Dementia and Their Caregivers: A Systematic Review of the Literature. BMC Health Serv Res 2022, 22, 1573. [CrossRef]
  39. European Commission. Silver Economy Study – Final Report; Publications Office of the European Union: Luxembourg, 2018.
  40. Felber, N.A.; Lipworth, W.; Tian, Y.J.A.; Duong, V.; Wangmo, T. Addressing Value Tensions in the Design of Technologies to Support Older Persons (AgeTech) Using Responsible Research and Innovation and Value Sensitive Design. Sci Eng Ethics 2025, 31, 17. [CrossRef]
  41. The Ethical Importance of Assent in Adults with Decisional Incapacity; Washington, DC, USA, 2024.
  42. Wehrmann, H.; Michalowsky, B.; Lepper, S.; Mohr, W.; Raedke, A.; Hoffmann, W. Priorities and Preferences of People Living with Dementia or Cognitive Impairment – A Systematic Review. Patient Prefer Adherence 2021, 15, 2793–2807. [CrossRef]
  43. Liu, H.; Lou, V.W.Q.; Mo, T. Determinants of Shared Decision-Making between People with Dementia and Informal Caregivers: A Systematic Review. Patient Education and Counseling 2025, 137, 108815. [CrossRef]
  44. Strenge, B.; Schack, T. AWOSE - A Process Model for Incorporating Ethical Analyses in Agile Systems Engineering. Sci Eng Ethics 2020, 26, 851–870. [CrossRef]
  45. Zuber, N.; Kacianka, S.; Gogoll, J.; Pretschner, A.; Nida-Rümelin, J. Empowered and Embedded: Ethics and Agile Processes 2021.
  46. Alix Agile Ethics: Managing Ethical Complexity in Technology. Medium 2019.
  47. Huang, H.; Chen, L. The Potential Negative Effects of Ageism on Digital Engagement of Older Adults: A Meta-Analysis. Gerontologist 2025, 65, gnaf217. [CrossRef]
  48. Daniore, P.; Nittas, V.; von Wyl, V. Enrollment and Retention of Participants in Remote Digital Health Studies: Scoping Review and Framework Proposal. J Med Internet Res 2022, 24, e39910. [CrossRef]
  49. Catapan, S. de C.; Sazon, H.; Zheng, S.; Gallegos-Rejas, V.; Mendis, R.; Santiago, P.H.R.; Kelly, J.T. A Systematic Review of Consumers’ and Healthcare Professionals’ Trust in Digital Healthcare. NPJ Digit Med 2025, 8, 115. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.