From Asking “What” to “Says Who”: Examining Social, Political, and Economic Forces Shaping Artificial Intelligence in Healthcare

Abstract
Artificial intelligence (AI) is widely portrayed as a transformative innovation poised to revolutionise healthcare in the 21st century. Promoted by technology companies, this forward-looking narrative is often adopted uncritically by decision-makers, despite limited evidence of AI’s clinical effectiveness in real-world contexts and the persistence of numerous regulatory, professional, organisational, ethical, and governance challenges. In healthcare systems characterised by pluralism and complexity, the trajectory of AI is shaped by the strategic decisions and actions of diverse stakeholders who mobilise their social, political, and economic power and resources to influence dominant narratives about AI and to advance specific visions of the future of healthcare. This paper critically examines such influences by specifically focusing on how market imperatives, algorithmic logic, and the entrenched marginalisation of certain populations impact technology development and care delivery. It aims to investigate how particular actors and power dynamics affect AI research, commercialisation, and sustainable integration into healthcare systems. Because AI can reinforce and deepen existing power imbalances and inequalities, this paper calls for public policies grounded in social justice, fairness, and the public interest. These policies should aim to mitigate systemic harms and promote equity in healthcare.

Introduction

“It is essential to keep pushing questions (…) from the abstract ‘What?’ to the socially concrete ‘Says who’?”
[1]
While artificial intelligence (AI) is expected to revolutionise 21st-century healthcare systems [2,3,4,5,6,7,8,9], this enthusiasm is largely fuelled by a futuristic narrative shaped by commercial and industry actors and often adopted as such by healthcare decision-makers [6,10,11,12,13]. As a result, healthcare systems are being urged to integrate AI technologies, even as evidence of genuine clinical and/or social added value remains limited and numerous regulatory, professional, organisational, ethical, and governance issues remain unaddressed [5,14,15,16]. Because healthcare systems are inherently pluralistic and complex, the technical development and clinical implementation of AI healthcare technologies are largely moulded and driven by social, political, and economic influences wielded by various stakeholders, each with their own particular visions, expectations, objectives, and agendas for the technology (e.g., governments and public authorities, healthcare organisations, technology firms, investors and market actors, academic researchers, healthcare professionals, patients, civil society) [12,17,18,19,20,21,22,23]. Consequently, AI is not merely a “black box” system of hardware and software executing a number of functions. It is also “the expression of a social world” [24] and these “potent socio-political forces that steer the trajectory of the development of technology [are] unevenly distributed across society” [22].
In parallel with technical efforts to develop “white box” AI (e.g., explainability and transparency of algorithms), scholars are also shedding light on the complex social, political, and economic forces underpinning AI production and the networks of actors who actively shape the trajectories of its development and impacts [22,25,26]. In an effort to contribute to this critical work, this paper explores and analyses how market imperatives, algorithmic rationality, and the entrenched marginalisation of certain populations impact which AI healthcare technologies are developed, what kind of care they deliver, and who is cared for. It specifically seeks to investigate how particular actors and power dynamics affect AI research, commercialisation, and sustainable integration into healthcare systems.

Market Imperatives

Shaping AI Research in Healthcare

When AI technologies are researched and developed in a healthcare system context, tensions may arise between healthcare systems’ duty of care to patients and companies’ financial responsibilities to their shareholders [26,27,28,29,30,31]. As a result, certain industry actors may seek to shape what is known about their AI technologies and influence decision-making processes related to their approval, procurement, and integration within healthcare systems.
For instance, Hilton et al. (2024) reported that “AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily” [32]. The current lack of strong legal and regulatory obligations to share clinical safety, effectiveness, and efficacy information allows some AI companies to exert control over what is known about their technologies [33]. They implement these restrictions by adopting deployment strategies and setting conditions that limit independent and transparent assessments of their technologies’ actual performance, limitations, and security risks [26,31,33,34,35]. This control is maintained not only through comprehensive confidentiality agreements that prevent employees and contractors from disclosing information about potential risks or malfunctions, but also through public–private partnerships (e.g., collaborations between companies and public hospitals) that are frequently framed as mutually beneficial mechanisms for aligning innovation with healthcare system priorities and needs [27,28,31,32,33,35]. Previous research on such partnerships has shown that some AI companies exert tight control over the findings researchers are allowed to publish [33]. In Alami et al. (2025), a clinician-researcher observed that “the founders of private companies are accountable to the people who finance them. (...) Doctors are accountable to their patients: ethics. So, there’s a clash. [If] the technology doesn’t work as expected, the academic researcher is going to publish, to say so and to express reservations/concerns. But the company is going to oppose it. (...) It’s crazy, because there’s a clause in the contract that says: ‘right of first refusal’. So, you can no longer publish your academic results” [33]. In such arrangements, companies retain veto power over disclosures concerning the risks and limitations of their technologies [6,26,33,34]. Consequently, only “accredited experts” are authorised to communicate performance-related information [14,34].
This information asymmetry leaves independent researchers with limited room for manoeuvre, capacity, and resources to conduct oversight and audits, or to propose alternative technologies better aligned with healthcare-system priorities and needs [26,31,34,35,36,37]. As a result, users (e.g., patients, clinicians) and healthcare systems must rely on vendors’ assurances and accept the level of risk that companies deem reasonable [6,38]. This dependency materially constrains their decision-making and limits their ability to respond effectively to incidents such as data breaches, security failures, or software malfunctions [6,27,28,34,35,39]. While such an operational model may have minimal consequences in other domains (e.g., algorithmic recommendations on streaming platforms [40]), errors in AI systems used in healthcare can have serious, and potentially fatal, consequences for many individuals [5,14,27,32,34,36,38].
This control over information on AI performance is commonly framed as necessary to protect trade secrets and preserve competitive advantage. It may enable some companies to increase market share, attract investment, accelerate development, and deliver shareholder returns [26,31,35,41,42]. By contrast, although healthcare systems are bound by a duty of care to patients, AI firms appear to be subject primarily to legal requirements concerning safety and data protection, with no equivalent professional or ethical obligations toward patients or healthcare providers [6,33,43,44,45]. In this context, it is unlikely that patients’ needs and the long-term sustainability of the healthcare system will consistently be prioritised over financial objectives (e.g., exit strategies) [6,13,26,33,43,44,46,47,48].

Shaping the Sociotechnical Imaginaries Around AI

In addition to controlling information about their technologies, many AI companies deploy public-relations campaigns to shape public discourse and reinforce brand image [49,50,51,52,53,54]. Westerstrand et al. (2024) noted that “discourse around emerging technologies is a prerequisite for the spread of innovation and can thus be seen as constructive of technology. Therefore, what kind of discourse we produce affects the direction which [the] technology development is heading. Besides dictating the contents of the discourse, gaining control over who gets to talk, when, where and how” [55,56]. As mistrust and criticism grow, industry and financial actors seek to shape more positive narratives by reframing public conversations about AI [51,52,53,54,55,56,59,60]. In the absence of public confidence, the sector’s ability to market its products would be jeopardised [57,58,59].
By emphasising research findings that support the safety, effectiveness, and efficacy of their technologies, companies promote the view that benefits outweigh costs and risks and that rapid deployment is necessary to realise the promised value. Because “there is no time to waste” [60], this selective use of evidence fuels a storyline in which AI appears not only desirable but also indispensable for addressing an imminent “care shortage” [61,62,63]. By continually steering the discourse and promoting this vision, the industry cultivates a sense of AI’s inevitability [59,64]. This perceived inevitability marginalises alternative solutions and locks healthcare systems into specific technological trajectories presented as unavoidable [59,62,64,65]. Such rhetoric increases the likelihood that the projected future will materialise. By amplifying AI’s promises [59,64], sociotechnical imaginaries persuade stakeholders, especially governments, to expand investment and align priorities with industrial interests [64].
Sloane et al. (2021) stressed that the narrative trap embedded in dominant AI discourse channels collective imagination along predetermined paths while marginalising alternative viewpoints [66]. When control over AI development remains concentrated within industry, its trajectory tends to follow market-driven logics. As Hart (2001) noted, “no market will ever shift corporate investment from where it is most profitable to where it is most needed” [67]. This dynamic contributes to the further financialisation and commodification of care, potentially reducing quality, undermining the public good, and reinforcing the notion that some lives hold greater value than others [68,69].
In this vein, some political and economic actors have argued that economic growth should take precedence over individual rights [70,71]. According to Germani (2023), the main challenge lies not in the technology itself, but in the fact that “it is the same people that today seek to privatise care, impose austerity, and defund research” who are now shaping its future in healthcare systems [68]. Similarly, certain decision-makers and financial actors actively sustain the narrative and imaginary that industrial and technological policies no longer exist in the neoliberal era [72]. However, such policies clearly persist, albeit in reconfigured forms. They have been redesigned to reinforce the private sector, promote private substitutes for public services, and consolidate corporate influence over healthcare investment outcomes [72].
This narrative closure also deepens hermeneutical injustice by depriving marginalised groups of the interpretive resources, both individual and collective, needed to articulate and make sense of their social and symbolic experiences with AI [38,46,73,74]. Victims of this hermeneutical marginalisation, these populations are excluded from the processes that define key concepts, and therefore cannot fully grasp or articulate how AI affects their lives [73]. Like many medical technologies, AI is not only designed to “produce health” (e.g., diagnosing or treating disease), but also functions as an artefact that shapes and structures the very understanding of what constitutes “health” [38,75]. It also influences how healthcare systems address the social and systemic determinants affecting the health of individuals and groups [38,75]. As Hoff (2023) pointed out, “although the corporate actors that articulate these imaginaries do not have any democratic legitimacy, they influence how sociotechnical futures are imagined and eventually how citizens define notions such as health, quality of care and a healthy body” [61].

Algorithmic Rationality

The development of AI technologies in healthcare tends to concentrate on a narrow set of clinical domains, particularly those involving image analysis or signal quantification (e.g., radiology, dermatology, cardiology) [6,13,46]. These areas are more amenable to standardisation and quantification, making them especially attractive to companies seeking rapid access to healthcare markets [6,13,46]. Promoters of these technologies frequently emphasise the capacity of algorithms to deliver precise, personalised, and rational decisions through their ability to “measure, classify, and calculate” in order to produce predictable outcomes [46,77]. This emphasis reflects a broader logic linked to the biopoliticisation of life, where populations are primarily understood through statistical measures and indicators [46,78,79,80]. In this perspective, life is reduced to data that can be modelled and optimised by algorithms. Individuals are increasingly defined by standardised quantitative metrics and treated as subjects to be managed, rather than as holders of social and political rights, as well as agency [46,78,79,80,81]. Briggs (2016) referred to this as “the fetishisation of statistical measures, images, and means of converting [genetic, biological and physiological] samples into information as privileged sources of evidence” [82]. This approach privileges technical and computational forms of knowledge, often at the expense of lived experience, structural conditions, and the relational dimensions of care [46,78,79,80].
Reducing human bodies to quantifiable biological parameters and simplifying complex social realities into measurable indicators may have far-reaching implications for healthcare systems [46]. Although developers and advocates claim that AI can account for population complexity, they often do so through a narrow mathematical lens [83]. Concepts such as equity and discrimination are frequently reduced to statistical indicators or superficial compliance exercises [84,85]. In this algorithmic framework, social interactions and intersubjective relationships are no longer seen as necessary to describe or conceptualise, since outcomes are presumed to have already been modelled from the data traces individuals leave behind [46,77]. This process, referred to as “epistemological flattening”, reduces rich and situated social realities to supposedly neutral and clean signals, which are then used to support what is presented as “objective decision-making”. Unlike traditional research paradigms that require scholars to articulate hypotheses and clarify their theoretical foundations, algorithmic reasoning often operates without an explicit epistemological position [86]. As Amoore (2020) pointed out, “though this arrangement of probabilities contains within it a multiplicity of doubts in the model, the algorithm nonetheless condenses this multiplicity to a single output. A decision beyond doubt” [87,88]. These simplifications help normalise the idea that generalisation, even at the expense of nuance and contextual specificity, is an essential component of AI in healthcare. This reflects what Campolo and Crawford (2020) called “enchanted determinism”, a discourse that promotes the image of superhuman accuracy and insight, while simultaneously emphasising the inability to fully understand or explain how AI outcomes and decisions are produced [86].
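To make this flattening mechanism tangible, the short Python sketch below shows how a spread of plausible model estimates for one patient is condensed into a single thresholded recommendation, the kind of “decision beyond doubt” Amoore describes. It is a purely hypothetical illustration: the ensemble of estimates, the 0.50 threshold, and the numbers are assumptions for demonstration, not taken from any system cited in this paper.

```python
import numpy as np

# Hypothetical ensemble of readmission-risk estimates for a single patient:
# internally, the model holds a whole spread of plausible answers.
rng = np.random.default_rng(seed=0)
ensemble_probs = rng.normal(loc=0.48, scale=0.12, size=30).clip(0, 1)

print(f"Spread of predicted risks: {ensemble_probs.min():.2f}-{ensemble_probs.max():.2f}")
print(f"Mean risk: {ensemble_probs.mean():.2f} (std {ensemble_probs.std():.2f})")

# Deployment step: the multiplicity of doubts is condensed into one output.
THRESHOLD = 0.50  # assumed decision threshold, for illustration only
decision = "flag for intervention" if ensemble_probs.mean() >= THRESHOLD else "no action"
print(f"Single output shown to the clinician: {decision}")
```

The borderline spread of estimates disappears at the interface; only the binary recommendation travels onward to the clinician.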
Additionally, the widespread use of the term “bias” in AI research and discourse often dilutes or obscures other critical notions such as prejudice, injustice, discrimination, segregation, sexism, racism, and ageism. This dilution risks disconnecting these concepts from the lived experiences of individuals and communities, particularly their relationships to health and illness [66]. As a result, structural inequalities are frequently reframed as technical or logistical challenges inherent to technology, rather than recognised as systemic problems. Such an algorithmic framing shifts attention away from the deeper structural roots of inequality, including power relations, systemic injustices, and the socio-political and ideological forces that shape people’s health trajectories and lived realities. For instance, when biases or errors in AI systems affect ethnic or cultural minorities, they are often attributed to a lack of representative data rather than recognised as manifestations of the historical and ongoing marginalisation and invisibilisation of these populations. In this context, it is important to recall that AI systems in healthcare are partly rooted in the biomedical tradition. They extend pre-1980s practices in which evidence-based medicine, largely shaped by clinical trials, commonly assumed that treatments effective for White adult male populations would be equally effective for all individuals regardless of gender, age or ethnicity [76]. Grounded in hierarchical knowledge systems, these technologies tend to reproduce existing forms of social stratification (e.g., ethnicity, gender, age), and reflect the values and assumptions embedded in the institutional contexts in which they are developed [83,89]. They do so by superimposing the biological stratification encoded in data onto already established social hierarchies [46,68]. In practice, this contributes to a form of biomedical citizenship in which individuals and/or groups who meet the dominant measures and indicators are recognised as “sanitary citizens”, while others are implicitly relegated to the status of “unsanitary subjects” [76].

Entrenched Marginalisation of Certain Populations

Individuals and/or groups classified as “unsanitary subjects” or “reasonable risk” are often the same populations that have historically been regarded as not counting in the same way as others. Consequently, they are at risk of (once again) becoming experimental subjects for technologies conceived and developed elsewhere, or “victims of undone or un-doable science” [38]. These people are also systematically excluded from the arenas in which symbolic capital is accumulated, as well as from the processes through which knowledge about the promises and risks of AI in healthcare is produced and circulated [76].
To paraphrase Briggs (2005), the linguistic ideologies underpinning the promises and risks of AI in healthcare help to define who will and who will not benefit from its promise of value [76]. The language “naturally” distinguishes citizens who stand to gain from the technology (e.g., those with the “right indicators and parameters”, “right colour”, “right gender/sex”, “right age”, “right language” or “right socioeconomic profile”) from those who do not. This discourse fuels imaginaries and communication processes that create categories, subjectivities, social relations, and hierarchical positions that ultimately determine who will benefit from AI healthcare technologies and who will be deemed “acceptable errors and accidents”, framed as “reasonable risks”, to enable a larger segment of the population to benefit from the promises of the technology [14,38,76].
According to Rafanelli (2022), “if already-privileged people are disproportionately represented among the creators of AI systems, and if their experiences are disproportionately represented in the data used to train them, one could even argue that AI will reinforce existing unjust hierarchies” [74]. In this context, the broader system of AI research and knowledge dissemination implicitly defines who is considered “more human” and raises the fundamental question of “who science is designed to serve or save” [90]. Tat et al. (2020) reported that “if the algorithm was designed to optimise hospital resources, high-income White patients might be selected to receive the majority of hospital resources, further deepening the divide in access to care for minority and underserved groups” [91]. For instance, according to a study in Switzerland, among patients presenting with chest pain, men are 2.5 times as likely as women to be referred to a cardiologist [91,92]. Similarly, according to a systematic review and meta-analysis conducted in the United States (USA), Black patients presenting to emergency departments are 40% less likely than White patients to receive analgesics [91,93]. These disparities have been attributed, in part, to the underrepresentation of these populations in cardiology research and to concerns that much of current evidence-based medicine may not apply to them [91,94]. Moreover, another study found that, even after controlling for socioeconomic status and health insurance coverage, Black and Latino patients received lower quality medical care than White patients [95,96]. Obermeyer and Topol (2021) also reported that a widely used classification system for diagnosing knee osteoarthritis was originally developed using data from coal miners, predominantly White men, in the United Kingdom during the 1950s [97]. Such historical and structural biases may be embedded in the datasets used to train AI systems, which could perpetuate and, in some cases, intensify existing health disparities and discriminatory practices [91,98]. In this regard, the large-scale deployment of these technologies in populations for whom their effectiveness has not been rigorously validated raises important ethical and clinical concerns [99]. For example, one study found that a widely used AI system exhibited significant racial bias: the algorithm systematically assigned higher risk scores to White patients, leading to their more frequent selection for additional care, despite Black patients having similar or greater health needs [91,100]. Similarly, Nordling (2019) reported that an AI system used postal codes as a predictive variable for prolonged hospital stays, introducing systemic bias by disproportionately associating certain low-income and predominantly Black neighbourhoods with higher risk, which in turn led to unequal treatment recommendations [91,101].
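The proxy-label mechanism behind the finding reported by Obermeyer et al. (2019), predicted spending standing in for health need, can be sketched with synthetic data. The Python simulation below is a hypothetical illustration under stated assumptions (two groups with identical need, one of which historically received less care, so its observed spending understates its need); the group labels, the 0.6 access factor, and the top-decile cut-off are illustrative choices, not estimates from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000

# Synthetic population: two groups with identical distributions of true health need.
group = rng.choice(["A", "B"], size=n)
true_need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Assumption: group B historically received less care, so observed spending
# understates its need (the proxy-label problem).
access_factor = np.where(group == "A", 1.0, 0.6)
observed_cost = true_need * access_factor + rng.normal(0.0, 0.1, size=n)

# A model trained to predict cost ranks patients roughly by observed_cost;
# selecting the top decile for extra care then under-selects group B at equal need.
top_decile = observed_cost >= np.quantile(observed_cost, 0.9)
for g in ("A", "B"):
    mask = group == g
    print(f"Group {g}: mean true need {true_need[mask].mean():.2f}, "
          f"share selected for extra care {top_decile[mask].mean():.1%}")
```

Even with equal underlying need, the group whose care was historically under-resourced is selected far less often, showing how a seemingly neutral optimisation target can reproduce the structural pattern encoded in its training data.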
This aligns with Briggs (2005) who argued that “the centralisation of statistics in the Euro-American metropole is characterised by an ironic circularity: the more these artifacts declare their transparency and objectivity, the more they become bearers of fragmented histories that speak of global inequalities of race, gender, class, and nation (…), constructing individuals, populations, and nations as producers, disseminators, or receivers of biomedical information—or as ignorant bystanders” [76]. In other words, the assumption that AI is autonomous, apolitical, neutral, or objective, and somehow independent of the cultural, socio-political, and economic contexts in which it is developed, obscures the complex processes of (de)contextualisation that shape its production and deployment. It also disregards the historical forces and institutional structures that condition its integration into healthcare systems [76]. Furthermore, this perspective tends to ignore, intentionally or not, that many health problems experienced by populations are not primarily biomedical in nature. Rather, they are the outcomes of deeply entrenched structural violence and systemic injustices that deprive communities and people of their socio-political rights and basic material conditions necessary for well-being. These include, but are not limited to, substandard housing, inadequate access to transportation, limited educational opportunities, environmental pollution, and poor food environments such as food deserts or food swamps [46,102].

Conclusions

This paper has examined several key social, political, and economic influences shaping both the vision and the trajectories of AI technologies within healthcare systems. By shedding light on the market imperatives, algorithmic rationality, and marginalisation of certain populations currently underpinning the development and implementation of AI technologies in healthcare systems, it argued that deconstructing these dimensions is essential to understanding the complex and unprecedented transformations that AI introduces within healthcare. These systems often reproduce entrenched dynamics that have historically shaped the development of medicine, technology, and institutional authority, while simultaneously reinforcing the marginalisation and invisibilisation of certain population groups. Given the potential of AI to deepen existing power imbalances and social inequalities, this paper calls for public policies that are grounded in the principles of social justice, fairness, and the public interest. Such policies should seek to mitigate systemic harms while fostering more equitable healthcare outcomes for all.

Declaration of conflicting interests

The authors declare that they have no competing interests.

Ethics approval and consent to participate

N/A

Guarantor

HA

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work, the authors used ChatGPT in order to improve the readability and language of the manuscript (e.g., to check grammar, spelling, references, and English usage). After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.

Author Contributions

HA and LR wrote the first draft of the manuscript and received input from MN, MS, EJP, RPS, and MAAA. All authors reviewed and approved the final manuscript.

Funding

HA received start-up funds from the University of Montreal’s School of Public Health and Public Health Research Center, and support from IVADO. The sponsors/funders had no role in data analysis or manuscript preparation.

Data Availability Statement

All the data and documents used are public. They are cited in the references.

Acknowledgments

We thank Prof. Jean-Paul Fortin (Laval University, Quebec, Canada) for his insightful comments and feedback. The ideas presented in the text are those of the authors. They do not necessarily reflect the position of their organisations.

References

  1. Berger, P.; Luckmann, T. The social construction of reality. In Social theory re-wired; Routledge, 2016; pp. 110–122. [Google Scholar]
  2. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine 2019, 25(1), 44–56. [Google Scholar] [CrossRef]
  3. Dicuonzo, G.; Donofrio, F.; Fusco, A.; Shini, M. Healthcare system: Moving forward with artificial intelligence. Technovation 2022, 2022, 102510. [Google Scholar] [CrossRef]
  4. Petersson, L.; Larsson, I.; Nygren, J. M.; Nilsen, P.; Neher, M.; Reed, J. E.; et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Services Research 2022, 22(1), 1–16. [Google Scholar] [CrossRef]
  5. Alami, H.; Lehoux, P.; Auclair, Y.; de Guise, M.; Gagnon, M.; Shaw, J.; et al. Artificial Intelligence and Health Technology Assessment: Anticipating a New Level of Complexity. Journal of Medical Internet Research 2020, 22(7), e17707. [Google Scholar] [CrossRef]
  6. Alami, H.; Lehoux, P.; Papoutsi, C.; Shaw, S. E.; Fleet, R.; Fortin, J.-P. Understanding the integration of artificial intelligence in healthcare organisations and systems through the NASSS framework: a qualitative study in a leading Canadian academic centre. BMC Health Services Research 2024, 24(1), 701. [Google Scholar] [CrossRef]
  7. Roppelt, J. S.; Kanbach, D. K.; Kraus, S. Artificial intelligence in healthcare institutions: A systematic literature review on influencing factors. Technology in Society 2024, 76, 102443. [Google Scholar] [CrossRef]
  8. Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artificial Intelligence in Medicine 2024, 151, 102861. [Google Scholar] [CrossRef] [PubMed]
  9. Krittanawong, C. The rise of artificial intelligence and the uncertain future for physicians. European Journal of Internal Medicine 2018, 48, e13–e14. [Google Scholar] [CrossRef] [PubMed]
  10. Lehoux, P.; Rocha de Oliveira, R.; Rivard, L.; Silva, H. P.; Alami, H.; Mörch, C. M.; et al. A Comprehensive, Valid, and Reliable Tool to Assess the Degree of Responsibility of Digital Health Solutions That Operate With or Without Artificial Intelligence: 3-Phase Mixed Methods Study. Journal of Medical Internet Research 2023, 25, e48496. [Google Scholar] [CrossRef]
  11. Matheny, M.; Israni, S. T.; Ahmed, M.; Whicher, D. Artificial intelligence in health care: The hope, the hype, the promise, the peril; National Academy of Medicine prepublication, 2020; pp. 94–97. Available online: https://nam.edu/wp-content/uploads/2021/07/4.3-AI-in-Health-Care-title-authors-summary.pdf.
  12. Alami, H.; Lehoux, P.; Denis, J.-L.; Motulsky, A.; Petitgand, C.; Savoldelli, M.; et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. Journal of Health Organization and Management 2021, 35(1), 106–114. [Google Scholar] [CrossRef]
  13. Alami, H.; Lehoux, P.; Shaw, S. E.; Niang, M.; Malas, K.; Fortin, J.-P. To what extent can digital health technologies comply with the principles of responsible innovation? Practice- and policy-oriented research insights regarding an organisational and systemic issue. International Journal of Health Policy and Management 2024, 13, 8061. [Google Scholar] [CrossRef]
  14. Alami, H.; Rivard, L.; Lehoux, P.; Hoffman, S. J.; Cadeddu, S. B. M.; Savoldelli, M.; et al. Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Globalization and Health 2020, 16(1), 52. [Google Scholar] [CrossRef] [PubMed]
  15. Sharon, T. When digital health meets digital capitalism, how many common goods are at stake? Big Data & Society 2018, 5(2), 2053951718819032. [Google Scholar] [CrossRef]
  16. Matheny, M. E.; Whicher, D.; Israni, S. T. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020, 323(6), 509–510. [Google Scholar] [CrossRef] [PubMed]
  17. Alami, H.; Gagnon, M.-P.; Fortin, J.-P. Some multidimensional unintended consequences of telehealth utilization: a multi-project evaluation synthesis. International Journal of Health Policy and Management 2019, 8(6), 337. [Google Scholar] [CrossRef] [PubMed]
  18. Alami, H.; Fortin, J.-P.; Gagnon, M.-P.; Pollender, H.; Têtu, B.; Tanguay, F. The challenges of a complex and innovative telehealth project: a qualitative evaluation of the eastern Quebec Telepathology network. International Journal of Health Policy and Management 2018, 7(5), 421. [Google Scholar] [CrossRef]
  19. Alami, H.; Fortin, J.-P.; Gagnon, M.-P.; Lamothe, L.; Ahmed, M. A. A.; Roy, D. Cadre stratégique pour soutenir l’évaluation des projets complexes et innovants en santé numérique. Santé Publique 2020, 32(2), 221–228. [Google Scholar] [CrossRef]
  20. Greenhalgh, T.; Procter, R.; Wherton, J.; Sugarhood, P.; Shaw, S. The organising vision for telehealth and telecare: discourse analysis. BMJ Open 2012, 2(4). [Google Scholar] [CrossRef]
  21. Orlikowski, W. J.; Gash, D. C. Technological frames: making sense of information technology in organizations. ACM Transactions on Information Systems (TOIS) 1994, 12(2), 174–207. [Google Scholar] [CrossRef]
  22. Cugurullo, F. The obscure politics of artificial intelligence: a Marxian socio-technical critique of the AI alignment problem thesis. AI and Ethics 2024, 1–13. [Google Scholar] [CrossRef]
  23. Winner, L. Autonomous technology: Technics-out-of-control as a theme in political thought; MIT Press, 1978. [Google Scholar]
  24. Nye, D. E. Technology matters: Questions to live with; MIT Press, 2007. [Google Scholar]
  25. Lindgren, S. Handbook of critical studies of artificial intelligence; Edward Elgar Publishing, 2023. [Google Scholar]
  26. Abdalla, M.; Abdalla, M. The grey hoodie project: Big tobacco, big tech, and the threat on academic integrity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society; 2021; pp. 287–297. [Google Scholar] [CrossRef]
  27. Bak, M. A.; Horbach, D.; Buyx, A.; McLennan, S. A scoping review of ethical aspects of public-private partnerships in digital health. npj Digital Medicine 2025, 8(1), 129. [Google Scholar] [CrossRef]
  28. Sharon, T. Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers. Ethics and Information Technology 2021, 23 Suppl 1, 45–57. [Google Scholar] [CrossRef]
  29. Dambrin, C.; Lambert, C.; Sponem, S. Control and change—Analysing the process of institutionalisation. Management Accounting Research 2007, 18(2), 172–208. [Google Scholar] [CrossRef]
  30. Lehoux, P.; Pacifico Silva, H.; Pozelli Sabio, R.; Roncarolo, F. The unexplored contribution of responsible innovation in health to sustainable development goals. Sustainability 2018, 10(11), 4015. [Google Scholar] [CrossRef]
  31. Moynihan, R.; Bero, L.; Hill, S.; Johansson, M.; Lexchin, J.; Macdonald, H.; et al. Pathways to independence: towards producing and using trustworthy evidence. BMJ 2019, 367, l6576. [Google Scholar] [CrossRef] [PubMed]
  32. Hilton, J.; Kokotajlo, D.; Kumar, R.; Nanda, N.; Saunders, W.; Wainwright, C.; et al. A Right to Warn about Advanced Artificial Intelligence. 2024. Available online: https://righttowarn.ai/.
  33. Alami, H.; Rivard, L.; Lehoux, P.; Ahmed, M. A. A.; Soubra, R.; Rouquet, R.; et al. Conflicts and complexities around intellectual property and value sharing of AI healthcare solutions in public-private partnerships: A qualitative study. SSM – Health Systems Journal 2025, 5(12), 100093. [Google Scholar] [CrossRef]
  34. Longpre, S.; Kapoor, S.; Klyman, K.; Ramaswami, A.; Bommasani, R.; Blili-Hamelin, B.; et al. A safe harbor for AI evaluation and red teaming. arXiv 2024, arXiv:2403.04893. [Google Scholar] [CrossRef]
  35. Marelli, L.; Testa, G.; Van Hoyweghen, I. Big Tech platforms in health research: Re-purposing big data governance in light of the General Data Protection Regulation’s research exemption. Big Data & Society 2021, 8(1), 20539517211018783. [Google Scholar] [CrossRef]
  36. Ahmed, N.; Wahed, M.; Thompson, N. C. The growing influence of industry in AI research. Science 2023, 379(6635), 884–886. [Google Scholar] [CrossRef]
  37. Buolamwini, J.; Gebru, T. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency 2018, 81(X), 1–15. Available online: https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
  38. Kumaki, H. Reasonable exposure: Nuclear infrastructure and technopolitics of health and well-being. Center for East Asian Studies, University of Colorado. 2021. Available online: https://www.colorado.edu/cas/sites/default/files/attached-files/living_in_paradox_-_hiroko_kumaki_.pdf.
  39. Alami, H.; Gagnon, M.-P.; Fortin, J.-P. Telehealth in light of cloud computing: Clinical, technological, regulatory and policy issues. Journal of the International Society for Telemedicine and eHealth 2016, 4, e5 (1–7). Available online: https://journals.ukzn.ac.za/index.php/JISfTeH/article/view/138.
  40. Margetts, H.; Dorobantu, C. Rethink government with AI. Nature 2019, 568(7751), 163–165. [Google Scholar] [CrossRef]
  41. Kim, S.-H. Social movements and contested sociotechnical imaginaries in South Korea. In Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power; 2015; pp. 152–173. [Google Scholar] [CrossRef]
  42. De Togni, G. Staging the Robot: Performing Techno-Politics of Innovation for Care Robotics in Japan. East Asian Science, Technology and Society: An International Journal 2024, 1–18. [Google Scholar] [CrossRef]
  43. World Health Organization. Ethics and governance of artificial intelligence for health. 2021. Available online: https://www.who.int/publications/i/item/9789240029200.
  44. Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 2019, 1(11), 501–507. [Google Scholar] [CrossRef]
  45. Alami, H.; Gagnon, M.-P.; Fortin, J.-P.; Kouri, R. La télémédecine au Québec: état de la situation des considérations légales, juridiques et déontologiques. European Research in Telemedicine/La Recherche Européenne en Télémédecine 2015, 4(2), 33–43. [Google Scholar] [CrossRef]
  46. Alami, H.; Rivard, L.; de Oliveira, R. R.; Lehoux, P.; Cadeddu, S. B. M.; Savoldelli, M.; et al. Guiding pay-as-you-live health insurance models toward responsible innovation in health. Journal of Participatory Medicine 2020, 12(3), e19586. [Google Scholar] [CrossRef] [PubMed]
  47. Alami, H.; Lehoux, P.; Shaw, S. E.; Papoutsi, C.; Rybczynska-Bunt, S.; Fortin, J.-P. Virtual care and the inverse care law: Implications for policy, practice, research, public and patients. International Journal of Environmental Research and Public Health 2022, 19(17), 10591. [Google Scholar] [CrossRef] [PubMed]
  48. Alami, H.; Rivard, L.; Lehoux, P.; Ag Ahmed, M. A.; Fortin, J.-P.; Fleet, R. Integrating environmental considerations in digital health technology assessment and procurement: Stakeholders’ perspectives. Digital Health 2023, 9, 20552076231219113. [Google Scholar] [CrossRef] [PubMed]
  49. Hao, K. Big Tech’s guide to talking about AI ethics. MIT Technology Review. 2021. Available online: https://www.technologyreview.com/2021/04/13/1022568/big-tech-ai-ethics-guide/.
  50. Koniakou, V. From the “rush to ethics” to the “race for governance” in artificial intelligence. Information Systems Frontiers 2023, 25(1), 71–102. [Google Scholar] [CrossRef]
  51. Slee, T. The incompatible incentives of private-sector AI. In The Oxford Handbook of Ethics of AI; Dubber, M., Pasquale, F., Das, S., Eds.; Oxford University Press, 2020; pp. 106–123. [Google Scholar] [CrossRef]
  52. Asaro, P. M. AI ethics in predictive policing: From models of threat to an ethics of care. IEEE Technology and Society Magazine 2019, 38(2), 40–53. [Google Scholar] [CrossRef]
  53. Kannelønning, M. S. Navigating uncertainties of introducing artificial intelligence (AI) in healthcare: The role of a Norwegian network of professionals. Technology in Society 2024, 76, 102432. [Google Scholar] [CrossRef]
  54. Galanos, V. Expectations and expertise in artificial intelligence: Specialist views and historical perspectives on conceptualisation, promise, and funding; University of Edinburgh, 2023. [Google Scholar] [CrossRef]
  55. Swanson, E. B.; Ramiller, N. C. The organizing vision in information systems innovation. Organization Science 1997, 8(5), 458–474. [Google Scholar] [CrossRef]
  56. Westerstrand, S.; Westerstrand, R.; Koskinen, J. Talking existential risk into being: A Habermasian critical discourse perspective to AI hype. AI and Ethics 2024, 4(3), 713–726. [Google Scholar] [CrossRef]
  57. Winfield, A. F.; Jirotka, M. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 2018, 376(2133), 20180085. [Google Scholar] [CrossRef]
  58. Phan, T.; Goldenfein, J.; Mann, M.; Kuch, D. Economies of virtue: The circulation of ‘ethics’ in Big Tech. Science as Culture 2022, 31(1), 121–135. [Google Scholar] [CrossRef]
  59. Stamboliev, E.; Christiaens, T. How empty is trustworthy AI? A discourse analysis of the ethics guidelines of trustworthy AI. Critical Policy Studies 2024, 1–18. [Google Scholar] [CrossRef]
  60. van der Wel, K. J. Imagining artificial intelligence: Uncovering the culturally specific imaginations surrounding AI in the Dutch national AI strategy. Utrecht University. 2021. Available online: https://studenttheses.uu.nl/bitstream/handle/20.500.12932/39093/20210203_MasterThesis_VanderWel.pdf?sequence=1&isAllowed=y.
  61. Hoff, J.-L. Unavoidable futures? How governments articulate sociotechnical imaginaries of AI and healthcare services. Futures 2023, 148, 103131. [Google Scholar] [CrossRef]
  62. Bareis, J.; Katzenbach, C. Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values 2022, 47(5), 855–881. [Google Scholar] [CrossRef]
  63. Markham, A. The limits of the imaginary: Challenges to intervening in future speculations of memory, data, and algorithms. New Media & Society 2021, 23(2), 382–405. [Google Scholar] [CrossRef]
  64. Kannelønning, M. S. Contesting futures of artificial intelligence (AI) in healthcare: Formal expectations meet informal anticipations. Technology Analysis & Strategic Management 2024, 36(11), 3845–3856. [Google Scholar] [CrossRef]
  65. Konrad, K.; Van Lente, H.; Groves, C.; Selin, C. Performing and governing the future in science and technology. In The handbook of science and technology studies; Felt, U., Fouché, R., Miller, C. A., Smith-Doerr, L., Eds.; MIT Press, 2016; pp. 465–493. [Google Scholar]
  66. Sloane, M.; Chowdhury, R.; Havens, J. C.; Lazovich, T.; Rincon Alba, L. AI and procurement – A primer. 2021. Available online: https://archive.nyu.edu/bitstream/2451/62255/2/AI%20and%20Procurement%20Primer%20Summer%202021.pdf.
  67. Hart, J. T. Why are doctors so unhappy? Unhappiness will be defeated when doctors accept full social responsibility. BMJ 2001, 322(7298), 1363–1364. [Google Scholar] [PubMed]
  68. Germani, A. The politics of artificial intelligence in healthcare: Diagnosis and treatment. In AI and Society; Chapman and Hall/CRC, 2023; pp. 33–48. [Google Scholar] [CrossRef]
  69. Pellegrino, E. D. The commodification of medical and health care: The moral consequences of a paradigm shift from a professional to a market ethic. The Journal of Medicine and Philosophy 1999, 24(3), 243–266. [Google Scholar] [CrossRef]
  70. Paul, R. The politics of regulating artificial intelligence technologies: A competition state perspective. In Handbook on Public Policy and Artificial Intelligence (Forthcoming); Paul, R., Carmel, E., Cobbe, J., Eds.; Edward Elgar Publishing, 2023. [Google Scholar] [CrossRef]
  71. Padden, M.; Öjehag-Pettersson, A. Protected how? Problem representations of risk in the General Data Protection Regulation (GDPR). Critical Policy Studies 2021, 15(4), 486–503. [Google Scholar] [CrossRef]
  72. AI Now Institute. AI Policy as Industrial Policy, with Amy Kapczynski and Jeremias Adams-Prassl - AI Now Salons. 2023. Available online: https://ainowinstitute.org/general/ai-policy-as-industrial-policy-with-amy-kapczynski-and-jeremias-adams-prassl-ai-now-salons.
  73. Fricker, M. Epistemic injustice: Power and the ethics of knowing; Oxford University Press, 2007. [Google Scholar]
  74. Rafanelli, L. M. Justice, injustice, and artificial intelligence: Lessons from political theory and philosophy. Big Data & Society 2022, 9(1), 20539517221080676. [Google Scholar] [CrossRef]
  75. Winner, L. Do artifacts have politics? In Computer ethics; Tavani, H., Ed.; Routledge, 2017; pp. 177–192. [Google Scholar]
  76. Briggs, C. L. Communicability, racial discourse, and disease. Annual Review of Anthropology 2005, 34(1), 269–291. [Google Scholar] [CrossRef]
  77. Jamet, R.; Truchon, K. The pandemic, artificial intelligence, and algorithmic governmentality. In Re-Globalization; Routledge, 2022; pp. 117–126. [Google Scholar] [CrossRef]
  78. Lipp, B.; Maasen, S. Techno-bio-politics: On interfacing life with and through technology. NanoEthics 2022, 16(1), 133–150. [Google Scholar] [CrossRef]
  79. Bird, G.; Lynch, H. Introduction to the politics of life: A biopolitical mess. European Journal of Social Theory 2019, 22(3), 301–316. [Google Scholar] [CrossRef]
  80. Foucault, M. Security, territory, population: Lectures at the Collège de France; Springer, 2007. [Google Scholar]
  81. Faraj, S.; Pachidi, S.; Sayegh, K. Working and organizing in the age of the learning algorithm. Information and Organization 2018, 28(1), 62–70. [Google Scholar] [CrossRef]
  82. Briggs, C. L. Ecologies of evidence in a mysterious epidemic. Medicine Anthropology Theory 2016, 3(2). [Google Scholar] [CrossRef]
  83. Greenhalgh, T. Commentary: Without values, complexity is reduced to mathematics. Journal of Evaluation in Clinical Practice 2025, 31(1), e14263. [Google Scholar] [CrossRef]
  84. Cath, C. Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 2018, 376(2133), 20180080. [Google Scholar] [CrossRef]
  85. Lipton, Z. C.; Steinhardt, J. Troubling trends in machine learning scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research. Queue 2019, 17(1), 45–77. [Google Scholar] [CrossRef]
  86. Campolo, A.; Crawford, K. Enchanted determinism: Power without responsibility in artificial intelligence; Engaging Science, Technology, and Society, 2020. [Google Scholar] [CrossRef]
  87. Amoore, L. Cloud ethics: Algorithms and the attributes of ourselves and others; Duke University Press, 2020. [Google Scholar]
  88. Wester, M. Procurement circuit under machine learning political order: Governance of, through, and for AI. Doctoral dissertation, Concordia University, 2023. Available online: https://spectrum.library.concordia.ca/id/eprint/992057/1/Wester_MA_S2023.pdf.
  89. Roy, V.; Hamilton, D.; Chokshi, D. A. Health and political economy: Building a new common sense in the United States. Health Affairs Scholar 2024, 2(5), qxae041. [Google Scholar] [CrossRef]
  90. Pai, M.; Abimbola, S. Science should save all, not just some. Science 2024, 384(6690), 581. [Google Scholar] [CrossRef]
  91. Tat, E.; Bhatt, D. L.; Rabbat, M. G. Addressing bias: Artificial intelligence in cardiovascular medicine. The Lancet Digital Health 2020, 2(12), e635–e636. [Google Scholar] [CrossRef]
  92. Clerc Liaudat, C.; Vaucher, P.; De Francesco, T.; Jaunin-Stalder, N.; Herzig, L.; Verdon, F.; et al. Sex/gender bias in the management of chest pain in ambulatory care. Women's Health 2018, 14, 1745506518805641. [Google Scholar] [CrossRef] [PubMed]
  93. Lee, P.; Le Saux, M.; Siegel, R.; Goyal, M.; Chen, C.; Ma, Y.; et al. Racial and ethnic disparities in the management of acute pain in US emergency departments: Meta-analysis and systematic review. The American Journal of Emergency Medicine 2019, 37(9), 1770–1777. [Google Scholar] [CrossRef] [PubMed]
  94. Tahhan, A. S.; Vaduganathan, M.; Greene, S. J.; Alrohaibani, A.; Raad, M.; Gafeer, M.; et al. Enrollment of older patients, women, and racial/ethnic minority groups in contemporary acute coronary syndrome clinical trials: A systematic review. JAMA Cardiology 2020, 5(6), 714–722. [Google Scholar] [CrossRef] [PubMed]
  95. Briggs, C. L. Incommunicable: Decolonizing perspectives on language and health. American Anthropologist 2024, 126(1), 83–95. [Google Scholar] [CrossRef]
  96. Nelson, A. Unequal treatment: Confronting racial and ethnic disparities in health care. Journal of the National Medical Association 2002, 94(8), 666. [Google Scholar] [PubMed]
  97. Obermeyer, Z.; Topol, E. J. Artificial intelligence, bias, and patients' perspectives. The Lancet 2021, 397(10289), 2038. [Google Scholar] [CrossRef]
  98. Cau, R.; Pisu, F.; Suri, J. S.; Saba, L. Addressing hidden risks: Systematic review of artificial intelligence biases across racial and ethnic groups in cardiovascular diseases. European Journal of Radiology 2024, 111867. [Google Scholar] [CrossRef]
  99. Jones, O.; Matin, R.; Van der Schaar, M.; Bhayankaram, K. P.; Ranmuthu, C.; Islam, M.; et al. Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings: A systematic review. The Lancet Digital Health 2022, 4(6), e466–e476. [Google Scholar] [CrossRef]
  100. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366(6464), 447–453. [Google Scholar] [CrossRef]
  101. Nordling, L. A fairer way forward for AI in health care. Nature 2019, 573(7775), S103–S105. [Google Scholar] [CrossRef]
  102. Lupton, D. Quantifying the body: Monitoring and measuring health in the age of mHealth technologies. Critical Public Health 2013, 23(4), 393–403. [Google Scholar] [CrossRef]