Preprint
Article

This version is not peer-reviewed.

Ethical Challenges in the Current Digital Landscape: AI, Robotics and Quantum Computing

Submitted: 06 May 2025
Posted: 07 May 2025

Abstract
The rapid advancement of Artificial Intelligence (AI), robotics, and quantum computing presents significant ethical and societal challenges. Consequently, the impact on society requires reflection and a thorough analysis of implications that will require adaptation and coping strategies. Among those actions, it becomes necessary to explore the emerging ethical dilemmas arising from these technologies, including issues of fairness, accountability, privacy, security, the future of employment and the implications for education and workforce development. It is also necessary to explore how AI can perpetuate biases, how AI surveillance threatens privacy, and the risks posed by quantum computing to digital security. The lack of transparency in AI decision-making and the potential for weaponisation are also key concerns. Furthermore, the disruptive impact of automation on labour markets must be analysed along with a fundamental rethinking of education to equip future generations with ethical reasoning and adaptability. As a result, the study concludes that it becomes necessary to emphasise the need for proactive ethical considerations, anticipatory regulation, and global cooperation to ensure these powerful technologies serve humanity's broader interests.
Subject: Engineering - Other

1. Introduction

As artificial intelligence and robotics rapidly evolve, they challenge existing ethical frameworks, moral values, and legal systems, making it increasingly urgent to rethink how society governs and integrates these technologies. From diverse areas of industry to the routine decisions of daily life, long-standing values are undergoing profound change. This transformation calls for a critical reassessment and adaptation of laws and regulations that have remained largely unchanged for decades. As AI systems become integrated into critical decision-making processes, concerns mount in sectors such as healthcare, finance, and law enforcement, where outcomes may significantly affect humans and ecosystems. Meanwhile, robotics is progressing rapidly, reshaping fields such as manufacturing, logistics, and personal assistance. More recently, quantum computing has emerged as a transformative technology with the potential to revolutionise problem-solving in fields from cryptography to drug discovery. However, as these technologies grow more sophisticated and autonomous, they give rise to profound ethical concerns, ranging from fairness and accountability to privacy and security, and to questions that affect everyone, such as the future of employment.
AI models trained on historical data can inherit and amplify societal biases, leading to discriminatory outcomes in hiring, lending, and law enforcement [1]. These biases tend to be exacerbated by the lack of diversity in AI training data and in the composition of development teams, which can result in technologies that fail to consider everyone's needs, with potential adverse effects on marginalised groups [2]. Additionally, AI-driven surveillance systems present significant threats to personal privacy, as mass data collection by governments and corporations increasingly blurs the line between security and individual rights [3]. Similar concerns extend to quantum computing, which, while still in its early stages, will increasingly present complex challenges to digital security. The ability of quantum computers to break cryptographic schemes once considered secure poses a significant risk to the integrity of online transactions, encrypted communications, and national security systems, creating an urgent need for post-quantum cryptographic solutions [4].
Another fundamental challenge is the degree of accountability in AI decision-making. Many AI systems operate as opaque “black boxes,” making it difficult for users, regulators, and even developers to understand how decisions are made [5]. This lack of transparency raises ethical and legal concerns, particularly in high-stakes applications such as autonomous vehicles, predictive policing, and automated medical diagnostics [6]. When an AI-driven car causes an accident or an algorithm wrongly denies someone access to healthcare or parole, determining responsibility remains a significant legal and moral dilemma [7]. Furthermore, the potential for AI to be weaponised, whether through autonomous military systems or algorithmic manipulation of public opinion, raises urgent ethical concerns about security and governance in an increasingly digital world [8]. Quantum computing introduces a new kind of risk, adding layers of computational complexity that make it even more difficult to assess the reasoning behind algorithmic outputs.
The development of quantum machine learning compounds these risks, further compromising transparency and fairness: it operates on fundamentally different principles than classical computing, leading to ethical dilemmas that have yet to be fully explored [9].
Beyond these ethical challenges, AI and robotics are increasingly reshaping labour markets. The automation of routine tasks has already disrupted traditional industries, displacing workers in manufacturing, logistics, and customer service roles [10]. While some argue that AI will create new job opportunities in data science, software engineering, robotics, and AI ethics, the transition requires substantial reskilling efforts, and a large percentage of workers may be unable to adapt [11]. The concern is not limited to blue-collar jobs; white-collar professions such as legal research, medical diagnostics, and financial analysis are also becoming increasingly automated, posing challenges for traditionally high-skilled workers [12]. Additionally, physical jobs are being transformed as AI-driven robots become more adept at tasks in agriculture, construction, and even elderly care, raising concerns about economic inequality and workforce stratification [13]. Humans may still be necessary, but in far smaller numbers than in the current labour market. The introduction of quantum computing will likely accelerate these disruptions, particularly in fields that depend on fast and complex data processing, such as finance, logistics, and pharmaceutical research. With its potential to optimise supply chains, simulate new materials, and accelerate AI training, quantum technology could create new industries while rendering certain traditional roles obsolete, demanding an even greater emphasis on lifelong learning and workforce adaptability [14].
These technological shifts urge a fundamental rethinking of education and workforce development. The traditional educational model, which emphasises specialisation and static career paths, is becoming outdated as exponential technologies impose a new societal organisation through their capabilities, accessibility, adaptability, and pervasiveness. Humans need time to learn and adapt, and incur costs in doing so, all of which makes coping with rapid technological change harder. Future generations must be prepared with stronger critical thinking, adaptability, and interdisciplinary knowledge to navigate an AI-, robotics- and quantum-driven economy. Schools and universities must emphasise problem-solving, computational literacy, and ethical reasoning, ensuring that students are not only proficient in technical skills but also understand the broader ethical and societal implications of AI, robotics, and quantum computing [15]. Lifelong learning, once merely recommended, has now become a necessity, as workers must continually update their skills to remain relevant in an ever-evolving job market. Governments, businesses, and educational institutions must work together to create policies that facilitate continuous learning and support workers in transition [16]. However, uncertainty will remain a permanent condition, as technological advancements will continue to disrupt industries faster than regulatory frameworks and educational systems can adapt.
The traditional notion of a lifelong career no longer holds, and individuals may need to pivot across multiple disciplines throughout their working lives, challenging existing labour policies and social safety nets [17]. Addressing these issues requires a holistic approach, balancing technological innovation with ethical considerations and proactive policy interventions to ensure that AI, robotics, and quantum computing serve the broader interests of humanity, now and in the future. These considerations lead us to the following research question: How can ethical frameworks be restructured to address the accountability and transparency challenges posed by the convergence of AI, robotics, and quantum computing?

2. Ethical and Social Concerns in AI and Robotics

The ethical and social implications of AI and robotics continue to escalate in scope and complexity. As these technologies expand into healthcare, finance, education, and law enforcement, they bring systemic risks that are not just technical but deeply human and societal.
One of the primary concerns is the lack of clear ethical standards embedded in governance and in the development of the new infrastructures required by technological advancements. According to the European Commission, ethical considerations in AI must be integrated from the ground up, from data acquisition to algorithmic design and deployment [18]. Systems lacking this integration often reflect structural injustices or biased sources embedded in training data or the developers’ assumptions.
A second concern is the opacity and unaccountability of algorithmic systems. Mittelstadt et al. [19] have shown that algorithmic processes often lack transparency and traceability, leading to ‘black box’ decision-making that users and regulators cannot interrogate. These opaque mechanisms are particularly problematic in high-risk domains such as predictive policing, medical diagnostics, and social services, where outcomes have real consequences for individuals.
The third issue is the fragmentation of AI ethics guidelines. A global review by Jobin, Ienca, and Vayena [20] identified over 80 sets of ethical AI principles published worldwide, which differ significantly in focus and enforceability. Most remain aspirational and lack legal or institutional mechanisms for implementation, which undermines their practical value.
On top of these concerns, ethical tensions persist between innovation and governance. The drive for competitive advantage, especially among tech giants and governments, has created a race for performance that often sidelines ethical reflection. In military and surveillance contexts, for example, ethical questions are frequently subordinated to strategic or commercial interests. It is therefore necessary to analyse the implications of these different technologies and how they will impact society, ethical governance and, ultimately, humans.

3. Quantum Computing and Its Emerging Ethical Challenges

Quantum computing is still in its early stages, but its future impact could be more disruptive than that of AI itself. In 2019, Arute et al. [21] demonstrated quantum supremacy by performing a computation in minutes that would have taken classical supercomputers thousands of years. This breakthrough underscores the urgency of developing ethical and regulatory frameworks for a technology that could reshape the foundations of a digital society.
Cybersecurity becomes, for multiple reasons, a primary concern. Quantum computers threaten to break most current cryptographic protocols, potentially rendering secure communications, from banking to healthcare, vulnerable to attack. While post-quantum cryptography is being actively developed, the geopolitical and ethical implications of such vulnerability remain severe.
Quantum technologies also raise questions of equity and access. If a small number of actors dominate quantum innovation, the imbalance in computational power could exacerbate global digital gaps, concentrating decision-making capacity in already powerful nations and tech corporations. Floridi and Cowls [22] propose a unified ethical framework, beneficence, non-maleficence, autonomy, justice, and explicability, that could serve as a baseline for such emerging technologies.
Another concern involves the lack of interpretability in quantum algorithms, particularly in quantum machine learning. As models evolve beyond human comprehension, ensuring transparency and accountability will become increasingly difficult, since humans are progressively excluded from any possible solution.
One of the most urgent ethical threats of quantum computing lies in its capacity to break classical encryption protocols, which form the foundation of modern communication, data protection, and financial security. Once quantum processors surpass a certain threshold, widely used algorithms such as RSA and ECC will become obsolete, potentially exposing everything from government communications to personal health data to unauthorised access. As Bernstein and Lange (2017) have shown, the transition to post-quantum cryptography is both necessary and urgent to prevent a global cryptographic collapse [23]; a toy illustration of the underlying mechanism appears at the end of this passage.
From another angle, the convergence of quantum computing with the Internet of Things (IoT) and the Internet of Persons (IoP) poses new risks in domains where human behavioural, physiological, and even emotional signals are increasingly harvested by ambient systems. Luis-Ferreira and Jardim-Goncalves (2013) proposed a behavioural framework capable of capturing emotional information within IoT environments [24], while in related work they discuss how models inspired by neurophysiology and brain activity underpin the evolution of IoP systems that directly integrate sensorial, cognitive, and affective layers of human experience into networked infrastructures [25]. If emotional and physiological data streams become accessible to quantum-powered systems, whether to malicious actors or under-regulated platforms, the potential for psychological profiling, surveillance, and behavioural manipulation could reach unprecedented dimensions. This scenario raises critical questions about the ethical boundaries of interaction with human data and demands pre-emptive governance frameworks for quantum-age bio-cyber systems.
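To make the cryptographic threat concrete, here is a purely classical toy sketch, in Python, of the number-theoretic core of Shor's algorithm; none of this code comes from the cited works, and the miniature modulus is chosen for illustration. It brute-forces the period r of a mod n, which is the one step a quantum computer performs exponentially faster, and then recovers a factor of a tiny RSA-style modulus. Real RSA moduli are thousands of bits long and far beyond any classical brute force.

```python
# Toy sketch: the classical number theory behind Shor's algorithm.
# A quantum computer would find the period r exponentially faster;
# here we brute-force it, which only works for tiny numbers.
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r = 1 (mod n) -- the step quantum speeds up."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Recover a non-trivial factor of n from the period of a mod n."""
    for a in range(2, n):
        g = gcd(a, n)
        if g > 1:
            return g                        # lucky guess already shares a factor
        r = find_period(a, n)               # the quantum-accelerated step
        if r % 2 == 0:                      # need an even period
            c = gcd(pow(a, r // 2) - 1, n)
            if 1 < c < n:
                return c                    # non-trivial factor found

# A miniature "RSA modulus": n = 3233 = 61 * 53.
print(shor_factor(3233))  # 61, breaking the toy key
```

Because the only quantum ingredient is fast period finding, post-quantum proposals deliberately move away from problems that carry this hidden periodic structure.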
Finally, quantum technologies are expected to accelerate the development of artificial intelligence, particularly through advances in quantum machine learning and optimisation. This convergence will likely amplify long-standing ethical concerns, including algorithmic opacity, autonomy, and systemic inequality, while introducing novel challenges such as quantum unpredictability, data integrity risks, and geopolitical imbalance. As Arute et al. (2019) demonstrated in their quantum supremacy experiment, quantum processors have the potential to outperform classical systems exponentially, underscoring the urgency of ethical frameworks that can anticipate disruptive shifts in computational power [21]. Given these concerns, it is urgent to understand the consequences for society, in particular how education and the workforce must be empowered to take part in this transformation, avoiding the risk of excluding humans from a balanced future society.

4. Implications for Education and Workforce Development

The ethical challenges described above translate into a deep perturbation in the foundations of education and employment. The current model, built for stability and linear progression, is unprepared for the volatility and interdisciplinarity of the digital era.
Dignum [26] argues that ethics must become a core component of education in computer science, engineering, and business curricula. This is not just about compliance, but about cultivating a mindset of responsibility, human-centred design, and societal foresight.
Mittelstadt et al. [19] further reinforce the need for algorithmic literacy among not just developers but also end-users, regulators, and decision-makers. As AI and robotics influence more sectors, public understanding becomes essential for democratic oversight.
From a governance perspective, IBM [27] foresees that quantum readiness will reshape educational pathways to include quantum theory, ethics, and computational reasoning.
Thus, the future of education must not only respond to market needs but also promote interdisciplinary adaptability, where students learn to pivot between technology, ethics, and human values. Lifelong learning must be institutionalised, supported by labour policies that encourage mid-career retraining and cross-sector mobility.

5. Discussion: Reframing Ethics for Convergent Technologies

In this discussion, we analyse how traditional ethical approaches must be reconfigured to address the convergence of artificial intelligence, robotics, and quantum computing. The rapid evolution and interconnection of these technologies demand frameworks that overcome fragmented guidelines and recognise the inherent limits of explainability and accountability in an increasing number of opaque systems. It is also essential to consider emergent threats such as mass surveillance using affective data, and to recognise the fundamental role of education in creating an infrastructure for ethical governance.

5.1. From Fragmentation to Cohesion: Rethinking Ethical Guidelines

The ethical guidelines for emerging technologies have proliferated in recent years, yet they often remain fragmented and narrowly focused on individual disciplines or specific corporate and national interests. As mentioned before, a study identified more than 80 sets of AI ethics guidelines worldwide [20], illustrating the diverse normative perspectives, from corporate social responsibility frameworks to government-issued directives, currently informing how AI systems are developed and deployed. This multiplicity, while indicative of increased global awareness of ethical concerns, can also lead to “ethical arbitrage,” whereby organisations strategically select the guidelines that impose the fewest constraints, weakening their overall impact.
A major challenge, therefore, is establishing a unified ethical charter capable of serving as a broadly accepted reference across jurisdictions, industries, and academic disciplines. Such a charter must align the interests of varied stakeholders and also accommodate the rapid evolution and convergence of technologies like AI, robotics, and quantum computing. A promising starting point for this effort is offered by Floridi and Cowls’ framework, which emphasises five fundamental principles: beneficence, non-maleficence, autonomy, justice, and explicability [22]. By situating each of these principles at the core of technology policy and governance, the framework helps unify disparate ethical concerns, shifting focus from isolated disciplinary contexts to an integrated perspective on human-centred technology.
An important strength of such a unifying approach is that it reduces “ethical silos,” where separate teams, corporations, or countries might develop guidelines in isolation, leading to inconsistencies in how core values (e.g., privacy, fairness, safety) are interpreted and enforced. In contrast, a cohesive, principles-based model encourages technology developers, regulators, and policymakers to operate within a shared normative landscape. This approach facilitates dialogue and mutual understanding among diverse stakeholders while providing a structure that can adapt to the dynamic nature of AI, robotics, and quantum computing. Furthermore, the cross-cutting nature of these principles reinforces the idea that ethics must be integrated at each stage of technological design and deployment, whether it concerns algorithmic fairness in AI, risk-assessment mechanisms in autonomous robots, or data encryption methods in quantum systems.
Ultimately, the convergence of technologies raises new ethical complexities that exceed the scope of single-issue guidelines. By establishing a unified ethical charter rooted in widely agreed-upon principles, stakeholders can better anticipate and address emergent challenges. Cohesion in ethics frameworks thus becomes essential to ensure that innovations in AI, robotics, and quantum computing serve collective human values and do not inadvertently exacerbate inequality, infringe on civil liberties, or undermine public trust in these transformative technologies.

5.2. The Limits of Explainability in AI and Quantum Systems

Explainability has long been advocated as a key requirement for ethical technology design, enabling stakeholders to understand, trust, and contest automated decisions. In classical AI, Burrell (2016) identifies three sources of opacity: intentional (proprietary secrecy), illiterate (lack of domain knowledge), and intrinsic (complexity of models), all of which can obscure how an algorithm arrives at its output [5]. When we extend these concerns to quantum systems, explainability faces uniquely insurmountable barriers rooted in the very foundations of quantum mechanics, for several reasons.
First, quantum information is encoded in superposed states of qubits, where a system simultaneously occupies multiple configurations until measurement collapses it to a single outcome. This collapse is inherently probabilistic, meaning that the same input can yield different outputs on repeated runs. Unlike classical models, where a decision can, in principle, be retraced via a deterministic computation graph, quantum circuits generate outcomes through interference patterns that have no one-to-one mapping to intermediate logical steps. Schuld and Petruccione (2018) note that the non-commutative algebra governing qubit operations renders internal states largely inaccessible to direct inspection, making it difficult even for quantum specialists to reconstruct the rationale behind a given inference [9]. (A short simulation following this three-point enumeration illustrates these phenomena.)
Second, entanglement, the distinguishing characteristic of quantum advantage, binds qubits in correlations that defy classical intuition. A measurement on one qubit instantaneously affects its entangled partner, regardless of physical separation. While this property enables powerful algorithms, it also scatters causal responsibility across multiple system components. In complex quantum machine learning models such as variational quantum circuits, entangled subroutines can subtly influence final outputs in ways that cannot be decomposed into human-comprehensible steps, as described by Biamonte et al. (2017) in their survey of quantum-enhanced learning [28].
Third, because quantum states cannot be cloned or perfectly copied (the no-cloning theorem), one cannot freely duplicate internal states for analysis without disturbing them. Quantum state tomography offers a partial remedy, reconstructing an approximation of the state from repeated measurements, but this approach incurs exponential resource costs as system size grows. Schuld and Killoran (2019) demonstrate that even for modestly sized quantum feature maps, the overhead of tomography makes post-hoc explainability impractical for near-term quantum devices [29].
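The three barriers above can be made tangible with a small classical simulation. The following NumPy sketch is illustrative only; it simulates textbook states rather than any system from the cited works, and all names are our own. It shows probabilistic collapse, entangled correlations, and the exponential bookkeeping behind tomography.

```python
# Minimal NumPy sketch (not a real quantum device) of why quantum
# outputs resist step-by-step explanation.
import numpy as np

rng = np.random.default_rng()

def sample(state, shots):
    """Sample measurement outcomes from a state vector (Born rule)."""
    probs = np.abs(state) ** 2
    return rng.choice(len(state), size=shots, p=probs)

# 1. Superposition: H|0> yields |0> and |1> each with probability 1/2,
#    so an identical input gives different outputs across runs.
plus = np.array([1, 1]) / np.sqrt(2)
print(np.bincount(sample(plus, 1000), minlength=2))   # roughly [500, 500]

# 2. Entanglement: the Bell state (|00> + |11>)/sqrt(2). Each qubit
#    alone looks random, yet the two outcomes always coincide.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
outcomes = sample(bell, 1000)                          # indices encode bitstrings
print(set(format(int(o), "02b") for o in outcomes))    # only {'00', '11'}

# 3. Tomography cost: fully characterising an n-qubit state requires
#    on the order of 4**n - 1 expectation values -- exponential growth.
for n in (2, 4, 8, 16):
    print(n, "qubits ->", 4**n - 1, "expectation values")
```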
Taken together, these quantum-specific phenomena place hard ceilings on explainability. Whereas classical systems can gradually be pruned, approximated, or subjected to local-interpreter techniques (e.g., LIME or SHAP), no analogous toolbox yet exists for quantum circuits. Rather than assuming full transparency, regulators and practitioners must acknowledge these intrinsic limits and develop complementary safeguards: for example, mandated Quantum Algorithm Impact Assessments that document the intended behaviour and known failure modes of quantum models; standardised logging of circuit architectures and parameter settings; and hybrid audits that combine classical surrogates with statistical validation of quantum outputs. A hypothetical sketch of such an assessment record is given below. Recognising and addressing the practical impossibility of a complete “white-box” understanding of quantum systems is crucial for any ethical governance framework that seeks to harness quantum computing responsibly.
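As a concrete but hypothetical illustration of these safeguards, the record below sketches what a Quantum Algorithm Impact Assessment entry might contain. Every field name and value is an assumption made for illustration; no such standard yet exists.

```python
# Hypothetical sketch of a Quantum Algorithm Impact Assessment record,
# following the safeguards proposed above. Field names are illustrative
# assumptions, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class QuantumAlgorithmImpactAssessment:
    system_name: str
    intended_behaviour: str            # what the model is meant to decide
    circuit_architecture: str          # logged ansatz / gate layout
    parameter_settings: dict           # trained or fixed circuit parameters
    known_failure_modes: list = field(default_factory=list)
    classical_surrogate: str = ""      # surrogate model used in hybrid audits
    validation_shots: int = 0          # samples used for statistical checks

qaia = QuantumAlgorithmImpactAssessment(
    system_name="credit-risk-vqc",     # fictitious example system
    intended_behaviour="binary approve/deny recommendation",
    circuit_architecture="4-qubit hardware-efficient ansatz, depth 6",
    parameter_settings={"theta": [0.12, 1.57, 0.88]},
    known_failure_modes=["unstable outputs below 1000 measurement shots"],
    classical_surrogate="logistic regression fitted to sampled outputs",
    validation_shots=10_000,
)
```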

5.3. Accountability in the Quantum Age

Determining responsibility for decisions made by autonomous systems in the era of convergent technologies is a formidable challenge. As systems become more complex and decisions more distributed, traditional legal and moral frameworks that rely on clear lines of causation and intent lose their effectiveness. Scholars such as Bathaee [6] have argued that conventional models of accountability are insufficient to deal with the opaque, black-box nature of AI, compounded by the unpredictable influence of quantum computing processes. This calls for an expanded model of accountability that incorporates distributed responsibility and algorithmic traceability. In such a framework, liability cannot be attributed solely on the basis of immediate human oversight but should be shared across the designers, developers, and institutional systems that maintain these technologies. The development of audit trails and evaluation mechanisms, coupled with robust regulatory oversight, will be essential to manage the multifaceted nature of accountability in the quantum age.

5.4. Surveillance and Affective Data: Ethical Red Lines

The integration of AI and quantum capabilities in processing vast amounts of emotional and physiological data introduces unprecedented risks to privacy and personal autonomy. Affective computing systems draw on facial expressions, vocal tone, heart rate variability, galvanic skin response, and other biometric signals to infer an individual’s emotional state and even deeper traits such as personality, political views, or mental health status. Kosinski, Stillwell, and Graepel (2013) demonstrated that simple digital footprints can predict highly sensitive personal attributes, such as intelligence or sexual orientation, with accuracy far beyond human judgment [30]. When coupled with the exponentially greater processing power of quantum-enhanced algorithms, these inferences can be executed in real time and at population scale, enabling continuous, large-scale surveillance that not only tracks behaviour but anticipates and influences it.
Calvo and D’Mello’s survey of affect detection methods highlights the transformative potential and risks of fusing physiological and contextual data to model human emotion [31]. Whereas today’s systems already raise concerns about covert data collection, such as inferring stress or deception from micro-expressions, quantum-accelerated analysis could compress what now requires hours of batch processing into instantaneous insights. Under such conditions, corporate or state actors might deploy ambient sensors and IoT devices to harvest affective signals ubiquitously, producing dynamic psychological profiles that adapt with every interaction. This degree of intrusion far exceeds established notions of privacy and autonomy, threatening to normalise a world in which our innermost feelings become publicly exposed.
In response, clear ethical red lines must be enshrined in policy and practice. First, any collection of affective or biometric data must be based on truly informed, revocable consent; users must understand what data are gathered and be aware of the full scope of inferences drawn and their potential uses. Second, sensitive inferences, such as mental-health diagnoses or personality profiling, should be explicitly prohibited unless employed for critical, supervised interventions (e.g., medical diagnostics under strict clinical oversight). Third, transparency mandates must require providers to disclose when affective-surveillance systems are active and to publish independent audits of their accuracy and decision-making processes. Finally, data-minimisation principles should restrict retention of raw biometric streams to the shortest time necessary, with robust encryption and access controls preventing unauthorised selling or reuse. A minimal sketch of how such rules could be encoded in software follows.
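A minimal sketch, assuming hypothetical record and field names, of how the consent and data-minimisation red lines above could be enforced in code:

```python
# Illustrative enforcement of the red lines above; all names are
# assumptions, not an existing system or standard.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Red line 2: sensitive inferences prohibited outside clinical oversight.
PROHIBITED_INFERENCES = {"mental_health", "personality_profile"}

@dataclass
class AffectiveRecord:
    subject_id: str
    signal: str                  # e.g. "heart_rate_variability"
    captured_at: datetime        # timezone-aware capture timestamp
    consent_granted: bool        # revocable, informed consent (red line 1)

def retain(records, max_age=timedelta(hours=24)):
    """Data minimisation (red line 4): keep only consented, recent streams."""
    now = datetime.now(timezone.utc)
    return [r for r in records
            if r.consent_granted and now - r.captured_at <= max_age]

def infer(record, inference_type):
    """Gate every inference against the prohibited list (red line 2)."""
    if inference_type in PROHIBITED_INFERENCES:
        raise PermissionError(f"{inference_type} inference is prohibited "
                              "outside supervised clinical settings")
    ...  # permitted inference logic would go here
```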
Without these safeguards, the convergence of AI, robotics, and quantum computing risks entrenching power imbalances, eroding civil liberties, and transforming private inner life into a surveillance frontier, an outcome fundamentally at odds with human dignity and self-determination.

5.5. Education as Infrastructure for Ethical Governance

The effective governance of convergent technologies depends fundamentally on education’s ability to keep pace with rapid innovation and to cultivate the ethical capacities of all stakeholders. Traditional educational models, rooted in siloed disciplines and linear career trajectories, struggle to address the speed and interdisciplinarity of AI, robotics, and quantum computing. Curricula in computer science and engineering often focus on technical competencies, leaving students unprepared to deal with questions of fairness, accountability, or privacy. Educators themselves often lack the training or resources to integrate ethical reasoning into complex, technology-heavy courses, while institutional inertia and rigid accreditation standards further slow much-needed curricular reform.
At the same time, education represents the most powerful lever for societal adaptation. Embedding ethical reasoning as a core learning outcome, rather than as an elective topic, ensures that future developers, policymakers, and end-users approach technology with a reflexive, human-centred mindset. Dignum (2008) argues that ethical considerations must be woven into every stage of technical education, from the design of algorithms to the deployment of autonomous systems, so that students internalise values of responsibility and social insight [26].
Beyond formal degree programmes, lifelong learning ecosystems are critical. UNESCO’s “Futures of Education” initiative highlights the need for continuous upskilling in computational ethics and digital citizenship to navigate an ever-shifting landscape of technological risks and opportunities [14]. Public education campaigns, short courses, and industry–academia partnerships can extend ethical literacy to mid-career professionals, civil servants, and the broader public.
Interdisciplinary approaches, such as case-based learning that brings together computer scientists, philosophers, social scientists, and legal scholars, foster the thought space necessary to examine real-world dilemmas. Cox (2021) demonstrates how literature-based “design fictions” can help students explore the societal impacts of AI and robotics in higher education, prompting critical reflection on both intended and unintended consequences [15].
By transforming education into an infrastructure for ethical governance, societies can build collective resilience against the novel threats posed by convergent technologies, whether in the form of privacy-invasive surveillance, algorithmic bias, or cryptographic vulnerability. A well-educated populace, fluent in both the capabilities and the limits of these systems, will be better equipped to demand accountability, shape better policies, and ensure that technological progress aligns with democratic values and human dignity.
Figure 1. Ethical Challenges in a society driven by AI, Robotics and Quantum Computing.

6. Theoretical Foundations for Ethical Analysis

In our study, a pluralistic ethical approach is adopted, combining deontological ethics, consequentialism, and virtue ethics, to capture the multifaceted dilemmas posed by AI, robotics, and quantum computing. Deontological ethics requires adherence to duties, rights, and universal moral rules, thus grounding critiques of surveillance practices that infringe on individual autonomy or discriminatory algorithmic decisions. Consequentialism, by contrast, evaluates the balance of benefits and harms, proving indispensable when assessing quantum breakthroughs whose societal advantages, such as accelerated drug discovery, must be weighed against existential risks to data security and economic inequity. Virtue ethics turns attention to the character, intentions, and institutional cultures that shape technological design, arguing that transparency, integrity, and civic-mindedness must be cultivated among developers, regulators, and corporate leaders to foster trust and social responsibility.
Yet, the pace of technological innovation often exceeds the capacity of these classical frameworks to offer timely guidance. Societal norms, legal systems, and educational curricula tend to evolve incrementally, leaving a widening gap between what technology makes possible and what existing ethics and regulations can manage. To bridge this divide, the ethical analysis must be augmented by meta-ethical and governance strategies that continuously recalibrate moral judgments in the face of emerging realities. “Real-time technology assessment” (Guston & Sarewitz, 2002) introduces mechanisms for ongoing oversight and reflexive learning, integrating stakeholder feedback, scientific foresight, and normative debate so that policies can adapt as quantum-enhanced AI systems evolve [32]. Likewise, the “responsible innovation” framework (Stilgoe, Owen, & Macnaghten, 2013) insists that ethical appraisals occur upstream, during research and development, through inclusive dialogues that surface societal values, anticipate unintended consequences, and steer technological trajectories accordingly [33].
Through this layered methodology, where deontological duties, utilitarian calculations, and virtues of character are interleaved with anticipatory governance and responsible innovation, society gains a dynamic ethical compass rather than relying solely on static prescriptions. This evolving equilibrium of moral reasoning ensures that as AI, robotics, and quantum computing push into uncharted domains, our ethical frameworks remain responsive, resilient, and rooted in shared human values [34].

7. Methodology

The methodological approach employed in this study is qualitative and interdisciplinary, integrating normative ethical analysis with empirical case evaluations and stakeholder insights. We began with a systematic review of relevant literature, drawing on academic databases such as Scopus and Web of Science. Our inclusion criteria prioritised peer-reviewed articles published since 2015 that address ethical dilemmas in AI, robotics, and quantum computing. This review was guided by a structured search strategy, using keywords related to fairness, accountability, transparency, and privacy, and involved iterative screening of titles, abstracts, and full texts to synthesise both theoretical perspectives and empirical findings into an integrated ethical framework.
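As an illustration of the screening step, the following sketch encodes the stated inclusion criteria (peer-reviewed, published since 2015, matching the fairness, accountability, transparency, and privacy keywords) as a simple filter. The record structure and example entries are assumptions for illustration, not the actual screening tool used in this study.

```python
# Hedged sketch of the screening filter implied by the inclusion criteria.
KEYWORDS = {"fairness", "accountability", "transparency", "privacy"}

def passes_screening(record: dict) -> bool:
    """Apply the inclusion criteria to one bibliographic record."""
    return (record.get("peer_reviewed", False)
            and record.get("year", 0) >= 2015
            and any(k in record.get("abstract", "").lower() for k in KEYWORDS))

papers = [
    {"title": "Auditing black-box models", "year": 2019,
     "peer_reviewed": True, "abstract": "We study accountability in ..."},
    {"title": "Quantum blog post", "year": 2024,
     "peer_reviewed": False, "abstract": "Notes on transparency ..."},
]
print([p["title"] for p in papers if passes_screening(p)])
# ['Auditing black-box models']
```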
In addition to the literature review, we conducted comparative analyses of several case studies. These include high-visibility autonomous vehicle incidents (e.g., fatal crashes involving self-driving prototypes), large-scale AI surveillance programmes deployed in urban environments, and corporate and governmental “quantum readiness” initiatives such as IBM’s public roadmaps and national quantum strategies.
Finally, an ecosystem analysis was employed to identify the roles and responsibilities of key actors: technology developers, platform operators, regulatory agencies, civil-society groups, and end users. Through analysis of policy documents and organisational charters, we traced the flow of decision-making authority and resource control, yielding insights into power dynamics and the distribution of ethical accountability. Combining the literature review, case study analysis, ethical coding, and stakeholder identification, our methodology offers a comprehensive, multi-angled perspective that bridges ethical theory and actionable policy recommendations.

8. Policy Implications

The accumulated risk posed by AI, robotics, and quantum computing requires a proactive policy response that anticipates ethical challenges rather than reacting, probably too late, to them. First, there is an urgent need to establish unified ethical standards that bridge the current normative fragmentation and inadequacy. Drawing on the heritage of international bioethics treaties, a predominant digital ethics agreement could serve as a legal and regulatory reference across diverse systems and jurisdictions. Such a standard would harmonise principles related to fairness, transparency, and accountability [22].
Second, transparency is a critical asset. Policymakers should require technology developers to conduct comprehensive Algorithmic Impact Assessments and to report on model architectures and training data. This level of transparency is essential not only for public accountability but also for facilitating regulatory oversight and redress mechanisms.
Third, there must be a combined effort to protect data privacy, especially in the context of post-quantum vulnerabilities. Regulations should be enforced to ensure that sensitive information, especially affective and biometric data, is rigorously safeguarded through advanced encryption protocols and monitored under stringent compliance regimes [4].
Fourth, educational reforms must prioritise developing a workforce capable of navigating complex ethical landscapes. A collaborative effort among academic institutions, industry, and government is needed to design interdisciplinary curricula that build the necessary technical, ethical, and legal competencies. Such reforms would promote a culture of continuous learning and resilience in the face of rapid technological advancement.
Finally, a framework for global collaboration is essential. The establishment of an International Digital Ethics Council (DEC), modelled on the structure of organisations like the Intergovernmental Panel on Climate Change, would facilitate the sharing of best practices, enable coordinated responses to emergent crises, and ensure that ethical standards evolve alongside technological innovation. Figure 2 represents the main areas to be studied, developed, and recommended by a DEC to promote societal resilience.
Figure 2. DEC areas for societal resilience in an AI, Robotics and Quantum Computing landscape.
Entities such as a DEC may start from these areas and elaborate updated recommendations as developments in AI, Robotics and Quantum Computing unfold.

9. Conclusions

In this study, a myriad of ethical challenges emerged at the intersection of AI, robotics, and quantum computing. An immediate consequence is that traditional, single-issue guidelines cannot keep pace with technologies that amplify one another’s capabilities and risks. It follows that fragmented ethics frameworks must give way to a cohesive charter grounded in shared principles (beneficence, non-maleficence, autonomy, justice, and explicability) to prevent “ethical arbitrage” and ensure consistent protections across jurisdictions. The limits of explainability in both deep learning and quantum circuits call for new oversight tools, such as quantum algorithm impact assessments and standardised audit trails, rather than reliance on full “white-box” transparency. Accountability must be reframed to distribute responsibility across developers, institutions, and regulatory bodies, while clear red lines around affective and biometric surveillance are imperative to safeguard privacy and autonomy against real-time, large-scale profiling.
Central to all these responses is education: only by embedding ethical reasoning throughout technical curricula, fostering lifelong learning in computational ethics, and creating interdisciplinary spaces for dialogue can societies cultivate the collective capacity to govern convergent technologies. As innovation accelerates, anticipatory governance and responsible innovation offer dynamic mechanisms to align research trajectories with societal values. The challenge is urgent but surmountable. By embracing cohesive ethics, adaptive oversight, and an empowered citizenry, we can steer these powerful technologies toward human prosperity and avoid entrenching new forms of inequity. The decisions we make today will determine whether this digital convergence becomes our greatest triumph or our most profound test.

Author Contributions

Both authors contributed equally to all sections.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 2022, 54, 1–35.
2. Hajian, S.; Bonchi, F.; Castillo, C. Algorithmic Bias. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; ACM: New York, NY, USA, 2016; pp. 2125–2126.
3. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Available online: https://www.hbs.edu/faculty/Pages/item.aspx?num=56791 (accessed on 20 March 2025).
4. Bernstein, D.J.; Lange, T. Post-quantum cryptography. Nature 2017, 549, 188–194.
5. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016, 3.
6. Bathaee, Y. The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard J. Law & Technol. 2018, 31, 889.
7. Koopman, P.; Wagner, M. Autonomous Vehicle Safety: An Interdisciplinary Challenge. IEEE Intell. Transp. Syst. Mag. 2017, 9, 90–96.
8. Bhatnagar, S.; Cotton, T.; Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. 2018.
9. Schuld, M.; Petruccione, F. Supervised Learning with Quantum Computers; Springer: Cham, Switzerland, 2018.
10. Frey, C.B.; Osborne, M.A. The future of employment: How susceptible are jobs to computerisation? Technol. Forecast. Soc. Change 2017, 114, 254–280.
11. Vemuri, V.K. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, by Erik Brynjolfsson and Andrew McAfee. J. Inf. Technol. Case Appl. Res. 2014, 16, 112–115.
12. Acemoglu, D.; Restrepo, P. Robots and Jobs: Evidence from US Labor Markets. J. Polit. Econ. 2020, 128, 2188–2244.
13. OECD. OECD Employment Outlook 2019; OECD Publishing: Paris, France, 2019.
14. UNESCO. Futures of Education: Learning to Become; UNESCO Digital Library, 2019. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000370801 (accessed on 20 March 2025).
15. Cox, A.M. Exploring the impact of Artificial Intelligence and robots on higher education through literature-based design fictions. Int. J. Educ. Technol. High. Educ. 2021, 18, 1–19.
16. UNESCO. Futures of Education: Learning to Become. 2019. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000370801 (accessed on 20 March 2025).
17. World Economic Forum. The Future of Jobs; Davos, Switzerland, 2016. Available online: https://www.weforum.org/reports/the-future-of-jobs (accessed on 20 March 2025).
18. European Commission. Ethics Guidelines for Trustworthy AI. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 20 March 2025).
19. Mittelstadt, B.D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The ethics of algorithms: Mapping the debate. Big Data Soc. 2016, 3.
20. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399.
21. Arute, F.; Arya, K.; Babbush, R.; Bacon, D.; Bardin, J.C.; Barends, R.; Biswas, R.; Boixo, S.; Brandao, F.G.S.L.; Buell, D.A.; et al. Quantum supremacy using a programmable superconducting processor. Nature 2019, 574, 505–510.
22. Floridi, L.; Cowls, J. A Unified Framework of Five Principles for AI in Society. Harvard Data Sci. Rev. 2019, 1.
23. Bernstein, D.J.; Lange, T. Post-quantum cryptography: Dealing with the fallout of physics success. Cryptol. ePrint Arch. 2017. Available online: https://eprint.iacr.org/2017/314 (accessed on 20 March 2025).
24. Luis-Ferreira, F.; Jardim-Goncalves, R. A behavioral framework for capturing emotional information in an internet of things environment. AIP Conf. Proc. 2013, 1558, 1368–1371.
25. Luis-Ferreira, F.; Jardim-Goncalves, R. Internet of Persons and Things inspired on Brain Models and Neurophysiology. Comput. Methods Soc. Sci. 2013, 1, 45–55.
26. Westra, J.; van Hasselt, H.; Dignum, V.; Dignum, F. On-line adapting games using agent organizations. In Proceedings of the 2008 IEEE Symposium on Computational Intelligence and Games (CIG 2008); pp. 243–250.
27. IBM Institute for Business Value. Make Quantum Readiness Real. Available online: https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/quantum-readiness (accessed on 20 March 2025).
28. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202.
29. Schuld, M.; Killoran, N. Quantum Machine Learning in Feature Hilbert Spaces. Phys. Rev. Lett. 2019, 122, 040504.
30. Kosinski, M.; Stillwell, D.; Graepel, T. Private traits and attributes are predictable from digital records of human behavior. Proc. Natl. Acad. Sci. USA 2013, 110, 5802–5805.
31. Calvo, R.A.; D’Mello, S. Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications. IEEE Trans. Affect. Comput. 2010, 1, 18–37.
32. Guston, D.H.; Sarewitz, D. Real-time technology assessment. Technol. Soc. 2002, 24, 93–109.
33. Stilgoe, J.; Owen, R.; Macnaghten, P. Developing a framework for responsible innovation. Res. Policy 2013, 42, 1568–1580.
34. Krafft, P.M.; Young, M.; Kominers, S.D. The ethics of algorithms: Mapping the debate. Big Data & Soc. 2020, 7, 1–21.