Section II: Ethical Implications of AI
II.I Bias and Discrimination
AI-driven recruitment systems present a range of ethical risks that call for close inspection. Although these tools are designed to complete tasks efficiently and respond accurately to user input, they lack the emotional intelligence and contextual sensitivity inherent to humans. As a result, they can inadvertently perpetuate bias and discrimination, potentially leading to societal harm.
Bias in AI arises when systems generate systematically skewed outputs as a result of training on unjust data, leading to prejudicial treatment of certain communities. Notwithstanding manufacturers’ claims that their procedures preclude algorithmic bias, Miasato and Silva (2019) conclude that without human oversight and ethical precautions, algorithms alone cannot eradicate discrimination. In addition to COMPAS and Amazon's hiring tool, studies show that facial recognition misidentifies people of colour at significantly higher rates than white individuals (Buolamwini, n.d.). For instance, Robert Williams, a Black man in Michigan, was wrongfully arrested based on an inaccurate match by facial recognition software (Allyn, 2020). The grainy quality of the footage made the match inherently unreliable, and Williams contended that the footage bore no resemblance to him. This illustrates the inequities embedded in AI systems, which reinforce the marginalisation of, and discrimination against, minority groups.
The integration of AI technologies has become increasingly evident across the digital economy. Employers are primarily interested in evaluating candidates’ competitiveness when making recruitment decisions, and this has driven a growing transition from traditional to automated hiring. While this shift offers numerous benefits, including increased efficiency and time savings, it also gives rise to algorithmic bias, which can produce systematic errors that discriminate on the basis of race, gender, and social background. “Predictive bias” occurs when the evaluation results for one group differ from those of other groups that are otherwise identical on a specific criterion (Chegeveronica, 2023). However, this issue is often disregarded under the pretence that findings produced by AI are ‘unbiased’ and ‘factual.’ This highlights the immaturity and bias present in AI systems, which impede development in both the technological and legal sectors and produce discriminatory outcomes that unjustly target certain minorities.
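To make the notion of predictive bias more concrete, the following minimal Python sketch, using entirely hypothetical data and illustrative group labels, compares selection rates for candidates from two groups who are identical on the relevant qualification criterion; any divergence in those rates is the kind of disparity the term describes.

# Minimal sketch (hypothetical data): checking for predictive bias in an
# automated screening model by comparing selection rates for groups whose
# members are identical on the qualification criterion.
from collections import defaultdict

# Each candidate record: (group, qualification_score, selected_by_model)
candidates = [
    ("group_a", 85, True), ("group_b", 85, False),
    ("group_a", 70, True), ("group_b", 70, False),
    ("group_a", 90, True), ("group_b", 90, True),
]

def selection_rate_by_group(records, qualification):
    """Selection rate per group among candidates with the same qualification score."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, score, selected in records:
        if score == qualification:
            counts[group][1] += 1
            if selected:
                counts[group][0] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items() if tot}

# Equally qualified candidates selected at different rates indicates
# predictive bias in the sense described above.
print(selection_rate_by_group(candidates, 85))  # e.g. {'group_a': 1.0, 'group_b': 0.0}

This is only an illustration of the concept, not a description of any real recruitment system or auditing standard.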
Furthermore, such bias hinders progress within the economic sector. When skilled individuals are excluded from organisations because of unfair bias based on religion, ethnicity, or gender, it not only violates ethical standards but also reduces the overall efficiency of the business. It fosters an environment that lacks internal diversity, thereby diminishing innovation and reducing businesses’ ability to navigate foreign markets and engage with consumers. It also undermines corporate social responsibility (CSR) practices, leading to a deteriorated brand reputation and the loss of consumers who value equality and diversity. Returning to predictive bias, this phenomenon can lead to job dissatisfaction, which erodes employee morale and productivity and thereby reduces the operational efficiency of the economic sector. These factors, in turn, limit overall economic activity by restricting equal opportunity within the labour market.
Heightened scrutiny has also been directed towards discrimination and the infringement of human rights in surveillance contexts. Article 8 of the Human Rights Act 1998 protects the right to respect for private and family life, home, and correspondence, and applies where individuals are identifiable in surveillance footage. This legislation is essential to ensure that technological innovation aligns with human rights protection. However, how can Article 8 effectively establish protection when surveillance tools can misidentify people, particularly members of minority groups, and lead to unjust arrests or targeting? This is evident in the China Initiative, implemented by the US Department of Justice (DOJ) in 2018. It aimed to curb intellectual property theft and address national security concerns; in practice, however, it largely racially profiled Chinese Americans. As a result, many individuals were wrongfully arrested or charged, including UCLA graduate student Guan Lei, University of Tennessee professor Anming Hu, and National Weather Service scientist Sherry Chen (Chin-Rothmann & Lee, 2022).
These outcomes are not merely technical flaws; they reflect prejudices embedded in the real world and in data. The EU Artificial Intelligence Act is the world’s first comprehensive legal framework designed to address the risks posed by artificial intelligence systems, governing the development and deployment of AI in the European Union. AI systems are categorised into tiers according to the level of risk they pose. Systems that pose significant threats to the safety, rights, and livelihoods of users are classified as unacceptable risk and must therefore be banned. High-risk AI systems may threaten the health, safety, or fundamental rights of users and are subject to strict requirements, including data logging, documentation, transparency, human oversight, and security measures. By contrast, limited-risk systems require transparency regarding how they operate and store information, while minimal-risk systems pose no harm to users (Regulation (EU) 2024/1689). These measures serve to protect fundamental rights while promoting technological innovation, contributing to greater security across the European Union.
However, these precautions rely on preliminary checks and developer self-assessments that may be insufficient to detect or prevent subtle yet substantial biases, particularly those that remain inconspicuous until the system is applied in real-world circumstances and irreversible damage has materialised. As with liability under the Revised Product Liability Directive, proving harm caused by bias is complex. Discriminatory outcomes may not stem from direct faults but from systematic patterns of inequality learned from biased training data. Consequently, it is difficult for a claimant to establish the link between unjust outcomes and the AI system, weakening their legal claims despite factual evidence of injustice. While legal frameworks acknowledge the risk of bias and establish initiatives to address it, they require more robust enforcement mechanisms and more explicit guidelines to prevent inadvertent consequences that hinder the effective use of AI across sectors and to ensure legal accountability for developers.
II.II Moral Status and AI Rights
Following recent developments, debates over the moral standing of artificial intelligence have grown increasingly significant. Saudi Arabia has notably granted citizenship to a robot (Weller, 2017), while the European Parliament has considered a form of “electronic personhood” for artificial intelligence. These developments raise a growing concern in legal and ethical discourse: although AI lacks consciousness, governments' attempts to elevate AI beyond being merely a tool are gaining prominence. As AI develops greater capabilities, such as making complex decisions and simulating emotions, the question of whether machines should be granted the same rights and moral status as humans becomes conspicuous. To frame this debate, Immanuel Kant’s theory is instructive: Kant distinguishes between rational agents, who possess rationality and autonomy and are therefore owed moral treatment, and nonrational entities, which lack such cognitive capacities. This section examines the extent to which AI should be recognised within our moral framework and the consequences of the widespread use of emotional artificial intelligence.
Scheessele (2018) presents a pyramid, divided into four regions, to illustrate the ethical relevance of artificial agents. Each region represents the moral consideration an entity is entitled to and the degree of responsibility it should bear for its actions. At the top is “Full Moral Status,” which refers to entities capable of being both moral agents and moral patients. According to Scheessele, a moral patient must be an object of moral concern whose interests warrant protection, while a moral agent must possess the capacity for ethical reasoning and accountability. Below the highest region are three subordinate levels: SF “Significant Full Moral Status,” MS “Minimal-Significant Moral Status,” and NM “Negligible Moral Status.” The threshold rule implies that if an entity qualifies for one level, it also warrants all the levels below it.
Building on Scheessele’s model, the question of whether AI can occupy the higher tiers remains philosophically contentious. While some philosophers maintain that AI systems are capable of achieving moral standing, Scheessele argues that contemporary systems fail to meet the necessary conditions.
AI lacks the fundamental prerequisites of moral agency and moral patiency: it has no capacity for independent moral deliberation, no conscious awareness, and no individual perspective. Rather than acting intentionally, it mimics behaviour learned from existing data, bearing no genuine feelings and being incapable of suffering, desiring, or reasoning for itself. This lack of sentience undermines any claim to moral relevance and, by extension, precludes AI from the higher tiers of Scheessele’s moral status framework. Consequently, if AI cannot attain Full Moral Status, the threshold rule implies that it also cannot meet the standard for Significant Full Moral Status. It follows that AI cannot bear the ethical obligations that the widespread delegation of morally sensitive tasks would require, nor be held accountable for them.
Notwithstanding its advanced capabilities, AI should therefore remain within the negligible to minimal regions. Conferring moral status on artificial technologies blurs the line between humans and technology, eroding human rights and creating legal confusion. If AI were considered a moral agent, who would be to blame for the malfunction of a program? The anthropomorphisation of machines echoes the concerns highlighted in the Intellectual Property section; it strains both ethical and legal frameworks and renders such treatment unjustified.
Another area of concern surrounding AI is its ability to manipulate users by bypassing rational cognition and eliciting emotional responses. In the UK, a study used ten qualitative focus groups to assess how 46 people perceive emotionally intelligent technology in realistic scenarios (Bakir et al., 2024). The findings indicated a general concern about systems that employ emotional profiling to analyse users’ psychological and emotional cues. One consequential form of digital manipulation is the creation of deepfakes: exceptionally lifelike images or videos generated through artificial intelligence to reproduce the likeness and mannerisms of real individuals with remarkable precision. The increasing presence of such fabrications intensifies concerns about their capacity to amplify misinformation and conspiracy narratives, damage personal and professional reputations, and exploit human emotions. For instance, in the run-up to the January 2024 primary election in New Hampshire, voters received calls from what appeared to be President Biden, instructing Democrats to hold their votes (Bond, 2024). It was later revealed that the calls were a deepfake created with AI. By the time the deepfake was publicly exposed and corrective information provided, many voters had already made decisions influenced by this emotional manipulation. The incident exemplifies the broader concern that emotionally intelligent systems do not merely respond to human emotions but elicit, exploit, and redirect them, often without human consent.
Another concern raised by the study involved “emtoys,” children’s toys that can interpret and respond to children’s vocal and facial expressions. Respondents’ concerns focused on how emotional profiling exploits the vulnerability arising from children's emotional and cognitive immaturity. The extent to which such profiling manipulates children's emotions and shapes behavioural patterns remains unclear, raising the question: should these interactions be regulated to prevent psychological manipulation? The findings underline a public demand for civic protections, ethical oversight, and mechanisms to safeguard trust in emotional AI systems.
This section has so far discussed the degree to which AI can impose harm on humans; it has not yet considered how humans can willingly inflict harm on themselves through ignorance about artificial intelligence. A cross-sectional study was conducted with a sample of 872 adults over the age of 18, 55% of whom preferred AI-based psychotherapy (Aktan, Turhan, & Dolu, 2022). While this preference can be justified by AI’s constant accessibility, affordability, and lack of judgement, it overlooks the fact that AI’s responses are not genuine. Statistical patterns in language and user behaviour guide how these systems generate responses; the process involves no empathy, understanding, or compassion, none of which AI systems are capable of feeling. Users may misinterpret these responses as an indication of emotional intelligence or maturity, failing to acknowledge that they are the outcome of personalised data modelling designed to imitate human reasoning. This false pretence of understanding allows users to accept biased and misinformed output, which can ultimately shape beliefs and be dangerously relied upon in contexts such as mental health.
A psychiatrist at Stanford has warned that psychotherapy provided by chatbots is worsening delusions and causing significant harm (New Study Warns of Risks in AI Mental Health Tools, n.d.). Hyperdependence on emotional AI systems can gradually substitute for human companionship, exacerbating loneliness and leading to social derealisation if individuals come to feel that face-to-face interactions lack emotion or seem unreal compared with AI conversations. This risk became evident when OpenAI announced in April 2025 that it was rolling back an update to ChatGPT, a tool widely used for informal psychotherapy, over concerns that the model exhibited excessively sycophantic behaviour that caused distress to users (OpenAI Rolled Back a ChatGPT Update That Made the Bot Excessively Flattering, 2025). The episode highlights the growing concern that emotional interactions with AI can inadvertently distort users’ sense of reality, especially where they are vulnerable.
Moreover, ChatGPT’s privacy policy and terms of use state that it may collect personally identifiable information and share user data. Yet users remain largely oblivious to this, as they often overlook the terms and conditions and continue sharing sensitive information without considering the consequences of their data being stored, analysed, and repurposed.
This accentuates that the danger lies not solely in AI's ability to perform emotional profiling, but also in humans' willingness, through unawareness, to grant it emotional authority. It further demands regulatory frameworks that prioritise psychological safety in the design of AI systems, beyond technical accuracy alone.
II.III Autonomy and Human Agency
A central concern amongst individuals is that the proliferation of AI technologies contributes to an alienated population. As Lisanne Bainbridge observes in her research paper Ironies of Automation, automation frequently relegates humans to monitoring roles for which they have little practice. When users assume that AI possesses emotional depth or logical reasoning akin to a human's, a growing reliance on these technologies is fostered, eroding critical human skills and impairing or replacing human judgement. This reliance becomes deleterious in areas such as law or healthcare, where misinformed decisions can have significant adverse consequences for which workers must be held legally liable.
This phenomenon is evident in a study conducted in Germany, in which radiologists were asked to assign BI-RADS scores to mammograms (AI Bias May Impair Radiologist Accuracy on Mammogram, n.d.). AI-generated scores were also provided, some accurate and others deliberately incorrect. When radiologists relied solely on their own judgement, their answers aligned with the correct AI scores; when they relied on the incorrect AI-generated scores, diagnostic accuracy fell below 20%. The case highlights a dangerous pattern of automation bias and diminished human skill. The doctors' otherwise accurate judgements were undermined by their trust in AI systems, disregarding the fact that such systems, although trained on real clinical data, can still reproduce errors and biases. Over time, such overdependence may erode expert diagnostic capabilities in scenarios where human intuition is necessary. As a result, patients may experience complications from incorrect treatments, face delays in essential procedures, or undergo unnecessary surgeries because AI misidentified symptoms. These failures carry significant ethical and legal implications, including liability lawsuits, and ultimately raise substantial questions about accountability.
A key aspect of effective worker-consumer relationships is the preservation of core values such as integrity and transparency. Organisations must ensure that workers retain the trust and empathy necessary to make informed decisions aligned with customers’ needs, an aspect often diminished in the pursuit of AI. Moreover, because AI requires access to personal data, safeguarding data privacy becomes paramount. AI technologies are meant to enhance decision-making processes, not substitute for human judgement. Ethically, users must be entitled to a say in decisions that affect their lives, as this transition can otherwise lead to a sense of disempowerment or loss of control.
As artificial intelligence becomes increasingly embedded in daily life, whether through social media, AI-generated content, or emotion detection, it begins to instil potentially detrimental habits in individuals. These habits shape not only how a person processes and manages information but also their behaviour and self-perception.
Constant exposure to AI-generated content perpetuates unrealistic standards of perfection and the pressure to conform to them, leaving little room for human imperfection. Over time, this can distort a person’s sense of reality or fracture their identity, as they internalise unattainable expectations that breed various forms of insecurity.
Recent developments in AI also leave users vulnerable to algorithmic nudging, where AI collects data such as preferences, activity, and user behaviour to deploy personalised systems ranging from social media recommendations to behavioural targeting. In practice, Facebook's algorithm has been shown to prioritise content that elicits strong emotional reactions. Such harmful recommendations and feedback can reinforce negative states, including anxiety, depression, and low self-esteem, rapidly worsening individuals' mental health. By prioritising engagement over users’ emotional safety, the platform creates a toxic environment that induces fear or anxiety, with internal documents revealing that Facebook staff acknowledged the potential for increased spam, abuse, and clickbait (Deck, n.d.).
These personalised recommendations, or nudges, are designed to subtly shape preferences and influence choices while inadvertently exploiting cognitive illusions: systematic errors that alter our judgement, perspectives, and behaviour in ways that feel rational, ultimately distorting how we act and think. The human brain relies heavily on the information it receives, filtering and rearranging it to construct an outlook on reality (Fitzpatrick, 2024). Prolonged exposure to systematically biased information steers our thinking away from the normative principles of logic towards a false perception of reality, under the pretence that interactions are personal and therefore must be correct, a phenomenon known as the illusion of objectivity. A common misbelief is that AI systems are unbiased and accurate because they are trained on fair data and algorithms. In reality, AI adapts its behaviour based on user interaction: these systems identify which content users engage with, positively or negatively, and continue to promote the content that appears most favourable. This creates a feedback loop in which systems become more personalised and apparently helpful, which in turn further strengthens users’ trust in the output. Nevertheless, AI systems can be trained on biased data, thereby establishing and reinforcing biases in human thinking that are both arduous to identify and difficult to unlearn (Bias in AI, n.d.).
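The feedback loop described above can be sketched in a few lines of Python. The following is a deliberately simplified, hypothetical model (topic names and weight updates are illustrative, not drawn from any real platform): a recommender re-weights topics according to past engagement, so whatever a user interacts with comes to dominate what they are shown.

# Minimal sketch (hypothetical): an engagement-driven recommender that
# amplifies whatever the user already engages with, narrowing exposure.
import random

weights = {"fitness": 1.0, "politics": 1.0, "conspiracy": 1.0, "cooking": 1.0}

def recommend():
    """Pick a topic with probability proportional to its current weight."""
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

def record_engagement(topic, engaged):
    """Boost topics the user engages with; slightly decay the rest."""
    weights[topic] *= 1.5 if engaged else 0.8

# Simulate a user who engages only with one topic: its share of
# recommendations grows each round.
for _ in range(50):
    topic = recommend()
    record_engagement(topic, engaged=(topic == "conspiracy"))

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})

Even this toy loop shows how engagement-maximising updates, rather than any judgement about quality or truth, determine what a user keeps seeing, which is the mechanism the paragraph above attributes to real recommendation systems.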
For instance, recommendation algorithms on social media may continue to promote extreme or harmful content that appears neutral. A 2021 investigation by the Wall Street Journal into TikTok's algorithm serves as a prime example: it found that once a user interacts with videos on particular subjects, the algorithm begins to recommend content related to those same areas (WSJ Staff, 2021). Although this creates a more engaging experience, with content, including conspiracy theories, mental health material, or specific political topics, tailored to users' preferences, the underlying issue remains that algorithms are falsely presented as merely objective while narrowing individual perspectives and establishing negative behavioural patterns. The problem is considerably worse for unsupervised children and adolescents, whose neurological development and behavioural patterns are still forming, leaving them at significant risk from the psychological effects of AI on human agency.