Preprint
Article

This version is not peer-reviewed.

Riding AI to Utopia or Dystopia? NLP, LLM and News Informatics Insights for Artificial Intelligence Impacts on Education, Healthcare, Robotics and Careers, Changing Human Society

Submitted: 01 May 2026
Posted: 04 May 2026


Abstract
Artificial Intelligence (AI) is accelerating societal transformation at an unprecedented pace, generating both utopian aspirations and dystopian anxieties. Human civilization has undergone fundamental changes through every technological revolution starting with the Industrial Age and continuing through the digital era as AI emerges as the next paradigm shift. This paper studies the public discourse on AI by analyzing extensive news headlines on AI using natural language processing (NLP) methods. Our research applies sentiment analysis and topic modeling to a global dataset across education, healthcare, robotics, careers, and society to identify the dominant narratives shaping public perception. Media coverage presents AI as a dual force that brings human benefits and existential dangers according to our research findings. By moving beyond the utopia-dystopia dichotomy, we show that AI's social effects will emerge from the dynamic relationship between governance systems, ethical protections, and human-enhancive AI (HEAI) frameworks. We provide practical insights about AI's future impact and present strategies for maximizing AI benefits while mitigating its risks.
Keywords: 
Introduction
“All technology has the potential for both good and evil. But what matters is how we use it.”
— Tim Berners-Lee, Computer Scientist

Etymology of Utopia and Dystopia

For a long time, humanity has harbored a dual fascination with technological innovation, especially with the advent of AI. This tension between hope and fear is mirrored in literature, which explores both utopian and dystopian dimensions of imagination. Long before Thomas More coined the term utopia in his 1516 book Utopia (More, 1949), the concept had deep philosophical roots. Derived from the Greek ou-topos ("no place") and eu-topos ("good place"), utopia refers to an ideal realm free from pain, suffering, and inequality. Dystopia, by contrast, denotes a place where the abuse of technology, social oppression, or environmental destruction is taken to the extreme; its name combines the Greek δυσ ("bad") and τόπος ("place"). The term first appeared as "Dustopia" (UspeakGreek, 2023) in Lewis Henry Younge's Utopia: or Apollo's Golden Days (Younge, 1747), and was later refined by John Stuart Mill in an 1868 Parliamentary speech (Hansard Commons), where he reframed utopia's prefix from ou- ("not") to eu- ("good"), making dystopia its antonym (Mill, 1988). This dichotomy plays out vividly in cinema. Blade Runner (1982) and 1984 (1984) depict futures where AI and authoritarian regimes suppress human freedom. The Matrix (1999) expands this idea with a simulated utopia masking a machine-dominated reality (Jackson & Paste Staff, 2023). The series Black Mirror warns of technological overreach through near-future cautionary tales (jmuwa, 2020). On the utopian side, Tomorrowland (2015) imagines a world shaped by human ingenuity and scientific discovery. Her (2013) explores AI-human relationships that oscillate between emotional fulfillment and troubling dependency. WALL-E (2008) contrasts a dystopian Earth ruined by consumerism with the hope of a renewed, human-centered society (Utopia & Dystopia, n.d.).

Framing AI Utopian and Dystopian Narratives in News Headlines

As AI technologies, particularly Large Language Models (LLMs), rapidly evolve, long-standing tensions between utopian and dystopian futures are increasingly reflected in media narratives (Cools et al., 2022). Our research applies NLP and LLMs to analyze AI-related news headlines, uncovering how media narratives shape our collective expectations: whether we are bracing for disaster or anticipating an era of boundless progress. Systematic analysis of this coverage exposes patterns reflecting both optimism (disease eradication, automation of mundane tasks, democratized knowledge) and anxiety (job loss, eroded human agency, technological dominance), as seen in Figure 1 (Samuel et al., 2024; Khosla, 2024; Silver, 2023). Grouped clusters in Figure 1 illustrate themes such as technology, business, ethics, risks, media, and key industry players, giving a clear view of AI's public discourse. News media simultaneously highlight AI breakthroughs, such as DeepMind's AlphaFold (Jumper et al., 2021; Service, 2018), and raise concerns about issues such as bias, job loss, and surveillance (Dmitracova, 2025; The Guardian, 2024; Reynaud & Untersinger, 2024; Klepper & Swenson, 2023). Headlines serve both as historical records and as tools that shape public expectations of technology. The influence of news media is evident in how coverage of natural disasters, such as earthquakes, shapes public policy and technological development (Jamieson & Van Belle, 2019). Likewise, early newspaper coverage of automobile accidents played an important role in shaping traffic regulations and safety features, showing how media narratives can catalyze societal adaptation to new technologies (SafeTREC, n.d.; Gupta et al., 2021). AI coverage follows this pattern, influencing the push for ethical development and shaping the direction of innovation (Ouchchy et al., 2020).
In contrast, news about AI-generated deepfakes and election interference has intensified concerns about AI’s ability to manipulate reality (Bond, 2024). Moreover, the rise of China’s DeepSeek (DeepSeek-AI et al., 2025) has sparked U.S. national security concerns, challenged American AI dominance, and reshaped the public perception of global AI geopolitics (Baptista, 2025; Rundle, 2025; Bratton, 2025). These conflicting narratives shape the public's collective expectations: are we on the brink of a technological utopia, or are we accelerating toward a dystopian crisis?

The Binary Fallacy: Beyond Pure Utopia and Dystopia in AI Development

"A dystopia is a utopia that's gone wrong."
— Ursula K. Le Guin
As AI capabilities grow, public discussion often swings between extremes. On one hand, AI is seen as a force for good: enhancing human potential, accelerating scientific breakthroughs, boosting economies, and solving global issues. For example, machine learning models now outperform doctors in early disease detection (Bajwa et al., 2021), reinforcement learning is used in logistics (Rolf et al., 2022), and natural language interfaces make information more accessible. Philosopher Nick Bostrom explores this hopeful future in Deep Utopia (Bostrom, 2024), examining what happens if AI improves our lives without harming them. In Superintelligence (Bostrom, 2014), he warned about AI's dystopian risks; in Deep Utopia, however, he envisions a "solved world" in which AI meets all material, intellectual, and emotional needs. This idea echoes predictions from leaders like Nvidia's Jensen Huang and DeepMind's Mustafa Suleyman, who believe AI will democratize discovery and make expert knowledge widely available (The Week UK, 2024). Such views reflect old myths of abundance, like the Land of Cockaigne, reimagined for our tech-driven age (Cuthbertson, 2024). But these gains raise deeper questions: if AI replaces human labor, solves key problems, and extends life, what will we do? More importantly, what will define our purpose in a world where machines meet all needs? Bostrom warns of a "plastic utopia," in which people risk becoming passive consumers of artificial satisfaction (Singal, 2024). This dilemma has historical roots: economist John Maynard Keynes predicted that technology would cut working hours to 15 per week (Wladawsky-Berger, 2017). The challenge, then, is managing AI's risks while preserving human purpose and agency. At the other end is the dystopian view: AI drives job loss, inequality, surveillance, and existential threats. White-collar jobs in finance, customer support, and content creation are already facing automation-driven losses (Smith, 2024).
More worryingly, AI’s speed may outpace human oversight. Issues like autonomous weapons, deepfakes, disinformation, and artificial general intelligence (AGI) raise alarms about unintended consequences and loss of control. Eliezer Yudkowsky warns that poorly aligned superintelligent AI could eventually optimize for goals that disregard human survival entirely (Yudkowsky, 2023). Other concerns include reliance on AI, algorithmic bias, and AI’s ability to distort reality. As language models churn out content cheaply, the risk of reality distortion grows. These developments intensify concerns about whether AI can be aligned with human values while preserving agency and oversight.
Yet, as history has often shown with technological revolutions, the future will likely be neither a perfect utopia nor an absolute dystopia, but a paradoxical blend of both. The progress of an innovation unfolds along a continuum marked by breakthroughs and setbacks, empowerment and displacement, possibilities and risks. The development of nuclear technology in the mid-20th century simultaneously offered solutions to energy scarcity while posing existential risks through weaponization. More recently, the Internet democratized knowledge while giving rise to misinformation and cyber threats. The evolution of the automobile exemplifies this dual narrative: early headlines both celebrated its promise and warned of its dangers, mirroring today's AI coverage. Headlines from the 1900s about "horseless carriages" causing public panic echo modern concerns about autonomous vehicles, demonstrating how media frames technological transitions (Winton, 2017). News headlines therefore serve as snapshots of these transformative moments, capturing both the euphoria surrounding AI's potential and the fear of unforeseen consequences. Understanding these narratives helps us see how society adapts to disruptive change. The future depends not on technology alone, but on how we choose to govern, integrate, and share its benefits. Will we create an AI-powered utopia, freeing humanity from toil and scarcity? Or will we face a dystopia where AI tightens its grip, deepens inequality, and spirals out of control? The most realistic future may be what Kevin Kelly calls a "protopia": one of gradual improvement rather than sudden transformation. In this view, AI will become embedded in everyday life, evolving to become safer, more ethical, and more useful over time (Shermer, 2024). Many researchers are focused on refining systems to serve human needs reliably, rather than chasing extremes.
Rather than framing AI in binaries, we analyze media narratives and current trends to understand how public discourse reflects both its aspirations and anxieties. The concept of Human-Enhancive AI (HEAI) is central to this outlook (Kashyap et al., 2024). HEAI emphasizes a human-above-AI strategy to maximize human potential. The future is not predetermined: it will be shaped by human choices in education, policy, technological design, and the philosophies, such as HEAI, that we use to structure AI's role in society.

The Myth of a Pure Utopia and the Low Probability of a Complete Dystopia

Achieving a perfect AI-driven utopia is highly unlikely due to several serious challenges. Biased training data can reinforce social inequalities (Zajko, 2022), and AI power concentrated in a few hands may increase surveillance, erode privacy, and foster authoritarian regimes (Randieri, 2023). Some warn that this could result in a stable but oppressive global regime, far from any utopian ideal. As AI grows more advanced, aligning superintelligence with human values becomes increasingly difficult (Mitchell, 2022). Technological limits also persist, with AI still lacking intuition, empathy, and ethical judgment, making it risky for governance. While automation may reduce routine work, it could also cause job loss before new roles are created (Nelson, 2024). Moreover, the increasing integration of AI into healthcare, law enforcement, and public policy may introduce new systematic biases rather than achieving the intended goal of eliminating prejudice. Furthermore, because the definition of a perfect society varies across cultures, AI solutions may benefit some groups while harming others (Randieri, 2023).
However, arguments against a dystopian AI future present several compelling counterpoints as well. AI systems, tools without consciousness or goals, reflect human design and intent (Li et al., 2021). Ethical safeguards, regulatory protocols, and human-in-the-loop designs aim to keep AI aligned with societal values. Much fear about AI comes from anthropomorphic bias, which is the tendency to assign human traits to machines. Since AI lacks self-awareness, many dystopian fears are more about human psychology than actual technology (O’Gieblyn, 2023). While it’s important to stay alert to potential dangers, the rise of a dystopian AI future seems unlikely. Through deliberate oversight, ethical consideration, and effective governance, society possesses the capacity to guide AI development toward outcomes that enhance human welfare while minimizing potential adverse effects.

A Pragmatic Perspective: Mixed Effects of AI

Rather than utopia or dystopia, the future will likely be a complex mix of benefits, disruptions, new opportunities, and risks. As shown in Figure 2, a pragmatic approach acknowledges AI’s dual nature, aiming to maximize its positive impact while minimizing potential harm.
1.
Anticipated Positive Impacts and Emerging Opportunities in an AI-Augmented Future
By 2040, AI is projected to become as essential as the internet, transforming global economies, reshaping industries, and addressing long-standing societal challenges (Jensen, as cited in Rainie & Anderson, 2024). The International Data Corporation (IDC) estimates a $19.9 trillion contribution to the global economy by 2030, with AI investment returns of $4.60 per dollar (IDC, 2024). While concerns about job displacement persist, AI is also expected to create new employment opportunities (WEF, 2025). IDC's Future of Work Employees Survey found that only 3% of workers expect full automation of their roles, while 63% believe AI will enhance rather than replace their work (IDC, 2024). Productivity is expected to soar as AI automates repetitive tasks, enabling humans to focus on creative, strategic pursuits (Olorundare, as cited in Rainie & Anderson, 2024).
In healthcare, AI is already improving diagnostics, virtual care, and precision medicine, and may soon support organ production via 3D/4D printing (Al-Saqaf, as cited in Rainie & Anderson, 2024). Education is benefiting from adaptive AI tutors (Silwal, as cited in Rainie & Anderson, 2024), while personal AI assistants democratize access to finance, mental health, and career planning (Herd, as cited in Rainie & Anderson, 2024). Governance is poised to become more transparent and efficient through AI-assisted policy modeling, real-time fact-checking, and predictive decision-making (Turner, as cited in Rainie & Anderson, 2024). Urban planning, agriculture, entertainment, and climate adaptation are similarly being transformed by AI through AI-enabled free public transit, robotic farming, and immersive virtual experiences (Bairathi, 2025; Jensen & Silwal, as cited in Rainie & Anderson, 2024). A notable frontier in AI research is the development of digital "twins," virtual AI representations of individuals that will assist in decision-making, self-improvement, and lifelong learning (Spohrer, as cited in Rainie & Anderson, 2024). Ben Shneiderman, professor emeritus at the University of Maryland, stresses the need for human-centered AI that supports creativity and social connection. Similarly, Associate Professor Loianno predicts that by 2030, autonomous robots will collaborate, learn, and make high-level decisions with minimal human input, improving efficiency across sectors, provided strong safety protocols are in place (Loianno, as cited in Ziegler, 2024). If paired with fair policies like universal basic income, AI-driven automation could help reduce inequality (Herd & Williams, as cited in Rainie & Anderson, 2024). Meanwhile, the AI companion market, valued at USD 28.19 billion in 2024, is projected to grow at a 30.8% annual rate through 2030 (Grand View Research, 2024). Yet, these advancements raise profound ethical questions.
Leaders like Dario Amodei and Sam Altman envision transformative impacts but caution against unchecked ambitions (Robison, 2024; Pierce, 2024). The choices made today, balancing opportunity with responsibility, will shape whether AI drives shared prosperity or deepens inequality by 2040. Ultimately, the AI-augmented future presents a delicate balance between opportunity and responsibility (Jensen, Olorundare, & Al-Saqaf, as cited in Rainie & Anderson, 2024).
2.
Negative Effects and Emerging Risks in an AI-Augmented Future
Despite its promise, AI introduces significant risks. One major concern is techno-solutionism, the belief that AI can solve all social problems (Littman et al., 2021), which leads to blind trust in automated decisions and ignorance of their biases. Examples include Amazon's 2018 hiring tool, scrapped for showing gender bias (Chang, 2023), and an algorithm by Optum that prioritized white patients over Black patients based on cost assumptions. AI also threatens democratic institutions, enabling misinformation, deepfakes, and social media manipulation (Littman et al., 2021), as seen in cases like the 2016 U.S. election interference and a fake Pentagon explosion image. Its black-box nature limits transparency and accountability, while mass surveillance erodes privacy and empowers authoritarian regimes. Predictive policing, social credit systems, and mass data collection pose ethical and human rights challenges. Economically, AI could deepen inequality by displacing low-income workers and concentrating power in tech monopolies. It also raises IP concerns, as generative models borrow from copyrighted content without credit or compensation, a phenomenon dubbed the "Great Data Heist." These risks are heightened by AI's ability to produce unintended behaviors, especially in reinforcement learning models. Overreliance on AI in safety-critical applications is also risky, as models can hallucinate or provide incorrect information, which could lead to catastrophic consequences in military systems, autonomous vehicles, and healthcare diagnostics (Littman et al., 2021; Rainie & Anderson, 2024). AI also enables impersonation scams, such as voice cloning, and can be used to find software vulnerabilities, increasing cybercrime risks. The Center for AI Safety warns that AI may boost the scale, speed, and success of cyberattacks, increasing geopolitical risks. Lastly, environmental impacts are substantial.
AI training consumes vast amounts of energy. Addressing these issues will require strong ethical standards, clear regulations, and greater transparency to guide AI toward equitable and responsible use (Littman et al., 2021; Rainie & Anderson, 2024; Samuel, 2023).
Given these complexities, this research proposes moving beyond the binary of utopia vs. dystopia by introducing Human-Enhancive AI (HEAI), that is, AI designed to amplify human potential while minimizing harm. Using NLP techniques such as sentiment analysis and topic modeling, we track evolving media narratives around AI. As with past innovations like nuclear energy or the internet, AI's trajectory provokes both hope and fear. This transformative trajectory has raised critical questions about its societal implications and the future it is actively shaping (Samuel, Tripathi & Mema, 2024; Tripathi et al., 2025). The key question is not if AI will shape the future, but how we choose to guide it. This research aims to provide a data-driven perspective on the emerging trajectories of AI's impacts on human society and the choices that lie ahead. The rest of the manuscript is structured as follows. The next section presents a brief literature review across four key domains: education, healthcare, robotics, and careers. Additionally, we examine various societal aspects both within these domains and as a whole. Following the literature review, the methodology section outlines our data collection process and feature engineering. We then detail our exploratory data analysis (EDA) and NLP techniques, including sentiment analysis and topic modeling. Next, we discuss the statistical analysis conducted. Finally, we present the results, followed by a discussion and conclusion, emphasizing the significance of our findings in the context of HEAI.
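To give intuition for the headline-level sentiment analysis mentioned above, the sketch below classifies headlines with a tiny hand-built lexicon. This is purely illustrative: the word lists and the `classify_headline` function are our own assumptions for exposition, not the lexicons or models used in this study, which would rely on established, validated tools.

```python
import re

# Toy sentiment lexicons (illustrative assumptions, not this study's actual resources).
POSITIVE = {"breakthrough", "improve", "improves", "benefit", "benefits", "hope", "boost"}
NEGATIVE = {"fear", "fears", "risk", "risks", "bias", "loss", "threat", "surveillance"}

def classify_headline(headline: str) -> str:
    """Label a headline positive/negative/neutral by counting lexicon hits."""
    tokens = re.findall(r"[a-z]+", headline.lower())
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_headline("AI breakthrough could improve cancer diagnosis"))  # positive
print(classify_headline("Fears grow over AI surveillance and job loss"))    # negative
```

A production pipeline would replace this toy scorer with a robust sentiment model and pair it with topic modeling to surface the thematic clusters discussed throughout the paper.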

Literature Review

The purpose of this literature review is to briefly introduce four key pillars of human society, to serve as an anchor and provide non-exhaustive context for our NLP- and LLM-based analyses.

Education

AI has significantly impacted education, offering both promising applications, such as automation and personalization, and complex challenges that require policy responses. Academic institutions have been reevaluating their systems to ensure intended learning outcomes are preserved in the face of rising usage of LLMs and generative AI tools (Chidipothu et al., 2025). AI tools help educators streamline administrative processes such as grading, lesson planning, attendance tracking, scheduling, and budgeting. This automation enables teachers to focus more on personalized instruction (Zaman, 2023). Furthermore, institutions can analyze large volumes of data using chatbots and analytics platforms to uncover insights that enhance both administrative efficiency and student performance (Ananyi et al., 2023). AI-powered Adaptive Learning Platforms customize instruction to match each student’s pace and proficiency by adjusting materials based on performance data, enhancing engagement, and accommodating diverse learning styles (Piocciochi & Alwabel, 2020; Dutta et al., 2024). Additionally, tools such as speech-to-text, language translation, and early innovations in computer vision and speech recognition have also improved accessibility, especially for individuals with disabilities (Bingham & Carrington, 2018).
On the negative side, algorithmic bias in AI models, often stemming from skewed training data, can disproportionately affect underrepresented racial and ethnic groups. For example, multiple predictive models performed worse for minority groups in academic success forecasting (Hu & Rangwala, 2020; Baker & Hawn, 2022). Overreliance on AI (Zhai et al., 2024) can also hinder students' cognitive development, including decision-making and critical thinking. A specific concern related to generative AI tools like ChatGPT is "hallucination," where plausible but entirely false information is generated. Students are at risk of consuming and relying on such inaccurate content. As OpenAI itself warns, ChatGPT's outputs may sound convincing yet be incorrect (Athaluri et al., 2023). This highlights the need to equip students with critical thinking skills and the ability to validate AI-generated responses using more reliable sources (Mhlanga, 2023). Moreover, the increased integration of AI in education may reduce human interaction in educational settings. A lack of meaningful connection with teachers and peers can negatively affect students' motivation, social development, and overall educational experience (Al-Zahrani, 2024).

Healthcare

AI is transforming healthcare by enhancing diagnostics, personalization, accessibility, and operational efficiency. Algorithms can analyze complex datasets, ranging from medical images and genetic data to electronic health records (Alowais et al., 2023), to detect cancers and cardiovascular and neurological diseases with greater accuracy and speed. In mental health, AI identifies early indicators of conditions like depression, anxiety, and PTSD through behavioral data, speech patterns, and social media activity (Ettman & Galea, 2023). Virtual therapists and chatbots provide 24/7 support, particularly in underserved regions where mental health services are limited (Cross et al., 2024). Additionally, AI enables personalized treatment plans tailored to individual genetic profiles and medical histories, improving outcomes while minimizing side effects (Hill, 2024). On the operational side, AI streamlines administrative tasks such as scheduling and billing, and improves resource allocation and patient flow, thereby reducing costs and enhancing care delivery (Hoose & Králiková, 2024).
Despite its efficiency, the integration of AI into healthcare raises ethical, technical, and regulatory challenges. AI systems lack emotional intelligence crucial for doctor-patient rapport, especially in behavioral health, potentially lowering patient satisfaction and treatment adherence (Cordero, 2024). Heavy reliance on personal data introduces privacy risks and cybersecurity threats and can undermine trust if patients are not fully informed of AI’s role in their treatment (Farhud & Zokaei, 2021). Technical risks like overfitting can result in harmful diagnostic errors (Chustecki, 2024), while algorithmic bias stemming from non-diverse datasets may lead to misdiagnoses and healthcare disparities for marginalized populations (Siafakas & Vasarmidi, 2024). Finally, because AI algorithms can evolve with time, traditional regulatory frameworks struggle to assess their safety and efficacy consistently, complicating accountability and eroding public trust in AI-enabled medical systems (Price, 2019).

Robotics, Automobile and Factories

AI is transforming the automobile industry and logistics by enabling predictive modeling, deep learning, and optimization techniques that enhance supply chain efficiency, sustainability, and strategic decision-making (Didast et al., 2024). In warehouse operations, AI-driven robots and Automated Guided Vehicles (AGVs) improve accuracy, speed, and cost-effectiveness by navigating environments, identifying items, and dynamically allocating tasks (Sodiya et al., 2024; Dehghan et al., 2023). As robotics and AI continue to transform digital logistics, they open avenues for further research in resource orchestration and the development of innovative operational strategies (Rainer Jr. et al., 2025). AI's integration into logistics extends to traffic prediction and management, where real-time data from sensors, GPS, and social media is used for dynamic routing, congestion control, and signal optimization, refining mobility systems before full autonomous adoption becomes widespread (Bharadiya, 2023).
Despite the advantages AI offers in logistics and automotives, it also introduces risks. Autonomous Vehicles (AVs), for instance, present cybersecurity and privacy challenges. Threats like GPS spoofing, ransomware, and data breaches endanger safety and can expose sensitive biometric and location data (Sadaf et al., 2023; Bendiab et al., 2023). Similarly, logistics companies must safeguard vast datasets related to cargo, fleet movement, and delivery operations, making cybersecurity a critical priority (Didast et al., 2024). Furthermore, models often inherit societal prejudices embedded in their training data (Richey Jr. et al., 2023), with real-world consequences in areas like hiring, credit assessment, and law enforcement. In logistics and supply chain management (L&SCM), such biases could result in preferential treatment of certain suppliers, products, or regions, contributing to systemic discrimination and inefficiencies in decision-making.

Jobs and Careers

AI creates new employment opportunities. For example, AI developers, data scientists, and machine learning engineers are in high demand in areas like autonomous vehicles, health tech, and finance, meaning AI opens new professions and sectors for people to work in (Vinson et al., 2024). AI systems can streamline repetitive tasks, enhancing human productivity and shifting the focus toward decision-making and creativity. For example, in marketing and finance, AI tools accelerate data analysis and extraction, enabling workers to make quicker decisions and increasing job satisfaction (Acemoglu & Restrepo, 2019). AI helps individuals acquire new skills, as numerous organizations are providing training initiatives in areas such as programming, data analysis, and the application of AI tools (Frank et al., 2019). AI systems also help small businesses and entrepreneurs by reducing operational costs and allowing for smarter decision-making through the interpretation of data trends (Jumaev, 2024). In healthcare and customer service, for example, AI handles administrative tasks, which gives professionals an opportunity to work with their customers and have a better work-life balance (Tomar et al., 2024). Moreover, AI enables remote work by supporting virtual collaboration and global hiring (Faluyi, 2025).
On the other hand, AI-driven automation leads to significant job displacement, particularly in sectors like manufacturing, administrative support, and customer service. Even white-collar fields such as accounting, legal assistance, and journalism are affected, broadening the impact on the workforce (Faluyi, 2025). AI has also spurred the creation of 'gig' roles that lack stability, benefits, and career growth opportunities, which leads to job dissatisfaction, lower wages, and a growing sense of economic insecurity among the workforce (Acemoglu & Restrepo, 2019). High-skilled workers may also experience wage stagnation as AI performs complex data analysis, legal document reviews, and medical diagnostics more efficiently and cost-effectively (Frank et al., 2019). AI may accelerate employment inequality and exacerbate job polarization: an estimated 2-5% of jobs are likely to be automated, with low-skilled workers facing higher risks of displacement while those with advanced technical skills benefit from new opportunities, creating a labor market divide that can contribute to social tensions and economic instability (Gmyrek, Winkler, & Garganta, 2024). AI's influence extends beyond direct automation, affecting supply chains, service delivery models, and business strategies; these shifts in job availability and requirements force companies to restructure their workforces and reduce the need for traditional job roles (Webb, 2019). This anxiety over job insecurity and occupational stress can negatively impact mental health and reduce job satisfaction. Furthermore, constant monitoring and performance tracking by AI systems can increase burnout rates and reduce productivity (Chui, Manyika, & Miremadi, 2016).
This phenomenon is closely related to the John Henry effect, where workers feel compelled to compete against automated systems, leading to overexertion and detrimental effects on both physical and mental health (Gammon & Bornstein, 2018).

Society

AI is reshaping nearly every facet of society, from government operations and public services to entertainment and personal relationships. It enhances productivity, decision-making, and crisis response, yet also amplifies risks around bias, inequality, and privacy, particularly in low- and middle-income countries with limited regulation and digital infrastructure (Tony Blair Institute for Global Change, 2024). AI-driven tools like ChatGPT support work management and public access, but reliance on biased training data can reinforce discrimination. As AI systems increasingly influence policy, law, and elections, ethical governance becomes essential to protect human rights, maintain accountability, and prevent the consolidation of power among a few tech giants (Elysée Palace, 2025; UNRIC, 2024). In democratic contexts, AI's dual nature is especially apparent. On one hand, chatbots streamline voting information, and fraud detection tools boost electoral trust. On the other, AI enables targeted political ads and deepfake content that manipulate voter behavior and undermine institutions (Bond, 2024; Robins-Early, 2024; Mishra, 2024). Deepfakes featuring celebrities such as Taylor Swift and Elon Musk have heightened public concern over misinformation (Nguyen, 2024), while Netflix's undisclosed use of AI-generated visuals has drawn criticism for threatening media authenticity (Belanger, 2024). The EU's AI Act aims to address these risks by regulating deceptive practices and enhancing transparency. The legal system is similarly challenged. AI expedites case evaluations but complicates questions of accountability and data ethics. Microsoft's Copilot+ Recall feature, for example, faced backlash for unauthorized data collection (Marcinek et al., 2024; Rahman-Jones, 2024). Facial recognition technologies have led to false arrests, most notably that of Porcha Woodruff, an African American woman wrongfully detained due to algorithmic error (Swarns, 2023).
Discriminatory outcomes have also surfaced in AI hiring tools, as evidenced by the iTutorGroup case, where older candidates were filtered out unfairly (EEOC, 2023). These examples underline the urgent need for robust oversight to safeguard civil rights in AI-driven processes. Meanwhile, AI’s impact on personal and environmental spheres is complex. Social robots and virtual assistants offer companionship and reduce loneliness, but may weaken human connection over time. Algorithms that prioritize engagement over meaningful dialogue on social platforms can fuel polarization and reduce empathy (Rodilosso, 2024; Johnston, 2020). Environmentally, AI can optimize energy use and predict climate events, yet model training consumes substantial energy and generates e-waste, often affecting marginalized regions. Calls for “Sustainable AI” reflect a growing recognition that innovation must align with ecological and social responsibility. Ultimately, navigating these tensions requires proactive governance, ethical design, and cross-sector collaboration to ensure AI supports equitable and sustainable progress.

Data

Data Collection and Extraction

For our research, we collected a large dataset of AI-related news headlines using Google News RSS feeds (Google, n.d.) to capture multilingual perspectives from different regions. Articles were gathered between November 9, 2023, and November 11, 2024, with English-language coverage extended to February 12, 2025, to include recent developments. We used Feedparser (McKee & Pilgrim, n.d.), BeautifulSoup (Richardson, 2023), and Requests (Reitz, n.d.) for content retrieval and parsing. Queries included terms like “AI” and “Artificial Intelligence” in over 40 languages. For dynamic pages, ScrapingBee (ScrapingBee, n.d.) was used to render JavaScript content. For consistency, all article titles and sources were translated to English using GoogleTranslator from the deep-translator library (Baccouri, 2020). Translations were done in batches of 20 for performance optimization. The final dataset includes 288,429 records, each containing the article’s publication date, English-translated title and source, and the original language. This structure supports consistent, multilingual analysis of global AI-related media coverage. While GoogleTranslator provides fast and scalable translations across more than 40 languages, it is important to note that automatic translations are not always perfect. In most cases, the tool produced reliable and intelligible English renderings of headlines, sufficient for large-scale analysis. However, nuances such as idiomatic expressions, cultural references, or domain-specific terminology may not always be preserved with complete accuracy (Aiken and Balan, 2011; Groves and Mundt, 2015). Given the volume of data, human validation for every headline was not feasible, but the consistency and general quality of the translations were adequate for capturing sentiment, topics, and linguistic patterns at scale.

Feature Engineering

Following data collection, we performed feature engineering to extract relevant metadata from the news titles and publication dates. This process involved computing the number of characters and words in each title, identifying the day of the week, month, year, and quarter of publication, and determining whether the article was published on a weekend. Additionally, we performed text classification to categorize news articles into relevant themes: Education, Healthcare, Robotics, Career, and Society. This was done using a pattern-matching method and regular expressions with keywords listed in Table 1. The distribution was: Education – 19,465 (6.75%), Healthcare – 13,941 (4.83%), Robotics – 19,372 (6.72%), Career – 40,131 (13.91%), and Society – 35,027 (12.14%). Notably, 62.7% of headlines remained uncategorized. Additionally, 24,627 headlines overlapped across categories, resulting in 88,837 unique category assignments. For balanced analysis, we randomly sampled 10,000 unique headlines per category (without replacement). Since Healthcare had fewer than 10,000 unique entries, additional multi-category headlines labeled as Healthcare were included to meet the target. This approach ensured both structure and broad coverage. We further applied topic modeling and sentiment analysis to the resulting 50,000-headline dataset.
Prefix-based matching (e.g., "educat-" matching "education" or "educator") allowed for morphological variations of key terms; anchoring each prefix at a word boundary prevented matches inside unrelated words that merely contain the pattern. At the same time, standalone keywords (e.g., "bus" not matching "business") and exact acronym detection (e.g., "EV" but not in "event") helped reduce false positives. Multi-word phrases like “self-driving” and “human-robot” were matched as full terms to preserve their meaning. Each theme was encoded as a binary feature per headline. While not manually validated and sensitive to context, this method offers a scalable approach to thematic classification, with scope for refinement using advanced models.
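The matching rules above can be sketched with the standard `re` module. The keyword lists here are illustrative stand-ins for the full lists in Table 1; only the matching mechanics (word-anchored prefixes, whole phrases, case-sensitive acronyms) follow the description.

```python
import re

# Illustrative keyword lists standing in for Table 1 (the paper's lists
# are longer). Prefixes and phrases match case-insensitively; acronyms
# match case-sensitively so "EV" does not fire inside "event".
PREFIXES = {"Education": ["educat", "student"],
            "Robotics":  ["robot", "autonomous"]}
PHRASES  = {"Robotics":  ["self-driving", "human-robot"]}
ACRONYMS = {"Robotics":  ["EV"]}

def classify(title):
    """Return the set of themes whose keywords appear in the headline."""
    themes = set()
    for theme, stems in PREFIXES.items():
        # \b anchors the stem at a word start, so "educat" matches
        # "education"/"educators" but not words merely containing it.
        if any(re.search(r"\b" + s + r"\w*", title, re.IGNORECASE)
               for s in stems):
            themes.add(theme)
    for theme, phrases in PHRASES.items():
        # Multi-word phrases are matched as full terms.
        if any(re.search(r"\b" + re.escape(p) + r"\b", title, re.IGNORECASE)
               for p in phrases):
            themes.add(theme)
    for theme, acrs in ACRONYMS.items():
        # Exact, case-sensitive acronym detection.
        if any(re.search(r"\b" + a + r"\b", title) for a in acrs):
            themes.add(theme)
    return themes

print(classify("New self-driving EV hits the road"))  # {'Robotics'}
```

Each theme in the returned set would then be encoded as a binary feature for the headline, matching the description above.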

Methodology

Exploratory Data Analysis (EDA)

For EDA, we conducted linguistic, geographic, and quantitative textual analyses to explore the characteristics of AI news headlines, followed by an analysis of our domain classification.
1. Linguistic and Geographical Diversity
Our dataset shows strong linguistic diversity. English leads with 18% of headlines, but many non-English articles reflect global engagement. Spanish accounts for the highest number of non-English articles, followed closely by French, Korean, Italian, Japanese, and Portuguese. German, Indonesian, Russian, and Arabic also represent substantial portions of the dataset. This multilingual dissemination suggests that conversations about AI are pertinent worldwide.
2. Textual Analysis of AI News Headlines
The dataset showed significant variability in AI news headline lengths, averaging 76.41 characters (SD = 29.49), with a range of 2 to 465 characters. Headlines averaged 11.87 words, with counts ranging from 1 to 75, reflecting diverse editorial styles from concise to detailed. To explore content patterns, we conducted an n-gram analysis. Unigrams (Figure 3a) highlighted frequent mentions of "new," "generative," "research," and "education," pointing to ongoing innovation and AI’s role in knowledge and technology. Frequent references to Google indicated its major influence. Bigrams (Figure 3b) like "new ai" and "ai regulation" emphasized current developments and regulatory discourse. Trigrams (Figure 3c), such as "using artificial intelligence," reflected practical applications, as well as themes in education and governance. Quadgrams (Figure 3d) included references to policies ("EU AI Act"), realism in AI outputs ("look like real life"), public figures (e.g., Elon Musk, Jensen Huang), and concerns like "artificial intelligence replace humans," indicating persistent anxieties around job displacement. Overall, the headlines conveyed both optimism about AI advancements and unease about their societal implications.

Natural Language Processing Analysis

We conducted topic modeling across five domains using advanced NLP techniques to analyze 10,000 news headlines per domain. The workflow included data preprocessing, embedding generation, dimensionality reduction, clustering, and topic extraction. The stopword list combined sets from NLTK (Bird et al., 2009) and Scikit-learn (Pedregosa et al., 2011) with a custom-defined list and the numbers 0 to 9999 to remove irrelevant tokens. The BAAI/bge-small-en model generated dense vector embeddings, stored as NumPy arrays for reuse and computational efficiency. Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) reduced the high-dimensional embeddings using n_neighbors=12, n_components=5, min_dist=0.1, and cosine similarity to place semantically similar topics closer together. Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) (McInnes et al., 2017) then clustered the data with min_cluster_size=20, min_samples=5, and epsilon=0.2 to balance granularity. BERTopic (Grootendorst, 2022) extracted topics using the embeddings and clusters, with a TF-IDF vectorizer set to unigrams and bigrams, max_df=0.90, and min_df=3 (Jones, 1972). The top nine topics were selected using BERTopic’s topic reduction function. This approach effectively captured thematic structures and domain-specific influences of AI. While Latent Semantic Analysis (LSA) was also tested, BERTopic produced better results.

Further, we conducted sentiment analysis, a widely applied NLP technique used in media analysis, social media monitoring, and customer feedback (Wankhade et al., 2022). Prior studies have shown its effectiveness in capturing public opinion during high-impact events like the COVID-19 pandemic, incorporating spatiotemporal and socioeconomic data to inform policy (Ali et al., 2021; Samuel et al., 2020a; Samuel et al., 2020b; Rahman et al., 2021).
In this study, sentiment analysis was applied to AI-related news headlines to assess general attitudes—positive, neutral, or negative—toward AI developments. We used the BART-large-MNLI model, a transformer-based classifier aligned with recent research highlighting the superior performance of LLMs in sentiment tasks (Zhang et al., 2023). Each headline was assigned a sentiment label, providing an initial overview of public sentiment in AI-related media narratives.
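The labeling step can be sketched with the Hugging Face transformers zero-shot classification pipeline. The candidate-label strings and the example headline below are illustrative assumptions; the paper specifies only the BART-large-MNLI model and the three sentiment categories.

```python
from transformers import pipeline  # pip install transformers torch

# Zero-shot sentiment labeling via BART-large-MNLI; the label strings are
# an assumed rendering of the positive/neutral/negative categories.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
LABELS = ["positive", "neutral", "negative"]

def label_headline(title):
    """Return the best-scoring sentiment label and its score for a headline."""
    result = classifier(title, candidate_labels=LABELS)
    # result["labels"] is sorted by descending score.
    return result["labels"][0], result["scores"][0]

label, score = label_headline("AI tutor helps students master mathematics")
print(label, round(score, 3))
```

At the study's scale of 50,000 headlines, batching the pipeline calls (and running on GPU) would be advisable rather than labeling one headline at a time.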

Statistical Analysis

For the quantitative analysis of sentiment patterns across AI-related news headlines, we used a comprehensive statistical approach to study variations in sentiment across our five domains. We began with descriptive statistics to assess central tendencies, calculating mean sentiment scores for baseline comparisons across domains. To test distribution normality, we used the Shapiro-Wilk test (Shapiro & Wilk, 1965), which indicated non-normal sentiment distributions across all domains and warranted the use of non-parametric methods. We conducted a Kruskal-Wallis test (Kruskal & Wallis, 1952), a non-parametric alternative to one-way ANOVA (Girden, 1992), to examine whether sentiment scores significantly differed among the five domains. Upon finding significant differences, we performed pairwise Mann-Whitney U tests (Mann & Whitney, 1947) to identify which domain pairs exhibited statistically significant sentiment differences. To reduce the risk of Type I errors across the 10 pairwise comparisons, we applied a Bonferroni correction, adjusting the significance threshold from α = 0.05 to α = 0.005. For each domain pair, we reported the U-statistic, p-value, and effect size using rank-biserial correlation (r), categorized as small (|r| < 0.3), medium (0.3 ≤ |r| < 0.5), or large (|r| ≥ 0.5) to reflect the magnitude of sentiment differences. We also calculated standard deviations and variances to assess sentiment variability within each domain. Finally, a median test was conducted to complement our analysis of central tendencies, providing a robust view of inter-domain sentiment differences.
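The test sequence above can be sketched with SciPy on synthetic sentiment scores. The two illustrative samples below (with assumed means and spreads) stand in for domain score vectors; the real analysis used 10,000 headlines per domain. The rank-biserial correlation is computed with one common formulation, r = 1 - 2U/(n1*n2).

```python
import numpy as np
from scipy import stats

# Synthetic sentiment scores standing in for two domains; values in [-1, 1].
rng = np.random.default_rng(0)
education = np.clip(rng.normal(0.25, 0.40, 1000), -1, 1)
society   = np.clip(rng.normal(0.01, 0.50, 1000), -1, 1)

# 1. Normality check (Shapiro-Wilk) per domain.
w, p_norm = stats.shapiro(education)

# 2. Omnibus test across groups (Kruskal-Wallis).
h, p_kw = stats.kruskal(education, society)

# 3. Pairwise Mann-Whitney U with the Bonferroni-adjusted threshold
#    (alpha = 0.05 / 10 = 0.005 for ten domain pairs).
u, p_mw = stats.mannwhitneyu(education, society, alternative="two-sided")
alpha_adj = 0.05 / 10

# 4. Rank-biserial correlation as the effect size, bounded in [-1, 1];
#    under this convention r is negative when the first sample tends larger.
n1, n2 = len(education), len(society)
r = 1 - 2 * u / (n1 * n2)

print(f"H={h:.1f} (p={p_kw:.2e}), U={u:.0f}, r={r:.3f}")
```

With this sign convention, a negative r for Education vs. Society corresponds to education headlines tending toward higher sentiment scores, matching the direction of the effect sizes reported in Table 2.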

Results and Analysis

Sentiment Analysis

Sentiment analysis was performed on the combined dataset of 50,000 headlines across all five domains, assigning each headline a continuous score from -1 to 1. This provides an overall distribution and trends of sentiment over time. The sorted sentiment scores are shown in Figure 4 (top-left), where positive sentiment (green) is prominently represented relative to negative sentiment (red). This asymmetry suggests an overall optimistic view of AI in the dataset. Both extremes are well represented, while the transition from negative to positive is smooth, indicating some neutral sentiment.
Sentiment analysis was also conducted on each domain's 10,000 records. The education domain shows a strong inclination toward positive sentiment, with over 80 percent of records falling on the positive side, indicating an optimistic view of news about education, possibly because of advancements in learning methodologies and accessibility to education; some negative sentiment remains. The health domain's sentiment resembles education's, with positive sentiment highly dominant. Health has the highest proportion of positive headlines among all domains, reflecting advancements in medical innovation and drug discovery; the negative sentiment may stem from privacy concerns over patient data. Sentiment in the robotics domain is also positive, with roughly an 80/20 split. While it shows some negative sentiment around privacy concerns and job displacement, the overall discourse surrounding robotics is largely optimistic given growing AI-driven advancements and automation. The career domain reflects a positive outlook overall but carries slightly more negative sentiment than education, with approximately a 75/25 split. The data spans both extremes, with professional growth likely driving the positive end and job insecurity the negative end; the general positivity suggests a perception of career development and opportunities across fields. The society domain brings some balance to the sentiment distribution, with a relatively wider spread across negative and positive sentiment scores. This indicates mixed views of AI's societal impact, with some headlines expressing concerns while others discuss advancements. Overall, sentiment across all domains remains largely positive, with some negative perceptions, as shown in Figure 4.

Topic Modelling

Topic modeling was conducted on each domain's 10,000 records to identify the dominant themes within each domain, extracting the top nine topics per domain. For education, the extracted topics explain the sentiment's inclination, emphasizing university systems, digital learning, and AI-driven educational tools. Research programs, higher education institutions, free online courses, and the role of AI in language learning are recurring themes, as shown in Figure 5. However, topics on academic integrity, cheating, and plagiarism highlight concerns regarding AI in education. The health domain (Figure 6) is dominated by themes of pharmaceutical innovation and personalized medicine, with a strong emphasis on cancer research, diagnostic imaging, telemedicine, and mental health. Topics on vaccines and drug discovery point to innovative drug development, while topics on cardiovascular disease and dementia indicate a focus on chronic conditions and AI-driven advances in treating them. Mental health-related topics such as therapy and psychology are also present, reflecting a growing concern for psychological well-being in healthcare discussions. The appearance of suicide-related terms is ambiguous, possibly reflecting AI risks or AI's role in suicide prevention.
In the robotics domain (Figure 7), key topics included consumer robotics, industrial automation, and autonomous vehicles, especially in the automobile industry. Ethical concerns around data privacy and regulation highlight the need for stronger compliance in the industry. In the career domain (Figure 8), positive themes included stock market trends, venture capital investment, and generative AI’s role in boosting productivity across business, legal, and communication tasks. However, concerns over job security and automation-driven displacement persist.
Topic extraction for the society domain indicates high relevance to government and public administration uses of AI, while disinformation and regulation topics showcase the negative aspects (Figure 9). Topics around AI in governance, EU regulation, and data transparency point to the intersection of AI and the policies surrounding it. Additionally, topics such as warfare, the Cold War, and military drones suggest strong involvement of AI in geopolitical conflicts. The co-occurrence of topics on trust and transparency alongside existential threats to humanity is consistent with the evenly spread sentiment observed for this domain.

Cross-Domain Analysis of Sentiment in News Headlines

Normality Testing

Visual inspection of sentiment distributions (Figure 10) shows clear deviations from normality across all domains. Education exhibits a near-symmetric distribution peaking around 0.25, though a slight leftward tail suggests mild negative skewness. Healthcare and Robotics display strong positive skewness, with sentiment concentrated in the positive region. Career shows a positively skewed, mildly bimodal pattern with a secondary peak in the negative range. In contrast, Society presents a pronounced bimodal distribution, indicating polarized sentiment with peaks in both negative and positive ranges. These observations are further validated by the Shapiro-Wilk test, which indicated significant departures from normality across all domains: Education (W = 0.921, p < 0.001), Healthcare (W = 0.911, p < 0.001), Robotics (W = 0.931, p < 0.001), Career (W = 0.931, p < 0.001) and Society (W = 0.963, p < 0.001). These results confirm the violation of the normality assumption and justify the use of non-parametric methods for subsequent analyses.

Domain-Wise Sentiment Comparison

A Kruskal-Wallis H test was conducted to determine whether sentiment scores differed by domain. The results revealed a statistically significant difference in sentiment scores across the five domains, H(4) = 2668.565, p < 0.001. To further investigate, post-hoc pairwise comparisons were conducted using the Mann-Whitney U test with Bonferroni correction (α = 0.005). All pairwise comparisons were statistically significant (p < 0.005), including the smallest difference between Education and Career as shown in Table 2.
Table 2. Pairwise Mann-Whitney U test results with effect sizes.
Domain Pair U-statistic p-value Effect Size (r) Interpretation
Education vs. Healthcare 41,978,129.00 0 0.16 Small effect
Education vs. Robotics 44,726,836.50 0 0.105 Small effect
Education vs. Society 63,714,804.00 0 -0.274 Small effect
Healthcare vs. Robotics 52,559,970.00 0 -0.051 Small effect
Healthcare vs. Society 68,426,711.00 0 -0.369 Medium effect
Robotics vs. Society 66,616,369.00 0 -0.332 Medium effect
Career vs. Education 52,451,658.50 0 -0.049 Small effect
Career vs. Healthcare 45,255,945.00 0 0.095 Small effect
Career vs. Robotics 47,576,088.00 0 0.048 Small effect
Career vs. Society 63,428,451.00 0 -0.269 Small effect
Table 3. Sentiment differences (Δ) across domains.
Large Differences (Δ > 0.20) Society vs. Healthcare (0.310), Robotics (0.282), Education (0.233), Career (0.224)
Moderate Differences (0.05 < Δ < 0.20) Healthcare vs. Career (0.086), Education (0.078); Robotics vs. Career (0.058), Education (0.049)
Small Differences (Δ < 0.05) Education vs. Career (0.008); Healthcare vs. Robotics (0.028)
To assess practical significance, mean sentiment differences (Δ) between domains were analyzed, as summarized in Table 3.
These results confirm that Society is an outlier, showing much lower sentiment scores than the other domains. A non-parametric median test also confirmed significant differences in median sentiment across the five domains (Statistic = 1977.768, p < 0.001), with a grand median of 0.297. Descriptive statistics of sentiment scores indicate clear differences in tone across domains. Healthcare had the most positive coverage, with the highest mean sentiment (0.321) and moderate variability, reflecting steady optimism. Robotics followed (mean = 0.293), generally positive but with some concerns. Education (mean = 0.244) showed a moderately positive outlook and the lowest standard deviation (0.387), suggesting a stable perception. Career (mean = 0.235) reflected more mixed sentiment, likely due to job-related anxieties, and had a higher variability (0.484). In contrast, Society stood out as the most dystopian, with a near-neutral mean (0.011) and the highest variability (0.514), pointing to polarized views shaped by issues such as surveillance, inequality, and ethics.
We further looked at the topics emerging from negative AI news headlines. Figure 11 presents the top keywords associated with each topic extracted from negative sentiment news headlines across all domains. Nine distinct topics were identified: Topic 1 appears centered around labor and automation, with terms like drivers, self-driving, unemployed, and professions. Topic 2 focuses on education-related concerns, including high school, teachers, cheating, and exams. Topic 3 reflects existential and ethical fears tied to AI, with keywords such as destroy humanity, deceive, and copyright. Topic 4 is concerned with healthcare and clinical issues, including cancer, hospitals, patients, and medicine. Topic 5 deals with regulatory and legal frameworks, featuring terms like EU law, regulation, and world law. Topic 6 shows tech leadership and robotics, with names like Sam Altman, OpenAI, chatbots, and robotics. Topic 7 addresses political interference and elections, with keywords such as influence elections, democracy, and campaigns. Topic 8 relates to social media and privacy, with terms like YouTube, Instagram, user data, and NVIDIA. Topic 9 emphasizes warfare and geopolitical threats, including nuclear, Ukraine, weapons, and disinformation. These topics show key negative narratives and concerns surrounding AI across societal, political, and technological contexts.

Future Research

Through sentiment analysis and topic modeling of news headlines, this study enhances understanding of AI’s multi-faceted discussions and provides new opportunities for future work. Our analysis is based on a sample of 50,000 headlines, stratified from five domains. While the findings are informative, there is greater breadth across the full dataset of over 288,000 records, so subsequent research should use this large corpus to increase generalizability. The sample used in this study leaned notably toward the positive spectrum in sentiment, which may have influenced the skewed portrayal of domains such as healthcare and robotics as predominantly utopian. Analyzing more multilingual headlines and regional sources could give a more balanced picture of global AI discourse. Furthermore, longitudinal analyses across multiple years would allow researchers to observe temporal shifts in sentiment, capturing how public discourse evolves in response to major AI breakthroughs, regulations, or incidents (e.g., data breaches, election interference, medical AI recalls). In parallel with the sentiment analysis, this study explored the use of instruction-tuned models (DeepSeek-R1-Distill-Qwen-1.5B and 7B variants) for emotion classification of AI news headlines. Results showed moderate performance, with 1.5B surprisingly outperforming 7B (57.6% vs. 31.6%) on predefined emotion categories (e.g., Sadness, Anger, Fear, Trust). Allowing the model to self-generate emotion categories led to 30 unique emotional expressions, showing the potential of open-ended approaches to capture affective subtlety. These findings suggest that domain familiarity and instruction clarity may outweigh raw parameter count in emotion classification tasks. Future work could explore few-shot prompting, contrastive examples, and fine-tuning small LLMs with emotion-labeled data to improve performance.
Our keyword-based classification approach, while effective in enabling domain-level analysis, is limited by its reliance on static pattern-matching rules. Future iterations could explore deep-learning-based classifiers (e.g., fine-tuned BERT or RoBERTa models) trained on labeled datasets to improve accuracy, especially in overlapping or ambiguous headlines. These models can capture contextual subtleties that are often missed by rule-based approaches. Embedding techniques could also be enhanced by integrating contextual sentence encoders (domain-specific sentence-BERT models) and hierarchical attention networks, particularly to differentiate between multi-topic headlines. Future research may benefit from incorporating multi-modal data sources including full news articles, social media posts, policy papers, and public speeches to triangulate sentiment and emotion trends. This would help validate whether headlines align with the tone of the underlying content and assess how media framing influences public perception. To further enrich our understanding of public opinion beyond traditional news media, future research should incorporate social media platforms such as Reddit and Twitter, where decentralized, real-time discourse often captures emerging sentiments, grassroots reactions, and public skepticism more candidly than curated news headlines. These platforms enable the exploration of bottom-up narratives, emotional volatility, and community-level trends that may not yet surface in mainstream journalism. Sentiment dynamics from social media can also serve as early indicators of public backlash, resistance, or support in response to key AI developments or regulatory shifts. Tracking sentiment over time alongside events could identify inflection points in public opinion. While sentiment analysis offers critical insights into public perceptions of AI, future research should also assess the tangible outcomes of AI deployment across diverse communities. 
AI-driven innovations can expand access to healthcare, education, and economic opportunities, yet they also risk deepening inequalities, displacing workers, and amplifying bias if deployed without oversight. Community-level studies could reveal disparities in how different populations experience AI’s benefits and harms, offering a more complete picture beyond media narratives. Understanding these impacts will be essential to ensuring that AI development promotes equity, sustains public trust, and supports resilient, inclusive communities over the long term.

Discussion

In this study, AI-related headlines were analyzed using sentiment analysis, topic modeling, and statistical testing to examine how AI is framed in global news discourse across five domains: Education, Healthcare, Robotics, Careers, and Society. These domains showed a spectrum of narratives, ranging from optimistic portrayals in healthcare and robotics to polarized and cautionary tones in societal coverage. This divergence shows that perceptions of AI are not monolithic; rather, they are shaped by the specific affordances, risks, and socio-political dynamics unique to each domain. Sentiment distributions varied significantly across domains, as confirmed by Kruskal-Wallis and pairwise Mann-Whitney U tests, reinforcing the importance of disaggregated approaches in AI impact studies. Topic modeling further elucidated the complex themes driving these sentiments, from innovation and empowerment to inequality and existential risk. Education headlines focus on adaptive learning but also flag risks like academic dishonesty and overdependence on generative tools. In healthcare, AI is transforming diagnostics, personalized treatment, and operational efficiency. It enables early anomaly detection through advanced algorithms like GANs and VAEs, supports customized care plans, accelerates medical research via synthetic data and disease modeling, and automates administrative tasks to reduce clinician burnout. Predictive analytics enhance risk prediction, pandemic response, and population health management. Beyond clinical settings, AI improves medical education, marketing, and revenue cycle management. However, concerns persist around algorithmic bias, data privacy, and the opacity of deep learning models, which risk undermining trust and widening disparities. Ensuring equitable, transparent, and validated AI deployment is crucial to preserving healthcare’s human-centered values (Bhuyan et al., 2025).
Robotics is tied to industrial automation and autonomous vehicles, with both praise for efficiency and worry over job loss. Careers, although broadly positive, showed deeper polarization, as evidenced by its high standard deviation. This domain exhibited conflicting themes: new job creation in AI and data science juxtaposed with fears of job automation, precarious gig work, and wage stagnation. Society-related topics covered deepfakes, disinformation, and geopolitical tensions, framing AI as both powerful and potentially harmful. Our topic model on negative headlines further confirmed widespread concerns. Themes like AI in warfare, ethics, and labor automation convey the fear of unintended consequences if AI is left unchecked. These findings suggest a broader media trend of technological determinism, where AI is celebrated in some domains and problematized in others. This polarized framing risks distorting public understanding, policymaking, and responsible adoption. Positive bias in healthcare and robotics may create “informational bubbles”, while dystopian framings in societal discourse could amplify fear and mistrust. News media, therefore, must aim for balanced, fact-based reporting that resists binary extremes. The role of news media in creating public understanding is particularly critical given that much of the general public relies on such sources to construct their beliefs about technological futures.
These findings emphasize the need for HEAI that prioritizes human values, rights, and well-being. It demands transparency, fairness, accountability, and user agency at every stage of AI design, deployment, and regulation. Rather than centering only on performance or profit, HEAI focuses on ensuring that AI serves people, supporting equity, inclusion, and social good. Foundational research has articulated the importance of culturally sensitive, personally adaptive, and ethically guided AI frameworks across education, governance, and information systems (Samuel et al., 2023; Kashyap et al., 2024). These contributions introduce concepts such as adaptive cognitive fit and generative systems design, emphasizing participatory AI development and alignment with diverse human needs (Samuel et al., 2022; Garvey, Samuel & Pelaez, 2021). We propose the development of a globally distributed, HEAI-enhanced ethics board, functioning as a decentralized consortium where AI systems assist human experts from diverse domains like philosophy, sociology, law, and computer science in evaluating AI models before deployment. Blockchain can further support transparency and shared oversight. Additionally, ethical “red-teaming” (the practice of engaging independent groups to simulate adversarial and failure scenarios) should become standard practice, stress-testing models in sensitive domains like healthcare and law enforcement before public release. From a policy standpoint, differentiated strategies are required to address domain-specific risks. For instance, while AI in education demands pedagogical reforms that promote digital literacy and critical thinking, AI in society necessitates robust regulatory frameworks that safeguard privacy, autonomy, and democratic participation. Similarly, labor market shifts induced by AI require proactive investments in workforce reskilling, social safety nets, and ethical guidelines for workplace automation.
Thus, the application of AI should be accompanied by education on its limitations, potential biases, and ethical considerations. Moreover, public discourse must be grounded in nuance, resisting reductive framings of AI as purely utopian or dystopian.

Conclusions

The future of AI is not predetermined but actively constructed through policy choices (Samuel, 2021), media narratives, technological design, and public engagement. Recent work also shows how fear-inducing news headlines fuel “AI phobia” and distort public understanding, which in turn shapes reactive policies (Samuel et al., 2025). Alongside these sociopolitical dynamics, AI agents, agentic AI, and swarm intelligence are emerging focal areas in artificial intelligence research and practice. The emphasis lies on increasing levels of autonomy, agent-to-agent collaboration, self-learning, and adaptation to environmental variables, along with synchronized functioning to achieve complex goals with minimal human oversight. These paradigms are increasingly recognized for their transformative potential across domains such as scientific discovery, intelligent systems design, and human–machine collaboration (Acharya et al., 2025; Sapkota et al., 2026; Gridach et al., 2025). By critically examining how AI is represented in the public sphere, this research contributes to a more informed, ethical, and inclusive path forward. As AI systems become increasingly embedded in critical infrastructures, educational institutions, workplaces, and personal lives, their societal impact will be determined not by technological inevitability, but by how they are communicated, regulated, and adopted. To ensure this trajectory remains responsible, high-impact AI systems should be subject to independent ethical audits and adversarial “red-team” testing before deployment, while governments and institutions must simultaneously invest in sustained public education initiatives that promote AI literacy to enable citizens to engage critically and democratically with these technologies.

References

  1. Acemoglu, D.; Restrepo, P. Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives 2019, 33(2), 3–30. [Google Scholar] [CrossRef]
  2. Acharya, D. B.; Kuppan, K.; Divya, B. Agentic AI: Autonomous intelligence for complex goals—A comprehensive survey. IEEE Access 2025, 13, 18912–18936. [Google Scholar] [CrossRef]
  3. Aiken, M.; Balan, S. An analysis of Google Translate accuracy. Translation Journal 2011, 16(2). [Google Scholar]
  4. Ali, G. M. N.; Rahman, M. M.; Hossain, M. A.; Rahman, M. S.; Paul, K. C.; Thill, J. C.; Samuel, J. Public perceptions of COVID-19 vaccines: Policy implications from US spatiotemporal sentiment analytics. In Healthcare; MDPI, August 2021; Vol. 9, No. 9. [Google Scholar]
  5. Alowais, S. A.; Alghamdi, S. S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A. I.; Almohareb, S. N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H. A.; Al Yami, M. S.; Al Harbi, S.; Albekairy, A. M. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Medical Education 2023, 23(689). [Google Scholar] [CrossRef]
  6. Al-Zahrani, A. M. Unveiling the Shadows: Beyond the Hype of AI in Education. Heliyon 2024, 10(9), e30696. [Google Scholar] [CrossRef]
  7. Ananyi, S. O.; Somieari-Pepple, E. Cost-benefit analysis of artificial intelligence integration in education management: Leadership perspectives. International Journal of Economics Environmental Development and Society 2023, 4(3), 353–370. [Google Scholar]
  8. Athaluri, S. A.; Manthena, S. V.; Kesapragada, V. S. R. K. M.; Yarlagadda, V.; Dave, T.; Duddumpudi, R. T. S. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Cureus 2023, 15(4). [Google Scholar] [CrossRef] [PubMed]
  9. Baccouri, N. Deep Translator: A flexible Python tool for translations [Python library]. 2020. Available online: https://deep-translator.readthedocs.io/en/latest/.
  10. Bairathi, A. A world with AI: Where will we be in 10, 20, and 50 years? NASSCOM Community. 11 February 2025. Available online: https://community.nasscom.in/communities/ai/world-ai-where-will-we-be-10-20-and-50-years.
  11. Bajwa, J.; Munir, U.; Nori, A.; Williams, B. Artificial intelligence in healthcare: Transforming the practice of medicine. Future Healthcare Journal 2021, 8(2), e188–e194. [Google Scholar] [CrossRef] [PubMed]
  12. Baker, R. S.; Hawn, A. Algorithmic Bias in Education. International Journal of Artificial Intelligence in Education 2021, 32, 1052–1092. [Google Scholar] [CrossRef]
  13. Baptista, E. What is DeepSeek and why is it disrupting the AI sector? Reuters. 28 January 2025. Available online: https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/ (accessed on 12 February 2025).
  14. Belanger, A. Netflix doc accused of using AI to manipulate true crime story. Ars Technica. 19 April 2024. Available online: https://arstechnica.com/tech-policy/2024/04/netflix-doc-accused-of-using-ai-to-manipulate-true-crime-story/.
  15. Bendiab, G.; Hameurlaine, A.; Germanos, G.; Kolokotronis, N.; Shiaeles, S. Autonomous vehicles security: Challenges and solutions using blockchain and artificial intelligence. IEEE Transactions on Intelligent Transportation Systems 2023, 24(4), 3614–3637. [Google Scholar] [CrossRef]
  16. Bharadiya, J. Artificial intelligence in transportation systems a critical review. American Journal of Computing and Engineering 2023, 6(1), 34–45. [Google Scholar] [CrossRef]
  17. Bhuyan, S. S.; Sateesh, V.; Mukul, N.; Galvankar, A.; Mahmood, A.; Nauman, M.; Samuel, J. Generative Artificial Intelligence Use in Healthcare: Opportunities for Clinical Excellence and Administrative Efficiency. Journal of Medical Systems 2025, 49(1), 10. [Google Scholar] [CrossRef]
  18. Bird, S.; Klein, E.; Loper, E. Natural Language Processing with Python; O’Reilly Media, 2009; Available online: https://www.nltk.org/book/.
  19. Bond, S. How AI deepfakes polluted elections in 2024; NPR, 2024; Available online: https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections (accessed on 12 February 2025).
  20. Bostrom, N. Superintelligence: Paths, dangers, strategies; Oxford University Press, 2014. [Google Scholar]
  21. Bostrom, N. Deep utopia: Life and meaning in a solved world; Ideapress Publishing, 2024. [Google Scholar]
  22. Bratton, L. What Big Tech execs have said about DeepSeek as US contemplates ban; Yahoo Finance, 2025; Available online: https://finance.yahoo.com/news/what-big-tech-execs-have-said-about-deepseek-as-us-contemplates-ban-140030220.html (accessed on 12 February 2025).
  23. Chang, Xinyu. Gender Bias in Hiring: An Analysis of the Impact of Amazon's Recruiting Algorithm. Advances in Economics, Management and Political Sciences 2023, 23, 134–140. [Google Scholar] [CrossRef]
  24. Chidipothu, N.; Anderson, R.; Samuel, J.; Pelaez, A.; Esguerra, J.; Hoque, M. N. Improving large language model (LLM) performance with retrieval augmented generation (RAG): Development of a transparent generative AI university support system for educational purposes. Journal of Big Data and Artificial Intelligence 2025, 3(1). [Google Scholar] [CrossRef]
  25. Chui, M.; Manyika, J.; Miremadi, M. Where machines could replace humans—and where they can’t (yet). McKinsey Quarterly. 2016. Available online: https://www.mckinsey.com/featured-insights/employment-and-growth/where-machines-could-replace-humans-and-where-they-cant-yet.
  26. Chustecki, M. Benefits and risks of AI in health care: Narrative review. Interactive Journal of Medical Research 2024, 13, e53616. [Google Scholar] [CrossRef]
  27. Cools, H.; Van Gorp, B.; Opgenhaffen, M. Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers. Journalism 2022. [Google Scholar] [CrossRef]
  28. Cordero, D. The downsides of artificial intelligence in healthcare. The Korean Journal of Pain 2024, 37(1), 87–88. [Google Scholar] [CrossRef] [PubMed]
  29. Cross, S.; Bell, I.; Nicholas, J.; Valentine, L.; Mangelsdorf, S.; Baker, S.; Titov, N.; Alvarez-Jimenez, M. Use of AI in mental health care: Community and mental health professionals survey. JMIR Mental Health 2024, 11, e60589. [Google Scholar] [CrossRef] [PubMed]
  30. Cuthbertson, A. AI and the meaning of life: Philosopher Nick Bostrom says technology could bring utopia but will force us to rethink our purpose. The Independent. 20 April 2024. Available online: https://www.the-independent.com/tech/ai-deep-utopia-nick-bostrom-cockaigne-b2530807.html (accessed on 12 February 2025).
  31. DeepSeek-AI, Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv 2025. [Google Scholar] [CrossRef]
  32. Dehghan, A.; Cevik, M.; Bodur, M. Dynamic AGV Task Allocation in Intelligent Warehouses. arXiv, 2023; arXiv:2312.16026. [Google Scholar]
  33. Didast, F.; Nassih, R. Y.; Ait Lbachir, I. Artificial Intelligence and Logistics: Recent Trends and Development. International Journal of Advanced Computer Science and Applications 2024, 12(1). [Google Scholar]
  34. Dmitracova, O. 41% of companies worldwide plan to reduce workforces by 2030 due to AI; CNN, 8 January 2025; Available online: https://www.cnn.com/2025/01/08/business/ai-job-losses-by-2030-intl/index.html (accessed on 12 February 2025).
  35. Dutta, S.; Ranjan, S.; Mishra, S.; Sharma, V.; Hewage, P.; Iwendi, C. Enhancing educational adaptability: A review and analysis of AI-driven adaptive learning platforms. In 2024 4th International Conference on Innovative Practices in Technology and Management (ICIPTM); IEEE, February 2024; pp. 1–5. [Google Scholar] [CrossRef]
  36. Elysée Palace. Statement on inclusive and sustainable artificial intelligence for people and the planet. 11 February 2025. Available online: https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet.
  37. Ettman, C. K.; Galea, S. The potential influence of AI on population mental health. JMIR Mental Health 2023, 10, e49936. [Google Scholar] [CrossRef]
  38. Faluyi, S. E. AI and job market: Analysing the potential impact of AI on employment, skills, and job displacement. African Journal of Marketing Management 2025, 17(1), 1–8. [Google Scholar] [CrossRef]
  39. Farhud, D. D.; Zokaei, S. Ethical issues of artificial intelligence in medicine and healthcare. Iranian Journal of Public Health 2021, 50(11), i–v. [Google Scholar] [CrossRef]
  40. Frank, M. R.; Autor, D.; Bessen, J. E.; Brynjolfsson, E.; Cebrian, M.; Deming, D. J.; Feldman, M.; Groh, M.; Lobo, J.; Moro, E.; Wang, D.; Youn, H.; Rahwan, I. Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences 2019, 116(14), 6531–6539. [Google Scholar] [CrossRef]
  41. Gammon, C.; Bornstein, M. John Henry effect. In The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation; Frey, B. B., Ed.; SAGE Publications, Inc., 2018; Vol. 4. [Google Scholar] [CrossRef]
  42. Garvey, M. D.; Samuel, J.; Pelaez, A. Would you please like my tweet?! An artificially intelligent, generative probabilistic, and econometric based system design for popularity-driven tweet content generation. Decision Support Systems 2021, 113497. [Google Scholar] [CrossRef]
  43. Girden, E. R. ANOVA: Repeated measures; Sage, 1992. [Google Scholar]
  44. Gmyrek, P.; Winkler, H.; Garganta, S. Buffer or bottleneck? Employment exposure to generative AI and the digital divide in Latin America. In ILO Working Paper 121; Geneva, ILO and The World Bank, 2024. [Google Scholar] [CrossRef]
  45. Google. Google News RSS feeds; Google, n.d.; Available online: https://news.google.com/rss (accessed on 13 February 2025).
  46. Grand View Research. AI companion market size, share & trends analysis report by type (Text-based, Voice-based, Multi-modal), by application (Mental Health Support, Education & Learning Aid), by industry vertical (Consumer, Businesses, Healthcare), and by region forecasts, 2025 - 2030 (Report No. GVR-4-68040-517-3). Grand View Research, 2024.
  47. Gridach, M.; Nanavati, J.; Abidine, K.; Mendes, L.; Mack, C. Agentic AI for scientific discovery: A survey of progress, challenges, and future directions. arXiv 2025, arXiv:2503.08979. [Google Scholar] [CrossRef]
  48. Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv. 2022. Available online: https://arxiv.org/abs/2203.05794.
  49. Groves, M.; Mundt, K. Friend or foe? Google Translate in language for academic purposes. English for Specific Purposes 2015, 37, 112–121. [Google Scholar] [CrossRef]
  50. Gupta, M.; Kakar, I. S.; Peden, M.; Altieri, E.; Jagnoor, J. Media coverage and framing of road traffic safety in India. BMJ Global Health 2021, 6(3), e004499. [Google Scholar] [CrossRef] [PubMed]
  51. Hill, D. L. G. AI in imaging: The regulatory landscape. British Journal of Radiology 2024, 97(1155), 483–491. [Google Scholar] [CrossRef] [PubMed]
  52. Hoose, S.; Králiková, K. Artificial intelligence in mental health care: Management implications, ethical challenges, and policy considerations. Administrative Sciences 2024, 14(9), 227. [Google Scholar] [CrossRef]
  53. Hu, Q.; Rangwala, H. Towards Fair Educational Data Mining: A Case Study on Detecting At-Risk Students. In International Educational Data Mining Society; 2020. [Google Scholar]
  54. International Data Corporation (IDC). Artificial intelligence will contribute $19.9 trillion to the global economy through 2030 and drive 3.5% of global GDP in 2030; IDC, 17 September 2024; Available online: https://www.idc.com/getdoc.jsp?containerId=prUS52600524.
  55. Jackson, J.; Paste, Staff. The 50 best dystopian movies of all time; Paste Magazine, 2023; Available online: https://www.pastemagazine.com/movies/dystopian-movies/best-dystopian-movies-of-all-time-1 (accessed on 12 February 2025).
  56. Jamieson, T.; Van Belle, D. A. How development affects news media coverage of earthquakes: Implications for disaster risk reduction in observing communities. Sustainability 2019, 11(7), 1970. [Google Scholar] [CrossRef]
  57. jmuwa. The best dystopian TV shows; IMDb, 2020; Available online: https://www.imdb.com/list/ls048004810/ (accessed on 12 February 2025).
  58. Johnston, C. How social media algorithms inherently create polarization. Psychology Today. 29 November 2020. Available online: https://www.psychologytoday.com/us/blog/cultural-psychiatry/202011/how-social-media-algorithms-inherently-create-polarization.
  59. Jones, K. S. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation 1972, 28(1), 11–21. [Google Scholar] [CrossRef]
  60. Jumaev, G. The impact of AI on job market: Adapting to the future of work; Zenodo, 8 January 2024. [Google Scholar] [CrossRef]
  61. Jumper, J.; Evans, R.; Pritzel, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
  62. Kashyap, R.; Samuel, Y.; Friedman, L. W.; Samuel, J. Artificial intelligence education & governance—human enhancive, culturally sensitive and personally adaptive HAI. Frontiers in Artificial Intelligence 2024, 7, 1443386. Available online: https://www.frontiersin.org/articles/10.3389/frai.2024.1443386.
  63. Khosla, V. AI: Dystopia or utopia? Khosla Ventures, 20 September 2024; Available online: https://www.khoslaventures.com/ai-dystopia-or-utopia/ (accessed on 12 February 2025).
  64. Klepper, D.; Swenson, A. AI-generated disinformation poses threat of misleading voters in 2024 election; PBS NewsHour, 14 May 2023; Available online: https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election (accessed on 12 February 2025).
  65. Kruskal, W. H.; Wallis, W. A. Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association 1952, 47(260), 583–621. [Google Scholar] [CrossRef]
  66. Li, D.; He, W.; Guo, Y. Why AI still doesn’t have consciousness? CAAI Transactions on Intelligence Technology 2021, 6(2), 175–179. [Google Scholar] [CrossRef]
  67. Littman, M. L.; Ajunwa, I.; Berger, G.; Boutilier, C.; Currie, M.; Doshi-Velez, F.; Hadfield, G.; Horowitz, M. C.; Isbell, C.; Kitano, H.; Levy, K.; Lyons, T.; Mitchell, M.; Shah, J.; Sloman, S.; Vallor, S.; Walsh, T. Gathering strength, gathering storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 study panel report. arXiv 2021. [Google Scholar] [CrossRef]
  68. Mann, H. B.; Whitney, D. R. On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics 1947, 18(1), 50–60. [Google Scholar] [CrossRef]
  69. Marcinek, K.; Stanley, K. D.; Smith, G.; Cormarie, P.; Gunashekar, S. Risk-based AI regulation: A primer on the Artificial Intelligence Act of the European Union; RAND Corporation, 20 November 2024; Available online: https://www.rand.org/pubs/research_reports/RRA3243-3.html.
  70. McInnes, L.; Healy, J.; Astels, S. hdbscan: Hierarchical density based clustering. The Journal of Open Source Software 2017, 2(11), 205. [Google Scholar] [CrossRef]
  71. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform Manifold Approximation and Projection for dimension reduction. arXiv. 2018. Available online: https://arxiv.org/abs/1802.03426.
  72. McKee, K.; Pilgrim, M. Universal Feed Parser (feedparser) [Python library]. n.d. Available online: https://feedparser.readthedocs.io/en/latest/ (accessed on 13 February 2025).
  73. Mhlanga, D. Open AI in Education, the Responsible and Ethical Use of ChatGPT Towards Lifelong Learning. 11 February 2023. Retrieved from papers.ssrn.com website. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4354422.
  74. Mill, J. S. Public and parliamentary speeches – Part I – November 1850 – November 1868; Toronto; University of Toronto Press, 1988. [Google Scholar]
  75. Mishra, V. Unchecked AI threatens democracy, warns UN chief. United Nations News. 15 September 2024. Available online: https://news.un.org/en/story/2024/09/1154316.
  76. Mitchell, M. What does it mean to align AI with human values? Quanta Magazine. 13 December 2022. Available online: https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213/.
  77. More, T. Utopia; New York; Appleton-Century-Crofts, 1949. [Google Scholar]
  78. Nelson, L. Practical AI limitations you need to know. AFA Education Blog. 22 December 2024. Available online: https://afaeducation.org/blog/practical-ai-limitations-you-need-to-know/.
  79. Nguyen, B. Donald Trump, Elon Musk, Taylor Swift, and more: The 10 most AI deepfaked people right now. Quartz. 10 October 2024. Available online: https://qz.com/donald-trump-elon-musk-taylor-swift-beyonce-ai-deepfake-1851666681.
  80. O'Gieblyn, M. Does AI have a subconscious? WIRED. 23 May 2023. Available online: https://www.wired.com/story/does-ai-have-a-subconscious/.
  81. Ouchchy, L.; Coin, A.; Dubljević, V. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & Society 2020, 35, 927–936. [Google Scholar] [CrossRef]
  82. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 2011, 12, 2825–2830. Available online: https://jmlr.org/papers/v12/pedregosa11a.html.
  83. Pierce, D. Two possible futures for AI. The Verge. 29 October 2024. Available online: https://www.theverge.com/2024/10/29/24282333/ai-vision-anthropic-openai-shakealert-vergecast.
  84. Piocciochi, M.; Alwabel, R. A. Leveraging Artificial Intelligence in Education: Current Applications and Future Prospects. 18 June 2020. Available online: https://www.researchgate.net/publication/384016811_Leveraging_Artificial_Intelligence_in_Education_Current_Applications_and_Future_Prospects.
  85. Price, W. N., II. Risks and remedies for artificial intelligence in health care. In The Brookings Institution.; 2019; Available online: https://www.brookings.edu/articles/risks-and-remedies-for-artificial-intelligence-in-health-care/.
  86. Rahman, M. M.; Ali, G. M. N.; Li, X. J.; Samuel, J.; Paul, K. C.; Chong, P. H.; Yakubov, M. Socioeconomic factors analysis for COVID-19 US reopening sentiment with Twitter and census data. Heliyon (ScienceDirect by Elsevier) 2021, e06200. [Google Scholar] [CrossRef]
  87. Rahman-Jones, I. UK watchdog looking into Microsoft AI taking screenshots. BBC News. 21 May 2024. Available online: https://www.bbc.com/news/articles/cpwwqp6nx14o.
  88. Rainer, R. K., Jr.; Richey, R. G., Jr.; Chowdhury, S. How Robotics is Shaping Digital Logistics and Supply Chain Management: An Ongoing Call for Research. Journal of Business Logistics 2025, 46(1), e70005. [Google Scholar] [CrossRef]
  89. Rainie, L.; Anderson, J. Experts imagine the impact of artificial intelligence by 2040.; Imagining the Digital Future Center, 29 February 2024. [Google Scholar]
  90. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; Chen, M. Hierarchical text-conditional image generation with CLIP latents. ArXiv 2022, abs/2204.06125. Available online: https://arxiv.org/abs/2204.06125.
  91. Randieri, C. Unveiling the role of AI algorithms: Unmasking societal inequities and cultural prejudices. Forbes Technology Council. 19 July 2023. Available online: https://www.forbes.com/councils/forbestechcouncil/2023/07/19/unveiling-the-role-of-ai-algorithms-unmasking-societal-inequities-and-cultural-prejudices/.
  92. Reitz, K. Requests: HTTP for humans [Python library]. n.d. Available online: https://docs.python-requests.org/en/latest/.
  93. Reynaud, F.; Untersinger, M. Paris 2024: Controversial AI-led video surveillance put to the test during Olympics; Le Monde, 24 July 2024; Available online: https://www.lemonde.fr/en/pixels/article/2024/07/24/paris-2024-controversial-ai-led-video-surveillance-put-to-the-test-during-olympics_6697267_13.html.
  94. Richardson, L. Beautiful Soup documentation [Python library]. Crummy. 2023. Available online: https://www.crummy.com/software/BeautifulSoup/.
  95. Robins-Early, N. Trump posts deepfakes of Swift, Harris, and Musk in effort to shore up support. The Guardian. 19 August 2024. Available online: https://www.theguardian.com/us-news/article/2024/aug/19/trump-ai-swift-harris-musk-deepfake-images.
  96. Robison, K. Anthropic’s CEO thinks AI will lead to a utopia—he just needs a few billion dollars first. The Verge. 16 October 2024. Available online: https://www.theverge.com/2024/10/16/24268209/anthropic-ai-dario-amodei-agi-funding-blog.
  97. Rodilosso, E. Filter bubbles and the unfeeling: How AI for social media can foster extremism and polarization. Philosophy & Technology 2024, 37(71). [Google Scholar] [CrossRef]
  98. Rolf, B.; Jackson, I.; Müller, M.; Lang, S.; Reggelin, T.; Ivanov, D. A review on reinforcement learning algorithms and applications in supply chain management. International Journal of Production Research 2022, 61(20), 7151–7179. [Google Scholar] [CrossRef]
  99. Rundle, J. New York State bans DeepSeek from government devices: State says app raises serious security and censorship concerns. The Wall Street Journal. 10 February 2025. Available online: https://www.wsj.com/articles/new-york-state-bans-deepseek-from-government-devices-de7a9df4.
  100. Sadaf, M.; Iqbal, Z.; Javed, A. R.; Saba, I.; Krichen, M.; Majeed, S.; Raza, A. Connected and automated vehicles: Infrastructure, applications, security, critical challenges, and future aspects. Technologies 2023, 11(5), 117. [Google Scholar] [CrossRef]
  101. SafeTREC. The role of media and road safety; California Active Transportation Safety Information Pages (CATSIP), n.d.; Available online: https://catsip.berkeley.edu/resources/role-media-and-road-safety.
  102. Samuel, J. A call for proactive policies for informatics and artificial intelligence technologies. In Scholars Strategy Network; 2021; Available online: https://scholars.org/contribution/call-proactive-policies-informatics-and
  103. Samuel, J. The Critical Need for Transparency and Regulation amidst the Rise of Powerful Artificial Intelligence Models. In Scholars Strategy Network (SSN); Key Findings, 2023; Available online: https://scholars.org/contribution/critical-need-transparency-and-regulation.
  104. Samuel, J.; Ali, G. G.; Rahman, M.; Esawi, E.; Samuel, Y. Covid-19 public sentiment insights and machine learning for tweets classification. Information 2020, 11(6), 314. [Google Scholar] [CrossRef]
  105. Samuel, J.; Kashyap, R.; Samuel, Y.; Pelaez, A. Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations. International Journal of Information Management 2022, 65, 102505. [Google Scholar] [CrossRef]
  106. Samuel, J.; Khanna, T.; Esguerra, J.; Sundar, S.; Pelaez, A.; Bhuyan, S. S. The rise of artificial intelligence phobia! Unveiling news-driven spread of AI fear sentiment using ML, NLP, and LLMs. IEEE Access 2025, 13, 125944–125969. [Google Scholar] [CrossRef]
  107. Samuel, J.; Rahman, M.; Ali, G. M. N.; Samuel, Y.; Pelaez, A.; Chong, P. H. J.; Yakubov, M. Feeling Positive About Reopening? New Normal Scenarios From COVID-19 US Reopen Sentiment Analytics. IEEE Access 2020, 8, 142173–142190. Available online: https://ieeexplore.ieee.org/document/9154672. [CrossRef]
  108. Samuel, J.; Tripathi, A.; Mema, E. A new era of artificial intelligence begins – where will it lead us? Editorial - Journal of Big Data and Artificial Intelligence 2024, 2(1). [Google Scholar]
  109. Samuel, Y.; Brennan-Tonetta, M.; Samuel, J.; Kashyap, R.; Kumar, V.; Madabhushi, S. K. K.; Chidipothu, N.; Anand, I.; Jain, P. Cultivation of Human Centered Artificial Intelligence: Culturally Adaptive Thinking in Education for AI (CATE-AI). Frontiers in Artificial Intelligence 2023, 6, 1198180. [Google Scholar] [CrossRef]
  110. Sapkota, R.; Roumeliotis, K. I.; Karkee, M. AI agents vs. Agentic AI: A conceptual taxonomy, applications and challenges. Information Fusion 2026, 126(Part B), 103599. [Google Scholar] [CrossRef]
  111. ScrapingBee. ScrapingBee API documentation. ScrapingBee. n.d. Available online: https://www.scrapingbee.com/documentation/.
  112. Service, R. F. Google’s DeepMind aces protein folding: Artificial intelligence firm takes crown in biannual contest; Science, 6 December 2018; Available online: https://www.science.org/content/article/google-s-deepmind-aces-protein-folding.
  113. Shapiro, S. S.; Wilk, M. B. An analysis of variance test for normality (complete samples). Biometrika 1965, 52(3–4), 591–611. [Google Scholar] [CrossRef]
  114. Shermer, M. When it comes to AI, think protopia, not dystopia or utopia; Skeptic, 26 July 2024; Available online: https://www.skeptic.com/reading_room/artificial-intelligence-think-protopia-not-dystopia-or-utopia/.
  115. Siafakas, N.; Vasarmidi, E. Risks of artificial intelligence (AI) in medicine. Pneumon 2024, 37(3), 40. [Google Scholar] [CrossRef]
  116. Silver, N. S. AI utopia and dystopia: What will the future have in store? Forbes. 20 June 2023. Available online: https://www.forbes.com/sites/nicolesilver/2023/06/20/ai-utopia-and-dystopia-what-will-the-future-have-in-store-artificial-intelligence-series-5-of-5/.
  117. Singal, P. Nick Bostrom discusses Superintelligence, AI, and Deep Utopia in Dinis Guarda YouTube podcast; IntelligentHQ, 2024; Available online: https://www.intelligenthq.com/nick-bostrom-discusses-superintelligence-ai-and-deep-utopia-in-dinis-guarda-youtube-podcast/.
  118. Smith, R. A. AI is starting to threaten white-collar jobs. Few industries are immune. The Wall Street Journal. 12 February 2024. Available online: https://www.wsj.com/lifestyle/careers/ai-is-starting-to-threaten-white-collar-jobs-few-industries-are-immune-9cdbcb90.
  119. Sodiya, E. O.; Umoga, U. J.; Amoo, O. O.; Atadoga, A. AI-driven warehouse automation: A comprehensive review of systems. GSC Advanced Research and Reviews 2024, 18(2), 272–282. [Google Scholar] [CrossRef]
  120. Solanki, A.; Jadiga, S. AI Applications for Improving Transportation and Logistics Operations. International Journal of Intelligent Systems and Applications in Engineering 2024, 12(2), 45–52. [Google Scholar]
  121. Swarns, C. When artificial intelligence gets it wrong. Innocence Project. 19 September 2023. Available online: https://innocenceproject.org/when-artificial-intelligence-gets-it-wrong/.
  122. TensorFlow. DeepDream. TensorFlow Tutorials. n.d. Available online: https://www.tensorflow.org/tutorials/generative/deepdream.
  123. The Guardian. Revealed: Bias found in AI system used to detect UK benefits fraud. 6 December 2024. Available online: https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits.
  124. The Week UK. Future of generative AI: Utopia, dystopia or up to us? The Explainer, 31 July 2024; Available online: https://theweek.com/tech/future-of-generative-ai-utopia-dystopia-or-up-to-us.
  125. Tony Blair Institute for Global Change. The impact of AI on the labour market. 8 November 2024. Available online: https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market.
  126. Tripathi, A.; Samuel, J.; Brennan-Tonetta, M.; Nguyen, H.; Mema, E. When machines create – Envisioning our future as shaped by the transformative power of generative AI. Journal of Big Data and Artificial Intelligence 2025, 3(1). [Google Scholar] [CrossRef]
  127. U.S. Equal Employment Opportunity Commission (EEOC). iTutorGroup to pay $365,000 to settle EEOC discriminatory hiring suit. EEOC Newsroom. 11 September 2023. Available online: https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit.
  128. United Nations Regional Information Centre (UNRIC). Can artificial intelligence (AI) influence elections? 7 June 2024. Available online: https://unric.org/en/can-artificial-intelligence-ai-influence-elections/.
  129. UspeakGreek. Etymology and meaning of the word dystopian. 19 December 2023. Available online: https://uspeakgreek.com/art/literature/etymology-and-meaning-of-word-dystopian/.
  130. Utopia & Dystopia. List of famous utopian movies. n.d. Available online: https://www.utopiaanddystopia.com/utopian-fiction/utopian-movies-list/.
  131. Vinson, D. W.; Arcan, M.; Niland, D.; Delahunty, F. Towards sustainable workplace mental health: A novel approach to early intervention and support. ArXiv. 2024. Available online: https://arxiv.org/abs/2402.01592.
  132. Wankhade, M.; Rao, A. C. S.; Kulkarni, C. A survey on sentiment analysis methods, applications, and challenges. Artificial Intelligence Review 2022, 55(7), 5731–5780. [Google Scholar] [CrossRef]
  133. Webb, M. The impact of artificial intelligence on the labor market. SSRN Electronic Journal. 2019. [Google Scholar] [CrossRef]
  134. Winton, A. Get a horse! America’s skepticism toward the first automobiles; The Saturday Evening Post, 9 January 2017; Available online: https://www.saturdayeveningpost.com/2017/01/get-horse-americas-skepticism-toward-first-automobiles/.
  135. Wladawsky-Berger, I. The emerging, unpredictable age of AI; MIT Initiative on the Digital Economy, 22 February 2017; Available online: https://ide.mit.edu/insights/the-emerging-unpredictable-age-of-ai/.
  136. World Economic Forum (WEF). The Future of Jobs Report 2025. WEF, 2025. Available online: https://www.weforum.org/publications/the-future-of-jobs-report-2025/.
  137. Younge, H. L. Utopia: or, Apollo’s golden days; Dublin, Ireland; Ptd. by George Faulkner, 1747. [Google Scholar]
  138. Yudkowsky, E. Pausing AI developments isn’t enough. We need to shut it all down; Time, 29 March 2023; Available online: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/.
  139. Zajko, M. Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates. Sociology Compass 2022, 16(3). [Google Scholar] [CrossRef]
  140. Zaman, B. U. Transforming Education Through AI, Benefits, Risks, and Ethical Considerations; 2023. [Google Scholar] [CrossRef]
  141. Zhai, C.; Wibowo, S.; Li, L. D. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learning Environments 2024, 11(1), 28. [Google Scholar] [CrossRef]
  142. Zhang, W.; Deng, Y.; Liu, B.; Pan, S. J.; Bing, L. Sentiment analysis in the era of large language models: A reality check. arXiv 2023, arXiv:2305.15005. [Google Scholar] [CrossRef]
  143. Ziegler, B. It’s the year 2030: What will artificial intelligence look like? The Wall Street Journal. 21 September 2024. Available online: https://www.wsj.com/article/ai-future-2030.
Figure 1. Semantic network of AI news headlines.
Figure 2. A combination of AI news headlines reflecting both utopian and dystopian perspectives.
Figure 3. Analysis of (a) unigrams (top-left), (b) bigrams (top-right), (c) trigrams (bottom-left), and (d) quadgrams (bottom-right) found in news headlines from our dataset.
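The n-gram frequencies summarized in Figure 3 can be reproduced in spirit with a few lines of standard-library Python; this is an illustrative sketch with toy headlines, not the paper's actual preprocessing pipeline (which may apply additional tokenization and stopword filtering).

```python
from collections import Counter

def ngram_counts(headlines, n):
    """Count n-grams across lowercased, whitespace-tokenized headlines."""
    counts = Counter()
    for headline in headlines:
        tokens = headline.lower().split()
        # zip over n staggered views of the token list yields n-gram tuples
        counts.update(zip(*(tokens[i:] for i in range(n))))
    return counts

# Toy headlines standing in for the news dataset (illustrative only).
headlines = [
    "AI transforms healthcare diagnosis",
    "AI transforms education worldwide",
]
print(ngram_counts(headlines, 2).most_common(1))
# [(('ai', 'transforms'), 2)]
```

The same function with n set to 1, 3, or 4 yields the unigram, trigram, and quadgram views shown in the figure's four panels.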
Figure 4. Sentiment distribution bar plot showing positive sentiment (green) and negative sentiment (red).
Figure 5. Results of topic modeling on news headlines in the education domain.
Figure 6. Results of topic modeling on news headlines in the health domain.
Figure 7. Results of topic modeling on news headlines in the robotics domain.
Figure 8. Results of topic modeling on news headlines in the career domain.
Figure 9. Results of topic modeling on news headlines in the society domain.
Figure 10. Sentiment score distribution across five domains.
Figure 11. Results of topic modeling on negative news headlines across all domains.
Table 1. Keywords used for categorizing AI news headlines in our dataset by domain.
Education: educate, learn, teach, study, academic, curriculum, pedagogy, student, school, classroom, course, professor, lecturer, university, college, campus, tutor
Healthcare: health, medical, doctor, nurse, hospital, clinic, pharmaceutical, drug, biotech, diagnosis, patient, treatment, vaccine, telemedicine, disease, cardio, immune, neuro, physician, medical technology, radiology, addiction, abuse, suicide, depression, psychology, surgery, therapy, mental, wellness, genomics, genetics, epidemic, pandemic, cancer, diabetes, biomedical, EHR, X-ray
Robotics: robot, autonomous, navigate, cyborg, industrial, agriculture, combat, weapon, force, sensor, driver, logistics, vehicle, electric, farm, automation, mobility, fleet, humanoid, automated, autopilot, aerial, unmanned, automotive, automobile, military, army, navy, naval, transportation, drone, warehouse, car, bus, train, truck, pilot, battery, plane, flight, aircraft, CAV, UAV, EV, ADAS, DARPA, SWARM, LiDAR, self-driving, pick-and-place, human-robot, supply chain, computer vision
Career: job, recruit, work, organization, career, professional, business, enterprise, company, employ, skill, corporate, layoff, manager, startup, entrepreneur, investment, investor, venture, replace, unemployed, hiring, hire, companies, firms, talent, CEO, CTO, CIO, CDO, HR
Society: ethical, regulation, social, trust, democratic, equal, legislation, culture, human, law, public, rights, society, privacy, societal, policy, governance, transparency, accountability, compliance, government, sustainability, health, election, war, relationship
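The categorization in Table 1 can be sketched as a simple keyword-matching pass over the headlines. The snippet below is a minimal illustration, not the authors' actual pipeline: the keyword lists are abbreviated subsets of Table 1, and the `tag_domains` function name is hypothetical. A headline is assigned to every domain whose list matches, so a single headline may carry multiple domain tags.

```python
# Illustrative keyword-based domain tagging (subset of the Table 1 lists).
DOMAIN_KEYWORDS = {
    "Education": ["educate", "learn", "teach", "student", "school", "university"],
    "Healthcare": ["health", "medical", "doctor", "hospital", "patient", "diagnosis"],
    "Robotics": ["robot", "autonomous", "drone", "self-driving", "automation"],
    "Career": ["job", "career", "hiring", "layoff", "employ", "startup"],
    "Society": ["ethical", "regulation", "law", "privacy", "policy", "governance"],
}

def tag_domains(headline: str) -> list[str]:
    """Return every domain whose keyword list matches the headline
    (case-insensitive substring match)."""
    text = headline.lower()
    return [domain for domain, keywords in DOMAIN_KEYWORDS.items()
            if any(kw in text for kw in keywords)]

print(tag_domains("AI tutor helps students learn faster"))
print(tag_domains("Robot surgeons transform hospital care"))
```

Note that naive substring matching over-matches (e.g. "car" also fires on "scary"); a production version would match on word-boundary tokens or lemmas instead.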
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.