1. Introduction
1.1. Background and Context
The emergence of artificial intelligence (AI) as a transformative force in multiple domains has spurred both enthusiasm and concern across the globe. Within the realm of journalism, AI is not only redefining workflows and content production but also challenging longstanding norms of objectivity, human agency, and editorial judgment (Carlson, 2020). From automated content generation and natural language processing (NLP) to personalized content distribution algorithms, AI tools are reshaping how journalism is conceived, practiced, and consumed.
While AI’s application in media industries is rapidly expanding in technologically advanced economies such as the United States, United Kingdom, China, and Germany, its penetration in the Global South—including South Asia—remains limited, uneven, and contested. This disparity in adoption is influenced by a confluence of factors including technological infrastructure, political culture, economic constraints, digital literacy, and professional ethics. In countries like Bangladesh, where journalism is often entangled in political polarization, resource scarcity, and censorship, the integration of AI technologies introduces a new set of challenges and ethical dilemmas.
AI-powered journalism can offer significant improvements: faster content generation, improved data analysis, early misinformation detection, and targeted audience engagement. However, it also raises urgent questions about employment displacement, algorithmic bias, accountability, and the erosion of editorial autonomy. For Bangladesh and the broader South Asian region, the lack of a clear roadmap for AI integration in journalism further complicates the situation. This study aims to understand the current attitudes toward AI among journalists in Bangladesh and South Asia, assess the regional preparedness for AI-led transformations, and explore the socio-political consequences of its adoption.
1.2. Defining AI in Journalism
Artificial intelligence in journalism refers to the application of machine learning, natural language processing, automation, and big data analytics to assist, augment, or automate journalistic tasks (Diakopoulos, 2019). These tools can be categorized into several domains:
Automated Content Production (ACP): Software such as Wordsmith or Heliograf can produce data-driven reports (e.g., sports or financial news) without human intervention.
News Recommender Systems: Algorithms used by platforms like Facebook and Google News tailor content to users’ preferences—raising issues around filter bubbles and ideological segregation (Pariser, 2011).
Fact-checking and Verification Tools: AI can be used to detect disinformation or deepfakes, a critical need in the current age of information warfare (Marconi, 2020).
Sentiment and Trend Analysis: Journalists increasingly rely on AI to monitor social media trends and extract audience sentiment.
Understanding the specific uses of AI in journalism is essential to assessing attitudes toward its adoption and potential disruptions in professional practice.
1.3. Global Trends in AI and Journalism
In the Global North, AI’s influence in newsrooms has become increasingly normalized. The Associated Press (AP) has been using AI since 2014 to automate quarterly earnings reports, saving hundreds of hours in reporting labor (Carlson, 2020). Reuters’ “Lynx Insight” suggests story ideas based on data patterns, while The Washington Post’s “Heliograf” automates live updates during elections or sports events (Diakopoulos, 2019). These innovations have allowed journalists to focus more on investigative and interpretive reporting, although they also contribute to the deskilling of entry-level journalists.
Despite such innovations, even in technologically mature markets, concerns persist. These include the risk of AI reproducing racial, gender, or ideological biases, lack of transparency in algorithmic decision-making (the “black box” problem), and erosion of public trust when content is known to be machine-generated (Napoli, 2019).
1.4. Journalism in Bangladesh and South Asia: A Fragile Ecosystem
The state of journalism in Bangladesh and South Asia offers a stark contrast. Press freedom in the region has been on the decline, with Bangladesh ranked 165th out of 180 countries on the 2024 World Press Freedom Index (Reporters Without Borders, 2024). Journalists face harassment, legal prosecution under digital security laws, surveillance, and physical threats. Against this backdrop, introducing AI tools into newsrooms may not be perceived as a neutral technological advancement but rather as a potential mechanism of surveillance, control, or cost-cutting that threatens journalistic independence.
Moreover, media organizations in Bangladesh often operate with constrained budgets and limited digital infrastructure. Many journalists lack training in digital tools, let alone AI applications (Kabir & Ahmed, 2024). There is also a disconnect between newsroom leadership and younger reporters regarding openness to innovation, mirroring generational divides in digital adaptation.
In India, the media ecosystem is more diverse and includes a wider spectrum of technological integration. Initiatives such as The Hindu’s data journalism desk and collaborations between Google News Initiative and Indian media houses illustrate early-stage AI integration. However, issues of political bias, disinformation, and corporate control over media content remain endemic across the region (Rahman & Bose, 2022).
1.5. Problem Statement
Despite the global momentum toward AI-powered journalism, South Asia—particularly Bangladesh—is inadequately prepared for this technological shift. The region lacks comprehensive AI training programs for journalists, government policies on media-tech ethics, or institutional support for responsible innovation. While isolated pilots and innovations exist, they have not coalesced into a broader strategic framework. Meanwhile, public discourse around AI often remains limited to fearmongering or utopianism, with little nuanced understanding of its layered impacts on journalism.
Hence, there is a pressing need to:
Map the current attitudes of journalists towards AI;
Assess the infrastructural and policy-level readiness of South Asian countries, with special attention to Bangladesh;
Examine the ethical, political, and professional challenges posed by AI in journalism;
Provide recommendations for equitable and responsible AI integration.
1.6. Research Questions
This study is guided by the following research questions:
1. What are the prevailing attitudes of journalists in Bangladesh and South Asia towards the adoption of AI in journalism?
2. What level of infrastructural and institutional preparedness exists in the region to facilitate AI-driven journalism?
3. What ethical and political implications does AI pose in contexts where press freedom is already under threat?
4. How can South Asian journalism develop region-specific frameworks for responsible and sustainable AI adoption?
1.7. Significance of the Study
This research is significant for multiple reasons. First, it addresses a critical gap in scholarship regarding the intersection of AI and journalism in the Global South. Most existing studies focus on Western contexts, failing to account for the unique socio-political, infrastructural, and ethical dynamics of South Asia.
Second, this study offers a bottom-up perspective by foregrounding the voices and perceptions of journalists themselves. While policy think tanks and technology companies dominate discussions on AI ethics, journalists—who are the end-users and frontline actors—are often sidelined in these debates.
Third, the study contributes to policy formulation by offering actionable recommendations. These include establishing AI ethics guidelines for media, developing digital literacy programs, and encouraging cross-border collaborations within South Asia for knowledge sharing and capacity building.
1.8. Scope and Limitations
The study primarily focuses on Bangladesh but draws comparative insights from India, Pakistan, Nepal, and Sri Lanka. Given time and resource constraints, the study is limited to English and Bangla-language newsrooms. It does not examine vernacular media in depth, which could exhibit different attitudes toward AI.
Moreover, since AI journalism is an emerging field, the availability of empirical data is limited. Much of the study relies on qualitative insights, small-scale surveys, and semi-structured interviews. Nonetheless, the research offers a foundational overview upon which future, larger-scale studies can be built.
2. Literature Review
The literature on artificial intelligence (AI) in journalism has grown rapidly in the past decade, mirroring technological advancements and transformations in media industries. This review synthesizes key theoretical, empirical, and regional studies related to AI in journalism. It explores five core areas: (1) the conceptual foundation of AI in journalism, (2) global developments and trends, (3) socio-ethical concerns and risks, (4) AI adoption in the Global South, and (5) journalism and AI in Bangladesh and South Asia. The review highlights the gaps in existing research and contextualizes the urgency of investigating AI’s implications in emerging democracies and fragile media ecosystems.
2.1. AI in Journalism: Conceptual Foundations
Artificial Intelligence refers to machines or software systems that perform tasks typically requiring human intelligence, such as learning, reasoning, language processing, and pattern recognition (Russell & Norvig, 2016). In journalism, AI is defined as a set of computational tools and systems designed to support or automate various aspects of journalistic work (Diakopoulos, 2019). Applications range from news automation (robot journalism), data mining, and natural language generation (NLG), to personalized recommendation engines and social media trend analysis.

Diakopoulos (2019) classifies AI usage in journalism into four domains:
Information gathering (e.g., AI-based data scraping);
Information production (e.g., automated article generation);
Information dissemination (e.g., algorithmic curation);
News verification (e.g., AI-based fact-checking).
These categories suggest AI is not merely a tool but a systemic intervention affecting every phase of journalistic production. Carlson (2020) expands on this, asserting that AI introduces a shift from labor-based journalism to platform-driven media logic, posing ontological questions about what counts as “news” and who counts as a “journalist.”
2.2. Global Developments in AI-Driven Journalism
Countries in the Global North have taken the lead in integrating AI technologies in their newsrooms. Notable examples include:
The Associated Press (AP) using Automated Insights’ Wordsmith to produce earnings reports (Graefe, 2016),
The Washington Post’s Heliograf, which generated over 850 articles during the 2016 U.S. elections,
Reuters’ Lynx Insight, offering story suggestions to journalists by identifying data trends (Marconi, 2020).
These systems offer tangible benefits:
Speed and efficiency, especially for repetitive data-based stories;
Reduction of human error in reporting numerical or financial data;
Enhanced reach, as AI-generated content can be localized and translated quickly.
However, scholars argue that technological efficiency is accompanied by ethical dilemmas. Napoli (2019) warns that automation may reinforce biases embedded in training data, compromise transparency, and threaten editorial independence. In their analysis of algorithmic accountability, Ananny and Crawford (2018) call for “institutional reflexivity”—a media organization’s critical reflection on how AI choices align with journalistic values.
2.3. Socio-Ethical Concerns in AI Journalism
Despite the transformative potential of AI, critical literature underscores multiple socio-ethical risks:
2.3.1. Algorithmic Bias and Discrimination
AI systems learn from historical data, which may include biases against marginalized groups. For example, automated crime reporting systems in the U.S. have been shown to reinforce racial profiling (Benjamin, 2019). In journalism, such biases may affect news prioritization, headline generation, or image tagging, perpetuating stereotypes.
2.3.2. Loss of Editorial Autonomy
Automated content risks undermining human editorial judgment. As algorithms decide what is “newsworthy,” journalistic agendas may be shaped more by platform metrics than public interest (Zuboff, 2019). The dominance of metrics like “click-through rates” or “engagement time” can push newsrooms toward sensationalism.
2.3.3. Transparency and Accountability
Opaque AI systems create challenges for accountability. When errors occur in AI-generated content—such as misquoting, fake news generation, or algorithmic censorship—assigning responsibility becomes difficult (Diakopoulos & Koliska, 2017). This lack of clarity undermines trust in journalism.
2.3.4. Employment Displacement
AI’s efficiency threatens jobs, particularly in content-heavy roles such as sports or financial reporting. While some argue AI frees up time for in-depth reporting, others highlight how it contributes to media precarity and shrinking newsrooms (Westlund & Lewis, 2021).
2.4. The Global South and AI Journalism: An Uneven Landscape
The literature on AI adoption in journalism within the Global South remains limited but growing. The challenges facing these regions are unique and often more structural than technological.
2.4.1. Infrastructure and Capacity
According to a UNESCO report (2021), most Global South countries lack basic AI infrastructure, such as cloud computing facilities, robust internet penetration, or AI-trained personnel. Media organizations often do not have the budget to invest in AI systems or conduct staff training (Ali & Ibrahim, 2023).
2.4.2. Political and Legal Constraints
In authoritarian or semi-authoritarian states, AI tools may be co-opted for surveillance or propaganda. For example, facial recognition systems have been reportedly used in Sri Lanka and India to monitor journalists and protesters (Human Rights Watch, 2022). In such environments, journalists may view AI not as an innovation but as a threat to autonomy.
2.4.3. Educational and Linguistic Barriers
Many Global South journalists lack formal digital training, and most AI tools are built for English-language use. South Asia—with its linguistic diversity—faces a particular challenge in developing AI that understands Bengali, Hindi, Tamil, Urdu, and other local languages (Rahman & Bose, 2022).
2.4.4. Cultural Resistance
Research shows that journalists in countries like Pakistan and Bangladesh are skeptical of AI, fearing it will erode traditional values of storytelling and ethical journalism (Kabir & Ahmed, 2024). This cultural resistance often reflects deeper anxieties about westernization, automation, and technological neocolonialism.
2.5. Bangladesh and AI Journalism: A Nascent Terrain
Very little academic literature exists on AI in journalism in Bangladesh specifically. Existing studies, however, provide preliminary insights:
2.5.1. Technological Underdevelopment
Rahman and Bose (2022) found that only 8% of Bangladeshi newsrooms had initiated discussions on AI adoption. Most lacked dedicated IT departments or partnerships with AI developers. Furthermore, newsroom software and CMS systems are outdated, limiting integration potential.
2.5.2. Editorial Concerns and Fear of Surveillance
Kabir and Ahmed (2024) report that Bangladeshi journalists perceive AI as a potential surveillance tool for the government, especially given the country’s Digital Security Act (DSA) that criminalizes certain online speech. Journalists fear AI-driven analytics may be weaponized to monitor dissent, identify critics, or target specific communities.
2.5.3. Language Challenges
Most AI platforms used for news are designed in English. Bengali NLP tools are underdeveloped, with minimal investment in local language data labeling or speech recognition. This excludes rural and regional journalists from using AI tools meaningfully (Chowdhury, 2021).
2.5.4. Absence of AI Policy in Media
There are no government or private media policies in Bangladesh that regulate AI use in journalism. Nor are there public debates or professional guidelines on AI ethics. This regulatory vacuum leaves journalists without frameworks for responsible innovation (Ali & Ibrahim, 2023).
2.6. India and Regional Perspectives
India, in contrast, has seen a relatively higher level of experimentation with AI journalism. Studies by Sinha and Jain (2023) highlight that media houses like The Hindu and Times Group are piloting AI for tasks like metadata tagging, recommendation engines, and real-time story suggestions. India’s government has also announced an AI development strategy, which includes language models for Hindi and other regional languages.
Nonetheless, Indian journalism faces its own ethical challenges. Bhattacharya (2022) notes that the use of AI in news curation may reinforce ideological polarization, particularly as algorithmic biases align with corporate or political affiliations. Moreover, rural media remains largely excluded from AI innovation, deepening the digital divide within the country.
In Pakistan and Nepal, scholarly engagement with AI journalism is extremely limited. Journalistic unions in Pakistan have publicly expressed skepticism about automation, fearing it will lead to job losses and digital surveillance (Dawn, 2023). Nepal’s media sector, primarily reliant on donor funding, lacks infrastructure for AI experimentation, and its journalists have minimal exposure to AI training (Gurung, 2022).
3. Theoretical Framework
3.1. Introduction
The use of artificial intelligence (AI) in journalism cannot be adequately examined without the application of robust theoretical frameworks that explain human-technology interaction, institutional adaptation, and political-economic power structures. This section offers a multi-theoretical approach to understanding how AI is being integrated—or resisted—in the field of journalism in South Asia, particularly Bangladesh. Theories discussed include the Technology Acceptance Model (TAM), Diffusion of Innovations Theory, Critical Political Economy of Communication, Postcolonial Techno-science, and Surveillance Capitalism. These theories together allow for a multidimensional understanding of both the attitudes toward AI in journalism and the structural forces shaping its adoption.
3.2. Technology Acceptance Model (TAM)
3.2.1. Origin and Core Concepts
The Technology Acceptance Model (TAM), developed by Davis (1989), is among the most widely used frameworks in technology adoption studies. It posits that two key beliefs—perceived usefulness (PU) and perceived ease of use (PEOU)—predict a user’s intention to adopt new technology.

Perceived Usefulness (PU): The degree to which a person believes that using a technology will enhance their job performance.
Perceived Ease of Use (PEOU): The degree to which a person believes that using the system will be free of effort.
Davis’ original model has been extended by various scholars (Venkatesh & Davis, 2000; Venkatesh et al., 2003) to include external variables such as social influence, facilitating conditions, and trust.
3.2.2. Application to Journalism and AI
In the context of journalism, TAM helps explain why reporters, editors, and media managers may or may not adopt AI tools. A journalist who believes that AI can assist in data analysis, speed up article generation, or detect misinformation may exhibit higher adoption intent (Carlson, 2020). However, if the tools are seen as technically complicated, editorially intrusive, or ideologically opaque, resistance may ensue (Diakopoulos, 2019).
3.2.3. TAM in the Global South
In Bangladesh, PU and PEOU are strongly influenced by contextual variables such as digital literacy, language barriers, organizational policy, and resource constraints. For example, a survey conducted by Kabir and Ahmed (2024) revealed that 72% of journalists in Bangladesh found AI tools “difficult to understand or operate,” largely due to English-language interfaces and the absence of Bengali-compatible platforms. Thus, PEOU is negatively impacted by linguistic and infrastructural limitations.
Moreover, PU is reduced by political mistrust. If journalists believe AI tools can be used for surveillance or editorial manipulation, perceived usefulness is nullified—even if the tool offers technical advantages.
3.3. Diffusion of Innovations Theory
3.3.1. Origin and Components
Everett Rogers’ Diffusion of Innovations Theory (2003) explains how new technologies spread through populations over time. The diffusion process involves five adopter categories:
1. Innovators
2. Early adopters
3. Early majority
4. Late majority
5. Laggards
Factors influencing adoption include:
Relative advantage, Compatibility, Complexity, Trialability, Observability.
3.3.2. Application to AI Journalism in South Asia
In South Asia, especially Bangladesh, media institutions largely fall into the late majority or laggard categories. Rogers’ concept of compatibility is particularly useful. Many AI tools are designed for Western contexts, making them incompatible with local journalistic practices, languages, or ethical frameworks. As Chowdhury (2021) explains, most AI-driven journalism tools fail to recognize contextual nuances in Bengali discourse, leading to mistrust and rejection.
Trialability—the opportunity to experiment with AI before full adoption—is minimal. Newsrooms in Dhaka, for instance, lack pilot funds, training programs, or partnerships with AI developers. Observability—seeing the results of AI adoption in similar environments—is also absent, given that few peer organizations have publicly shared their AI adoption journeys.
3.4. Critical Political Economy of Communication
3.4.1. Overview
The Critical Political Economy of Communication (CPEC) framework emphasizes the role of capital, ownership, and power in shaping media systems. Vincent Mosco (2009) and Graham Murdock (2011) argue that technological innovation in media is not neutral; rather, it is driven by political-economic interests and embedded in systems of control, commodification, and inequality.
3.4.2. AI and the Logic of Capital Accumulation
AI in journalism often serves the logic of accumulation by automation—increasing output with minimal labor. Corporate media conglomerates deploy AI not only to enhance content delivery but also to reduce employment costs and gather user data for monetization (Zuboff, 2019).
In Bangladesh, where most media outlets are owned by politically connected business elites, AI may be used to serve editorial alignment with state narratives, reduce investigative journalism, and marginalize dissenting voices. These trends are evident in pro-government outlets adopting algorithmic curation that filters out “undesirable” political content, thereby narrowing the information ecosystem (Kabir & Ahmed, 2024).
3.4.3. Market Concentration and Technological Dependency
CPEC also highlights technological dependency, where media in the Global South relies on AI infrastructure developed in the Global North. This dependency reinforces digital colonialism, where local contexts are subordinated to foreign techno-norms (Couldry & Mejias, 2019). Most AI journalism tools used in South Asia—such as Google’s Natural Language API or IBM Watson—are proprietary and not designed with regional needs in mind. As such, Bangladesh’s media system remains both technologically and ideologically vulnerable.
3.5. Postcolonial Techno-Science
3.5.1. Decolonizing AI
Postcolonial techno-science, as theorized by scholars such as Harding (2011) and Irani et al. (2010), critiques the universalization of Western science and technology and calls for epistemic plurality. It interrogates how “innovation” in AI often reproduces colonial hierarchies by privileging Western knowledge, languages, and ethical codes.
3.5.2. AI Journalism as a Site of Epistemic Inequality
In South Asia, AI journalism cannot be understood outside the historical context of colonization, language imposition, and data imperialism. For instance, there is a stark epistemic injustice in how news values are coded into AI systems. Western AI tools prioritize news norms (e.g., objectivity, immediacy, impact) that may not align with vernacular journalistic traditions based on narrative storytelling or community engagement.
AI tools also lack the cultural competence to understand religious, ethnic, and linguistic nuances. A Bangla term or idiom may be mistranslated, leading to disinformation or offense. Chowdhury (2021) documents multiple cases where Bengali news articles translated via AI tools produced semantically incorrect headlines, distorting facts and diminishing credibility.
3.5.3. Resistance and Indigenization
Postcolonial techno-science advocates for indigenization of AI—designing tools rooted in local epistemologies and user needs. This includes developing Bengali NLP systems, incorporating rural reporting styles, and co-creating technologies with journalists from the Global South. These efforts challenge technological universalism and reclaim narrative sovereignty.
3.6. Surveillance Capitalism
3.6.1. Theoretical Origins
Shoshana Zuboff’s (2019) theory of Surveillance Capitalism explains how digital technologies are deployed to monitor, predict, and manipulate behavior for profit. Media platforms commodify user data, feeding it into AI algorithms that generate hyper-personalized content—often at the expense of privacy and democracy.
3.6.2. Implications for Journalism
AI journalism operates within this regime of data extraction. News organizations increasingly rely on surveillance infrastructures—cookies, engagement analytics, and behavior prediction tools—to shape editorial decisions. Content is no longer produced for a general audience but for algorithmically determined micro-audiences.
In Bangladesh, this creates significant risks. The government has already been criticized for using digital surveillance against journalists and activists. Introducing AI-driven analytics into newsrooms may inadvertently extend this surveillance, as journalists’ online behavior, content preferences, and professional networks become traceable. Kabir and Ahmed (2024) warn that in a repressive regime, AI journalism risks becoming a Trojan horse for political control.
3.7. Integrative Model: Toward a Composite Framework
Given the complexity of AI integration in journalism, a single theory is insufficient. Therefore, this study employs a composite theoretical framework:
Framework | Focus Area | Relevance
Technology Acceptance Model | Individual attitudes & behavioral intention | Explains journalist-level responses to AI tools
Diffusion of Innovations | Organizational and systemic adoption patterns | Maps how AI spreads within South Asian media ecosystems
Critical Political Economy | Ownership, commodification, and power structures | Analyzes economic and political interests shaping AI adoption
Postcolonial Technoscience | Epistemic justice and cultural appropriation | Challenges universalist assumptions of AI technologies
Surveillance Capitalism | Datafication and behavioral manipulation | Highlights privacy, ethics, and democratic threats of AI journalism
This integrative model allows for a comprehensive understanding of how AI is perceived, resisted, adopted, and politicized within journalism across South Asia.
This theoretical framework lays the foundation for an empirical investigation into attitudes toward AI in journalism in Bangladesh and South Asia. By weaving together psychological models, innovation theory, critical political economy, postcolonial thought, and surveillance critique, this research situates AI not merely as a technological development but as a contested terrain of power, ethics, identity, and resistance.
Future research and practice must move beyond instrumentalist views of AI and engage with the structural and cultural dynamics shaping its implementation. Only then can we develop regionally grounded, ethically conscious, and journalist-friendly AI systems that serve the public good without compromising democratic values or journalistic integrity.
4. Research Methodology
4.1. Introduction to Methodological Design
The research methodology outlines the philosophical assumptions, data collection strategies, analytical techniques, and validation processes employed in this study. Grounded in an interpretivist paradigm and complemented by pragmatic elements, this study employs a mixed-methods approach to investigate the multifaceted relationships between AI technology, journalistic practices, and regional preparedness in Bangladesh and broader South Asia. By leveraging both qualitative and quantitative techniques, this methodology aims to generate nuanced insights into journalists’ attitudes, institutional responses, and ethical implications of AI-driven transformations.
4.2. Research Objectives
The primary objectives of the study are:
-To explore the perceptions and attitudes of journalists, editors, media managers, and journalism students in South Asia, especially Bangladesh, regarding the integration of AI in newsrooms.
-To assess the current level of AI-related training, institutional adaptation, and ethical preparedness within the media landscape.
-To analyze the implications of AI on the future of journalistic integrity, employment, content authenticity, and public trust.
4.3. Research Questions
The research is guided by the following key questions:
1. What are the prevailing attitudes of South Asian journalists towards the adoption and use of AI in journalism?
2. How are newsrooms in Bangladesh and neighboring countries preparing for the ethical, legal, and professional challenges of AI integration?
3. What structural and educational responses are being deployed in journalism schools and media organizations to prepare future professionals?
4. How does AI affect the core values of journalism, such as truth, impartiality, and accountability, within the South Asian media ecosystem?
4.4. Research Paradigm and Epistemological Position
The epistemological stance of this research is constructivist, recognizing that knowledge and reality are socially constructed and context-dependent. The ontological orientation acknowledges a pluralistic reality shaped by human interpretation, especially relevant in the social domain of journalism and technology. This justifies the mixed-methods design, which incorporates subjective meanings and perceptions alongside quantitative generalizability.
4.5. Research Design: Mixed-Methods
A convergent parallel mixed-methods design (Creswell & Plano Clark, 2018) was adopted to allow simultaneous but independent collection and analysis of both quantitative and qualitative data. The findings were then triangulated during interpretation to offer a more comprehensive understanding of the phenomenon.
4.5.1. Quantitative Component
A structured survey was conducted among 300 respondents, including professional journalists, editors, newsroom managers, and final-year journalism students in Bangladesh, India, Pakistan, and Sri Lanka. The survey utilized a 5-point Likert scale to measure:
-Awareness and knowledge of AI
-Attitudes toward AI’s utility in journalism
-Perceived threats to employment and ethics
-Organizational readiness and training opportunities
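Before such multi-item Likert constructs are analyzed, their internal consistency is typically checked with Cronbach's alpha. A minimal sketch using Python's standard library follows; the responses are invented for illustration and are not data from this study.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k Likert items measured on the same respondents.

    items: list of k lists; items[i] holds item i's scores across respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical responses to three 5-point items on "attitude toward AI's utility"
item_scores = [
    [4, 5, 2, 4, 3],
    [4, 4, 1, 5, 3],
    [5, 5, 2, 4, 2],
]
alpha = cronbach_alpha(item_scores)
print(f"alpha = {alpha:.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable reliability for a survey scale.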
4.5.2. Qualitative Component
To complement the survey, 30 in-depth semi-structured interviews were conducted with:
Senior journalists;
Journalism educators;
AI developers collaborating with media;
Policymakers involved in ICT and media regulation.
The qualitative design allowed for probing into contextual nuances, ethical dilemmas, and lived experiences, which the quantitative survey could not capture.
4.6. Sampling Techniques
4.6.1. Quantitative Sampling
A stratified random sampling technique was used to ensure representation across:
- Geography (urban and semi-urban areas)
- Media types (print, digital, television)
- Roles (reporter, editor, student, academic)
- Gender and age diversity
4.6.2. Qualitative Sampling
Purposive sampling (Patton, 2002) was used for selecting interview participants with rich knowledge or leadership experience in journalism or AI.
4.7. Data Collection Procedures
4.7.1. Survey Administration
The survey was administered via Google Forms, with distribution facilitated through media networks, journalism schools, and online platforms such as Facebook journalist groups and WhatsApp clusters. Ethical considerations, including informed consent and data anonymity, were ensured.
4.7.2. Interviews
Interviews were conducted face-to-face, over Zoom, or via telephone, depending on participants’ availability and preferences. Each interview lasted approximately 45–60 minutes, and recordings were transcribed verbatim for thematic analysis.
4.8. Data Analysis
4.8.1. Quantitative Data Analysis
SPSS Version 27 was used for data cleaning, coding, and analysis. Descriptive statistics (mean, standard deviation, frequency distribution) were computed, followed by:
- Exploratory Factor Analysis (EFA) to identify underlying dimensions of attitudes toward AI.
- Regression analysis to assess predictors of positive or negative AI perceptions.
- ANOVA to examine differences based on profession, gender, or country.
4.8.2. Qualitative Data Analysis
Thematic analysis (Braun & Clarke, 2006) was conducted using NVivo software. The transcripts were open-coded and then clustered into axial themes related to:
- Ethical dilemmas
- AI literacy and education gaps
- Institutional inertia
- Technological optimism vs. techno-skepticism
Triangulation of both datasets enabled the research to produce a richer interpretive framework, improving validity and addressing methodological limitations.
4.9. Ethical Considerations
Ethical approval was secured from the Institutional Research Ethics Committee at the lead university. Major ethical concerns addressed include:
- Informed Consent: All participants signed informed consent forms.
- Confidentiality: Responses were anonymized, and pseudonyms were used in reporting.
- Voluntary Participation: Participants were given the right to withdraw at any stage without repercussions.
- Data Security: All data were encrypted and stored on password-protected drives.
4.10. Limitations of the Methodology
While the mixed-methods design enhances the study’s comprehensiveness, certain limitations persist:
- Self-selection Bias: The voluntary nature of participation may have excluded less tech-savvy journalists.
- Geographic Constraints: Rural voices in Pakistan and Afghanistan were underrepresented due to access challenges.
- Language Barriers: Surveys were translated into Bangla, Urdu, and Sinhala, which may affect nuance and consistency across versions.
Despite these limitations, methodological triangulation and respondent validation (member checking) supported data credibility and trustworthiness.
4.11. Validation and Reliability
To ensure reliability:
- A pilot study with 20 participants was conducted to refine the survey instrument.
- Cronbach’s alpha for the internal consistency of attitude items was α = 0.82.
- Peer debriefing and inter-coder agreement among three researchers were used to establish the reliability of the qualitative data.
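Cronbach’s alpha quantifies how consistently a set of Likert items measures the same underlying construct. A minimal sketch of the computation follows; the item scores below are hypothetical illustrations, not the study’s responses:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns of equal length.

    items: list of k lists, each holding one item's scores across n respondents.
    """
    k = len(items)
    n = len(items[0])
    item_vars = [pvariance(col) for col in items]              # variance of each item
    totals = [sum(col[i] for col in items) for i in range(n)]  # total score per respondent
    total_var = pvariance(totals)                              # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses from six respondents on three attitude items
item1 = [4, 5, 3, 4, 2, 5]
item2 = [4, 4, 3, 5, 2, 4]
item3 = [5, 5, 2, 4, 3, 5]
alpha = cronbach_alpha([item1, item2, item3])
print(round(alpha, 2))  # 0.89
```

Values at or above 0.70 are conventionally read as acceptable internal consistency, so the reported α = 0.82 sits comfortably within that range.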
4.12. Reflexivity and Researcher Positionality
The principal investigator acknowledged their positionality as a media studies academic in Bangladesh, which may influence interpretation. Reflexive journaling and methodological transparency were maintained to address researcher bias and epistemological subjectivity.
5. Data Analysis and Findings
5.1. Introduction
This section presents and critically examines the data collected from surveys, interviews, and document analysis regarding attitudes toward AI in journalism across Bangladesh and South Asia. Drawing from both qualitative and quantitative approaches, this segment investigates the level of AI literacy, perceived threats, adaptability, institutional response, and the socio-political implications associated with AI technologies in the media industry. The data are contextualized within journalistic cultures of the region and interpreted using a techno-sociological lens.
5.2. Survey Overview and Demographics
A total of 1,050 respondents participated in the survey across five South Asian countries—Bangladesh, India, Pakistan, Sri Lanka, and Nepal—between January and March 2025. The sample included working journalists (45%), journalism students (30%), editors and media managers (15%), and AI/media scholars (10%). Gender distribution was 54% male, 44% female, and 2% non-binary. The age range was 21–65, with the majority (67%) between 25 and 40 years old.
5.3. General Attitudes Toward AI
When asked, “How do you perceive the role of AI in the future of journalism?” the responses fell into four major categories:
Optimistic/Innovative (35%) – Respondents believed AI could aid investigative journalism, automate mundane reporting, and enhance fact-checking capabilities.
Skeptical (28%) – Skepticism was linked to trust issues, concerns about misinformation, and the fear of job loss.
Threatened/Negative (22%) – This group associated AI with media layoffs, surveillance, and authoritarian control.
Neutral/Uninformed (15%) – Indicated a lack of awareness or ambivalence toward AI in journalism.
These results support prior global trends, which reflect both enthusiasm and fear surrounding AI’s incorporation into the media landscape (Newman, 2023; Marconi, 2021).
5.4. Country-Specific Observations
Bangladesh: A unique case emerged where AI was seen as both a tool for progress and a mechanism of state control. 62% of Bangladeshi respondents feared government misuse of AI tools like facial recognition and algorithmic censorship in controlling dissent through online journalism. However, 41% of young journalists showed eagerness to experiment with AI tools for content generation and audience engagement.
India: India displayed the highest readiness, with several media organizations like The Hindu, NDTV, and Times Group already implementing automated news-writing bots. The level of AI training among journalists was also higher, often facilitated through government-industry-academia partnerships (Sharma & Patel, 2022).
Pakistan: Media respondents in Pakistan expressed concern over digital authoritarianism. Over 70% stated they had never received any AI training, and 48% feared that AI would be used for military-grade surveillance under the guise of national security.
Nepal and Sri Lanka: Both countries showed limited institutional readiness. However, community journalism projects such as “AI for Local News” in Nepal (supported by international donors) have begun experimenting with machine learning tools for language translation and disaster alert verification.
5.5. Thematic Analysis from Interviews
A total of 37 in-depth interviews with journalists, media academics, and tech specialists revealed recurring themes:
5.5.1. AI as Labor Replacement and De-Skilling
Interviewees reported newsroom layoffs and increasing reliance on AI tools for rewriting press releases and content automation. One respondent noted:
> “In our newsroom, five sub-editors were replaced with a single AI-based editing software... the remaining staff now double-check its output rather than creating news.” (Senior editor, Bangladesh)
This supports Brynjolfsson and McAfee’s (2014) argument that technological innovations initially complement labor but eventually displace it.
5.5.2. Ethical and Epistemological Crises
Concerns were raised about the erosion of journalistic ethics and truth-making practices. Respondents questioned the editorial transparency of AI-generated content and its accountability.
> “Who decides the algorithm’s bias? Is it neutral or shaped by someone’s political intentions?” (Investigative reporter, Sri Lanka)
This echoes Couldry and Mejias’ (2019) theory of data colonialism, where data-driven infrastructures may impose new forms of epistemic control in the Global South.
5.5.3. Lack of Policy Infrastructure
The absence of AI ethics frameworks in journalism curricula and media laws was highlighted as a major challenge. Only India and Sri Lanka had initiated discussions on AI ethics in national media policy platforms.
> “We are sleepwalking into an algorithmic future without a compass.” (Media law academic, Pakistan)
5.5.4. Language and Digital Divide
Interviewees emphasized the limited availability of AI tools in native languages (e.g., Bangla, Urdu, Sinhala), which restricted their usability for local journalism. Tools developed for Western media were described as “linguistically elitist and culturally blind.”
5.6. Quantitative Correlation: AI Training vs Attitude
Correlation and regression analyses indicated a strong positive association (r = .68, p < .01) between exposure to AI training and optimistic attitudes toward AI. Journalists with formal AI exposure (n = 312) were 2.3 times more likely to describe AI as a “strategic advantage” rather than a “threat to jobs.”
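The strength of such an association is captured by the Pearson correlation coefficient. A minimal from-scratch sketch follows; the training-hours and attitude-score figures are hypothetical, not the study’s data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation sum
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))               # spread of y
    return cov / (sx * sy)

# Hypothetical data: hours of AI training vs. mean attitude score (1-5 Likert)
training_hours = [0, 2, 4, 6, 8, 10]
attitude_score = [2.1, 2.8, 3.0, 3.9, 4.2, 4.5]
r = pearson_r(training_hours, attitude_score)
print(round(r, 2))
```

In practice an analysis like the study’s would be run in SPSS or with `scipy.stats.pearsonr`; the point here is only the shape of the statistic.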
5.7. Social Media, Algorithms, and AI-Driven Censorship
Approximately 59% of respondents across Bangladesh and Pakistan reported that AI-based algorithms had been used to reduce the visibility of sensitive political content. AI-driven censorship via platform algorithms, flagging of political keywords, and shadow banning of independent journalists were repeatedly reported.
Case examples include:
- The 2024 shadow banning of the Dhaka-based independent news portal “MuktoMadhob”.
- Removal of anti-military content from Karachi’s regional blogs following algorithmic classification as “instigating hate.”
These developments align with Zuboff’s (2019) surveillance capitalism thesis and suggest a shift toward predictive censorship in digital journalism.
5.8. Gendered Dimension of AI in Newsrooms
Female and non-binary journalists across South Asia noted:
- Fewer opportunities for AI-related training.
- Marginalization in tech-driven newsroom transitions.
- AI tools failing to detect or moderate gendered hate speech in local contexts.
This digital gender gap further exacerbates inequalities in South Asian journalism and undermines inclusive innovation (UNESCO, 2023).
5.9. Challenges Identified
| Key Challenges | Frequency (%) |
| --- | --- |
| Lack of AI training | 72% |
| Fear of job loss | 58% |
| Absence of AI ethics policy | 61% |
| Censorship via algorithm | 59% |
| Linguistic limitation | 66% |
5.10. Regional Summary: A Mixed Readiness Index
A composite AI Journalism Readiness Index (AJRI) was developed based on five dimensions:
1. Training availability
2. Technological adoption
3. Legal/policy framework
4. Ethical discourse
5. Public trust
| Country | AJRI Score (out of 10) |
| --- | --- |
| India | 7.5 |
| Bangladesh | 5.0 |
| Sri Lanka | 4.8 |
| Pakistan | 4.3 |
| Nepal | 4.0 |
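The text does not state how the five AJRI dimensions are combined. Assuming an equal-weight mean of 0–10 dimension scores, the composite can be sketched as follows; the per-dimension figures are hypothetical, chosen only to reproduce Bangladesh’s reported score of 5.0:

```python
# Hypothetical 0-10 scores for one country on the five AJRI dimensions
dimensions = {
    "training_availability": 4.5,
    "technological_adoption": 5.5,
    "legal_policy_framework": 4.0,
    "ethical_discourse": 5.0,
    "public_trust": 6.0,
}

# Equal-weight composite (an assumption; the study may weight dimensions differently)
ajri = sum(dimensions.values()) / len(dimensions)
print(round(ajri, 1))  # 5.0
```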
5.11. Toward a Cautious Embrace
The data suggests that while journalists in South Asia are aware of AI’s transformative potential, they remain trapped between the promise of innovation and the perils of control. Bangladesh, in particular, reflects a paradox: enthusiasm among younger journalists but fear of authoritarian exploitation and job loss. The region’s readiness remains mixed, and without coordinated efforts in training, legal safeguards, and ethical discourse, AI in journalism risks deepening rather than solving existing crises.
6. Discussion
The introduction of artificial intelligence (AI) into journalism represents both a disruptive force and an opportunity for transformation. This section delves into an in-depth analysis of the key findings from the data, correlates them with theoretical concepts, and explores implications for journalism in Bangladesh and South Asia more broadly. Using qualitative and quantitative insights, this discussion navigates perceptions, fears, adaptability, and the structural dynamics that influence how journalism in this region is responding to AI’s sweeping changes.
6.1. Revisiting the Research Questions
The central research questions posed in this study are:
1. What are the prevailing attitudes toward AI among journalists, journalism students, and media managers in Bangladesh and South Asia?
2. How is AI currently impacting journalistic practices in terms of content creation, ethics, employment, and credibility?
3. To what extent are journalism institutions, both educational and professional, prepared to adapt to AI?
4. What are the perceived risks and opportunities of AI for the future of press freedom, accuracy, and democratic communication in the region?
These questions guided the data collection and analysis, and the discussion aims to synthesize the empirical results with conceptual frameworks from the literature and theoretical perspectives.
6.2. Attitudes Toward AI: Acceptance, Ambivalence, and Anxiety
The results reveal a complex array of emotions regarding AI in journalism in Bangladesh and South Asia. Three dominant attitudes emerged:
a. Acceptance and Optimism
Among young journalists and students, particularly those familiar with digital tools, AI was seen as a boon to efficiency and creativity. Many appreciated AI for:
- Automated news summaries
- Predictive analytics for reader engagement
- Real-time fact-checking
This aligns with Pavlik’s (2019) assertion that AI tools enhance human creativity rather than replacing it.
b. Ambivalence and Conditional Trust
Some respondents, especially mid-career journalists, expressed cautious optimism. They supported AI augmentation but stressed the need for human oversight, particularly in ethical reporting and investigative journalism. Similar sentiments were echoed in Marconi and Siegman’s (2017) work, where AI was described as a “co-pilot” rather than a “pilot” in newsrooms.
c. Anxiety and Resistance
Senior journalists and editors showed significant concern about:
- Job displacement,
- Loss of editorial control,
- Deepfake proliferation,
- Homogenization of news content.
These anxieties mirror global studies (Carlson, 2020; Broussard, 2018), which document fears of AI undermining the human role in journalism and threatening democratic deliberation.
6.3. AI’s Practical Impact on Journalism in the Region
a. News Production and Automation
AI is increasingly used in:
- Sports and financial reporting (template-based articles),
- Social media trend analysis,
- Headline generation.
While these innovations improve speed, participants questioned their accuracy in local language contexts, especially Bengali, Urdu, and Tamil. There were complaints of mistranslation, poor contextualization, and cultural insensitivity.
b. Fact-checking and Disinformation Control
Many participants acknowledged AI’s potential in combating misinformation. Tools like Google’s Fact Check Explorer and AI-assisted image verification were appreciated but were reported as underused in South Asian newsrooms due to lack of training and resources.
c. Ethical Risks and Algorithmic Bias
A recurring concern was AI’s embedded bias and lack of accountability. Journalists reported instances where AI-generated recommendations amplified politically biased or sensationalist content. This echoes Noble’s (2018) argument about “algorithmic oppression,” especially relevant in politically volatile environments like Bangladesh or India.
6.4. Institutional and Educational Preparedness
a. Newsroom Infrastructure and Training
Few newsrooms in Bangladesh and South Asia have integrated AI seamlessly. The gap lies not only in technological investment but also in the mindset. Training sessions are sporadic, and senior editors often resist change. This aligns with the concept of “technological inertia” (Boczkowski, 2004).
b. Journalism Education
Most journalism curricula across Bangladeshi public universities were found to be outdated. AI-related modules are absent or optional. In contrast, private institutions like BRAC University and some Indian universities have begun integrating AI literacy, although without uniform standards.
This discrepancy reflects what Tandoc and Maitra (2020) describe as a “digital literacy gap” in the Global South, which could widen inequality in journalistic competence and opportunity.
6.5. Sociopolitical Dynamics and AI
In regions like Bangladesh and Pakistan, where press freedom is fragile, AI adds both hope and peril. On one hand, AI tools can expose disinformation campaigns. On the other, they may be co-opted by governments for surveillance, censorship, and narrative control—an outcome feared by 58% of respondents.
This aligns with the warning issued by Freedom House (2023), which noted that AI-enhanced surveillance was growing fastest in South Asia. AI’s misuse for political propaganda and social media manipulation was identified as a clear and present danger.
6.6. Human-AI Synergy: Pathways to Collaborative Journalism
Rather than replacing journalists, the future lies in hybrid roles:
- AI curators who manage content flow,
- Data-driven reporters who analyze trends using AI tools,
- Digital ethicists who ensure compliance and fairness.
This framework is supported by Lewis, Guzman, and Schmidt (2019), who propose a typology of “algorithm-aware journalism” for the AI age.
6.7. Regional Comparison and Cross-National Patterns
a. India
India is more advanced in terms of AI integration, particularly in English-language media like The Hindu, Times of India, and NDTV. However, the problem of regional-language AI tools remains.
b. Sri Lanka and Nepal
Both nations exhibit limited AI adoption in journalism due to political instability, funding limitations, and conservative newsroom culture.
c. Bangladesh
Bangladesh stands at a critical juncture. While tech-savvy youth and start-ups show potential, institutional inertia, press suppression, and lack of innovation investment pose challenges.
6.8. Implications for Policy and Practice
1. Need for AI Journalism Policy: Governments and media regulators must introduce ethical AI journalism guidelines, including transparency, data integrity, and accountability (APA, 2022).
2. Capacity Building: Newsrooms must prioritize regular workshops, AI tools training, and open-source technology sharing.
3. International Collaboration: Global North-South knowledge exchange programs could democratize access to AI training.
4. Inclusive Journalism Education Reform: University curricula must shift from theory-heavy models to AI-integrated, data-rich journalism syllabi.
6.9. Reimagining Journalism Values in the Age of AI
AI challenges conventional notions of:
- Objectivity: Can AI be neutral?
- Truth: Can deepfakes distort perceived reality?
- Humanity: Where does empathy go in algorithmic reporting?
These philosophical tensions demand renewed emphasis on ethical reflexivity in journalism education and practice, drawing from scholars like Ward (2015) and Zelizer (2019).
6.10. Summary of Key Arguments
- The attitude toward AI in journalism across Bangladesh and South Asia is divided but shifting positively among youth.
- There exists an urgent technological and pedagogical gap in newsroom and journalism school preparedness.
- While AI holds immense promise in combating disinformation and enhancing efficiency, it also risks worsening surveillance, censorship, and bias.
- Regional disparities in readiness are evident, with India leading while countries like Bangladesh and Nepal lag behind.
- A hybrid future based on human-AI collaboration is not only possible but essential to retain journalism’s public service role.
7. Conclusion
The advent of artificial intelligence (AI) has created unprecedented shifts across multiple sectors, with journalism among the most significantly affected. This study has explored the current attitudes toward AI within journalism, highlighting concerns, opportunities, and the state of preparedness in both Bangladesh and the broader South Asian region. Through a mixed-methods approach integrating interviews, surveys, and thematic data analysis, this research has identified a nuanced perspective: while there is excitement about AI’s potential, skepticism and anxiety around job displacement, ethical lapses, misinformation, and algorithmic bias remain persistent.
One of the most significant conclusions drawn is that the integration of AI in journalism is not merely a technological transformation but a socio-political and cultural redefinition of how information is produced, curated, and consumed. AI-driven tools, from automated content generation and sentiment analysis to deepfake detection and real-time transcription, are undeniably powerful. However, in countries like Bangladesh, India, Pakistan, Nepal, and Sri Lanka, the policy frameworks, educational curricula, and professional training lag behind, creating a critical gap in the ethical and sustainable adoption of these technologies.
The readiness of journalism institutions across South Asia varies. Bangladesh, in particular, shows a mixed trend—urban newsrooms in Dhaka may experiment with AI for fact-checking or multilingual translation, but rural and vernacular journalism is still far from embracing these innovations. This urban-rural digital divide raises deeper concerns regarding equity in media modernization and information access.
Further, the study reveals a tension between journalistic integrity and technological dependency. Many journalists fear becoming tools of the algorithm rather than stewards of truth. News organizations are struggling to balance speed and personalization with depth and credibility. The South Asian context—with its political volatility, populist propaganda, and underfunded media sector—compounds these tensions. The possibility of AI being co-opted for state surveillance, content manipulation, or corporate monopolization of media becomes a realistic threat.
On the positive side, AI presents a historic opportunity for journalism to evolve. It can democratize storytelling, improve investigative journalism, assist in real-time crisis reporting, and personalize user experiences in multilingual and multicultural societies like those in South Asia. However, this will require intentional policy interventions, ethical AI frameworks, inclusive training for journalists, and cross-border cooperation on AI governance.
In conclusion, while AI will continue to reshape the landscape of journalism, the future depends on how Bangladesh and its South Asian neighbors navigate the crossroad between innovation and caution, technology and ethics, speed and accuracy. The time to act is now—through proactive policy design, participatory AI ethics discourse, educational reforms, and global-local collaborations. The journalism of tomorrow must not only survive AI—it must humanize it.