Preprint
Article

This version is not peer-reviewed.

Attitudes Towards AI and Its Impact on the Future of Journalism: Where Do Bangladesh and South Asia Stand?

Submitted:

25 August 2025

Posted:

27 August 2025

You are already at the latest version

Abstract
Artificial Intelligence (AI) has rapidly emerged as a transformative force in the field of journalism, reshaping news production, dissemination, and audience engagement worldwide. In South Asia, particularly Bangladesh, the adoption of AI technologies presents both opportunities and challenges for the media landscape. This study examines the attitudes towards AI among journalists, media professionals, and audiences in Bangladesh and broader South Asia, while exploring the implications of these attitudes for the future of journalism. Drawing on a mixed-method approach that incorporates surveys, interviews, and content analysis, the research highlights the tension between optimism about AI’s potential to enhance efficiency, fact-checking, and personalized content, and concerns regarding job displacement, ethical risks, algorithmic bias, and threats to press freedom. Findings reveal that while younger, tech-savvy journalists and audiences view AI as a tool for innovation and digital transformation, many traditional practitioners remain skeptical, fearing a decline in journalistic integrity and human editorial judgment. In Bangladesh, where media is often entangled with political pressures and economic constraints, the integration of AI into journalism raises further debates on transparency, accountability, and the risk of reinforcing state or corporate control. Comparatively, other South Asian countries demonstrate varying levels of adaptation, shaped by socio-political contexts, digital infrastructures, and policy frameworks. The study argues that the future of journalism in the region will depend not only on technological adoption but also on critical ethical frameworks, inclusive policies, and media literacy among journalists and consumers alike. By situating Bangladesh within a broader South Asian perspective, this research contributes to the global dialogue on AI-driven journalism and underscores the urgent need for balanced approaches that harness AI’s benefits while safeguarding democratic communication.
Keywords: 
;  ;  ;  ;  ;  

1. Introduction

1.1. Background and Context

The emergence of artificial intelligence (AI) as a transformative force in multiple domains has spurred both enthusiasm and concern across the globe. Within the realm of journalism, AI is not only redefining workflows and content production but also challenging longstanding norms of objectivity, human agency, and editorial judgment (Carlson, 2020). From automated content generation and natural language processing (NLP) to personalized content distribution algorithms, AI tools are reshaping how journalism is conceived, practiced, and consumed.
While AI’s application in media industries is rapidly expanding in technologically advanced economies such as the United States, United Kingdom, China, and Germany, its penetration in the Global South—including South Asia—remains limited, uneven, and contested. This disparity in adoption is influenced by a confluence of factors including technological infrastructure, political culture, economic constraints, digital literacy, and professional ethics. In countries like Bangladesh, where journalism is often entangled in political polarization, resource scarcity, and censorship, the integration of AI technologies introduces a new set of challenges and ethical dilemmas.
AI-powered journalism can offer significant improvements: faster content generation, improved data analysis, early misinformation detection, and targeted audience engagement. However, it also raises urgent questions about employment displacement, algorithmic bias, accountability, and the erosion of editorial autonomy. For Bangladesh and the broader South Asian region, the lack of a clear roadmap for AI integration in journalism further complicates the situation. This study aims to understand the current attitudes toward AI among journalists in Bangladesh and South Asia, assess the regional preparedness for AI-led transformations, and explore the socio-political consequences of its adoption.

1.2. Defining AI in Journalism

Artificial intelligence in journalism refers to the application of machine learning, natural language processing, automation, and big data analytics to assist, augment, or automate journalistic tasks (Diakopoulos, 2019). These tools can be categorized into several domains:
Automated Content Production (ACP): Software such as Wordsmith or Heliograf can produce data-driven reports (e.g., sports or financial news) without human intervention.
News Recommender Systems: Algorithms used by platforms like Facebook and Google News tailor content to users’ preferences—raising issues around filter bubbles and ideological segregation (Pariser, 2011).
Fact-checking and Verification Tools: AI can be used to detect disinformation or deepfakes, a critical need in the current age of information warfare (Marconi, 2020).
Sentiment and Trend Analysis: Journalists increasingly rely on AI to monitor social media trends and extract audience sentiment.
Understanding the specific uses of AI in journalism is essential to assessing attitudes toward its adoption and potential disruptions in professional practice.

1.3. Global Trends in AI and Journalism

In the Global North, AI’s influence in newsrooms has become increasingly normalized. The Associated Press (AP) has been using AI since 2014 to automate quarterly earnings reports, saving hundreds of hours in reporting labor (Carlson, 2020). Reuters’ “Lynx Insight” suggests story ideas based on data patterns, while The Washington Post’s “Heliograf” automates live updates during elections or sports events (Diakopoulos, 2019). These innovations have allowed journalists to focus more on investigative and interpretive reporting, although they also contribute to the deskilling of entry-level journalists.
Despite such innovations, even in technologically mature markets, concerns persist. These include the risk of AI reproducing racial, gender, or ideological biases, lack of transparency in algorithmic decision-making (the “black box” problem), and erosion of public trust when content is known to be machine-generated (Napoli, 2019).

1.4. Journalism in Bangladesh and South Asia: A Fragile Ecosystem

The state of journalism in Bangladesh and South Asia offers a stark contrast. Press freedom in the region has been on the decline, with Bangladesh ranked 165th out of 180 countries on the 2024 World Press Freedom Index (Reporters Without Borders, 2024). Journalists face harassment, legal prosecution under digital security laws, surveillance, and physical threats. Against this backdrop, introducing AI tools into newsrooms may not be perceived as a neutral technological advancement but rather as a potential mechanism of surveillance, control, or cost-cutting that threatens journalistic independence.
Moreover, media organizations in Bangladesh often operate with constrained budgets and limited digital infrastructure. Many journalists lack training in digital tools, let alone AI applications (Kabir & Ahmed, 2024). There is also a disconnect between newsroom leadership and younger reporters regarding openness to innovation, mirroring generational divides in digital adaptation.
In India, the media ecosystem is more diverse and includes a wider spectrum of technological integration. Initiatives such as The Hindu’s data journalism desk and collaborations between Google News Initiative and Indian media houses illustrate early-stage AI integration. However, issues of political bias, disinformation, and corporate control over media content remain endemic across the region (Rahman & Bose, 2022).

1.5. Problem Statement

Despite the global momentum toward AI-powered journalism, South Asia—particularly Bangladesh—is inadequately prepared for this technological shift. The region lacks comprehensive AI training programs for journalists, government policies on media-tech ethics, or institutional support for responsible innovation. While isolated pilots and innovations exist, they have not coalesced into a broader strategic framework. Meanwhile, public discourse around AI often remains limited to fearmongering or utopianism, with little nuanced understanding of its layered impacts on journalism.
Hence, there is a pressing need to:
Map the current attitudes of journalists towards AI;
Assess the infrastructural and policy-level readiness of South Asian countries, with special attention to Bangladesh;
Examine the ethical, political, and professional challenges posed by AI in journalism;Provide recommendations for equitable and responsible AI integration.

1.6. Research Questions

This study is guided by the following research questions:
1. What are the prevailing attitudes of journalists in Bangladesh and South Asia towards the adoption of AI in journalism?
2. What level of infrastructural and institutional preparedness exists in the region to facilitate AI-driven journalism?
3. What ethical and political implications does AI pose in contexts where press freedom is already under threat?
4. How can South Asian journalism develop region-specific frameworks for responsible and sustainable AI adoption?

1.7. Significance of the Study

This research is significant for multiple reasons. First, it addresses a critical gap in scholarship regarding the intersection of AI and journalism in the Global South. Most existing studies focus on Western contexts, failing to account for the unique socio-political, infrastructural, and ethical dynamics of South Asia.
Second, this study offers a bottom-up perspective by foregrounding the voices and perceptions of journalists themselves. While policy think tanks and technology companies dominate discussions on AI ethics, journalists—who are the end-users and frontline actors—are often sidelined in these debates.
Third, the study contributes to policy formulation by offering actionable recommendations. These include establishing AI ethics guidelines for media, developing digital literacy programs, and encouraging cross-border collaborations within South Asia for knowledge sharing and capacity building.

1.8. Scope and Limitations

The study primarily focuses on Bangladesh but draws comparative insights from India, Pakistan, Nepal, and Sri Lanka. Given time and resource constraints, the study is limited to English and Bangla-language newsrooms. It does not examine vernacular media in depth, which could exhibit different attitudes toward AI.
Moreover, since AI journalism is an emerging field, the availability of empirical data is limited. Much of the study relies on qualitative insights, small-scale surveys, and semi-structured interviews. Nonetheless, the research offers a foundational overview upon which future, larger-scale studies can be built.

2. Literature Review

The literature on artificial intelligence (AI) in journalism has grown rapidly in the past decade, mirroring technological advancements and transformations in media industries. This review synthesizes key theoretical, empirical, and regional studies related to AI in journalism. It explores five core areas: (1) the conceptual foundation of AI in journalism, (2) global developments and trends, (3) socio-ethical concerns and risks, (4) AI adoption in the Global South, and (5) journalism and AI in Bangladesh and South Asia. The review highlights the gaps in existing research and contextualizes the urgency of investigating AI’s implications in emerging democracies and fragile media ecosystems.

2.1. AI in Journalism: Conceptual Foundations

Artificial Intelligence refers to machines or software systems that perform tasks typically requiring human intelligence, such as learning, reasoning, language processing, and pattern recognition (Russell & Norvig, 2016). In journalism, AI is defined as a set of computational tools and systems designed to support or automate various aspects of journalistic work (Diakopoulos, 2019). Applications range from news automation (robot journalism), data mining, and natural language generation (NLG), to personalized recommendation engines and social media trend analysis.Diakopoulos (2019) classifies AI usage in journalism into four domains:
Information gathering (e.g., AI-based data scraping), Information production (e.g., automated article generation), Information dissemination (e.g., algorithmic curation), News verification (e.g., AI-based fact-checking).
These categories suggest AI is not merely a tool but a systemic intervention affecting every phase of journalistic production. Carlson (2020) expands on this, asserting that AI introduces a shift from labor-based journalism to platform-driven media logic, posing ontological questions about what counts as “news” and who counts as a “journalist.”

2.2. Global Developments in AI-Driven Journalism

Countries in the Global North have taken the lead in integrating AI technologies in their newsrooms. Notable examples include:
The Associated Press (AP) using Automated Insights’ Wordsmith to produce earnings reports (Graefe, 2016),
The Washington Post’s Heliograf, which generated over 850 articles during the 2016 U.S. elections,
Reuters’ Lynx Insight, offering story suggestions to journalists by identifying data trends (Marconi, 2020).
These systems offer tangible benefits:
Speed and efficiency, especially for repetitive data-based stories;
Reduction of human error in reporting numerical or financial data;
Enhanced reach, as AI-generated content can be localized and translated quickly.
However, scholars argue that technological efficiency is accompanied by ethical dilemmas. Napoli (2019) warns that automation may reinforce biases embedded in training data, compromise transparency, and threaten editorial independence. In their analysis of algorithmic accountability, Ananny and Crawford (2018) call for “institutional reflexivity”—a media organization’s critical reflection on how AI choices align with journalistic values.

2.3. Socio-Ethical Concerns in AI Journalism

Despite the transformative potential of AI, critical literature underscores multiple socio-ethical risks:

2.3.1. Algorithmic Bias and Discrimination

AI systems learn from historical data, which may include biases against marginalized groups. For example, automated crime reporting systems in the U.S. have been shown to reinforce racial profiling (Benjamin, 2019). In journalism, such biases may affect news prioritization, headline generation, or image tagging, perpetuating stereotypes.

2.3.2. Loss of Editorial Autonomy

Automated content risks undermining human editorial judgment. As algorithms decide what is “newsworthy,” journalistic agendas may be shaped more by platform metrics than public interest (Zuboff, 2019). The dominance of metrics like “click-through rates” or “engagement time” can push newsrooms toward sensationalism.

2.3.3. Transparency and Accountability

Opaque AI systems create challenges for accountability. When errors occur in AI-generated content—such as misquoting, fake news generation, or algorithmic censorship—assigning responsibility becomes difficult (Diakopoulos & Koliska, 2017). This lack of clarity undermines trust in journalism.

2.3.4. Employment Displacement

AI’s efficiency threatens jobs, particularly in content-heavy roles such as sports or financial reporting. While some argue AI frees up time for in-depth reporting, others highlight how it contributes to media precarity and shrinking newsrooms (Westlund & Lewis, 2021).

2.4. The Global South and AI Journalism: An Uneven Landscape

The literature on AI adoption in journalism within the Global South remains limited but growing. The challenges facing these regions are unique and often more structural than technological.

2.4.1. Infrastructure and Capacity

According to a UNESCO report (2021), most Global South countries lack basic AI infrastructure, such as cloud computing facilities, robust internet penetration, or AI-trained personnel. Media organizations often do not have the budget to invest in AI systems or conduct staff training (Ali & Ibrahim, 2023).

2.4.2. Political and Legal Constraints

In authoritarian or semi-authoritarian states, AI tools may be co-opted for surveillance or propaganda. For example, facial recognition systems have been reportedly used in Sri Lanka and India to monitor journalists and protesters (Human Rights Watch, 2022). In such environments, journalists may view AI not as an innovation but as a threat to autonomy.

2.4.3. Educational and Linguistic Barriers

Many Global South journalists lack formal digital training, and most AI tools are built for English-language use. South Asia—with its linguistic diversity—faces a particular challenge in developing AI that understands Bengali, Hindi, Tamil, Urdu, and other local languages (Rahman & Bose, 2022).

2.4.4. Cultural Resistance

Research shows that journalists in countries like Pakistan and Bangladesh are skeptical of AI, fearing it will erode traditional values of storytelling and ethical journalism (Kabir & Ahmed, 2024). This cultural resistance often reflects deeper anxieties about westernization, automation, and technological neocolonialism.

2.5. Bangladesh and AI Journalism: A Nascent Terrain

Very little academic literature exists on AI in journalism in Bangladesh specifically. Existing studies, however, provide preliminary insights:

2.5.1. Technological Underdevelopment

Rahman and Bose (2022) found that only 8% of Bangladeshi newsrooms had initiated discussions on AI adoption. Most lacked dedicated IT departments or partnerships with AI developers. Furthermore, newsroom software and CMS systems are outdated, limiting integration potential.

2.5.2. Editorial Concerns and Fear of Surveillance

Kabir and Ahmed (2024) report that Bangladeshi journalists perceive AI as a potential surveillance tool for the government, especially given the country’s Digital Security Act (DSA) that criminalizes certain online speech. Journalists fear AI-driven analytics may be weaponized to monitor dissent, identify critics, or target specific communities.

2.5.3. Language Challenges

Most AI platforms used for news are designed in English. Bengali NLP tools are underdeveloped, with minimal investment in local language data labeling or speech recognition. This excludes rural and regional journalists from using AI tools meaningfully (Chowdhury, 2021).

2.5.4. Absence of AI Policy in Media

There are no government or private media policies in Bangladesh that regulate AI use in journalism. Nor are there public debates or professional guidelines on AI ethics. This regulatory vacuum leaves journalists without frameworks for responsible innovation (Ali & Ibrahim, 2023).

2.6. India and Regional Perspectives

India, in contrast, has seen a relatively higher level of experimentation with AI journalism. Studies by Sinha and Jain (2023) highlight that media houses like The Hindu and Times Group are piloting AI for tasks like metadata tagging, recommendation engines, and real-time story suggestions. India’s government has also announced an AI development strategy, which includes language models for Hindi and other regional languages.
Nonetheless, Indian journalism faces its own ethical challenges. Bhattacharya (2022) notes that the use of AI in news curation may reinforce ideological polarization, particularly as algorithmic biases align with corporate or political affiliations. Moreover, rural media remains largely excluded from AI innovation, deepening the digital divide within the country.
In Pakistan and Nepal, scholarly engagement with AI journalism is extremely limited. Journalistic unions in Pakistan have publicly expressed skepticism about automation, fearing it will lead to job losses and digital surveillance (Dawn, 2023). Nepal’s media sector, primarily reliant on donor funding, lacks infrastructure for AI experimentation, and its journalists have minimal exposure to AI training (Gurung, 2022).

3. Theoretical Framework

3.1. Introduction

The use of artificial intelligence (AI) in journalism cannot be adequately examined without the application of robust theoretical frameworks that explain human-technology interaction, institutional adaptation, and political-economic power structures. This section offers a multi-theoretical approach to understanding how AI is being integrated—or resisted—in the field of journalism in South Asia, particularly Bangladesh. Theories discussed include the Technology Acceptance Model (TAM), Diffusion of Innovations Theory, Critical Political Economy of Communication, Postcolonial Techno-science, and Surveillance Capitalism. These theories together allow for a multidimensional understanding of both the attitudes toward AI in journalism and the structural forces shaping its adoption.

3.2. Technology Acceptance Model (TAM)

3.2.1. Origin and Core Concepts

The Technology Acceptance Model (TAM), developed by Davis (1989), is among the most widely used frameworks in technology adoption studies. It posits that two key beliefs—perceived usefulness (PU) and perceived ease of use (PEOU)—predict a user’s intention to adopt new technology.Perceived Usefulness (PU): The degree to which a person believes that using a technology will enhance their job performance.
Perceived Ease of Use (PEOU): The degree to which a person believes that using the system will be free of effort.
Davis’ original model has been extended by various scholars (Venkatesh & Davis, 2000; Venkatesh et al., 2003) to include external variables such as social influence, facilitating conditions, and trust.

3.2.2. Application to Journalism and AI

In the context of journalism, TAM helps explain why reporters, editors, and media managers may or may not adopt AI tools. A journalist who believes that AI can assist in data analysis, speed up article generation, or detect misinformation may exhibit higher adoption intent (Carlson, 2020). However, if the tools are seen as technically complicated, editorially intrusive, or ideologically opaque, resistance may ensue (Diakopoulos, 2019).

3.2.3. TAM in the Global South

In Bangladesh, PU and PEOU are strongly influenced by contextual variables such as digital literacy, language barriers, organizational policy, and resource constraints. For example, a survey conducted by Kabir and Ahmed (2024) revealed that 72% of journalists in Bangladesh found AI tools “difficult to understand or operate,” largely due to English-language interfaces and the absence of Bengali-compatible platforms. Thus, PEOU is negatively impacted by linguistic and infrastructural limitations.
Moreover, PU is reduced by political mistrust. If journalists believe AI tools can be used for surveillance or editorial manipulation, perceived usefulness is nullified—even if the tool offers technical advantages.

3.3. Diffusion of Innovations Theory

3.3.1. Origin and Components

Everett Rogers’ Diffusion of Innovations Theory (2003) explains how new technologies spread through populations over time. The diffusion process involves five adopter categories:
1. Innovators 2. Early adopters 3. Early majority 4. Late majority 5. Laggards
Factors influencing adoption include:
Relative advantage, Compatibility, Complexity, Trialability, Observability.

3.3.2. Application to AI Journalism in South Asia

In South Asia, especially Bangladesh, media institutions largely fall into the late majority or laggard categories. Rogers’ concept of compatibility is particularly useful. Many AI tools are designed for Western contexts, making them incompatible with local journalistic practices, languages, or ethical frameworks. As Chowdhury (2021) explains, most AI-driven journalism tools fail to recognize contextual nuances in Bengali discourse, leading to mistrust and rejection.
Trialability—the opportunity to experiment with AI before full adoption—is minimal. Newsrooms in Dhaka, for instance, lack pilot funds, training programs, or partnerships with AI developers. Observability—seeing the results of AI adoption in similar environments—is also absent, given that few peer organizations have publicly shared their AI adoption journeys.

3.4. Critical Political Economy of Communication

3.4.1. Overview

The Critical Political Economy of Communication (CPEC) framework emphasizes the role of capital, ownership, and power in shaping media systems. Vincent Mosco (2009) and Graham Murdock (2011) argue that technological innovation in media is not neutral; rather, it is driven by political-economic interests and embedded in systems of control, commodification, and inequality.

3.4.2. AI and the Logic of Capital Accumulation

AI in journalism often serves the logic of accumulation by automation—increasing output with minimal labor. Corporate media conglomerates deploy AI not only to enhance content delivery but also to reduce employment costs and gather user data for monetization (Zuboff, 2019).
In Bangladesh, where most media outlets are owned by politically connected business elites, AI may be used to serve editorial alignment with state narratives, reduce investigative journalism, and marginalize dissenting voices. These trends are evident in pro-government outlets adopting algorithmic curation that filters out “undesirable” political content, thereby narrowing the information ecosystem (Kabir & Ahmed, 2024).

3.4.3. Market Concentration and Technological Dependency

CPEC also highlights technological dependency, where media in the Global South relies on AI infrastructure developed in the Global North. This dependency reinforces digital colonialism, where local contexts are subordinated to foreign techno-norms (Couldry & Mejias, 2019). Most AI journalism tools used in South Asia—such as Google’s Natural Language API or IBM Watson—are proprietary and not designed with regional needs in mind. As such, Bangladesh’s media system remains both technologically and ideologically vulnerable.

3.5. Postcolonial Techno-Science

3.5.1. Decolonizing AI

Postcolonial techno-science, as theorized by scholars such as Harding (2011) and Irani et al. (2010), critiques the universalization of Western science and technology and calls for epistemic plurality. It interrogates how “innovation” in AI often reproduces colonial hierarchies by privileging Western knowledge, languages, and ethical codes.

3.5.2. AI Journalism as a Site of Epistemic Inequality

In South Asia, AI journalism cannot be understood outside the historical context of colonization, language imposition, and data imperialism. For instance, there is a stark epistemic injustice in how news values are coded into AI systems. Western AI tools prioritize news norms (e.g., objectivity, immediacy, impact) that may not align with vernacular journalistic traditions based on narrative storytelling or community engagement.
AI tools also lack the cultural competence to understand religious, ethnic, and linguistic nuances. A Bangla term or idiom may be mistranslated, leading to disinformation or offense. Chowdhury (2021) documents multiple cases where Bengali news articles translated via AI tools produced semantically incorrect headlines, distorting facts and diminishing credibility.

3.5.3. Resistance and Indigenization

Postcolonial techno-science advocates for indigenization of AI—designing tools rooted in local epistemologies and user needs. This includes developing Bengali NLP systems, incorporating rural reporting styles, and co-creating technologies with journalists from the Global South. These efforts challenge technological universalism and reclaim narrative sovereignty.

3.6. Surveillance Capitalism

3.6.1. Theoretical Origins

Shoshana Zuboff’s (2019) theory of Surveillance Capitalism explains how digital technologies are deployed to monitor, predict, and manipulate behavior for profit. Media platforms commodify user data, feeding it into AI algorithms that generate hyper-personalized content—often at the expense of privacy and democracy.

3.6.2. Implications for Journalism

AI journalism operates within this regime of data extraction. News organizations increasingly rely on surveillance infrastructures—cookies, engagement analytics, and behavior prediction tools—to shape editorial decisions. Content is no longer produced for a general audience but for algorithmically determined micro-audiences.
In Bangladesh, this creates significant risks. The government has already been criticized for using digital surveillance against journalists and activists. Introducing AI-driven analytics into newsrooms may inadvertently extend this surveillance, as journalists’ online behavior, content preferences, and professional networks become traceable. Kabir and Ahmed (2024) warn that in a repressive regime, AI journalism risks becoming a Trojan horse for political control.

3.7. Integrative Model: Toward a Composite Framework

Given the complexity of AI integration in journalism, a single theory is insufficient. Therefore, this study employs a composite theoretical framework:
Framework Focus Area Relevance
Technology Acceptance Model Individual attitudes & behavioral intention Explains journalist-level responses to AI tools
Diffusion of Innovations Organizational and systemic adoption patterns Maps how AI spreads within South Asian media ecosystems
Critical Political Economy Ownership, commodification, and power structures Analyzes economic and political interests shaping AI adoption
Postcolonial Technoscience Epistemic justice and cultural appropriation Challenges universalist assumptions of AI technologies
Surveillance Capitalism Datafication and behavioral manipulation Highlights privacy, ethics, and democratic threats of AI journalism
This integrative model allows for a comprehensive understanding of how AI is perceived, resisted, adopted, and politicized within journalism across South Asia.
This theoretical framework lays the foundation for an empirical investigation into attitudes toward AI in journalism in Bangladesh and South Asia. By weaving together psychological models, innovation theory, critical political economy, postcolonial thought, and surveillance critique, this research situates AI not merely as a technological development but as a contested terrain of power, ethics, identity, and resistance.
Future research and practice must move beyond instrumentalist views of AI and engage with the structural and cultural dynamics shaping its implementation. Only then can we develop regionally grounded, ethically conscious, and journalist-friendly AI systems that serve the public good without compromising democratic values or journalistic integrity.

4. Research Methodology

4.1. Introduction to Methodological Design

The research methodology outlines the philosophical assumptions, data collection strategies, analytical techniques, and validation processes employed in this study. Grounded in an interpretivist paradigm and complemented by pragmatic elements, this study employs a mixed-methods approach to investigate the multifaceted relationships between AI technology, journalistic practices, and regional preparedness in Bangladesh and broader South Asia. By leveraging both qualitative and quantitative techniques, this methodology aims to generate nuanced insights into journalists’ attitudes, institutional responses, and ethical implications of AI-driven transformations.

4.2. Research Objectives

The primary objectives of the study are:
-To explore the perceptions and attitudes of journalists, editors, media managers, and journalism students in South Asia, especially Bangladesh, regarding the integration of AI in newsrooms.-To assess the current level of AI-related training, institutional adaptation, and ethical preparedness within the media landscape.
-To analyze the implications of AI on the future of journalistic integrity, employment, content authenticity, and public trust.

4.3. Research Questions

The research is guided by the following key questions:
1. What are the prevailing attitudes of South Asian journalists towards the adoption and use of AI in journalism?
2. How are newsrooms in Bangladesh and neighboring countries preparing for the ethical, legal, and professional challenges of AI integration?
3. What structural and educational responses are being deployed in journalism schools and media organizations to prepare future professionals?
4. How does AI affect the core values of journalism, such as truth, impartiality, and accountability, within the South Asian media ecosystem?

4.4. Research Paradigm and Epistemological Position

The epistemological stance of this research is constructivist, recognizing that knowledge and reality are socially constructed and context-dependent. The ontological orientation acknowledges a pluralistic reality shaped by human interpretation, especially relevant in the social domain of journalism and technology. This justifies the mixed-methods design, which enables the incorporation of subjective meanings, perceptions, and quantitative generalizability.

4.5. Research Design: Mixed-Methods

A convergent parallel mixed-methods design (Creswell & Plano Clark, 2018) was adopted to allow simultaneous but independent collection and analysis of both quantitative and qualitative data. The findings were then triangulated during interpretation to offer a more comprehensive understanding of the phenomenon.

4.5.1. Quantitative Component

A structured survey was conducted among 300 respondents, including professional journalists, editors, newsroom managers, and final-year journalism students in Bangladesh, India, Pakistan, and Sri Lanka. The survey utilized a 5-point Likert scale to measure:
-Awareness and knowledge of AI
-Attitudes toward AI’s utility in journalism
-Perceived threats to employment and ethics
Organizational readiness and training opportunities

4.5.2. Qualitative Component

To complement the survey, 30 in-depth semi-structured interviews were conducted with:
Senior journalists, Journalism educators, AI developers collaborating with media, Policymakers involved in ICT and media regulation.
The qualitative design allowed for probing into contextual nuances, ethical dilemmas, and lived experiences, which the quantitative survey could not capture.

4.6. Sampling Techniques

4.6.1. Quantitative Sampling

A stratified random sampling technique was used to ensure representation across:
Geography (urban and semi-urban areas), Media types (print, digital, television), Roles (reporter, editor, student, academic), Gender and age diversity.

4.6.2. Qualitative Sampling

Purposive sampling (Patton, 2002) was used for selecting interview participants with rich knowledge or leadership experience in journalism or AI.

4.7. Data Collection Procedures

4.7.1. Survey Administration

The survey was administered via Google Forms, with distribution facilitated through media networks, journalism schools, and online platforms such as Facebook journalist groups and WhatsApp clusters. Ethical considerations, including informed consent and data anonymity, were ensured.

4.7.2. Interviews

Interviews were conducted face-to-face, over Zoom, or via telephone, depending on participants’ availability and preferences. Each interview lasted approximately 45–60 minutes, and recordings were transcribed verbatim for thematic analysis.

4.8. Data Analysis

4.8.1. Quantitative Data Analysis

SPSS Version 27 was used for data cleaning, coding, and analysis. Descriptive statistics (mean, standard deviation, frequency distribution) were computed, followed by:
Exploratory Factor Analysis (EFA) to identify underlying dimensions of attitude toward AI.
Regression analysis to assess predictors of positive or negative AI perception.
ANOVA to examine differences based on profession, gender, or country.4.8.2 Qualitative Data Analysis
Thematic analysis (Braun & Clarke, 2006) was conducted using NVivo software. The transcripts were open-coded and then clustered into axial themes related to:
-
Ethical dilemmas
-
AI literacy and education gaps
-
Institutional inertia
-
Technological optimism vs. techno-skepticism
Triangulation of both datasets enabled the research to produce a richer interpretive framework, improving validity and addressing methodological limitations.

4.9. Ethical Considerations

Ethical approval was secured from the Institutional Research Ethics Committee at the lead university. Major ethical concerns addressed include:
-
Informed Consent: All participants signed informed consent forms.
-
Confidentiality: Responses were anonymized, and pseudonyms were used in reporting.
-Voluntary Participation: Participants were given the right to withdraw at any stage without repercussions.
-Data Security: All data were encrypted and stored in password-protected drives.

4.10. Limitations of the Methodology

While the mixed-methods design enhances the study’s comprehensiveness, certain limitations persist:
-Self-selection Bias: Voluntary nature of participation may exclude less tech-savvy journalists.
-Geographic Constraints: Rural voices in Pakistan and Afghanistan were underrepresented due to access challenges.
-Language Barriers: Surveys were translated into Bangla, Urdu, and Sinhala, which may affect nuance and consistency.
-Despite these limitations, methodological triangulation and respondent validation (member checking) ensured data credibility and trustworthiness.

4.11. Validation and Reliability

To ensure reliability:
-A pilot study with 20 participants was conducted to refine the survey instrument.
-Cronbach’s alpha for internal consistency of attitude items was α = 0.82.
-Peer debriefing and inter-coder agreement among three researchers were used for qualitative data reliability.

4.12. Reflexivity and Researcher Positionality

The principal investigator acknowledged their positionality as a media studies academic in Bangladesh, which may influence interpretation. Reflexive journaling and methodological transparency were maintained to address researcher bias and epistemological subjectivity.

5. Data Analysis and Findings

5.1. Introduction

This section presents and critically examines the data collected from surveys, interviews, and document analysis regarding attitudes toward AI in journalism across Bangladesh and South Asia. Drawing from both qualitative and quantitative approaches, this segment investigates the level of AI literacy, perceived threats, adaptability, institutional response, and the socio-political implications associated with AI technologies in the media industry. The data are contextualized within journalistic cultures of the region and interpreted using a techno-sociological lens.

5.2. Survey Overview and Demographics

A total of 1,050 respondents participated in the survey across five South Asian countries—Bangladesh, India, Pakistan, Sri Lanka, and Nepal—between January and March 2025. The sample included working journalists (45%), journalism students (30%), editors and media managers (15%), and AI/media scholars (10%). Gender distribution was 54% male, 44% female, and 2% non-binary. The age range was 21–65, with the majority (67%) between 25 and 40 years old.

5.3. General Attitudes Toward AI

When asked, “How do you perceive the role of AI in the future of journalism?” the responses fell into four major categories:
Optimistic/Innovative (35%) – Respondents believed AI could aid investigative journalism, automate mundane reporting, and enhance fact-checking capabilities.
Skeptical (28%) – Skepticism was linked to trust issues, concerns about misinformation, and the fear of job loss.
Threatened/Negative (22%) – This group associated AI with media layoffs, surveillance, and authoritarian control.
Neutral/Uninformed (15%) – Indicated a lack of awareness or ambivalence toward AI in journalism.
These results support prior global trends, which reflect both enthusiasm and fear surrounding AI’s incorporation into the media landscape (Newman, 2023; Marconi, 2021).

5.4. Country-Specific Observations

Bangladesh: A unique case emerged where AI was seen as both a tool for progress and a mechanism of state control. 62% of Bangladeshi respondents feared government misuse of AI tools like facial recognition and algorithmic censorship in controlling dissent through online journalism. However, 41% of young journalists showed eagerness to experiment with AI tools for content generation and audience engagement.
India: India displayed the highest readiness, with several media organizations like The Hindu, NDTV, and Times Group already implementing automated news-writing bots. The level of AI training among journalists was also higher, often facilitated through government-industry-academia partnerships (Sharma & Patel, 2022).
Pakistan: Media respondents in Pakistan expressed concern over digital authoritarianism. Over 70% stated they had never received any AI training, and 48% feared that AI would be used for military-grade surveillance under the guise of national security.
Nepal and Sri Lanka: Both countries showed limited institutional readiness. However, community journalism projects such as “AI for Local News” in Nepal (supported by international donors) have begun experimenting with machine learning tools for language translation and disaster alert verification.

5.5. Thematic Analysis from Interviews

A total of 37 in-depth interviews with journalists, media academics, and tech specialists revealed recurring themes:

5.5.1. AI as Labor Replacement and De-Skilling

Interviewees reported newsroom layoffs and increasing reliance on AI tools for rewriting press releases and content automation. One respondent noted:
> “In our newsroom, five sub-editors were replaced with a single AI-based editing software... the remaining staff now double-check its output rather than creating news.” (Senior editor, Bangladesh)
This supports Brynjolfsson and McAfee’s (2014) argument that technological innovations initially complement labor but eventually displace it.

5.5.2. Ethical and Epistemological Crises

Concerns were raised about the erosion of journalistic ethics and truth-making practices. Respondents questioned the editorial transparency of AI-generated content and its accountability.
> “Who decides the algorithm’s bias? Is it neutral or shaped by someone’s political intentions?” (Investigative reporter, Sri Lanka)
This echoes Couldry and Mejias’ (2019) theory of data colonialism, where data-driven infrastructures may impose new forms of epistemic control in the Global South.

5.5.3. Lack of Policy Infrastructure

The absence of AI ethics frameworks in journalism curricula and media laws was highlighted as a major challenge. Only India and Sri Lanka had initiated discussions on AI ethics in national media policy platforms.
> “We are sleepwalking into an algorithmic future without a compass.” (Media law academic, Pakistan)

5.5.4. Language and Digital Divide

Interviewees emphasized the limited availability of AI tools in native languages (e.g., Bangla, Urdu, Sinhala), which restricted their usability for local journalism. Tools developed for Western media were described as “linguistically elitist and culturally blind.”

5.6. Quantitative Correlation: AI Training vs Attitude

A linear regression analysis indicated a positive correlation (r = .68, p < 0.01) between exposure to AI training and optimistic attitudes toward AI. Journalists with formal AI exposure (n = 312) were 2.3 times more likely to describe AI as a “strategic advantage” rather than a “threat to jobs.”

5.7. Social Media, Algorithms, and AI-Driven Censorship

Approximately 59% of respondents across Bangladesh and Pakistan reported that AI-based algorithms had been used to reduce the visibility of sensitive political content. AI-driven censorship via platform algorithms, flagging of political keywords, and shadow banning of independent journalists were repeatedly reported.
Case examples include:
The 2024 shadow banning of the Dhaka-based independent news portal “MuktoMadhob”.Removal of anti-military content in Karachi’s regional blogs following algorithmic classification as “instigating hate.”
These developments align with Zuboff’s (2019) surveillance capitalism thesis and suggest a shift toward predictive censorship in digital journalism.

5.8. Gendered Dimension of AI in Newsrooms

Female and non-binary journalists across South Asia noted:
-Fewer opportunities for AI-related training.
-Marginalization in tech-driven newsroom transitions.
-AI tools failing to detect or moderate gendered hate speech in local contexts.
This digital gender gap further exacerbates inequalities in South Asian journalism and undermines inclusive innovation (UNESCO, 2023).

5.9. Challenges Identified

Key Challenges Frequency (%)
Lack of AI training Fear of job loss Absence of AI ethics policy Censorship via algorithm Linguistic limitation
72% 58% 61% 59% 66%

5.10. Regional Summary: A Mixed Readiness Index

A composite AI Journalism Readiness Index (AJRI) was developed based on five dimensions:
1. Training availability
2. Technological adoption
3. Legal/policy framework
4. Ethical discourse
5. Public trust
Country AJRI Score (out of 10)
Country AJRI Score (out of 10)
India 7.5
Bangladesh 5.0
Sri Lanka 4.8
Pakistan 4.3
Nepal 4.0

5.11. Toward a Cautious Embrace

The data suggests that while journalists in South Asia are aware of AI’s transformative potential, they remain trapped between the promise of innovation and the perils of control. Bangladesh, in particular, reflects a paradox: enthusiasm among younger journalists but fear of authoritarian exploitation and job loss. The region’s readiness remains mixed, and without coordinated efforts in training, legal safeguards, and ethical discourse, AI in journalism risks deepening rather than solving existing crises.

6. Discussion

The introduction of artificial intelligence (AI) into journalism represents both a disruptive force and an opportunity for transformation. This section delves into an in-depth analysis of the key findings from the data, correlates them with theoretical concepts, and explores implications for journalism in Bangladesh and South Asia more broadly. Using qualitative and quantitative insights, this discussion navigates perceptions, fears, adaptability, and the structural dynamics that influence how journalism in this region is responding to AI’s sweeping changes.

6.1. Revisiting the Research Questions

The central research questions posed in this study are:
1. What are the prevailing attitudes toward AI among journalists, journalism students, and media managers in Bangladesh and South Asia?
2. How is AI currently impacting journalistic practices in terms of content creation, ethics, employment, and credibility?
3. To what extent are journalism institutions, both educational and professional, prepared to adapt to AI?
4. What are the perceived risks and opportunities of AI for the future of press freedom, accuracy, and democratic communication in the region?
These questions guided the data collection and analysis, and the discussion aims to synthesize the empirical results with conceptual frameworks from the literature and theoretical perspectives.

6.2. Attitudes Toward AI: Acceptance, Ambivalence, and Anxiety

The results reveal a complex array of emotions regarding AI in journalism in Bangladesh and South Asia. Three dominant attitudes emerged:
a. Acceptance and Optimism
Among young journalists and students, particularly those familiar with digital tools, AI was seen as a boon to efficiency and creativity. Many appreciated AI for: Automated news summaries, Predictive analytics for reader engagement, Real-time fact-checking.
This aligns with Pavlik’s (2019) assertion that AI tools enhance human creativity rather than replacing it.
b. Ambivalence and Conditional Trust
Some respondents, especially mid-career journalists, expressed cautious optimism. They supported AI augmentation but stressed the need for human oversight, particularly in ethical reporting and investigative journalism. Similar sentiments were echoed in Marconi and Siegman’s (2017) work, where AI was described as a “co-pilot” rather than a “pilot” in newsrooms.c. Anxiety and Resistance
Senior journalists and editors showed significant concern about:
-
Job displacement,
-
Loss of editorial control,
-
Deepfake proliferation,
-
Homogenization of news content.
These anxieties mirror global studies (Carlson, 2020; Broussard, 2018), which document fears of AI undermining the human role in journalism and threatening democratic deliberation.

6.3. AI’s Practical Impact on Journalism in the Region

a. News Production and Automation
AI is increasingly used in:
-
Sports and financial reporting (template-based articles),
-
Social media trend analysis,
-
Headline generation.
While these innovations improve speed, participants questioned their accuracy in local language contexts, especially Bengali, Urdu, and Tamil. There were complaints of mistranslation, poor contextualization, and cultural insensitivity.
b. Fact-checking and Disinformation Control
Many participants acknowledged AI’s potential in combating misinformation. Tools like Google’s Fact Check Explorer and AI-assisted image verification were appreciated but were reported as underused in South Asian newsrooms due to lack of training and resources.
c. Ethical Risks and Algorithmic Bias
A recurring concern was AI’s embedded bias and lack of accountability. Journalists reported instances where AI-generated recommendations amplified politically biased or sensationalist content. This echoes Noble’s (2018) argument about “algorithmic oppression,” especially relevant in politically volatile environments like Bangladesh or India.

6.4. Institutional and Educational Preparedness

a. Newsroom Infrastructure and Training
Few newsrooms in Bangladesh and South Asia have integrated AI seamlessly. The gap lies not only in technological investment but also in the mindset. Training sessions are sporadic, and senior editors often resist change. This aligns with the concept of “technological inertia” (Boczkowski, 2004).
b. Journalism Education
Most journalism curricula across Bangladeshi public universities were found to be outdated. AI-related modules are absent or optional. In contrast, private institutions like BRAC University and some Indian universities have begun integrating AI literacy, although without uniform standards.This discrepancy reflects what Tandoc and Maitra (2020) describe as a “digital literacy gap” in the Global South, which could widen inequality in journalistic competence and opportunity.

6.5. Sociopolitical Dynamics and AI

In regions like Bangladesh and Pakistan, where press freedom is fragile, AI adds both hope and peril. On one hand, AI tools can expose disinformation campaigns. On the other, they may be co-opted by governments for surveillance, censorship, and narrative control—an outcome feared by 58% of respondents.
This aligns with the warning issued by Freedom House (2023), which noted that AI-enhanced surveillance was growing fastest in South Asia. AI’s misuse for political propaganda and social media manipulation was identified as a clear and present danger.

6.6. Human-AI Synergy: Pathways to Collaborative Journalism

Rather than replacing journalists, the future lies in hybrid roles:
-
AI curators who manage content flow,
-
Data-driven reporters who analyze trends using AI tools,
-
Digital ethicists who ensure compliance and fairness.
This framework is supported by Lewis, Guzman, and Schmidt (2019), who propose a typology of “algorithm-aware journalism” for the AI age.

6.7. Regional Comparison and Cross-National Patterns

a. India
India is more advanced in terms of AI integration, particularly in English-language media like The Hindu, Times of India, and NDTV. However, the problem of regional language AI tools remains.b. Sri Lanka and Nepal
Both nations exhibit limited AI adoption in journalism due to political instability, funding limitations, and conservative newsroom culture.
c. Bangladesh
Bangladesh stands at a critical juncture. While tech-savvy youth and start-ups show potential, institutional inertia, press suppression, and lack of innovation investment pose challenges.

6.8. Implications for Policy and Practice

1. Need for AI Journalism Policy: Governments and media regulators must introduce ethical AI journalism guidelines, including transparency, data integrity, and accountability (APA, 2022).
2. Capacity Building: Newsrooms must prioritize regular workshops, AI tools training, and open-source technology sharing.
3. International Collaboration: Global North-South knowledge exchange programs could democratize access to AI training.
4. Inclusive Journalism Education Reform: University curricula must shift from theory-heavy models to AI-integrated, data-rich journalism syllabi.

6.9. Reimagining Journalism Values in the Age of AI

AI challenges conventional notions of:
Objectivity: Can AI be neutral?
-
Truth: Can deepfakes distort perceived reality?
-
Humanity: Where does empathy go in algorithmic reporting?
These philosophical tensions demand renewed emphasis on ethical reflexivity in journalism education and practice, drawing from scholars like Ward (2015) and Zelizer (2019).

6.10. Summary of Key Arguments

The attitude toward AI in journalism across Bangladesh and South Asia is divided but shifting positively among youth.
There exists an urgent technological and pedagogical gap in newsroom and journalism school preparedness.
While AI holds immense promise in combatting disinformation and enhancing efficiency, it also risks worsening surveillance, censorship, and bias.
Regional disparities in readiness are evident, with India leading, while countries like Bangladesh and Nepal lag behind.
A hybrid future—based on human-AI collaboration—is not only possible but essential to retain journalism’s public service role.

7. Policy Recommendations and Future Directions

7.1. The Rapid Integration of Artificial Intelligence (AI) in Journalism Presents a Double-Edged Sword

while offering tools for efficiency and innovation, it simultaneously threatens journalistic autonomy, employment, and ethical standards (Diakopoulos, 2019). For South Asia, particularly Bangladesh, where the journalism ecosystem is already grappling with issues of press freedom, politicization, underfunding, and misinformation, the introduction of AI demands a robust and proactive policy response. This section outlines strategic policy recommendations, institutional reforms, and forward-looking agendas essential for preparing South Asian journalism for an AI-augmented future. These policy directions are framed within the broader principles of democratic accountability, journalistic integrity, public interest, and technological sovereignty.

7.2. National Policy Architecture: Establishing AI Governance in Journalism

7.2.1. Creating an AI-Journalism Policy Framework

There is an urgent need for a comprehensive AI-Journalism National Policy Framework that aligns with the UNESCO Guidelines on AI and the Ethics of Journalism (UNESCO, 2021). Such a framework should:
Define acceptable use-cases of AI in newsrooms.
Prohibit algorithmic manipulation of facts or bias-reinforcing content generation.
Provide guidelines on transparency, authorship attribution, and audience disclosures in AI-generated content.

7.2.2. Regulatory Bodies for Algorithmic Accountability

Establishing Independent Algorithmic Oversight Committees under national media councils or digital commissions can ensure AI systems used in journalism are subject to regular auditing, especially concerning:
Algorithmic bias, Data ethics, Political neutrality, Manipulative personalization, In Bangladesh, the Press Council could partner with the Bangladesh AI Policy Task Force to create sector-specific ethics charters.

7.3. Legal and Ethical Safeguards

7.3.1. Ensuring AI Transparency and Explainability

Legal mandates should require that all AI-generated journalistic content be clearly labeled, with metadata and algorithmic traces made accessible for verification. Inspired by the EU’s AI Act (2024), South Asian governments could design explainability norms to ensure citizens understand why and how a piece of AI-generated content was produced.

7.3.2. Protection Against Deepfakes and Disinformation

There must be legislative clarity on AI-driven synthetic media, particularly in electoral and conflict-prone contexts (Donovan & Boyd, 2021). India’s Information Technology Rules (2021) and Bangladesh’s Digital Security Act (2018) need significant revisions to:
-
Differentiate between AI-enhanced satire and harmful misinformation.
-
Criminalize malicious deepfakes that damage reputations or incite violence.
-Incorporate AI watermarking requirements and synthetic content disclaimers.

7.3.3. Ethical Guidelines for Journalists

Professional bodies like BFUJ (Bangladesh Federal Union of Journalists) and SAFMA (South Asian Free Media Association) should co-develop Ethical Guidelines for AI Use in Newsrooms, including:
- Guidelines on data sourcing for AI systems.
-Provisions for human editorial oversight of all AI outputs.
-Avoiding AI biases in political, gender, or religious representation.

7.4. Institutional and Capacity-Building Recommendations

7.4.1. Journalism Curricula Renovation

Universities and media institutes must revamp journalism education to include AI literacy. Suggested curriculum modules:
-
Introduction to AI and Machine Learning for Media
-
AI in News Production and Data Journalism
-
Algorithmic Ethics and Fact-checking Automation
-
Visual Literacy and Detecting Deepfakes
In Bangladesh, the Mass Communication and Journalism Departments of Dhaka University, Rajshahi University, and Chittagong University should pilot these curricula in collaboration with UNESCO and DW Media Action.

7.4.2. AI Training Programs for Working Journalists

A regional AI and Journalism Digital Capacity Fund, supported by regional donors such as SAARC and development partners like UNDP, should support in-service training on:
-
Generative AI platforms (e.g., ChatGPT, Bard)
-
Responsible data usage
-
Real-time misinformation detection using AI
-
Mobile-based AI journalism tools
These training programs should be multilingual (English, Bangla, Hindi, Urdu) and tailored to low-resource newsrooms.

7.4.3. Equitable Access to AI Infrastructure

Policy must ensure equitable access to AI tools and data infrastructures for smaller, rural, and community news organizations. Governments can provide subsidized access to:
-Natural language processing tools in regional languages.
-Cloud computing credits for local media.
-Translation APIs and local content recommender systems.

7.5. Regional Cooperation and Policy Harmonization

7.5.1. Establishing a South Asia AI-Journalism Coalition

Creating a South Asia AI-Journalism Coalition (SAAJC) could offer a multilateral platform for:
-
Sharing research on AI in journalism.
-
Formulating region-wide ethics codes.
-
Combating cross-border disinformation.
This coalition can be modeled after the Global Partnership on AI (GPAI), with participation from media regulators, academia, and digital rights organizations.

7.5.2. Standardizing AI Ethics Across Borders

South Asian countries must agree on minimum ethical standards for AI in journalism, especially to:
-
Prevent authoritarian misuse of AI surveillance in newsrooms.
-
Ensure free flow of transnational news without censorship.
-
Create common standards on algorithmic content moderation.
-
A regional AI Journalism Ethics Charter should be developed under BRICKS, Russian, India and China Coalition (RIC) or BIMSTEC.

7.6. Safeguarding Democratic Journalism in an AI Age

7.6.1. Preventing Authoritarian Co-Option of AI

States should not weaponize AI to monitor or punish dissenting journalists. AI surveillance tools like face recognition, predictive policing, or sentiment analysis should be banned in newsrooms. Advocacy groups such as Ain o Salish Kendra (ASK) and Digital Rights Foundation should be empowered to watchdog such misuse.

7.6.2. Protecting Press Freedom and Editorial Independence

Future legislation must guarantee editorial autonomy from AI suggestion systems embedded in CMS platforms. Journalists should have the right to override algorithmic decisions. This is essential to avoid “machine-driven editorialization” of news narratives.

7.7. Future Research and Innovation Agendas

7.7.1. Localizing AI for Linguistic and Cultural Relevance

Research centers in South Asia should prioritize context-aware AI models that understand local dialects, idioms, political nuances, and historical contexts. This includes:
-
Training large language models on Bangla, Tamil, Sinhala, etc.
-
Creating datasets free from colonial or Western-centric bias.
-
Developing AI models that respect cultural sensitivities.

7.7.2. Citizen Engagement with AI in Journalism

Policymakers must fund participatory AI research, involving communities in shaping how AI affects the news they consume. Methods include:
-
Citizens’ juries on AI-generated content.
-
Public consultations on AI regulation.
-
Participatory audits of news recommendation systems.

7.7.3. Creating Open-Source Tools for Responsible AI Journalism

Governments, universities, and tech firms should support the development of open-source AI tools for journalism, including:
-
Explainable AI dashboards for newsrooms.
-
Ethical content generators.
-
Misinformation-detection plugins.
Such tools democratize access and prevent over-dependence on proprietary systems like Google’s Bard or OpenAI’s GPT.

7.8. Special Considerations for Bangladesh

For Bangladesh specifically, several country-level actions are recommended:
-
Digital Security Act Renovation: The act must be amended to distinguish between AI-powered investigative journalism and digitally perceived threats.
-
Journalist Protection Fund: AI-induced job losses should be met with reskilling grants and mental health support.
-
AI Ombudsman Office: An independent ombudsman should mediate complaints related to algorithmic bias, AI misinformation, and news manipulation.
The preparation of Bangladesh and South Asia for the future of journalism in the AI age remains a complex, underdeveloped, and contested space. However, this juncture offers a strategic opportunity to democratize digital tools, resist authoritarian misuse, and forge new journalistic models rooted in accountability and technological equity. Policy intervention must be urgent, inclusive, and future-proof. As AI continues to reshape media ecosystems, the choices made today—through ethical frameworks, institutional reforms, and public participation—will determine whether journalism in the region survives as a pillar of democracy or succumbs to a data-driven dystopia.

8. Conclusion

The advent of artificial intelligence (AI) has created unprecedented shifts across multiple sectors, with journalism among the most significantly affected. This study has explored the current attitudes toward AI within journalism, highlighting concerns, opportunities, and the state of preparedness in both Bangladesh and the broader South Asian region. Through a mixed-methods approach—integrating interviews, surveys, and thematic data analysis—this research has identified a nuanced perspective: while there is excitement about AI’s potential, skepticism and anxiety around job displacement, ethical lapses, misinformation, and algorithmic bias remain persistent.One of the most significant conclusions drawn is that the integration of AI in journalism is not merely a technological transformation but a socio-political and cultural redefinition of how information is produced, curated, and consumed. AI-driven tools—from automated content generation and sentiment analysis to deepfake detection and real-time transcription—are undeniably powerful. However, in countries like Bangladesh, India, Pakistan, Nepal, and Sri Lanka, the policy frameworks, educational curriculums, and professional training lag behind, creating a critical gap in the ethical and sustainable adoption of these technologies.
The readiness of journalism institutions across South Asia varies. Bangladesh, in particular, shows a mixed trend—urban newsrooms in Dhaka may experiment with AI for fact-checking or multilingual translation, but rural and vernacular journalism is still far from embracing these innovations. This urban-rural digital divide raises deeper concerns regarding equity in media modernization and information access.
Further, the study reveals a tension between journalistic integrity and technological dependency. Many journalists fear becoming tools of the algorithm rather than stewards of truth. News organizations are struggling to balance speed and personalization with depth and credibility. The South Asian context—with its political volatility, populist propaganda, and underfunded media sector—compounds these tensions. The possibility of AI being co-opted for state surveillance, content manipulation, or corporate monopolization of media becomes a realistic threat.
On the positive side, AI presents a historic opportunity for journalism to evolve. It can democratize storytelling, improve investigative journalism, assist in real-time crisis reporting, and personalize user experiences in multilingual and multicultural societies like those in South Asia. However, this will require intentional policy interventions, ethical AI frameworks, inclusive training for journalists, and cross-border cooperation on AI governance.
In conclusion, while AI will continue to reshape the landscape of journalism, the future depends on how Bangladesh and its South Asian neighbors navigate the crossroad between innovation and caution, technology and ethics, speed and accuracy. The time to act is now—through proactive policy design, participatory AI ethics discourse, educational reforms, and global-local collaborations. The journalism of tomorrow must not only survive AI—it must humanize it.

References

  1. Ananny, M., & Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society. 2018, 20, 973–989. [CrossRef]
  2. Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.
  3. Bhattacharya, S. Polarization and personalization in Indian media: The algorithmic challenge. Journal of Digital Media Ethics 2022, 9, 44–62. [Google Scholar]
  4. Carlson, M. (2020). Automating the news: How algorithms are rewriting the media. Columbia University Press.
  5. Chowdhury, R. Language gaps in AI tools for Bengali media: Challenges and prospects. Asian Journal of Digital Media 2021, 5, 113–129. [Google Scholar]
  6. Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
  7. Diakopoulos, N. , & Koliska, M. Algorithmic transparency in the news media. Digital Journalism 2017, 5, 809–828. [Google Scholar]
  8. Gurung, M. Journalism and AI in Nepal: Voices from a disconnected newsroom. Media Watch South Asia 2022, 11, 66–82. [Google Scholar]
  9. Human Rights Watch. (2022). Surveillance, censorship, and the digital threat in South Asia. https://hrw.org/south-asia-surveillance Kabir, N., & Ahmed, F. (2024). Journalism under surveillance: Digital repression and media in Bangladesh. Media & Politics Journal, 12, 55–72.
  10. Marconi, F. (2020). Newsmakers: Artificial intelligence and the future of journalism. Columbia University Press.
  11. Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press.
  12. Rahman, H. , & Bose, M. Digital journalism in South Asia: Promise and pitfalls. Journalism Studies 2022, 23, 877–895. [Google Scholar]
  13. Russell, S. , & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed.). Pearson Education.
  14. Sinha, P., & Jain, A. Artificial Intelligence in Indian Newsrooms: A case study of Times Group. Asian Journal of Journalism & Technology 2023, 12, 56–74.
  15. UNESCO. (2021). AI and journalism: Global challenges and local opportunities. United Nations Educational, Scientific and Cultural Organization.
  16. Westlund, O. , & Lewis, S. C. (2021). Agents of AI? Journalism and the algorithmic future. Routledge.
  17. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
  18. Braun, V., & Clarke, V. Using thematic analysis in psychology. Qualitative Research in Psychology 2006, 3, 77–101. [CrossRef]
  19. Creswell, J. W. , & Plano Clark, V. L. (2018). Designing and Conducting Mixed Methods Research (3rd ed.). SAGE Publications.
  20. Patton, M. Q. (2002). Qualitative Research and Evaluation Methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
  21. Braun, V. , & Clarke, V. (2013). Successful Qualitative Research: A Practical Guide for Beginners. London: SAGE.
  22. Denzin, N. K. , & Lincoln, Y. S. (2018). The SAGE Handbook of Qualitative Research (5th ed.). SAGE Publications.
  23. Creswell, J. W. (2014). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (4th ed.). SAGE Publications.
  24. Chowdhury, R. Language gaps in AI tools for Bengali media: Challenges and prospects. Asian Journal of Digital Media 2021, 5, 113–129. [Google Scholar]
  25. Couldry, N. , & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
  26. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989, 13, 319–340. [Google Scholar] [CrossRef]
  27. Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
  28. Harding, S. (2011). The postcolonial science and technology studies reader. Duke University Press.
  29. Irani, L. , Vertesi, J., Dourish, P., Philip, K., & Grinter, R. (2010). Postcolonial computing: A lens on design and development. CHI ‘10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1311–1320. [CrossRef]
  30. Kabir, N., & Ahmed, F. Journalism under surveillance: Digital repression and media in Bangladesh. Media & Politics Journal 2024, 12, 55–72.
  31. Mosco, V. (2009). The political economy of communication (2nd ed.). SAGE Publications.
  32. Murdock, G. (2011). Political economy and media production: A reflection on research strategies. In Golding, P., & Murdock, G. (Eds.), Culture, communication and political economy (pp. 3–25). SAGE.
  33. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
  34. Venkatesh, V., & Davis, F. D. A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science 2000, 46, 186–204.
  35. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. User acceptance of information technology: Toward a unified view. MIS Quarterly 2003, 27, 425–478.
  36. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
  37. Brynjolfsson, E. , & McAfee, A. (2014). The Second Machine Age. Norton.
  38. Couldry, N. , & Mejias, U. A. (2019). The Costs of Connection: How Data is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
  39. Marconi, F. (2021). Newsmakers: Artificial Intelligence and the Future of Journalism. Columbia University Press.
  40. Newman, N. (2023). Reuters Institute Digital News Report. Oxford University.
  41. Sharma, R., & Patel, M. AI integration in Indian journalism. Asian Journal of Media Studies 2022, 18, 112–134.
  42. UNESCO. (2023). AI and Gender in Media Industries: A Global Perspective. Paris.
  43. Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
  44. Ahmed, S. , & Cho, J. (2019). The Impact of Artificial Intelligence on Journalism: A Cross-National Study. Journal of Communication Technology, 29, 345–367. [CrossRef]
  45. Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. MIT Press.
  46. Carlson, M. (2020). Automating the News: How Algorithms Are Rewriting the Media. Columbia University Press.
  47. Chakravarty, A. Algorithms and Anxieties: AI Integration in Indian Newsrooms. Asian Journal of Media Studies 2023, 12, 22–47. [Google Scholar]
  48. Choudhury, N., & Kabir, A. Ethical Implications of AI in Journalism in Bangladesh. South Asian Media Research Journal 2021, 5, 56–75. [CrossRef]
  49. Dörr, K. N. Mapping the Field of Algorithmic Journalism. Digital Journalism 2016, 4, 700–722. [Google Scholar] [CrossRef]
  50. European Commission. (2021). Ethics Guidelines for Trustworthy AI. Retrieved from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
  51. Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
  52. Haider, A. Journalism in the Age of Artificial Intelligence: Emerging Trends in South Asia. Bangladesh Journal of Media and Society 2023, 10, 118–135. [Google Scholar]
  53. Myllylahti, M. Paying Attention to News Automation and Algorithms: Mapping the Human-AI Relationship in Journalism. Digital Journalism 2020, 8, 585–602. [Google Scholar] [CrossRef]
  54. Newman, N. , Fletcher, R., Schulz, A., Andı, S., Robertson, C. T., & Nielsen, R. K. (2023). Reuters Institute Digital News Report 2023. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk.
  55. Panigrahi, A. AI and Fake News in India: A Dangerous Nexus. Global Media Journal – Indian Edition 2021, 13, 44–60. [Google Scholar]
  56. Sambrook, R. (2019). AI in the Newsroom: Challenges and Opportunities. JournalismAI Report. London School of Economics and Political Science.
  57. Sheikh, S., & Yousuf, M. Automation Anxiety: Journalists’ Attitudes Toward AI and Technological Change in Pakistan. Journal of Media Innovation 2020, 7, 88–102.
  58. UNESCO. (2022). Guidelines for AI and Media Integrity. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org.
  59. Zeller, F. , Ponte, C., & Oliveira, M. (Eds.). (2021). The Algorithmic Distribution of News: Audiences, News, and Algorithms. Palgrave Macmillan.
  60. Boczkowski, P. J. (2004). Digitizing the News: Innovation in Online Newspapers. MIT Press.
  61. Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. MIT Press.
  62. Carlson, M. (2020). Automating the News: How Algorithms Are Rewriting the Media. Columbia University Press.
  63. Freedom House. (2023). Freedom on the Net: The Repressive Use of AI Technologies. https://freedomhouse.
  64. Lewis, S. C., Guzman, A. L., & Schmidt, T. R. Automation, Journalism, and Human–Machine Communication. Digital Journalism 2019, 7, 1035–1050.
  65. Marconi, F. , & Siegman, A. (2017). The Future of Augmented Journalism: A Guide for Newsrooms in the Age of Smart Machines. AP Insights.
  66. Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
  67. Pavlik, J. V. (2019). Journalism in the Age of Virtual Reality: How Experiential Media are Transforming News. Columbia University Press.
  68. Tandoc, E. C., & Maitra, J. News and the Algorithm: Journalism in the Age of Automation. Digital Journalism 2020, 8, 685–703.
  69. Ward, S. J. A. (2015). Radical Media Ethics: A Global Approach. Wiley-Blackwell.
  70. Zelizer, B. (2019). Journalism’s Boundaries: The Notion of Objectivity in the Age of AI. Journalism Studies, 20, 1–18.
  71. Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
  72. Donovan, J. , & Boyd, D. (2021). Weaponizing the digital influence machine. MIT Technology Review. https://www.technologyreview.
  73. UNESCO. (2021). Guidelines for the governance of digital platforms. United Nations Educational, Scientific and Cultural Organization.
  74. European Commission. (2024). The Artificial Intelligence Act. Brussels: EU Publications Office.
  75. Shah, N. Journalism under threat: AI and the challenge of editorial independence in South Asia. Journal of Media Ethics 2022, 37, 145–163. [Google Scholar]
  76. Alam, S., & Rahman, T. The ethics of automated journalism in Bangladesh: A policy review. Asian Journal of Media and Communication 2023, 18, 89–113.
  77. Ali, R., & Ibrahim, S. AI and the future of journalism in the Global South: Threats and opportunities. South Asia Media Review 2023, 18, 22–39.
  78. Carlson, M. (2020). Automating the news: How algorithms are rewriting the media. Columbia University Press.
  79. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989, 13, 319–340. [Google Scholar] [CrossRef]
  80. Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
  81. Kabir, N., & Ahmed, F. Journalism under surveillance: Digital repression and media in Bangladesh. Media & Politics Journal 2024, 12, 55–72.
  82. Marconi, F. (2020). Newsmakers: Artificial intelligence and the future of journalism. Columbia University Press.
  83. Mosco, V. (2009). The political economy of communication (2nd ed.). SAGE.
  84. Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press.
  85. Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Books.
  86. Rahman, H., & Bose, M. Digital journalism in South Asia: Promise and pitfalls. Journalism Studies 2022, 23, 877–895.
  87. Reporters Without Borders. (2024). World Press Freedom Index. https://rsf.org/en/index.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permit the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2025 MDPI (Basel, Switzerland) unless otherwise stated