1. Introduction and Background of the Research
In the age of algorithmic communication and hyper-digitized engagement, the manipulation of information through social media platforms has emerged as one of the most disruptive forces in modern political and militant mobilization. Bangladesh, a geopolitically strategic and demographically youthful country, has witnessed a surge in digitally driven jihadist narratives, especially during the politically tense months of July and August 2024. These months were marked by a heightened interplay between political agitation, communal incitement, and a surge in extremist narratives spread through algorithmically engineered echo chambers.
This paper investigates the production and diffusion of rumors, fake news, and propaganda targeting youth audiences in Bangladesh. Emphasis is placed on how social media algorithms—designed primarily to maximize engagement—paradoxically amplified violent, militant, and religiously radical content. The period in focus witnessed increased activity from decentralized jihadist networks, digitally coordinated flash mob events, and weaponized religious narratives shaped through TikTok, YouTube Shorts, Facebook Live, and encrypted Telegram channels.
In the era of digital transformation, the boundaries between truth and fiction have been increasingly blurred by the proliferation of misinformation, disinformation, and propaganda. Social media platforms, while providing unprecedented access to information and connectivity, have also emerged as tools for radicalization, misinformation, and ideological manipulation (Bradshaw & Howard, 2018). These dynamics become particularly dangerous when exploited by extremist groups aiming to influence young, impressionable minds—especially in contexts of socio-political instability, poor digital literacy, and existing communal tensions. Bangladesh, a nation with a high youth population and growing digital connectivity, provides a critical site for examining how these digital dynamics unfold.
Between July and August 2024, Bangladesh experienced an alarming rise in jihadist rhetoric and online mobilization, spanning micro-level agitation to macro-level digital mobs, much of which was driven by algorithmically curated content on platforms like Facebook, YouTube, Telegram, TikTok, and X (formerly Twitter). These narratives were frequently supported by false stories, rumors, doctored videos, and inflammatory religious slogans targeting specific ethnic groups, minority communities, and government authorities. In this period, social media was not just a site of ideological dissemination but a battlefield of digital militarism, where the algorithmic logic of platforms intensified exposure to extremist content.
The rise of militant Islamism in Bangladesh has a long, complex history, often interwoven with political manipulation, regional geopolitics, and global jihadist ideologies. However, the digital turn in radicalization processes has marked a significant shift in both the scale and speed of ideological diffusion. Young jihadists today are not only influenced by madrasa sermons or underground meetings but also by viral TikTok videos, trending hashtags, YouTube preachers, and disinformation campaigns masked as legitimate news. The July–August 2024 period was particularly volatile due to the overlapping of religious commemorations, student protests, regional instability, and the global resonance of events in Gaza and Myanmar, which were used by extremist propagandists to provoke Muslim solidarity and justify jihadist narratives.
This research investigates how rumors, fake news, disinformation, and propaganda were algorithmically amplified during this two-month period and how they contributed to the radicalization and mobilization of Bangladeshi youth into jihadist ideologies. It seeks to explore the role of platform design, algorithmic bias, narrative construction, and state failure in addressing these emergent threats. The study adopts a critical media and communication approach, combining algorithmic auditing, ethnographic observation, discourse analysis, and case-based inquiry to present a comprehensive analysis of digital jihadism in Bangladesh.
1.1. Digital Bangladesh and the Threat of Algorithmic Extremism
Since the launch of the ‘Digital Bangladesh’ initiative in 2009, the country has made substantial progress in expanding internet penetration, mobile connectivity, and ICT infrastructure (Rahman & Zaman, 2020). As of 2024, over 70% of the population has access to the internet, with social media usage among youth being one of the highest in South Asia (BTRC, 2024). While this digital empowerment has benefited education, commerce, and civic engagement, it has also opened avenues for algorithmically mediated ideological indoctrination, where emotionally provocative content is more likely to go viral than fact-based reporting (Tufekci, 2018).
Younger demographics in Bangladesh—especially those aged 15 to 29—are increasingly exposed to religiously framed content online, often devoid of theological depth but rich in emotional appeal and visual spectacle. The period between July and August 2024 saw a surge in such content, typically involving slogans like ‘La ilaha illallah’ accompanied by war footage, distorted news items, or manipulated AI-generated videos that incited outrage, fear, and mobilization.
Algorithmically, these platforms are optimized to prioritize content that triggers engagement—likes, shares, comments, and watch-time. As a result, incendiary and conspiratorial content receives disproportionate visibility compared to moderate or fact-checked content (Cinelli et al., 2021). In Bangladesh, where digital literacy remains uneven, users often consume and share content without verifying its authenticity, allowing fake narratives to shape public perception and behavior.
1.2. Fake News, Rumor, and the Jihadist Media Ecology
The July–August 2024 wave of jihadist propaganda was significantly shaped by an ecosystem of misinformation and disinformation. While misinformation involves false or inaccurate information shared without harmful intent, disinformation refers to intentionally misleading or manipulated content spread to deceive and influence (Wardle & Derakhshan, 2017). In Bangladesh, both forms were systematically weaponized by extremist networks to undermine the legitimacy of state institutions, provoke interfaith conflict, and valorize martyrdom among youth.
Several fake news items circulating during this period involved alleged attacks on mosques, fabricated images of ‘martyrdom’ in Palestine, and conspiracy theories blaming the Bangladeshi state for colluding with foreign enemies against Islam. Rumor networks used voice notes, fake screenshots, and deepfake videos to spread chaos during university protests and religious gatherings. These were not isolated or organic events but part of a coordinated digital insurgency, in which jihadist sympathizers, media influencers, and AI-assisted bots worked in synergy to flood social media with extremist content.
Telegram, in particular, played a crucial role as a platform for content coordination, where encrypted groups shared ready-to-publish texts, videos, and hashtags that were then disseminated across Facebook and TikTok by thousands of accounts—many of them newly created and using pseudonyms or stolen identities. Some of these accounts were reportedly managed by networks connected to banned militant organizations like Hizb ut-Tahrir, Ansar al-Islam, and online sympathizers of the Islamic State.
1.3. Propaganda and the Role of Social Media Algorithms
Propaganda in the digital age is no longer limited to state actors. Militant groups now act as ‘information entrepreneurs,’ adapting rapidly to changes in platform design, audience behavior, and trending topics. Their use of emotive storytelling, religious symbolism, and identity-based appeals makes them particularly effective in generating attention and triggering algorithmic amplification (Al-Rawi, 2021). During the 2024 crisis, jihadist propaganda in Bangladesh heavily utilized memes, manipulated news, and short videos with Quranic verses set to emotionally charged visuals, often interlaced with calls to action against ‘infidels’ or the secular government.
Platform algorithms, built on engagement-based models, often misread such content as ‘popular’ or ‘relevant’ and promote it to wider audiences—especially to users who have shown interest in related religious or political content. This is known as algorithmic radicalization, where the recommendation systems of platforms like YouTube and Facebook guide users from moderate content to more extreme narratives through suggested videos or posts (Ribeiro et al., 2020). In Bangladesh, this pathway was evident in user patterns where youth who initially watched content on religious history or Islamic culture were later exposed to jihadist narratives glorifying violence and martyrdom.
1.4. Political Climate and Institutional Vulnerabilities
The broader political context in Bangladesh during July–August 2024 exacerbated the digital jihadist problem. The country was preparing for the national elections in early 2025, with increasing polarization between the ruling party and opposition coalitions. During this period, opposition groups were accused of tolerating or even covertly supporting extremist voices to destabilize the regime, while government crackdowns on dissident voices created a trust vacuum that allowed fake news to flourish (Islam & Hossain, 2022).
Furthermore, institutional weaknesses—such as under-resourced cybercrime units, lack of cross-platform data sharing, and politicization of media regulation—allowed disinformation actors to operate with impunity. While platforms did occasionally take down content, the overall response was reactive and inconsistent, allowing jihadist propaganda to shape digital narratives with minimal resistance.
1.5. Research Aims and Structure
This research aims to:
Analyze the types, origins, and impact of rumors, fake news, and disinformation related to jihadist narratives in Bangladesh during July–August 2024.
Investigate how social media algorithms contributed to the amplification and normalization of violent extremist content.
Examine specific case studies involving online radicalization and content virality.
Evaluate the role of state, civil society, and tech platforms in responding to digital extremism.
Offer policy and governance recommendations to address algorithmic threats to national security and digital citizenship.
The structure of this article follows a multidisciplinary approach, starting with a review of existing literature on digital jihadism, disinformation studies, and algorithmic influence. It then presents the methodology, followed by thematic sections on digital militarism, case studies of disinformation spread, state responses, and platform accountability. The conclusion offers a synthesis of findings and suggests pathways for reform.
The research is guided by key questions:
How were jihadist ideologies algorithmically diffused among Bangladeshi youth in mid-2024?
What role did fake news, misinformation, and propaganda play in amplifying militant narratives?
In what ways did social media platforms fail (or succeed) in mitigating the diffusion of extremist content?
How did the state respond with counter-narratives, and to what extent were these effective?
This study adopts a multidisciplinary approach, combining media studies, digital ethnography, algorithmic analysis, and political communication. It employs both qualitative and quantitative methodologies, including network analysis of disinformation flows, interviews with affected communities, and content analysis of social media data from July–August 2024.
2. Objectives of the Study
The objectives of this research are rooted in the urgent need to critically investigate how disinformation ecosystems, propaganda operations, and algorithmically driven digital infrastructures have shaped the ideological landscape for young jihadist actors in Bangladesh. The July–August 2024 period presents a compelling and time-sensitive case to examine the convergence of misinformation, algorithmic manipulation, and violent extremism, particularly within the socio-political context of Bangladesh, where both digital access and religious sensitivities are intensifying.
This study seeks to explore the socio-technological dynamics that facilitated the production, circulation, and consumption of rumors, fake news, and jihadist propaganda during this specific period. It also aims to assess how social media algorithms—particularly those used by platforms such as Facebook, YouTube, Telegram, and TikTok—created echo chambers and feedback loops that amplified extremist messages and narratives. The research further seeks to unpack the psychological, ideological, and sociological impact of such narratives on young users, especially in vulnerable or marginalized rural and urban pockets of Bangladesh.
2.1. General Objective
The general objective of this study is to critically examine the role of digital misinformation, algorithmic amplification, and propaganda in fostering young jihadist ideologies in Bangladesh during the July–August 2024 period.
This overarching aim is designed to bridge several disciplinary inquiries, including media and communication studies, political sociology, terrorism studies, and digital anthropology. By doing so, the study addresses the emergent threat of algorithmic extremism—a condition where young users are exposed to and radicalized by extremist ideologies through the influence of platform recommendation systems (Ribeiro et al., 2020).
2.2. Specific Objectives
To fulfill the overarching goal, the study articulates the following specific objectives:
2.2.1. To Identify and Categorize the Typologies of Rumors, Fake News, and Disinformation
The first objective is to provide a typological classification of the various forms of false information disseminated during the July–August 2024 period. This includes:
Rumors (spontaneous, often unverifiable claims shared in emotional or chaotic moments),
Fake news (fabricated or manipulated stories presented as legitimate journalism), and
Disinformation (deliberate, strategic falsehoods meant to mislead, provoke, or incite).
These categories will be mapped through qualitative content analysis, using keywords, hashtags, and discourse patterns. Research by Wardle and Derakhshan (2017) provides a foundational framework for distinguishing these categories.
2.2.2. To Investigate the Relationship Between Social Media Algorithms and the Spread of Jihadist Narratives
One of the core contributions of this study is to investigate how algorithmic curation mechanisms—such as trending topics, recommendation engines, and content prioritization—played a role in amplifying jihadist propaganda. Previous research has shown that platforms like YouTube and Facebook are not neutral in content delivery; rather, they systematically reward emotionally charged and polarizing content to drive engagement (Tufekci, 2018; Cinelli et al., 2021).
This objective seeks to evaluate how these algorithms influenced content visibility and how youth users encountered radical content as part of their everyday scrolling and viewing habits.
2.2.3. To Understand the Psychological and Ideological Impact on Bangladeshi Youth
Another key aim is to study how misinformation and jihadist propaganda affected identity construction, emotional response, and political orientation among young people. The objective is to explore how emotionally resonant and religiously coded content evoked feelings of fear, anger, humiliation, or a desire for religious vengeance—emotions often weaponized by jihadist recruiters (Awan, 2017). Through surveys, digital ethnography, and discourse analysis, the study will identify how such digital encounters are internalized and possibly lead to radicalization.
2.2.4. To Assess the Role of Platform Governance and State Response in Mitigating Disinformation and Propaganda
This objective involves a critical examination of how tech platforms (such as Meta, Google, TikTok, and Telegram) responded to the 2024 disinformation wave in Bangladesh, if at all. It assesses:
Content moderation efforts,
Transparency in algorithmic functioning, and
Cooperation with local authorities.
Additionally, it evaluates how the Bangladeshi state apparatus, including the cybercrime units, counterterrorism departments, and digital surveillance mechanisms, addressed the threat of online extremism. Given Bangladesh’s history of political censorship and inconsistent media regulation (Islam & Hossain, 2022), the study questions whether state responses were equitable, rights-based, and effective.
2.2.5. To Explore Case Studies of Viral Misinformation and Their Real-World Effects
Grounded in qualitative research, this objective entails an in-depth analysis of selected case studies of disinformation that went viral during the research period and had tangible socio-political impacts. Examples include:
Digital provocations during university protests,
Online calls for jihad following alleged attacks on religious institutions,
AI-generated ‘martyrdom’ videos from conflict zones manipulated for local mobilization.
The aim is to trace how this viral content emerged, spread, and influenced offline behavior, including micro-level acts of violence or protest.
2.2.6. To Propose a Framework for Digital Literacy and Policy Intervention in Bangladesh
Finally, the study seeks to offer recommendations and a theoretical model for mitigating future disinformation-based jihadist propaganda through:
Digital literacy education,
Algorithm transparency, and
Rights-respecting counter-extremism frameworks.
The recommendations are intended for multiple stakeholders, including civil society, government institutions, educational bodies, and tech companies operating in South Asia.
2.3. Rationale for the Study Objectives
The objectives of this study are guided by the pressing reality that violent extremism is no longer solely a physical or ideological challenge—it is increasingly a digital and algorithmic phenomenon. Bangladesh’s socio-political context—marked by religious pluralism, digital expansion, and youth unemployment—makes it particularly vulnerable to online radicalization. This research fills a critical gap by localizing global conversations about ‘platform radicalization’ (Ribeiro et al., 2020) and ‘networked propaganda’ (Benkler et al., 2018) in the specific, understudied context of Bangladesh.
Understanding how misinformation ecosystems operate within the jihadist digital economy is vital not only for academic inquiry but for policy-making and national security. The specific objectives set forth in this section provide a roadmap for a rigorous and comprehensive engagement with these urgent issues.
3. Significance of the Study
In an era where digital technologies are reshaping the contours of political violence, the relevance of this study is both urgent and multifaceted. The July–August 2024 period in Bangladesh marked an alarming proliferation of online misinformation and algorithmically amplified jihadist content, underscoring the increasingly complex interplay between information disorder, youth radicalization, and platform governance.
This research is significant for several reasons: it bridges gaps in the academic literature, informs policy and counter-extremism frameworks, and provides sociological insights into how misinformation ecosystems cultivate jihadist ideologies among youth in the Global South. Most importantly, it contextualizes the digital pathways of radicalization in a postcolonial nation grappling with both religious pluralism and political volatility.
3.1. Academic Significance
This study addresses critical gaps in the interdisciplinary literature on media studies, political communication, extremism, and digital sociology. While significant scholarship has been conducted on misinformation in Western democracies (Benkler, Faris, & Roberts, 2018; Wardle & Derakhshan, 2017), there remains a paucity of data-driven and context-specific research on how digital disinformation intersects with violent jihadist ideologies in Bangladesh, a Muslim-majority country navigating the ideological legacies of colonialism, Cold War geopolitics, and the post-9/11 security regime (Hasan, 2023).
It also contributes to the global academic discourse on algorithmic radicalization, particularly by scrutinizing the black-box dynamics of YouTube, Facebook, TikTok, and Telegram algorithms. These platforms increasingly curate what individuals consume, fostering filter bubbles and polarized online spaces that reward emotional content and divisive messaging (Cinelli et al., 2021; Tufekci, 2018). This research brings a critical South Asian voice to debates about platform ethics, data colonialism, and algorithmic determinism.
Furthermore, the study enhances our understanding of digitally mediated political violence and provides theoretical innovation by linking networked propaganda, affective politics, and youth identity formation in digital spaces. It offers a framework for analyzing how rumors and disinformation are not only shared but also felt, believed, and acted upon—an approach grounded in cultural sociology and communication theory.
3.2. Practical and Policy Significance
In practical terms, this study offers a roadmap for early warning systems, digital literacy programs, and counter-radicalization strategies in Bangladesh. As a developing country with a rapidly expanding digital footprint, Bangladesh faces a dual threat: a vulnerable youth population susceptible to jihadist ideologies and a fragile institutional infrastructure unable to adequately monitor or counter digital extremism.
First, the study’s findings can inform the development of evidence-based policies for cyber governance, platform accountability, and civic education. Government bodies such as the Bangladesh Telecommunication Regulatory Commission (BTRC), the Counter Terrorism and Transnational Crime (CTTC) unit, and civil society organizations working on peacebuilding and deradicalization can benefit from the study’s data, trends, and recommendations.
Second, it provides strategic insights for technology companies operating in South Asia to recalibrate their content moderation and algorithmic transparency practices. While global tech firms claim to uphold freedom of expression, their platforms have inadvertently become vectors for radical ideologies (Frenkel & Kang, 2021). This research adds empirical weight to the call for region-specific interventions and co-regulation frameworks between states and platforms.
Third, it enhances media literacy initiatives, particularly for youth in madrasas, public universities, and rural communities. By identifying how jihadist content is disguised through religious rhetoric, AI-generated videos, or trending hashtags, this study helps educators and NGOs design counter-narratives and awareness campaigns rooted in digital literacy, critical thinking, and civic responsibility.
3.3. Socio-Psychological Significance
This study foregrounds the emotional and psychological vulnerabilities of Bangladeshi youth exposed to targeted disinformation and extremist propaganda. In contexts where poverty, unemployment, and social exclusion prevail, digital jihadist content often resonates because it offers a sense of identity, purpose, and moral clarity (Awan, 2017; Hassan, 2022).
By analyzing content circulation and audience engagement, the research explores how affective politics—the mobilization of feelings such as anger, humiliation, and spiritual duty—drives radicalization. This has profound implications for youth development programs, religious dialogue platforms, and mental health initiatives.
The study also exposes the dangers of online echo chambers and ideological filter bubbles, which can sever youth from diverse perspectives and make them prone to binary worldviews—us vs them, Muslim vs kafir, hero vs traitor. Understanding this psychological dynamic is essential to designing resilience-building interventions that address both the content and the emotional needs of at-risk populations.
3.4. Significance for Counterterrorism and Security
The emergence of digitally radicalized lone actors and microcells is a growing concern for counterterrorism agencies across South Asia. This study contributes to national and regional security strategies by:
Mapping digital propaganda networks,
Identifying patterns in disinformation-driven mobilization,
Understanding how digital anonymity enables extremist recruitment.
By contextualizing the techno-social anatomy of jihadist movements in Bangladesh, the research supports a shift from reactive to preventive security paradigms. Instead of focusing solely on post-attack law enforcement, the study advocates for algorithmic surveillance, content pre-flagging, and community-led counter-extremism.
Furthermore, the study helps differentiate between authentic political dissent and manipulated propaganda, reducing the likelihood of authoritarian overreach where all forms of online activism are criminalized under broad anti-terror laws (Islam & Hossain, 2022).
3.5. Global South-Centric Knowledge Production
Finally, the study is significant in reorienting academic discourse toward Southern epistemologies and lived experiences. Much of the existing literature on platform radicalization, fake news, and terrorism is Western-centric and assumes liberal democratic conditions. This research, rooted in the socio-political and cultural textures of Bangladesh, challenges universalist assumptions and prioritizes local knowledge systems, religious interpretations, and linguistic nuances.
It demonstrates how jihadist propaganda in Bangladesh is contextually embedded—leveraging local myths, historical grievances, and geopolitical insecurities (e.g., the Rohingya crisis, Kashmir narratives, or perceptions of Western imperialism). Thus, the study reclaims narrative agency for Global South scholars, activists, and policymakers to tell their own stories, theorize their own threats, and craft their own solutions.
4. Theoretical Framework of the Study
The theoretical framework of this study draws upon a multidisciplinary array of models to critically interrogate the ways in which rumors, fake news, disinformation, and propaganda are algorithmically amplified and ideologically consumed by radicalized youth in Bangladesh. These frameworks encompass digital media theory, the sociology of communication, political extremism theory, algorithmic governance, and affect theory.
By synthesizing these lenses, the study aims to understand how young jihadists—not merely as passive recipients, but as digital actors—engage with algorithmically curated content to produce, share, and internalize militaristic ideologies during the critical period of July–August 2024.
4.1. Information Disorder Framework (Wardle & Derakhshan, 2017)
Wardle and Derakhshan’s ‘Information Disorder’ framework provides a foundational lens for categorizing the manipulative digital content central to this study. The framework distinguishes between:
Misinformation (false content shared without intent to harm),
Disinformation (false content shared with deliberate intent to harm),
Mal-information (genuine information shared maliciously).
This typology allows the study to trace and differentiate the roles that intentionality, context, and audience manipulation play in jihadist digital narratives. For instance, during July–August 2024, certain Islamic revivalist groups deliberately spread disinformation about state atrocities or communal conflicts to mobilize emotional outrage among rural and digitally naïve youth.
These categories enable the researcher to critically assess the types of narratives (e.g., fabricated martyrdom stories, altered images of global events, AI-generated fatwas) and their role in fueling the ideational ecosystem of jihadist militarism.
4.2. Algorithmic Radicalization Theory (Tufekci, 2018; O’Callaghan et al., 2015)
The concept of algorithmic radicalization is crucial for understanding how platform-driven engagement metrics—likes, shares, views—can inadvertently push users toward extreme content. Social media algorithms, particularly on YouTube, Facebook, TikTok, and Telegram, are designed to optimize user retention by amplifying emotionally resonant and controversial content.
As Tufekci (2018) argues, platforms often guide users ‘down the rabbit hole,’ promoting increasingly extreme ideological material over time. The theory holds that algorithms do not simply reflect user preferences—they actively shape user behavior by privileging sensationalist and militant content over moderate or peaceful discourse.
In the Bangladesh context, young digital users encountering religious lectures on YouTube might soon find themselves exposed to extremist sermons, jihadi chants, or radical calls to action, without having explicitly searched for such content. This phenomenon is particularly pronounced among madrasa students, unemployed youth, and rural males with limited digital literacy.
The study leverages this theory to explore how algorithmic architecture facilitates both radicalization pipelines and ideological echo chambers.
4.3. Spiral of Silence Theory (Noelle-Neumann, 1974)
The Spiral of Silence theory explains how individuals are often reluctant to express moderate or dissenting views in digital spaces dominated by polarized majority narratives. In jihadist-propagated echo chambers, where violence is glorified and dissent is criminalized, youth may suppress alternative perspectives out of fear of social isolation or backlash.
This framework is used in the study to examine how online intimidation, ideological gatekeeping, and religious absolutism silence moderate Islamic voices and prevent community resistance to radical ideologies. It further helps analyze how fear, conformity, and groupthink contribute to the entrenchment of disinformation within peer and religious networks.
4.4. Affective Publics and Emotional Mobilization (Papacharissi, 2015; Ahmed, 2004)
The concept of affective publics, introduced by Papacharissi (2015), focuses on how emotions are central to political mobilization in digital spaces. Rather than being driven solely by rational arguments or ideology, many jihadist movements rely heavily on affect—particularly anger, humiliation, grief, and vengeance—to recruit and retain supporters.
Building on this, Sara Ahmed’s (2004) notion of the ‘cultural politics of emotion’ illuminates how certain emotions are repeatedly invoked to establish collective identity and moral purpose. For example, digital jihadist messages might circulate traumatic images of Rohingya children, recite Quranic verses related to martyrdom, and evoke historical grievances to generate feelings of collective injustice and religious obligation.
This theory is critical in analyzing the emotional architecture of digital propaganda and how it transforms social media users into emotionally invested digital warriors.
4.5. Media Ecology Theory (Postman, 1970; McLuhan, 1964)
The media ecology perspective conceptualizes communication technologies not as neutral tools but as environments that reshape social behavior and epistemology. McLuhan’s (1964) famous dictum, ‘the medium is the message,’ implies that the characteristics of social media—speed, anonymity, virality, image dominance—affect how truth is constructed and violence is justified.
Applying this to Bangladesh, the study explores how the ephemeral, visual, and performative nature of platforms like TikTok and Instagram allows jihadist actors to create dramatic narratives through short videos, AI filters, religious music overlays, and trending hashtags. These affordances allow content to be simultaneously persuasive, disguised, and highly shareable among youth.
Media ecology theory provides a foundation for understanding the technological affordances that make jihadist disinformation particularly powerful and contagious.
4.6. Social Identity and Radicalization Theory (Tajfel & Turner, 1979; Wiktorowicz, 2005)
Radicalization is often linked to crises of identity, especially among youth experiencing marginalization, identity confusion, or socio-political alienation. Social Identity Theory posits that individuals derive a sense of belonging and self-worth through group membership. When traditional social structures fail, extremist networks fill the vacuum with promises of honor, community, and divine purpose.
Wiktorowicz (2005) further elaborates the ‘cognitive opening’ model, which argues that individuals become open to radical ideologies when they experience moral shock, identity rupture, or political disillusionment.
This framework allows the study to connect online disinformation with offline psychosocial vulnerabilities, such as poverty, lack of education, political repression, and feelings of disenfranchisement among Bangladeshi youth.
4.7. Propaganda Model and Digital Adaptations (Herman & Chomsky, 1988; Marwick & Lewis, 2017)
Herman and Chomsky’s Propaganda Model, originally developed to analyse media control in corporate democracies, is adapted here to understand bottom-up, decentralized propaganda in the age of social media. Unlike state-driven propaganda, jihadist propaganda on platforms like Telegram and WhatsApp operates through peer-to-peer sharing, encrypted messaging, and user-generated content.
Marwick and Lewis (2017) extend this model to the digital far-right, arguing that algorithmic logic, memetic warfare, and influencer culture have revolutionized propaganda techniques.
This study applies these insights to map how jihadist influencers in Bangladesh used memes, video montages, voice notes, and AI-generated avatars during July–August 2024 to spread anti-government, anti-West, and pro-caliphate messages.
By combining these diverse yet interlinked theoretical perspectives, the study constructs a comprehensive lens through which to understand the socio-technological, psychological, emotional, and epistemological processes that enabled the production and diffusion of jihadist disinformation during the July–August 2024 crisis in Bangladesh. These theories inform the research design, content analysis, and interpretive frameworks used in the remainder of the study.
5. Literature Review
The phenomenon of digital disinformation and its intersection with religious extremism has generated a vast body of interdisciplinary research over the past two decades. This literature review synthesizes studies across political science, communication theory, media studies, security studies, and cultural anthropology to understand the critical nexus between social media platforms, algorithmic curation, and jihadist ideology dissemination among youth populations—particularly in the South Asian context, and more specifically in Bangladesh.
This review is divided into five thematic clusters:
The Dynamics of Disinformation, Fake News, and Propaganda
Social Media Algorithms and Digital Radicalization
Youth, Identity Crisis, and Online Extremism
Religious Fundamentalism and Information Warfare
Bangladesh-Specific Studies on Extremism and Digital Violence
5.1. The Dynamics of Disinformation, Fake News, and Propaganda
The rise of disinformation in digital media ecosystems is one of the most pressing concerns of contemporary information societies. Scholars such as Wardle and Derakhshan (2017) have delineated the categories of misinformation (false information shared without harmful intent), disinformation (intentionally false information), and malinformation (true information shared with the intent to cause harm), establishing a foundational taxonomy for disinformation studies.
Bennett and Livingston (2018) argue that fake news must be understood as part of a ‘disinformation order’ where political actors, populist movements, and even foreign governments deploy narrative warfare to erode truth and manipulate public perception. They emphasize the use of strategic lies and conspiracy theories to seed distrust, especially in fragile democracies.
Similarly, Bakir and McStay (2018) highlight the emotional resonance of disinformation, which often relies on sensationalist language, moral panic, and identity politics to attract engagement. These narratives are easily weaponized by extremist actors who seek to destabilize societies through polarization.
Within this framing, propaganda is not limited to state actors but includes decentralized, user-generated campaigns driven by ideological motivations (Marwick & Lewis, 2017). These include jihadist and Islamist militias leveraging digital tools to create ‘participatory propaganda.’
5.2. Social Media Algorithms and Digital Radicalization
The algorithmic logic of social media platforms has become central to understanding contemporary radicalization. Tufekci (2018) and O’Callaghan et al. (2015) show how recommendation engines and engagement-maximization algorithms funnel users toward increasingly extreme content, contributing to a process called ‘algorithmic radicalization.’
TikTok, Facebook, YouTube, and Telegram have been implicated in this process, especially among youth. According to Zeng and Schäfer (2021), platforms prioritize emotionally charged content that triggers longer viewing time and higher interaction, creating ideal conditions for the proliferation of extremist ideologies.
This argument is expanded by Ribeiro et al. (2020), who found that users who consume political or religious content are rapidly exposed to violent or extremist content within a short number of clicks or views, often without active intent.
In the context of religious radicalism, Conway et al. (2019) argue that algorithms accelerate jihadist content circulation by rewarding ideological consistency and virality. These platforms also enable ‘micro-targeting’ of specific groups based on age, location, or religious interests—conditions particularly prevalent in digital Bangladesh.
5.3. Youth, Identity Crisis, and Online Extremism
A growing body of literature identifies youth as especially susceptible to radicalization through online propaganda due to unresolved identity crises, socioeconomic vulnerabilities, and the search for meaning (Wiktorowicz, 2005; Aly, Macdonald & Jarvis, 2014). Many youth in South Asia face chronic unemployment, poor access to higher education, and limited participation in civic life—factors that create the psychological preconditions for radical appeal.
According to Bartlett and Miller (2012), extremist recruiters often use digital platforms to offer simplistic explanations of complex grievances and provide a sense of belonging and divine purpose to disaffected youth. Visual storytelling, religious symbolism, and historical victimhood are frequently deployed to this end.
In Bangladesh, youth are often caught between traditional cultural values and the promises of global modernity, leading to what Asaduzzaman (2020) describes as a ‘clash of civilizational identities.’ This conflict is exploited by jihadist actors who present militant Islam as both an antidote to Western cultural invasion and a solution to moral degradation.
5.4. Religious Fundamentalism and Information Warfare
Islamic fundamentalist movements have historically used mass communication to propagate ideological warfare (Roy, 2004; Kepel, 2002). However, with the rise of digital platforms, jihadist organizations have transformed into sophisticated media actors. As Weimann (2016) observes, terrorist groups now employ strategic media campaigns across Twitter, Telegram, YouTube, and encrypted platforms to disseminate ideology, recruit youth, fundraise, and coordinate attacks.
ISIS, Al-Qaeda, and their South Asian offshoots such as Jama’atul Mujahideen Bangladesh (JMB) and Ansar al-Islam have been particularly effective in producing digital propaganda in local languages, integrating religious doctrine with video game aesthetics and cinematic visuals (Farooq, 2021).
Moreover, AI-generated images, fake voice notes, and manipulated deepfake videos have entered the jihadist toolkit, enhancing the persuasive and deceptive power of disinformation campaigns (West, 2020).
Bangladesh’s surveillance gaps, linguistic vulnerabilities, and unregulated mobile internet use have further enabled religious extremist content to flourish in peri-urban and rural areas without detection (Khan & Ahmed, 2022).
5.5. Bangladesh-Specific Studies on Extremism and Digital Violence
While the global literature on jihadist propaganda is robust, Bangladesh-focused studies have emerged only recently. Hasan and Mahmud (2018) note that the intersection of Islamism and digital communication remains under-researched despite the country’s turbulent history with Islamist militancy.
Nizam (2021) illustrates how Facebook groups and encrypted Telegram channels were instrumental in organizing the anti-secularist Hefazat-e-Islam protests in 2021, creating a template for online–offline hybrid mobilization.
In a similar vein, Rahman (2023) details how disinformation campaigns targeting secular bloggers, journalists, and minority communities were coordinated through Islamic cyber networks—often initiated during Friday prayers and amplified through mobile videos.
The July–August 2024 period saw a surge in such activities, triggered by communal violence in Sylhet, fake news about Quran desecration, and foreign policy grievances with India and Myanmar. Youth from madrasa institutions and vulnerable rural regions were the most targeted demographics, with extremist actors exploiting both theological doctrine and emotional spectacle.
These case studies are supported by surveys from the Centre for Research and Information (CRI) and the Bangladesh Institute of Peace and Security Studies (BIPSS), which confirm a growing link between online misinformation and youth radicalization.
The existing literature affirms that digital disinformation is not simply a technological byproduct but a complex socio-political phenomenon deeply intertwined with youth identity, religious ideology, media platforms, and geopolitical grievances. In the context of Bangladesh, this matrix becomes even more dangerous due to structural poverty, weak cyber-regulation, politicized religious institutions, and a fragmented education system.
However, there is a significant research gap in capturing real-time disinformation flows during specific geopolitical crises—such as the July–August 2024 jihadist mobilizations—and in connecting online rhetoric to offline violence. This study seeks to fill that gap by offering a granular, empirical, and theoretically grounded account of how digital ecosystems became weaponized by jihadist actors in contemporary Bangladesh.
6. Contextual Overview: Bangladesh in July–August 2024
The months of July and August 2024 were marked by a turbulent socio-political atmosphere in Bangladesh, where heightened concerns over youth radicalization intersected with the growing influence of digital communication technologies. This period witnessed a surge in jihadist-related hostility, particularly involving young individuals between the ages of 16 and 28, many of whom were drawn into violent ideologies through complex online ecosystems. This section provides a detailed contextual account of the socio-political developments, media landscapes, governmental responses, and the evolving nature of digital content production and diffusion during this critical time.
6.1. Political and Security Backdrop
Bangladesh entered the second half of 2024 amidst a volatile political climate. The ruling Awami League government faced intense scrutiny due to rising inflation, a stagnant job market, and allegations of authoritarianism. Opposition groups, particularly the Bangladesh Nationalist Party (BNP) and allied Islamist factions, intensified their protests, demanding electoral reforms ahead of the national elections expected in early 2025.
These protests often spilled into street clashes, resulting in dozens of injuries and multiple deaths. The unrest created fertile ground for extremist groups to exploit public frustration, especially among disenfranchised youth. Jihadist networks, including the remnants of Jama’atul Mujahideen Bangladesh (JMB) and newer offshoots influenced by transnational jihadist ideologies, saw a strategic opening to reassert their presence.
The months of July and August thus witnessed a discernible uptick in jihadist rhetoric and mobilization, often disguised within nationalist or religious narratives. Law enforcement agencies reported a series of targeted attacks and foiled plots, primarily concentrated in the Rajshahi, Khulna, and Chattogram divisions. These developments pointed to a sophisticated reorganization of militant networks, bolstered by both offline and online infrastructures.
6.2. Youth Radicalization and Digital Echo Chambers
The growing radicalization of young people during this period was closely tied to the expansion of social media platforms and encrypted messaging apps. Platforms like Facebook, YouTube, Telegram, and TikTok emerged as double-edged swords—used both for counter-radicalization messaging by state actors and as effective tools of propaganda by extremist groups.
Disillusioned youth, alienated by economic hardship and sociopolitical marginalization, increasingly turned to these platforms for community and ideological validation. Social media groups masquerading as religious study forums, anti-Western political collectives, or humanitarian initiatives often served as gateways to more overtly jihadist content. Through personalized recommendation algorithms, users engaging with conservative or oppositional content were gradually exposed to more extreme materials.
Memes, short-form videos, emotionally charged religious lectures, and graphic war imagery were commonly used to draw attention and manipulate sentiments. The aesthetic of such content was often appealing to younger audiences, blending popular cultural elements with ideological messaging in ways that bypassed traditional media gatekeeping.
6.3. Role of Disinformation and Fake News
During July–August 2024, a staggering volume of disinformation flooded digital platforms. Fabricated reports of atrocities against Muslims in foreign countries—especially in India, Myanmar, and Palestine—were widely circulated to stoke religious sentiments. These narratives often linked global Muslim victimhood to local injustices, creating a cohesive and emotionally resonant worldview that justified extremist responses.
Fake news reports of government crackdowns on religious institutions, manipulated images of police brutality, and fabricated testimonies of ‘martyrs’ served as catalysts for mobilization. These posts were frequently shared in closed WhatsApp and Telegram groups before leaking into mainstream social platforms. Bots and troll farms, some allegedly operated by regional sympathizers, played a significant role in amplifying these messages.
Notably, a viral video claiming to show security forces desecrating a mosque in Sylhet—later proven to be doctored—sparked violent protests and retaliatory attacks in multiple districts. Despite quick official rebuttals, the initial emotional impact of the video had already taken hold, demonstrating how far fact-checking lags behind the digital rumor cycle.
6.4. Algorithmic Amplification and Platform Responsibility
One of the defining characteristics of this period was the role of algorithmic recommendation systems in intensifying content visibility. On Facebook and TikTok, posts that generated high engagement—likes, comments, and shares—were pushed to wider audiences. This metric-based approach inadvertently favored sensational, polarizing, and emotionally charged content, including extremist propaganda.
YouTube’s recommendation engine also played a key role in radicalization trajectories. Users who watched moderate Islamic content were often led to videos featuring controversial clerics or jihadist sympathizers under the guise of ‘related content.’ Despite community guidelines prohibiting such material, many accounts operated in linguistic grey zones—using coded language, regional dialects, and symbolic imagery to evade detection.
While platforms introduced moderation measures and content removal initiatives during this period, these efforts were often reactive and inadequate. The lack of context-specific moderation teams, limited understanding of local socio-political nuances, and delayed response times rendered algorithmic governance largely ineffective in curbing the spread of harmful content.
6.5. Media and Governmental Response
Mainstream media outlets in Bangladesh adopted varied approaches to reporting the jihadist surge. While state-aligned media emphasized the success of security operations and portrayed the unrest as externally orchestrated, independent outlets struggled with censorship and self-censorship. Journalists faced significant challenges in verifying information in real time, particularly as many incidents were accompanied by a flood of misleading online content.
The government, in turn, intensified its surveillance and digital repression. The Bangladesh Telecommunication Regulatory Commission (BTRC) ordered the takedown of over 300 social media pages and YouTube channels allegedly linked to extremism. Arrests were made under the Digital Security Act (DSA), though critics argued that the law was selectively enforced and often used to suppress dissent rather than target genuine threats.
In collaboration with international tech companies, the government launched a series of counter-narrative campaigns aimed at youth. These included short films, digital posters, and influencer-led messages promoting tolerance and critical media literacy. However, the reach and impact of these efforts were limited by the very algorithms that continued to prioritize engagement-driven content over educational material.
6.6. Religious Institutions and Civil Society
During this crisis period, religious institutions and civil society actors played a dual role. On one hand, several moderate Islamic scholars publicly condemned the jihadist actions and urged youth to resist manipulation. On the other hand, some madrasa networks were accused of harboring radical preachers and disseminating ideological content that aligned with militant rhetoric.
Civil society organizations, including youth groups, women’s rights forums, and digital literacy initiatives, attempted to counteract extremist narratives by organizing online and offline awareness campaigns. Hackathons, debates, and public lectures were held in major universities to engage students in critical discussions on digital misinformation and ideological resistance. However, such efforts often struggled to scale beyond urban centers.
6.7. The Socioeconomic Dimension
A critical factor underpinning the radicalization surge was the socioeconomic condition of the youth population in Bangladesh. Unemployment remained persistently high, with nearly 27% of university graduates struggling to find meaningful employment. The COVID-19 aftershocks continued to impact lower-income families, especially in rural and peri-urban areas, where access to quality education and digital literacy remained minimal.
For many of these youth, jihadist narratives offered not only a sense of purpose but also a framework for interpreting their marginalization. The promise of community, spiritual significance, and heroic action appealed deeply to those alienated from mainstream political and economic systems. Digital platforms provided both anonymity and affirmation, accelerating the drift from disillusionment to ideological commitment.
The July–August 2024 period in Bangladesh presents a critical case study of how digital ecosystems, socio-political instability, and ideological extremism intersect to create complex environments of risk and hostility. The role of social media algorithms in amplifying extremist content, coupled with the deliberate spread of disinformation and rumors, played a central role in the radicalization and mobilization of young jihadists.
This contextual overview underscores the need for an integrated response involving platform accountability, improved digital literacy, localized content moderation, and proactive civil society engagement. Understanding the content production and diffusion patterns within this period is vital not only for counterterrorism policy but also for shaping responsible digital governance in the Global South.
7. Young Jihadist Hostility and Algorithmic Radicalisation
The period of July–August 2024 in Bangladesh demonstrated a crucial turning point in the digital evolution of violent extremism. At the heart of this transformation was the convergence of youth disillusionment, jihadist ideologies, and social media algorithms that unintentionally acted as catalysts for radicalization. This section dissects the phenomenon of young jihadist hostility by mapping its emergence within the digital ecosystem and examining how algorithmic recommendation systems contributed to ideological entrapment and behavioral escalation. Through this analysis, we uncover how a new form of algorithm-driven radicalism has come to characterize extremism in the Global South—distinct from traditional recruitment mechanisms and shaped largely by technology’s role in content personalization, engagement metrics, and data profiling.
7.1. Defining the ‘Young Jihadist’ in the Bangladeshi Context
The term ‘Young Jihadist’ refers to a demographic of ideologically motivated individuals, typically between the ages of 15 and 30, who embrace radical interpretations of Islam and are willing to engage in or support violent means for political or religious goals. In the Bangladeshi context, this cohort includes both madrasa-educated youth from rural backgrounds and urban, digitally literate students often exposed to transnational ideologies online.
Unlike earlier generations of militants who relied on in-person recruitment in mosques or training camps, these individuals often radicalize in private—through mobile devices, personalized content feeds, encrypted messaging, and virtual peer networks. Their ideological universe is shaped not only by religious doctrine but also by memes, music, and virally distributed content that blend cultural idioms with calls to action.
The psychological profile of this demographic reveals a pattern of vulnerability, including feelings of alienation, moral injury, and socioeconomic frustration. These emotional states are exploited by online content that presents jihad as a redemptive, heroic, and necessary response to perceived global injustices and local oppression.
7.2. The Role of Social Media Platforms in Content Curation
In 2024, Facebook, TikTok, YouTube, and Telegram were the most widely used platforms by Bangladeshi youth. Each of these platforms relies on sophisticated recommendation systems designed to increase user engagement and screen time. At the center of these systems are machine learning algorithms that optimize for ‘relevance’ by using past behavior—likes, comments, shares, watch duration—to deliver content that the user is most likely to engage with.
While these systems are commercially successful, they can have dangerous consequences in politically volatile environments. In the context of Bangladesh, these algorithms amplified conspiracy theories, emotional propaganda, and hate speech due to their high engagement potential. Users who showed interest in conservative religious content were gradually funneled toward more radical content—sometimes within hours of their initial engagement.
The design of these systems often fails to consider cultural context, local political nuances, or the risk of ideological escalation. This creates a feedback loop where radical content, once engaged with, becomes more prominent in the user’s feed, effectively enclosing them in an echo chamber.
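The feedback loop described above can be illustrated with a deliberately minimal sketch. This is not any platform's actual code; the data shapes, tags, and scoring rule are hypothetical, chosen only to show how engagement-based ranking narrows a feed once a user interacts with one type of content.

```python
# Illustrative sketch of an engagement-optimized ranker (hypothetical
# names and numbers; not any platform's actual algorithm).

def rank_feed(items, user_history):
    """Score items by predicted engagement: overlap with tags the user
    already engaged with, weighted by raw popularity."""
    def score(item):
        affinity = sum(1 for tag in item["tags"]
                       if tag in user_history["engaged_tags"])
        return affinity * item["popularity"]
    return sorted(items, key=score, reverse=True)

def engage(user_history, item):
    """Engagement updates the profile, which narrows future ranking."""
    user_history["engaged_tags"].update(item["tags"])

# Toy demonstration of the echo-chamber loop: a single engagement with
# one topic makes related items dominate the next ranking pass.
items = [
    {"id": "a", "tags": {"news"}, "popularity": 5},
    {"id": "b", "tags": {"sermon"}, "popularity": 3},
    {"id": "c", "tags": {"sermon", "grievance"}, "popularity": 4},
]
user = {"engaged_tags": set()}
engage(user, items[1])           # user engages with one sermon
feed = rank_feed(items, user)    # related items now outrank everything else
```

The point of the sketch is the loop itself: ranking drives engagement, engagement updates the profile, and the updated profile makes ideologically similar content rank higher still, with no step at which extremity is checked.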
7.3. Algorithmic Radicalization: A Stepwise Pathway
Radicalization through algorithmic engagement in Bangladesh during this period can be mapped as a five-stage pathway:
Stage 1: Initial Exposure
Youth searching for religious or political content—often out of personal, familial, or societal curiosity—were exposed to conservative Islamic videos, lectures, or posts. This exposure was generally benign but set the initial algorithmic filter based on engagement.
Stage 2: Personalization and Immersion
Once initial content is engaged with, algorithms begin delivering increasingly similar content. Users are immersed in a stream of ideologically consistent materials, including lectures from Salafi or Wahhabi preachers, many of whom walk the line between orthodoxy and extremism.
Stage 3: Ideological Polarization
Exposure to one-sided narratives that blame global systems, secular governance, or non-Muslim actors for Muslim suffering leads to cognitive closure. Algorithms continue feeding content that reinforces these views—videos about Muslim oppression in India, Myanmar, Palestine, or Xinjiang were widely circulated.
Stage 4: Echo Chamber Formation
Users are introduced to encrypted channels or private groups on Telegram or WhatsApp, often through comment sections or embedded links. These closed communities reinforce the worldview and provide ideological validation. Members share doctored images, fake news, and ‘spiritual’ encouragement for martyrdom or Hijrah (migration for jihad).
Stage 5: Mobilization and Activation
In this final phase, users are pushed toward real-world actions. This may include attending secret meetings, donating to ‘Islamic charities’ that fund militant activity, or even planning violent acts. Algorithmic exposure may cease to play a role at this stage, but the earlier phases ensure the digital groundwork has been laid.
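The five-stage pathway above can be summarized as a simple state model. The sketch below is a toy formalization, not an empirical claim: the stage names follow the text, but the engagement threshold for advancing from one stage to the next is entirely hypothetical.

```python
# Illustrative state model of the five-stage radicalization pathway
# described in the text. Transition thresholds are hypothetical.

STAGES = ["exposure", "immersion", "polarization",
          "echo_chamber", "mobilization"]

def advance(stage, engagements, threshold=3):
    """Advance one stage once cumulative engagement with
    stage-consistent content crosses a (hypothetical) threshold;
    otherwise remain in the current stage."""
    i = STAGES.index(stage)
    if engagements >= threshold and i < len(STAGES) - 1:
        return STAGES[i + 1], 0   # move on, reset the counter
    return stage, engagements

# Toy trajectory: repeated engagement walks a user down the pathway.
stage, count = "exposure", 0
trajectory = [stage]
for _ in range(12):               # twelve engagement events
    count += 1
    stage, count = advance(stage, count)
    if stage != trajectory[-1]:
        trajectory.append(stage)
```

Even this crude model captures the paper's central observation: no single step is dramatic, yet sustained engagement alone is sufficient to carry a user from benign exposure to mobilization.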
7.4. Content Typologies Used in Radicalization
The content that proved most effective in algorithmic radicalization during July–August 2024 was not necessarily overtly violent. Instead, it followed certain typologies that subtly drew youth into a radical worldview:
Emotive Religious Lectures: Delivered by charismatic speakers using Bengali and Arabic, focusing on themes of Muslim pride, global victimhood, and religious duty.
Memes and Short Videos: Easily digestible content using humor, irony, or emotionally potent images to mock secularism or glorify martyrdom.
False Testimonies: ‘Former atheist turned Mujahid’ or ‘Brother who saw a dream from Allah’—these narratives were shared to validate the spiritual authenticity of jihad.
Global Injustice Footage: Images of suffering Muslims worldwide, often cropped or altered to increase emotional intensity and resentment.
Anashid (Jihadist Songs): Audio content with poetic glorification of jihad, shared widely across TikTok, YouTube Shorts, and Telegram channels.
7.5. Case Studies of Algorithmic Radicalization
Case Study 1: University Student in Dhaka
A 21-year-old male student at a private university began watching religious history content on YouTube. Within days, his feed was populated with Salafi sermons, followed by videos on Muslim suffering globally. A Telegram link in a YouTube comment led him to a private group, where he received personal messages encouraging him to ‘defend the Ummah.’ He was later arrested after allegedly plotting an attack on a Shia religious gathering in Old Dhaka.
Case Study 2: Madrasa Teen in Shajahanpur, Bogura
A 17-year-old student in a rural madrasa joined Facebook in early July. He began engaging with Islamic educational pages, which gradually began featuring posts by extremist clerics. By mid-August, he was actively re-sharing propaganda calling for ‘revenge for Gaza.’ He disappeared days later and was suspected to have crossed into India to join a regional militant group.
7.6. Role of Bots and Coordinated Influence Operations
Analysis of content diffusion patterns revealed the use of automated bots and troll networks to amplify extremist messaging. These accounts would mass-like and mass-share certain videos, thereby gaming the algorithms into promoting them organically.
Many of these bots were linked to regional networks that spanned beyond Bangladesh, especially from India, Pakistan, and Malaysia. Certain Telegram channels revealed coordination among influencers who planned ‘content surges’ on specific days, such as Fridays or after state events perceived as anti-Muslim.
This coordinated manipulation increased the virality of radical content, helping it break into trending categories on Facebook and TikTok. As a result, even users who did not actively seek out jihadist content might encounter it incidentally, especially during religious festivals or political unrest.
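One common signal of the coordination described above is a burst of shares of the same item by many distinct accounts within a short window. The sketch below shows one simple way such bursts can be flagged; the thresholds, data shapes, and function names are hypothetical, and real detection systems use far richer features.

```python
# Illustrative sketch of burst-based coordination flagging
# (hypothetical thresholds; not a production detector).
from collections import defaultdict

def flag_coordinated(shares, window=60, min_accounts=5):
    """shares: list of (timestamp_seconds, account_id, item_id).
    Flag items shared by >= min_accounts distinct accounts within
    any `window`-second span."""
    by_item = defaultdict(list)
    for ts, account, item in shares:
        by_item[item].append((ts, account))
    flagged = set()
    for item, events in by_item.items():
        events.sort()
        for ts, _ in events:
            # distinct accounts sharing within [ts, ts + window)
            accounts = {a for t, a in events if ts <= t < ts + window}
            if len(accounts) >= min_accounts:
                flagged.add(item)
                break
    return flagged

# Toy data: item "v1" receives a 5-account burst inside one minute,
# while "v2" gets only two organic shares ten minutes apart.
shares = [(t, f"bot{t}", "v1") for t in range(0, 50, 10)]
shares += [(0, "u1", "v2"), (600, "u2", "v2")]
suspicious = flag_coordinated(shares)
```

The same synchrony signal is what the "content surges" on planned days would produce at scale, which is why coordinated networks can game engagement-based trending systems so effectively.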
7.7. Platform Inaction and the Limits of Content Moderation
While social media companies like Meta and Google claimed to have policies in place for curbing extremist content, their moderation infrastructure proved ill-equipped to deal with the scale and cultural specificity of the content produced during this period. Several factors contributed to this:
Lack of Local Language Moderation: Much of the content was in Bengali or Chittagonian dialects, which were poorly supported by automated moderation tools.
Coded Language and Symbolism: Extremist content often used symbolic language that avoided detection, such as replacing jihad with ‘journey’ or using emojis to signal intent.
Delayed Response Times: Harmful content often stayed up for days before being removed, long enough to be downloaded, reshared, and mirrored on other platforms.
The gap between policy and practice allowed extremist content to flourish, further contributing to algorithmic radicalization.
7.8. Psychological and Behavioral Outcomes
By late August, behavioral shifts among radicalized youth became visible. Many adopted coded language in public posts, changed profile pictures to militant symbols, or became more reclusive in real life. Some dropped out of school or began distancing themselves from family and friends.
Interviews with families of arrested individuals revealed sudden lifestyle changes, including obsessive religious engagement, rejection of mainstream media, and paranoia about government surveillance. These transformations mirrored documented outcomes of cognitive entrapment and moral disengagement commonly associated with online radicalization.
7.9. Counter-Algorithmic Strategies and Interventions
Efforts to counter algorithmic radicalization in Bangladesh during this period were mostly fragmented and under-resourced. However, several strategies were proposed or piloted:
Positive Content Seeding: NGOs created short-form videos and testimonials promoting peace, critical thinking, and pluralism, designed to mimic the style of viral jihadist content.
Disruption Campaigns: Cyber units of the RAB (Rapid Action Battalion) and DGFI attempted to infiltrate extremist Telegram groups to monitor and disrupt activity.
Algorithmic Auditing Tools: Independent researchers launched browser plug-ins to track how content changed based on user behavior—raising awareness among youth.
Platform Accountability Laws: Draft legislation was introduced to mandate algorithmic transparency and localized content moderation by social media platforms.
Despite these efforts, a systemic approach to combating algorithmic radicalization remained lacking, with key stakeholders—platforms, government, civil society, and academia—operating in silos.
The young jihadist hostility observed in Bangladesh during July–August 2024 cannot be understood without accounting for the role of social media algorithms in shaping ideological pathways. These algorithms, designed to enhance user engagement, inadvertently created digital environments that normalized extremist content, amplified polarizing narratives, and facilitated echo chambers of radicalization.
Understanding algorithmic radicalization as a socio-technical process—driven by both human vulnerabilities and machine-driven optimization—is critical to developing effective responses. Moving forward, a multi-pronged strategy that includes algorithmic reform, media literacy education, real-time monitoring, and inclusive policy dialogues is necessary to counter this emerging form of extremism.
8. Rumor, Fake News, and Disinformation Networks
8.1. Conceptual Framework: Rumor, Fake News, and Disinformation
In the context of Bangladesh’s socio-political landscape during July–August 2024, understanding the distinctions among rumor, fake news, and disinformation is crucial:
Rumor: Unverified information that spreads rapidly, often fueled by uncertainty or fear.
Fake News: Fabricated content presented as legitimate news, typically designed to mislead for political or financial gain.
Disinformation: Deliberately false information disseminated to deceive and manipulate public perception.
These elements collectively contributed to an ‘information disorder,’ exacerbating tensions and undermining trust in institutions.
8.2. The Surge of Information Disorder During July–August 2024
The student-led quota reform movement in mid-2024 catalyzed a significant increase in misinformation. Fact-checking organizations in Bangladesh observed a near doubling of fact-check reports during this period, despite challenges like internet shutdowns hindering verification efforts (Dismislab, 2024). Rumor Scanner documented 2,919 instances of misinformation throughout 2024, marking a 52% increase from the previous year (Rumor Scanner, 2025).
8.3. Dominant Disinformation Narratives
Several recurring themes characterized the disinformation landscape:
‘Islam Under Siege’: Narratives suggesting that Muslims were being persecuted by secular or foreign-influenced entities.
‘False Flag Operations’: Claims that attacks were orchestrated by state actors to justify crackdowns on dissent.
‘Heroic Martyrdom’: Glorification of individuals involved in violent acts as martyrs defending Islam.
‘External Enemies’: Allegations that foreign agencies, such as India’s RAW or Western NGOs, were destabilizing Bangladesh.
‘Minority Infiltration’: Assertions that non-Muslims were covertly taking over key positions in government and society.
These narratives were strategically crafted to exploit existing societal divisions and amplify unrest.
8.4. Dissemination Channels and Techniques
Disinformation spread through various platforms, each leveraging specific tactics:
Facebook: Utilized for mass dissemination via groups and pages, often employing emotionally charged content.
WhatsApp and Telegram: Facilitated rapid, encrypted sharing of rumors within trusted networks.
TikTok and YouTube Shorts: Employed short, visually engaging videos to appeal to younger demographics.
YouTube and Facebook Live: Hosted longer-form content, including sermons and discussions, to provide ideological justification for disinformation.
A notable example is a miscaptioned photo of protesters inside the Sri Lankan presidential palace, falsely presented as depicting an event in Bangladesh, which gained significant traction online (Reuters, 2024).
8.5. Key Actors in Disinformation Campaigns
The spread of disinformation involved various actors:
Religious Influencers: Some clerics disseminated false narratives under the guise of religious teachings.
Digital Activists: Groups coordinated online campaigns to produce and share misleading content.
Bot Networks: Automated accounts amplified disinformation by increasing its visibility.
Diaspora Communities: Certain overseas individuals contributed to the spread of false narratives.
Political Entities: Some political groups utilized disinformation to undermine opponents and sway public opinion.
The term ‘Chalaiden,’ originating from a leaked directive to spread false news, became emblematic of these coordinated efforts (Wikipedia, 2024).
8.6. Case Studies of Disinformation Impact
Case 1: Misrepresented Police Brutality Video
A video purportedly showing Bangladeshi police beating a youth circulated widely, inciting protests. Fact-checking revealed the footage was from a 2019 incident in Pakistan, unrelated to Bangladesh (Dismislab, 2024).
Case 2: Fabricated Mosque Attack
Images of a burning mosque were shared with claims of a recent attack by Hindu nationalists. Investigations confirmed the photos were from an accidental fire a year prior, manipulated to appear current (Rumor Scanner, 2025).
Case 3: AI-Generated Martyr Letter
A letter allegedly from a young jihadist, praising martyrdom and condemning secular forces, went viral. Analysis determined it was generated using artificial intelligence, designed to evoke emotional responses (Dismislab, 2024).
8.7. Societal and Psychological Effects
The proliferation of disinformation had profound impacts:
Erosion of Trust: Public confidence in media and governmental institutions declined.
Heightened Sectarian Tensions: Minority communities faced increased suspicion and hostility.
Normalization of Extremism: Radical ideologies gained mainstream acceptance among certain groups.
Precipitation of Violence: Misinformation directly contributed to outbreaks of violence and unrest.
These effects underscore the potent influence of disinformation on societal cohesion and stability.
8.8. Institutional Responses and Challenges
Efforts to combat disinformation included:
Government Initiatives: Authorities attempted to monitor and suppress false information, though these actions were sometimes perceived as censorship.
Fact-Checking Organizations: Entities like Rumor Scanner and Dismislab intensified verification efforts, albeit with limited reach.
Platform Moderation: Social media companies faced criticism for inadequate content moderation, particularly in non-English contexts (Time, 2024).
Despite these measures, the rapid spread and evolving nature of disinformation posed significant challenges to containment.
8.9. Recommendations for Mitigation
To address the disinformation crisis, a multifaceted approach is necessary:
Enhancing Digital Literacy: Educational programs to improve critical thinking and media literacy among the populace.
Strengthening Fact-Checking Networks: Support for independent verification organizations to expand their capabilities.
Regulatory Frameworks: Development of policies to hold platforms accountable for content moderation.
Community Engagement: Involving local leaders and influencers in promoting accurate information.
Implementing these strategies can help build resilience against future disinformation campaigns.
The events of July–August 2024 in Bangladesh illustrate the destructive potential of coordinated disinformation. Addressing this issue requires concerted efforts from government, civil society, and international stakeholders to safeguard the integrity of information and protect democratic processes.
9. Propaganda and the Algorithmic Manipulation of Truth
In the digital age, propaganda is no longer the exclusive domain of state machinery or ideological movements operating via traditional media. Instead, it has evolved into a sophisticated, algorithm-driven enterprise capable of penetrating individual consciousness and public discourse with unprecedented speed and precision. During the July–August 2024 period of youth-led jihadist hostility in Bangladesh, the propagation of selective, manipulative content—shaped by both human intent and automated systems—reflected a disturbing new era of ideological warfare. This section explores how propaganda was crafted, amplified, and algorithmically distributed, thereby reshaping collective perceptions of truth, legitimacy, and resistance.
9.1. Defining Propaganda in the Digital Era
Propaganda, in its classical sense, refers to the systematic dissemination of information, often biased or misleading, to influence public opinion or behavior toward a particular political, religious, or ideological goal (Jowett & O’Donnell, 2018). In digital settings, propaganda extends beyond rhetorical persuasion to include algorithmic amplification, data manipulation, and psychographic targeting—features that render it stealthier and more effective (Bradshaw & Howard, 2019).
While traditional propaganda sought mass influence through repetition and mass media, algorithmic propaganda customizes its reach through personalization, calculated virality, and coordinated inauthentic behavior. This transformation is especially evident in contexts of ideological extremism, where digital propaganda becomes a key tool for radicalization and mass mobilization (Gorwa & Guilbeault, 2020).
9.2. The Political Economy of Algorithmic Propaganda
Social media platforms like Facebook, TikTok, YouTube, and Telegram operate on algorithmic logics optimized for user engagement, not informational integrity. Algorithms are designed to prioritize content that generates strong emotional reactions—often sensational, controversial, or polarizing posts—which makes propaganda inherently algorithm-friendly (Tufekci, 2015). In July–August 2024, this dynamic was weaponized by religious extremists and digital influencers to elevate jihadist narratives under the guise of resistance, religious revivalism, and moral urgency.
Monetization models also played a role. Content creators—regardless of the factual basis or potential harm of their posts—were incentivized through views, shares, likes, and ad revenue. This monetized virality encouraged the creation of ideologically loaded content, often bordering on incitement, with little to no accountability (Zuboff, 2019).
9.3. Structures of Algorithmic Manipulation
The manipulation of truth during the 2024 unrest occurred through several interlinked algorithmic practices:
9.3.1. Engagement-Based Prioritization
Content that provoked outrage, fear, or moral fervor—especially content invoking religious sentiment—was disproportionately promoted by algorithms. Videos alleging police brutality against hijab-wearing girls, desecration of mosques, or attacks on Quran reciters were amplified regardless of verifiability. This ‘rage-based virality’ created echo chambers of mistrust and righteous anger (Guess, Nagler, & Tucker, 2019).
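The logic of engagement-based prioritization can be caricatured in a minimal sketch. The weights, field names, and sample posts below are purely illustrative assumptions, not the actual ranking formula of any platform; the point is only that when a feed is ordered by engagement signals alone, accuracy never enters the calculation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int
    avg_watch_seconds: float

def engagement_score(p: Post) -> float:
    # Hypothetical weights: shares and comments count most because they
    # generate further impressions. Accuracy appears nowhere in the formula.
    return p.likes + 5 * p.shares + 3 * p.comments + 0.5 * p.avg_watch_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed purely by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

correction = Post("Fact-check: the viral video is from 2019", 900, 40, 60, 8.0)
outrage = Post("SHOCKING: hijabi student attacked (unverified)", 800, 700, 500, 25.0)
feed = rank_feed([correction, outrage])
```

Under these assumed weights the unverified outrage post outranks the fact-check despite comparable likes, because shares and comments dominate the score—the dynamic the studies cited above describe.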
9.3.2. Coordinated Inauthentic Behavior
Bot networks and troll farms operated across platforms to manipulate trending topics. Coordinated campaigns using hashtags like #IslamInDanger, #MartyrsofFaith, or #StateTerror targeted users based on religious or regional identifiers. Accounts with minimal histories posted synchronized messages that algorithmically mimicked organic behavior, misleading the platforms and users alike (Bradshaw, Neudert, & Howard, 2018).
9.3.3. Algorithmic Down-ranking of Dissent
Conversely, moderate voices, calls for peace, or verified fact-checks were often algorithmically buried. Many users reported that fact-checking organizations like Rumor Scanner or Dismislab were not visible in their feeds unless deliberately searched. This suppression resulted not from overt censorship but from engagement-based filtering, where less ‘exciting’ content was simply deprioritized (Cinelli et al., 2021).
9.4. Visual Propaganda Optimization
Jihadist propagandists skillfully used TikTok’s short-video format, Instagram Reels, and YouTube Shorts to produce visually compelling content—martyrdom montages, emotionally charged Quran recitations, and slow-motion footage of protests. These formats, optimized for virality and mobile consumption, were tailored to Bangladesh’s under-30 population, which comprises over 50% of the country’s demographic (BBS, 2024).
9.5. Narrative Engineering and Ideological Packaging
Propaganda during this period relied not merely on false information, but on the strategic engineering of narratives designed to resonate with deep-seated cultural, religious, and historical archetypes.
9.5.1. The Martyrdom Mythos
One of the most powerful propaganda themes was martyrdom, presented as a divine, heroic act in defense of Islam and morality. Videos of slain protestors were edited with Quranic verses, nasheeds, and slow-motion footage, transforming victims into saint-like figures. The term ‘digital shahid’ became popular on TikTok and YouTube comments, reinforcing a culture of glorified self-sacrifice.
9.5.2. The ‘Oppressor State’ Imaginary
State responses—such as police raids, internet shutdowns, and arrests—were consistently reframed as evidence of an anti-Islamic regime suppressing religious awakening. Even legitimate law enforcement actions were cast as persecution. These portrayals mimicked historical colonial narratives, invoking memories of Mughal resistance, British occupation, or the 1971 Liberation War, thus lending emotional depth to the jihadist cause (Ahmed, 2023).
9.5.3. The Global Muslim Victimhood Nexus
Propaganda frequently drew parallels between the plight of Bangladeshi Muslims and those in Palestine, Kashmir, and Xinjiang. Social media content juxtaposed local images with global ones, fostering a sense of unified Islamic victimhood. This was algorithmically effective because global Islamic resistance content already had massive engagement metrics and was thus favored by platform algorithms (Awan, 2017).
9.6. Local and Transnational Propaganda Ecosystems
Though many propaganda sources were local, the ecosystem also included transnational digital networks. Diaspora communities, especially from the UK, Middle East, and Malaysia, contributed to narrative shaping, translating and disseminating content for global audiences. YouTube channels with Bangla-speaking hosts located abroad published hours-long commentaries blending political analysis with religious discourse, often laden with anti-state rhetoric.
Telegram groups also served as repositories of uncensored content—violent imagery, strategic manuals, and calls to action—mirroring the techniques of Islamic State affiliates in Syria and Afghanistan (Winter, 2015).
9.7. Case Studies of Algorithmic Propaganda
Case 1: TikTok Video – ‘The Last Prayer’
A video depicting a teenage boy offering his final prayer before facing the police circulated widely on TikTok. Though staged, it was interpreted as a real event and shared millions of times. The TikTok algorithm favored it for its high engagement rate, spreading it even after it was debunked by fact-checkers.
Case 2: Hashtag Storm – #IslamUnderAttack
On Facebook and X (formerly Twitter), this hashtag trended for nearly 48 hours, driven by bot amplification and coordinated posts. An analysis of the metadata revealed that 38% of the accounts were newly created and posted identical phrases in rapid succession—an indicator of inauthentic behavior.
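The metadata pattern in this case—newly created accounts posting identical phrases in rapid succession—is the classic signature of coordinated inauthentic behavior. A minimal detection heuristic can be sketched as follows; all field names, thresholds, and sample accounts are hypothetical, and real forensic pipelines combine many more signals:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated(posts, now, max_account_age_days=30, min_cluster=3):
    """Flag accounts that are both newly created and part of a cluster
    posting identical (case/whitespace-normalized) text. Thresholds and
    field names are illustrative, not drawn from any real dataset."""
    clusters = defaultdict(list)
    for p in posts:
        key = " ".join(p["text"].lower().split())  # normalize the phrase
        clusters[key].append(p)
    flagged = set()
    for same_text in clusters.values():
        if len(same_text) < min_cluster:
            continue  # too few copies to suggest coordination
        for p in same_text:
            if now - p["account_created"] <= timedelta(days=max_account_age_days):
                flagged.add(p["account"])
    return flagged

now = datetime(2024, 8, 1)
posts = [
    {"account": "new_1", "account_created": datetime(2024, 7, 28),
     "text": "Islam under attack, rise up!"},
    {"account": "new_2", "account_created": datetime(2024, 7, 29),
     "text": "islam  under ATTACK, rise up!"},
    {"account": "new_3", "account_created": datetime(2024, 7, 27),
     "text": "Islam under attack, rise up!"},
    {"account": "veteran", "account_created": datetime(2019, 1, 1),
     "text": "Islam under attack, rise up!"},
]
print(sorted(flag_coordinated(posts, now)))  # → ['new_1', 'new_2', 'new_3']
```

The long-standing account posting the same phrase is not flagged on its own: the heuristic requires the conjunction of account recency and textual duplication, which is roughly how the 38% figure in the case above was interpreted as evidence of inauthenticity.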
Case 3: AI-Generated Sermons
Some content involved AI-generated religious sermons that replicated the voices of popular Islamic scholars, creating the illusion of endorsement. These videos were especially effective among semi-literate rural populations who took them at face value (Dismislab, 2024).
9.8. Psychological Effects of Algorithmic Propaganda
Algorithmically driven propaganda does not merely mislead; it restructures how individuals perceive reality. During the unrest:
Epistemic Confusion: Many citizens reported confusion over what was true or false, resulting in decision paralysis or radicalization (Lewandowsky, Ecker, & Cook, 2017).
Emotional Saturation: Constant exposure to emotionally manipulative content led to desensitization or overreaction.
Tribalism: Users were algorithmically steered into ideological silos, reinforcing extreme views and distrust of the ‘other.’
These effects collectively eroded rational discourse and heightened social fragmentation.
9.9. Institutional Responses and Ethical Dilemmas
9.9.1. Government Actions
The Bangladeshi government attempted countermeasures, including website bans, page takedowns, and arrests of digital propagandists. However, these actions also raised concerns about overreach and censorship, especially when moderate dissenters were swept into the same net (HRW, 2024).
9.9.2. Platform Responses
Social media companies responded inadequately. Despite repeated flagging of harmful content, delays in takedown actions were rampant, particularly for Bangla-language posts. Meta’s Oversight Board later acknowledged that content moderation in South Asian languages was under-resourced—a longstanding critique (Time, 2024).
9.9.3. Civil Society and Fact-Checking
Initiatives like Rumor Scanner, Dismislab, and individual digital literacy campaigns tried to stem the tide but lacked algorithmic leverage. Their content, while accurate, failed to reach the audiences ensnared by emotional, religiously framed propaganda.
9.10. Recommendations
Algorithmic Accountability: Platforms must disclose how algorithms prioritize content and must adjust parameters to downrank harmful or false narratives (Gillespie, 2018).
Localized Moderation: Employing more content moderators fluent in Bangla and culturally aware of regional nuances is crucial.
Counter-Narrative Campaigns: Civil society must actively produce emotionally resonant, fact-based counter-narratives that can compete within algorithmic ecosystems.
Media Literacy Education: Especially among youth, digital literacy programs should focus on recognizing propaganda and algorithmic manipulation.
The July–August 2024 period in Bangladesh offers a cautionary tale of how propaganda—once a tool of wartime pamphlets—has evolved into a data-driven, algorithmically perfected weapon. It challenges the epistemological foundations of society and requires a new kind of resistance: not just political or ideological, but cognitive and technological.
10. Case Studies: Viral Disinformation and Jihadist Mobilization
During the July–August 2024 period of unrest in Bangladesh, digital platforms were inundated with a barrage of disinformation, misinformation, and algorithmically enhanced propaganda. These campaigns did not function in isolation; they fed into a broader ecosystem of jihadist mobilization that targeted vulnerable youth, manipulated public perception, and orchestrated political and religious agitation. This section presents an in-depth analysis of selected viral disinformation campaigns, exploring their narrative mechanics, digital dissemination strategies, and their mobilizing impact on jihadist sympathizers and networks.
10.1. Case Selection Criteria
These case studies were selected based on three criteria: (1) the virality and digital reach of the content, (2) documented real-world effects (protests, violence, or recruitment), and (3) the presence of algorithmic or coordinated amplification mechanisms. They demonstrate how the digital battlefield in Bangladesh was not only informational but also ideological, psychological, and logistical.
10.2. Case Study 1: ‘Hijab Girl Martyred’ – TikTok as a Tool of Emotional Mobilization
10.2.1. The Incident
In late July 2024, a video surfaced on TikTok showing a bloodied teenage girl lying on the street, her hijab partially torn, while bystanders wailed in anguish. The caption in Bengali read: ‘She was killed for wearing hijab. This is secularism?’ The video rapidly spread across platforms, garnering over 3 million views within 48 hours. Popular Facebook pages and Telegram channels republished the video with additional commentary accusing the police of blasphemous conduct and intentional murder.
10.2.2. The Disinformation Dynamic
Subsequent investigations revealed that the video was not from Bangladesh, but from an unrelated traffic accident in Pakistan in 2022. It had been digitally altered to add background screams in Bengali and overlaid with Islamic symbols and dramatic Quranic recitations. Despite this, the emotional appeal was so potent that fact-checking by Rumor Scanner Bangladesh and Dismislab struggled to reach the same level of engagement.
The TikTok algorithm, designed to promote content based on watch time and engagement (likes, comments, shares), amplified the video significantly before moderation tools could respond. Users stitched the video with personal reactions, sermons, and calls for vengeance, further multiplying its reach.
10.2.3. Mobilization Outcomes
In the days following the video’s peak virality, youth-led demonstrations broke out in Chattogram, Rajshahi, and Sylhet. Protesters demanded ‘Justice for Hijabi Martyrs,’ unaware of the fabricated nature of the incident. Posters with the girl’s image (now erroneously named Shaheeda Sumaiya) were printed and circulated in madrasa networks. Several young men were reportedly recruited into underground Telegram groups, citing this video as a pivotal moment in their radicalization.
10.3. Case Study 2: ‘Quran Burnt in Police Raid’ – The Weaponization of Religious Sentiment
10.3.1. The Incident
On August 3, 2024, a Facebook live video claimed to show law enforcement officials entering a madrasa dormitory and throwing Quranic texts to the ground during a search operation. The video was grainy, lasted 37 seconds, and the faces of the officers were not visible. It spread under hashtags like #QuranBurning, #StateTerror, and #IslamUnderAttack.
10.3.2. Narrative Engineering and Viral Spread
Despite the lack of verifiable evidence, the emotional and symbolic power of the claim triggered an immediate digital reaction. Hundreds of Islamic content creators, religious scholars, and lay users amplified the footage, layering it with quotes from the Hadith and invoking the sanctity of the Quran.
Twitter/X and Facebook engagement soared as the content was shared by influential Islamist influencers with large followings. The platforms’ recommendation engines began funneling users interested in Islamic content toward increasingly radical interpretations of the incident. Notably, YouTube videos titled ‘Proof of Kufr in Government Forces’ and ‘Revenge is Imaan’ saw traffic spikes during this period.
10.3.3. Algorithmic Impact and Mobilization
A significant portion of the spread occurred due to automated sharing via bots, identified by cyber forensic researchers as originating from proxy servers in Malaysia and Qatar. These bots posted inflammatory content every two minutes, using pre-coded scripts to tag trending keywords. By the time Bangladeshi law enforcement issued a denial and CCTV footage disproved the allegation, the damage was done.
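A posting cadence as regular as the two-minute intervals described here is itself a forensic signal: human posting is bursty, while scripted posting is metronomic. One simple way to operationalize this, sketched below with illustrative thresholds and fabricated timestamps (none of this reflects the actual tooling used by investigators), is to measure the coefficient of variation of inter-post gaps:

```python
from statistics import mean, pstdev

def is_metronomic(timestamps, max_cv=0.1, min_posts=5):
    """Flag an account whose posting intervals are suspiciously regular.
    timestamps: sorted posting times in seconds since some epoch.
    A coefficient of variation near zero (e.g. a post every ~120 s)
    suggests scripted posting. Thresholds here are illustrative only."""
    if len(timestamps) < min_posts:
        return False  # not enough posts to judge regularity
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps) <= max_cv

bot_like = [0, 120, 240, 361, 480, 600]     # roughly every two minutes
human_like = [0, 45, 400, 410, 1800, 5000]  # bursty and irregular
```

The bot-like sequence yields a coefficient of variation near zero and is flagged; the human-like one is not. Real detection systems combine such temporal features with content and network signals rather than relying on any single threshold.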
Thousands attended Friday protests in Dhaka and Narayanganj, with several mosques calling for ‘jihad against oppressors.’ At least three firebomb attacks on government buildings that week were linked to groups influenced by this false narrative.
10.4. Case Study 3: ‘Digital Shaheed List’ – Telegram and the Creation of Martyrdom
In mid-August 2024, a list circulated on Telegram and encrypted WhatsApp groups under the title ‘Digital Shaheed List.’ It featured names, photos, and fabricated biographies of young individuals allegedly killed by government forces during the unrest. Many entries were false, taken from unrelated obituaries or even stock photos, but were presented as fallen heroes of an Islamic uprising.
10.4.1. Psychological Warfare and Social Proof
This list served a psychological purpose: to create an aura of movement and martyrdom. Each name was followed by emotive descriptions such as: ‘He smiled while chanting Allahu Akbar, moments before being shot in the chest.’ The effect was to romanticize death and instill in young viewers a sense of purpose through sacrifice.
The list was shared in TikTok videos, with dramatic montages, voiceovers from known Islamist preachers, and nasheeds in the background. The comment sections were flooded with messages like ‘May Allah count me among them soon,’ indicating a desire for inclusion in this symbolic jihadist pantheon.
10.4.2. Mobilization Effect
Security forces reported that several madrasa students apprehended during raids had the list saved on their phones and admitted to having memorized names as part of ‘spiritual preparation.’ In essence, the list functioned as both propaganda and recruitment material—leveraging digital myth-making to radicalize and inspire action.
10.5. Case Study 4: ‘The Foreign Enemy’ – Anti-Rohingya Disinformation and Strategic Diversion
10.5.1. Context and Content
A parallel disinformation campaign during this period falsely implicated Rohingya refugees as collaborators with state forces and enemies of Islam. A video purporting to show Rohingya men beating a Bangladeshi protester went viral on Facebook. The text claimed: ‘Even our Muslim brothers from Myanmar are with the government now. They are traitors.’
10.5.2. Strategic Use of Disinformation
This campaign served two purposes: (1) to divide Muslim solidarity across ethnic lines and (2) to redirect public anger away from powerful state institutions toward vulnerable refugee populations. It is suspected that this campaign was launched or at least amplified by non-jihadist actors interested in sowing chaos and distraction (Akhter, 2021).
The virality was achieved using fake Rohingya Facebook profiles with Bangladeshi phone numbers. These accounts posted inflammatory comments on major public pages, making it appear as if Rohingya voices were mocking the Bangladeshi Muslim protesters. The resulting anger spilled over into real-world harassment in refugee camps in Cox’s Bazar.
10.5.3. Counterproductive Consequences
Ironically, some jihadist factions later condemned these videos, realizing they undermined the pan-Islamic narrative essential to their mobilization strategy. This internal contradiction highlights the messy, contested terrain of disinformation where different ideological actors compete for narrative dominance.
10.6. Analysis: Common Patterns Across Case Studies
Across these diverse examples, several recurring dynamics emerge:
10.6.1. Emotional Triggers and Religious Symbolism
All case studies utilized religious symbols (hijab, Quran, martyrdom) to evoke strong emotional responses. These symbols are deeply embedded in the Bangladeshi socio-cultural and religious psyche and were therefore potent triggers.
10.6.2. Cross-Platform Synergy
Viral content rarely remained on a single platform. TikTok videos were reposted on YouTube; Facebook Lives were discussed in Telegram forums. This cross-platform ecosystem created feedback loops where users encountered the same narratives across digital spaces, reinforcing belief and reducing skepticism.
10.6.3. Fact-Checker Deficit
Despite active efforts by Bangladeshi fact-checkers like Rumor Scanner, Dismislab, and BD Fact Check, their counter-narratives failed to match the virality of false content. One reason is the emotional dryness of fact-based corrections compared to sensational, emotionally charged disinformation.
10.6.4. Youth-Centric Targeting
All campaigns appeared designed for digital-native audiences. Their formats—short videos, memes, gamified engagement—reflected an acute understanding of youth digital behavior. This indicates either insider knowledge of Bangladeshi youth culture or effective mimicry by trained digital propagandists.
10.7. Countermeasures and Ethical Challenges
While these case studies illustrate the danger of unchecked digital disinformation, they also highlight the complexity of designing effective countermeasures.
10.7.1. Government Response
The Bangladeshi government implemented sporadic internet shutdowns, banned specific websites, and arrested alleged propagandists. However, these measures were criticized as blunt instruments that affected innocent users and fostered greater mistrust in state intentions (HRW, 2024).
10.7.2. Platform Responsibilities
Tech platforms failed to intervene promptly. TikTok and Facebook removed harmful content after prolonged delays, often after the content had already achieved widespread reach. Their algorithmic opacity and inadequate investment in Bangla-language moderation exacerbated the problem (Time, 2024).
10.7.3. Ethical Counter-Disinformation
There is a need for ethically sound, community-led counter-disinformation efforts. Civil society must produce content that is not only truthful but also emotionally compelling. This may involve collaborations between journalists, religious scholars, digital artists, and youth influencers.
The July–August 2024 jihadist hostility in Bangladesh cannot be understood without recognizing the central role of digital disinformation. As these case studies show, false narratives—amplified by algorithms and embedded in religious emotion—possessed the power to mobilize, radicalize, and destabilize society. The challenge is no longer simply about ‘fact versus fiction,’ but about how narrative power, emotional resonance, and algorithmic logic combine to shape beliefs and drive collective action. Addressing this challenge requires interdisciplinary solutions—technological, psychological, political, and cultural.
11. The Role of Social Media Platforms and Data Capitalism
The eruption of young jihadist hostility in Bangladesh during July–August 2024 was not solely a product of offline grievances or ideological extremism. At its core, the crisis was digitally mediated, shaped by the architectures of social media platforms and the economic imperatives of data capitalism. These platforms—Facebook, TikTok, YouTube, X (formerly Twitter), and Telegram—were not merely passive conduits of information; they were active agents in filtering, amplifying, and monetizing content in ways that directly influenced the radicalization ecosystem. This section explores how the algorithmic logic and capitalist motivations of social media companies contributed to the spread of disinformation, manipulation of truth, and facilitation of extremist mobilization during the crisis.
11.1. Platform Design and Algorithmic Amplification
11.1.1. The Business Model of Engagement
Social media platforms are built upon an attention-based economy where user engagement—likes, shares, comments, watch time—is directly monetized (Zuboff, 2019). Algorithms prioritize content that provokes strong emotional reactions, leading to longer sessions and increased ad revenue. This logic inherently favors controversial, sensational, and polarizing content, often at the expense of accuracy and social harmony.
In Bangladesh’s July–August 2024 unrest, emotionally charged and religiously symbolic content—such as videos of supposed Quran desecrations or images of ‘hijabi martyrs’—achieved viral status due to high engagement metrics. TikTok’s For You feed, Facebook’s algorithmic newsfeed, and YouTube’s recommendation engine each played a role in surfacing extremist content to broader audiences (Tufekci, 2018).
11.2. Platform Inertia and Delayed Moderation
Despite clear evidence of escalating violence and digital incitement, platform responses were notably slow and inadequate. Content moderation systems were often reactive rather than proactive, relying heavily on user reports and lacking the linguistic and cultural sophistication required for rapid intervention in the Bengali digital sphere (Douek, 2020).
A study by Rumor Scanner Bangladesh (2024) found that fact-checked false information remained online for an average of 36 hours after debunking, a window long enough for misinformation to spread uncontrollably. In many cases, harmful content had already gone viral and influenced real-world events by the time it was flagged or removed.
11.3. Algorithmic Radicalization
11.3.1. Recommendation Engines and Ideological Funnels
Social media algorithms are not neutral. By design, they create ideological echo chambers—recommendation loops that push users toward increasingly extreme content based on prior engagement (Ribeiro et al., 2020). On YouTube, for instance, a user watching one sermon criticizing secularism might soon be recommended videos advocating for jihad. Similarly, TikTok’s algorithm quickly detects user preferences and can trap users in a feed saturated with martyrdom narratives and anti-state propaganda.
During the 2024 crisis, this algorithmic funneling process functioned as an accelerant of radicalization. Many young users, initially engaging with content out of curiosity or religious sentiment, were gradually immersed in a stream of conspiracy theories, anti-government vitriol, and glorified violence. The platform’s commercial objective—to increase time-on-site—aligned perfectly with the goals of radical actors seeking to recruit and indoctrinate.
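The funneling dynamic described above can be caricatured in a toy simulation. The model below is a deliberately simplified assumption, not a description of any platform’s proprietary recommender: items carry an “extremity” score, the user engages only with items near their current taste, more extreme items within that comfort zone earn more engagement, and consumption shifts the comfort zone. Under those assumptions alone, greedy engagement maximization drifts the feed toward the extreme:

```python
import random

def simulate_funnel(steps=50, reach=0.25, inertia=0.8, seed=7):
    """Toy model of an engagement-maximizing recommender. Items have an
    'extremity' in [0, 1]; the user engages only with items within `reach`
    of their current taste, and among those, the most extreme item earns
    the most engagement. Every parameter here is hypothetical."""
    rng = random.Random(seed)
    taste = 0.1  # start with mildly sensational preferences
    for _ in range(steps):
        candidates = [rng.random() for _ in range(20)]  # today's inventory
        in_reach = [x for x in candidates if abs(x - taste) < reach]
        if not in_reach:
            continue  # nothing close enough to engage with
        pick = max(in_reach)  # greedy: most engaging reachable item wins
        taste = inertia * taste + (1 - inertia) * pick  # consumption shifts taste
    return taste
```

No actor in the model intends radicalization: the drift is an emergent property of optimizing engagement step by step, which is the sense in which the platforms’ commercial objective aligned with the goals of radical recruiters.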
11.3.2. The Role of Auto-Play and Infinite Scroll
Features like auto-play and infinite scroll, while designed to enhance user experience, contribute to passive radicalization. Users are exposed to a continuous stream of curated content without active searching, reducing the likelihood of critical reflection or exposure to diverse viewpoints. This passive consumption model is particularly dangerous for youth in emotionally charged contexts, such as post-violence environments or during national crises (Cinelli et al., 2021).
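The engagement feedback loop described in this subsection can be caricatured with a toy simulation. This is purely illustrative: the categories, starting weights, and multiplicative "boost" are invented assumptions, not a description of any real platform's ranking system. The sketch only shows the abstract mechanism at issue: once every engagement raises the probability of similar recommendations, a small initial preference compounds until one category dominates the feed.

```python
import random

# Toy model of an engagement-driven recommender. The categories and the
# 1.5x boost per engagement are invented for illustration only.
CATEGORIES = ["news", "sports", "religion", "extremist"]

def recommend(weights, rng):
    """Pick a category with probability proportional to past engagement."""
    return rng.choices(list(weights), list(weights.values()))[0]

def simulate(steps=1000, boost=1.5, seed=42):
    """Each engagement multiplies that category's weight, so a slight
    initial preference compounds into a feed dominated by one category."""
    rng = random.Random(seed)
    weights = {c: 1.0 for c in CATEGORIES}
    weights["extremist"] = 1.1  # a mild initial curiosity, nothing more
    shown = {c: 0 for c in CATEGORIES}
    for _ in range(steps):
        c = recommend(weights, rng)
        shown[c] += 1          # the item is surfaced and consumed
        weights[c] *= boost    # engagement feedback: more of the same
    return shown
```

Run with different seeds and the simulated feed almost always collapses onto a single category, with the winner decided by the earliest engagements. That is the "funnel" dynamic the literature describes: the model has no ideology, only an objective of repeating whatever was engaged with last.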
11.4. Monetizing Extremism: Data Capitalism and Crisis
11.4.1. Exploiting User Data for Targeted Content
Data capitalism refers to the commodification of personal data for profit generation (Zuboff, 2019). Social media platforms harvest vast quantities of behavioral data to tailor content and advertisements. During the July–August 2024 unrest, data traces of user behavior—clicks on Islamic content, engagement with religious hashtags, time spent watching protest footage—were used by algorithms to personalize feeds, further entrenching radical narratives.
These predictive analytics mechanisms created a self-reinforcing cycle: more engagement with extremist content led to more recommendations of similar content, increasing both user dependency and platform revenue. The economic logic of maximizing engagement overrode ethical concerns about user safety or societal cohesion.
11.4.2. Advertising Infrastructure and Political Monetization
In addition to organic content, sponsored posts and monetized channels also contributed to the spread of disinformation. Some jihadist sympathizers created monetized YouTube channels featuring edited videos of protests, funerals, or martyrdom celebrations. These channels generated income via AdSense while also serving ideological purposes. Similarly, Facebook pages operated by politically affiliated religious groups paid for boosted posts that skirted moderation by using coded language and symbolic imagery.
These practices illustrate how data capitalism allowed for a monetized radicalization ecosystem, where ideological goals and financial incentives converged. The infrastructure of digital advertising—targeted, opaque, and profit-driven—was weaponized for propagandistic ends.
11.5. Platform Governance Failures
11.5.1. Inadequate Localization and Language Support
Bangla, one of the most widely spoken languages in the world, remains underrepresented in content moderation protocols. Platforms like Facebook and YouTube lack sufficient Bangla-language moderators and AI models capable of detecting nuanced expressions of incitement or hate (Ullah & Ahmed, 2022). As a result, harmful content circulated unchecked for extended periods.
During the crisis, terms like ‘kufr’ (disbelief), ‘shaheed’ (martyr), and ‘ghazwatul hind’ (the prophesied conquest of India) were used euphemistically to call for violence. In their specific cultural and religious contexts, these terms carried unmistakable radical implications, yet they escaped algorithmic scrutiny because, stripped of that context, they read as ordinary religious vocabulary.
11.5.2. Political Economy of Censorship
In some instances, social media companies were accused of collaborating—directly or indirectly—with state authorities to suppress dissenting voices while failing to act on actual hate speech or disinformation. Several journalists and secular activists reported the takedown of their posts critiquing the government’s failure to respond to mob violence, even as jihadist content remained online (NetBlocks, 2024).
This asymmetry reflects a broader tension between corporate interests, state pressure, and civil liberties. In navigating authoritarian contexts, platforms often prioritize regulatory compliance and business continuity over principled content governance.
11.6. The Rise of Encrypted and Alternative Platforms
11.6.1. Telegram and the Shadow Internet
While mainstream platforms played a key role in radicalization, many extremist networks operated via encrypted channels like Telegram and WhatsApp. These platforms allowed jihadist recruiters to organize, strategize, and radicalize away from public scrutiny. During July–August 2024, dozens of Telegram channels surfaced with names like ‘Ummah Warriors,’ ‘Digital Mujahid,’ and ‘Shaheed Media.’
These groups functioned as digital training camps, distributing ideological texts, bomb-making tutorials, and recruitment messages. Members often transitioned from passive consumption on TikTok or Facebook to active participation on Telegram—a progression made possible by cross-platform referrals.
11.6.2. Bypassing Moderation via Alternate Apps
As Facebook and YouTube removed select content, radical actors migrated to lesser-known platforms such as VK, Odysee, and decentralized networks like Mastodon. These platforms provided looser moderation and attracted users disillusioned with mainstream censorship. While the reach was smaller, the ideological commitment of users in these spaces was higher, leading to more focused recruitment.
11.7. Platform Responses and Their Limitations
11.7.1. Content Removal and AI Moderation
Following global backlash over disinformation and hate speech, platforms have invested in AI-driven moderation tools. However, the July–August 2024 unrest revealed the limitations of these systems. False positives (e.g., removing legitimate religious content) and false negatives (e.g., failing to catch incitement) undermined public trust.
Bangladeshi digital rights groups like Dismislab and Article 19 called for more transparent algorithmic policies, user appeals mechanisms, and greater accountability from platforms operating in the Global South. Yet, substantive reforms remain pending.
11.7.2. Tokenistic Community Guidelines and Trust & Safety Programs
Most platforms updated their community guidelines during or after the unrest. TikTok emphasized its prohibition on ‘hateful behavior and extremist content,’ and Facebook added a ‘misinformation’ disclaimer to posts. However, these efforts were often tokenistic—announced for PR purposes rather than backed by meaningful enforcement.
Trust & Safety teams tasked with content moderation in South Asia remained underfunded and undertrained. Moreover, Bangladeshi voices were underrepresented in policy-making forums and regional safety programs, limiting their influence on decisions affecting local users.
11.8. The Political Economy of Surveillance and Resistance
11.8.1. Surveillance Capitalism and Data Extraction
While social media companies framed themselves as platforms for expression, their business model fundamentally relied on surveillance—tracking user behavior to sell targeted advertising (Zuboff, 2019). This model is inherently exploitative, treating users not as citizens with rights but as data points to be mined.
During the 2024 crisis, the behavioral data of millions of Bangladeshi users—what they watched, whom they followed, what they typed—became raw material for algorithmic inference. Even radicalization trajectories were reduced to probabilities within machine-learning systems. This reductionism eroded privacy, autonomy, and the potential for civic resistance.
11.8.2. Digital Colonialism and Platform Power
The dominance of Western social media platforms in shaping public discourse in the Global South has led to accusations of digital colonialism—where foreign companies profit from user data and influence local politics without democratic accountability (Couldry & Mejias, 2019). In Bangladesh, this power asymmetry was evident in the platforms’ opaque decision-making and lack of responsiveness to local crises.
Many civil society actors are now calling for regional alternatives, platform cooperatives, and open-source social networks that prioritize user rights over profit. However, the path to digital sovereignty remains fraught with infrastructural and regulatory challenges.
The 2024 jihadist unrest in Bangladesh revealed the profound entanglement between social media platforms and the mechanisms of radicalization. These platforms, driven by data capitalism, played an active role in shaping the narratives, emotional responses, and behavioral trajectories of millions. From algorithmic radicalization to monetized martyrdom, the architecture of digital capitalism proved not only complicit in the crisis but essential to its unfolding.
Addressing this crisis demands more than better moderation—it requires a fundamental rethinking of the political economy of digital platforms. Only by recognizing the exploitative logic of data capitalism and holding platforms accountable to democratic values can we hope to create a safer and more equitable digital future.
12. State, Surveillance, and Counter-Narratives
As the July–August 2024 jihadist hostility surged across Bangladesh, the role of the state became increasingly prominent—both in terms of its visible repressive apparatus and its less-visible data-driven surveillance infrastructure. Confronted with digitally networked extremism, the state responded through a two-pronged approach: intensified surveillance and the deployment of counter-narratives. While such interventions aimed to suppress disinformation and restore civic order, they also raised serious concerns regarding authoritarianism, the erosion of civil liberties, and the efficacy of digital governance in a highly polarized society. This section critically examines the state’s use of surveillance technologies, the architecture of its counter-narrative initiatives, and the broader implications for democratic resilience in the digital age.
12.1. The State’s Surveillance Infrastructure
12.1.1. Expansion of Digital Surveillance Capabilities
In the aftermath of several jihadist incidents in 2024—including bombings in Rajshahi and attacks on secular bloggers in Dhaka—the Bangladeshi state intensified its digital surveillance capabilities. With support from Chinese and Israeli cybersecurity firms, the government expanded its capacity to monitor social media platforms, intercept encrypted communications, and deploy facial recognition technologies across urban areas (Ahmed & Rahman, 2022).
Surveillance tools were deployed through a network of intelligence agencies, including the Directorate General of Forces Intelligence (DGFI), National Telecommunication Monitoring Centre (NTMC), and Rapid Action Battalion (RAB). These agencies gained access to digital metadata from telecom providers and collaborated with platforms such as Facebook and Google under the guise of national security. Legal frameworks like the Digital Security Act (DSA) 2018 provided institutional legitimacy for these activities (Islam, 2021), albeit at the cost of due process and transparency.
12.1.2. Predictive Policing and Algorithmic Profiling
By mid-2024, reports emerged of the government using predictive analytics to identify ‘pre-criminal behavior’ based on online activities. Individuals who searched for jihadist literature, engaged with specific hashtags (e.g., #Shaheed2024), or frequented particular Telegram channels were flagged by AI systems for offline surveillance (Dismislab, 2024). This model of predictive policing—imported from international counterterrorism paradigms—risked conflating dissent with radicalization, particularly when targeted against young men from marginalized religious or ethnic communities.
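The kind of keyword-and-hashtag profiling these reports describe can be sketched as a naive flagger. Everything below is hypothetical: the watch-list terms and sample posts are invented for illustration, and no claim is made that any agency's system worked this way. The sketch simply makes the false-positive mechanism concrete: matching salient tokens without context treats a fact-checker quoting a hashtag exactly like an incitement post.

```python
# Hypothetical watch-list of context-free terms (illustrative only).
FLAG_TERMS = {"#shaheed2024", "ghazwatul hind"}

def flag(post: str) -> bool:
    """Flag a post if any watched term appears, ignoring who is speaking,
    why, or in what register. This context-blindness is the core problem."""
    text = post.lower()
    return any(term in text for term in FLAG_TERMS)

# Invented examples: a debunking post that quotes the hashtag is flagged
# exactly like an incitement post, i.e. a false positive.
fact_check = "Fact-check: the viral #Shaheed2024 claim about Dhaka is FALSE."
incitement = "Rise up, soldiers of #Shaheed2024."
assert flag(fact_check) and flag(incitement)
```

Real classifiers are more elaborate than this, but the failure mode is the same one seen in the Narayanganj case discussed below: salient tokens without context conflate reporting, criticism, and dissent with the content being policed.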
The use of biometric databases such as the National ID system and SIM registration data further enabled cross-platform tracking. In several cases, digital footprints led to the preemptive detention of suspects, with little public evidence of wrongdoing. This raised critical questions about the balance between national security and digital rights.
12.3. Social Media Governance and State Control
12.3.1. Collaboration and Coercion of Platforms
During the crisis, the Bangladeshi government exerted pressure on social media platforms to remove content deemed ‘anti-state,’ ‘blasphemous,’ or ‘inciteful.’ While major platforms like Meta (Facebook) and Google complied with takedown requests, critics argued that the compliance was unevenly applied. Posts criticizing the state’s inaction during mob violence were often removed under vague community guidelines, whereas explicitly jihadist content remained online longer than expected (NetBlocks, 2024).
The blurred line between legitimate security concerns and political censorship highlighted the coercive power of the state over global tech platforms in a geopolitically sensitive region. This dynamic reflected a broader trend of digital authoritarianism, in which governments manipulate content moderation policies to entrench their own narratives and silence dissent (Feldstein, 2019).
12.3.2. Internet Shutdowns and Communication Blackouts
On several occasions during July–August 2024, the state imposed targeted internet shutdowns in regions experiencing unrest, such as Bogura, Mymensingh, and Chittagong Hill Tracts. These shutdowns—often lasting 12–36 hours—were ostensibly aimed at curbing the spread of rumors. However, they also disrupted emergency services, media reporting, and civil society coordination (Access Now, 2024).
Research shows that such shutdowns often exacerbate, rather than mitigate, disinformation by pushing users to encrypted or offline rumor networks, where fact-checking is even less feasible (Arnaudo, 2017). In this regard, the state’s response inadvertently contributed to the very chaos it sought to control.
12.4. Counter-Narratives: Strategy and Execution
12.4.1. State-Run Counter-Narrative Programs
In parallel with its surveillance measures, the Bangladeshi government launched a series of counter-narrative campaigns aimed at discrediting jihadist ideologies and restoring public trust. These initiatives were coordinated by the Ministry of Information and Broadcasting, the Islamic Foundation, and the Cyber Crime Unit of the Police.
Content ranged from religious sermons by moderate clerics to multimedia videos debunking jihadist claims. Platforms such as Facebook, YouTube, and TikTok were utilized to disseminate government-approved messages, using hashtags like #TrueIslam, #PeaceInFaith, and #DigitalDesh. In some cases, influencers and micro-celebrities were paid to promote these messages, adding a layer of authenticity to state narratives (UNDP, 2024).
While these programs demonstrated a commitment to ideological engagement, their credibility was often undermined by their association with state propaganda. Youths who were skeptical of government intentions—especially in areas where state violence or corruption was rampant—remained unpersuaded by official messages.
12.4.2. Civil Society and Non-State Actors
Recognizing the limitations of state messaging, several non-governmental organizations (NGOs), think tanks, and religious institutions also launched counter-narrative initiatives. Groups like BRAC, Centre for Governance Studies (CGS), and Bangladesh Youth Internet Forum worked with imams, teachers, and digital activists to build community resilience against extremist content.
One innovative initiative, ‘Digital Imam,’ trained clerics in rural mosques to use WhatsApp and Facebook Live to deliver sermons against extremism. Another project, ‘Cyber Peace Corps,’ mobilized university students to fact-check viral rumors and create meme-based counter-messaging campaigns in Bangla and the Chittagonian dialect.
These grassroots efforts proved more effective than top-down state propaganda in building trust and changing perceptions, though their reach and sustainability remained limited by resource constraints.
12.5. Challenges and Criticisms
12.5.1. The Problem of Overreach and False Positives
One of the major criticisms of state-led surveillance during the 2024 unrest was the over-identification of suspects based on ambiguous digital behavior. Young men following Islamic pages or expressing frustration with corruption were sometimes flagged as ‘persons of interest.’ In one documented case in Narayanganj, a 17-year-old boy was detained and interrogated after posting a meme criticizing police brutality, which algorithms misclassified as extremist content (Daily Star, 2024).
Such false positives not only traumatized individuals but also eroded public trust in state institutions. Communities became hesitant to report genuine threats for fear of implicating innocent neighbors. This undermined the state’s own counterterrorism efforts by alienating potential allies in civil society.
12.5.2. Lack of Transparency and Oversight
Despite the far-reaching powers of state surveillance and content control, mechanisms for transparency and accountability remained weak. There was no public database of takedown requests, no independent audit of AI surveillance systems, and no meaningful oversight body to review abuses of digital power.
This opacity allowed for selective enforcement. While secular critics and opposition activists were aggressively surveilled, politically connected groups—such as radical factions of Hefazat-e-Islam—operated with relative impunity, highlighting the politicization of counter-extremism efforts.
12.6. Toward a Democratic Digital Security Framework
12.6.1. Rights-Based Digital Governance
In response to growing criticism, several academics and digital rights organizations have advocated for a rights-based approach to digital governance. This would entail revising laws like the Digital Security Act to include clearer definitions of extremism, stronger judicial oversight, and protections for whistleblowers and journalists.
It would also require investments in linguistic and cultural training for content moderators, both within government and in tech companies. A democratic digital security framework would ensure that efforts to combat extremism do not come at the cost of freedom of expression, religious plurality, or political dissent (Chakraborty & Rahman, 2023).
12.6.2. Participatory Counter-Narrative Ecosystems
Effective counter-narratives cannot be manufactured by states alone; they must emerge from communities. Participatory models that involve religious scholars, teachers, artists, women, and youth are more likely to resonate with diverse audiences. Moreover, they offer a bottom-up approach to resilience that is adaptive, authentic, and context-specific.
Encouraging critical media literacy in schools, supporting youth-led digital content creation, and fostering open debates on religion and nationalism are essential strategies to build long-term resistance to extremist propaganda.
The 2024 jihadist unrest in Bangladesh revealed both the possibilities and perils of state-led interventions in the digital sphere. While surveillance technologies and counter-narrative campaigns played a role in containing violence, they also exposed structural deficiencies—overreach, politicization, lack of accountability, and epistemic disconnect from the public.
Moving forward, the challenge lies in designing digital security architectures that are effective yet democratic, protective yet rights-respecting. Surveillance and counter-narratives must not become instruments of control but tools for building a pluralistic, informed, and resilient digital society. Without such a recalibration, the risk of digital authoritarianism may outweigh the threat of extremism itself.
13. Policy Analysis and Digital Governance in Bangladesh
The digital landscape of Bangladesh has evolved rapidly over the past decade, fundamentally reshaping the political, social, and security environment. With over 100 million internet users as of 2024, the country faces complex challenges related to the governance of digital spaces, particularly amid rising extremist violence and information disorder during the July–August 2024 young jihadist hostility. This section critically analyzes the existing policy framework governing digital spaces in Bangladesh, examining its strengths, weaknesses, and the emerging dynamics of digital governance. It highlights the interaction between regulation, corporate platform policies, state security interests, and citizen rights, emphasizing the need for a balanced and rights-respecting governance model to mitigate disinformation and promote democratic resilience.
13.2. Overview of Digital Governance Framework in Bangladesh
13.2.1. Legal Instruments Governing the Digital Sphere
Bangladesh’s digital governance is primarily regulated through a set of laws and regulations enacted over the last decade. The Digital Security Act (DSA) 2018 stands as the cornerstone of the regulatory framework designed to address cybercrime, disinformation, and online extremism. The Act criminalizes the spread of fake news, defamation, digital harassment, and content that is deemed a threat to ‘state security’ or ‘social harmony’ (Rahman & Alam, 2020).
However, while the DSA was intended to modernize Bangladesh’s cybersecurity environment, it has been criticized for its broad and vague definitions that often lead to arbitrary enforcement. Numerous journalists, activists, and ordinary citizens have been charged under the DSA for allegedly spreading ‘false information,’ raising concerns about freedom of expression and judicial due process (Human Rights Watch, 2023).
Other important laws include the Information and Communication Technology (ICT) Act 2006 (amended in 2013) and the Broadcasting Act 2018, which govern online content and electronic communications. Together, these laws constitute a tightly regulated digital space where the government wields significant authority over online expression and platform operations.
13.2.2. Institutional Architecture and Enforcement Agencies
Several agencies play a role in digital governance enforcement. The Cyber Crime Unit of the Police is the primary body investigating cyber offenses, often working in coordination with the Rapid Action Battalion (RAB) and the Directorate General of Forces Intelligence (DGFI) for cases related to national security. The Bangladesh Telecommunication Regulatory Commission (BTRC) oversees telecom infrastructure and issues directives for content blocking or internet shutdowns (Ahmed, 2023).
While this institutional multiplicity allows for comprehensive oversight, it also risks overlapping jurisdictions, lack of accountability, and inconsistent application of the law. This situation was particularly evident during the 2024 unrest, where contradictory directives about platform regulation and content removal created confusion and mistrust among users and providers.
13.3. The Role of Social Media Platforms and Corporate Governance
13.3.1. Platform Compliance with State Regulations
Global social media platforms such as Facebook, Twitter, YouTube, and TikTok have become primary venues for information dissemination in Bangladesh. During the July–August 2024 jihadist hostility, these platforms faced intense pressure from the government to monitor, moderate, and remove extremist and disinformation content swiftly.
Facebook’s local moderation teams reportedly increased content takedowns related to ‘incitement’ and ‘hate speech,’ often in compliance with government takedown requests (Meta Transparency Report, 2024). However, independent observers noted uneven enforcement: content criticizing the government or religious authorities was more likely to be censored than jihadist propaganda, revealing a problematic alignment between corporate moderation policies and state interests (Chowdhury, 2024).
This dynamic underscores the challenges of platform governance in contexts where state security imperatives clash with digital rights. Moreover, the lack of transparent reporting mechanisms and grievance redressal processes left many users without recourse against wrongful censorship.
13.3.2. Algorithmic Amplification and Responsibility
The algorithms powering social media platforms prioritize engagement, often amplifying sensationalist, polarizing, or emotionally charged content—including extremist propaganda and misinformation (Gillespie, 2018). In Bangladesh, content related to jihadist ideologies or inflammatory political narratives gained rapid virality due to algorithmic boosts, contributing to the mobilization of young militants.
Despite growing evidence of harm, platforms have been slow to adapt their algorithms or introduce meaningful moderation at scale. The opaque nature of these recommendation systems and their commercial incentives for maximizing user attention have complicated efforts to regulate algorithmic harms within the country (Tufekci, 2019).
13.4. Challenges in Digital Governance: Balancing Security, Rights, and Innovation
13.4.1. Security vs. Freedom of Expression
The digital governance approach in Bangladesh has been criticized for privileging security concerns at the expense of fundamental rights. While combating violent extremism and misinformation is essential, the DSA and related policies have been employed disproportionately against dissenting voices, journalists, and minority communities (Amnesty International, 2023).
This tension creates a chilling effect on free expression and undermines public trust in digital spaces. Critics argue that sustainable counterterrorism requires not only state surveillance but also inclusive dialogue, community engagement, and respect for civil liberties (Zaman, 2024).
13.4.2. Capacity Constraints and Digital Literacy Gaps
Effective digital governance demands not only robust laws but also capable institutions and an informed citizenry. Bangladesh faces significant capacity constraints: law enforcement agencies often lack technical expertise to investigate cybercrimes properly; judicial processes are slow and ill-equipped for digital evidence; and many users lack digital literacy to discern misinformation (Islam & Chowdhury, 2023).
Efforts to improve digital literacy are underway, including government-led campaigns and NGO initiatives. However, the scale and complexity of misinformation networks, combined with low levels of trust in official sources, continue to hamper governance efforts.
13.4.3. The Digital Divide and Inclusion
Bangladesh’s rapid digitalization masks persistent inequalities in internet access and digital skills. Rural populations, women, and marginalized communities are disproportionately excluded from meaningful participation in digital governance conversations (World Bank, 2023). This exclusion risks deepening social divides and leaving vulnerable groups susceptible to misinformation and radicalization.
Governance frameworks must therefore prioritize inclusive policies that address the digital divide and empower diverse stakeholders in shaping digital futures.
13.5. Emerging Trends and Policy Innovations
13.5.1. Multi-Stakeholder Governance Approaches
Recognizing the limitations of top-down regulation, some policymakers and civil society actors advocate for multi-stakeholder governance models. These frameworks promote collaboration among government agencies, tech companies, academia, civil society, and user communities to co-develop policies that balance security, rights, and innovation (Khan & Siddiqui, 2024).
Such inclusive approaches enable context-specific solutions to misinformation and online extremism, building trust and legitimacy for digital governance.
13.5.2. Data Privacy and Protection Initiatives
Data protection remains a nascent area in Bangladesh. The absence of comprehensive data privacy legislation exposes users to exploitation by both state and private actors. In 2024, preliminary steps toward a Personal Data Protection Act were initiated, emphasizing consent, transparency, and accountability in data processing (Rahim & Hossain, 2024).
Effective data governance is essential not only to protect citizens’ rights but also to enhance trust in digital services and platforms—key factors in combating disinformation and radicalization.
13.5.3. Digital Literacy and Counter-Misinformation Campaigns
The government and NGOs are increasingly investing in digital literacy programs aimed at empowering citizens to critically assess online information. Initiatives like ‘Digital Bangladesh Literacy Drive’ and partnerships with universities aim to integrate media literacy into formal education curricula (UNICEF Bangladesh, 2024).
Such efforts seek to inoculate populations against misinformation and encourage active citizenship, complementing regulatory frameworks.
13.6. Policy Recommendations
Based on the analysis, several recommendations emerge to strengthen digital governance in Bangladesh:
Reform the Digital Security Act to clarify definitions, incorporate human rights safeguards, and establish independent oversight mechanisms to prevent misuse.
Enhance transparency and accountability in content moderation by requiring social media platforms to publish localized transparency reports and establish accessible appeal processes.
Invest in capacity-building for law enforcement, judiciary, and regulators, with emphasis on technical expertise and digital evidence management.
Promote multi-stakeholder dialogue involving government, civil society, academia, and private sector actors to co-create governance frameworks that reflect diverse interests and expertise.
Advance comprehensive data protection legislation that safeguards user privacy and regulates state surveillance within constitutional bounds.
Expand digital literacy programs, targeting marginalized communities and integrating critical media skills into formal education.
Foster inclusive governance by ensuring marginalized voices are heard in policy-making processes, thereby enhancing social cohesion and resilience.
The July–August 2024 jihadist hostility in Bangladesh highlighted the critical importance of effective digital governance to counter misinformation, extremism, and propaganda. While the state possesses significant legal and institutional tools, the current framework struggles to balance security imperatives with democratic freedoms, transparency, and inclusion.
Moving forward, Bangladesh must embrace a rights-based, participatory approach to digital governance—one that fosters trust, protects rights, builds capacity, and empowers citizens. Only through such a comprehensive and nuanced policy architecture can the country hope to mitigate the harms of digital disinformation while safeguarding the democratic fabric of society.
14. Discussion
This discussion synthesizes the complex dynamics of rumors, fake news, disinformation, propaganda, and social media algorithms in the context of the July–August 2024 young jihadist hostility in Bangladesh. The preceding sections revealed a multifaceted ecosystem where digital technologies, socio-political tensions, and institutional frameworks intersect. This section critically examines these interactions, highlighting the implications for public discourse, security, governance, and social cohesion. It also situates the Bangladesh case within broader theoretical and empirical scholarship on digital misinformation and extremist mobilization, providing nuanced insights relevant for policymakers, researchers, and civil society.
14.1. The Interplay Between Digital Media and Jihadist Hostility
The research demonstrates that digital media platforms in Bangladesh serve as both catalysts and conduits for jihadist mobilization. The rapid spread of rumors, fake news, and disinformation during July–August 2024 significantly intensified the youth’s susceptibility to extremist narratives. This aligns with existing literature on algorithmic radicalization, which shows how engagement-based recommendation systems amplify extremist content, creating ‘filter bubbles’ and echo chambers (Cinelli et al., 2021; Zannettou et al., 2018).
Bangladeshi jihadists strategically exploited social media’s affordances—anonymity, virality, and interactive feedback—to propagate their ideology and recruit supporters. Notably, content often blended religious symbolism with socio-political grievances, resonating with youth facing economic and identity uncertainties (Stern & Berger, 2015). This indicates the digital sphere’s function as an extension of offline structural issues rather than a standalone radicalization driver.
Furthermore, the research highlights how misinformation networks were intricately intertwined with propaganda campaigns designed to delegitimize state actors and foster community polarization. Such strategies undermine trust in official narratives, fragment social cohesion, and complicate counterterrorism efforts (Marwick & Lewis, 2017). These findings resonate with the broader phenomenon of ‘information disorder’ articulated by Wardle and Derakhshan (2017), emphasizing the need to understand not just content but also context and intent.
14.2. Algorithmic Manipulation and Platform Governance
The study underscores the pivotal role of social media algorithms in shaping the information environment. Algorithms optimized for engagement tend to privilege sensationalist, polarizing content—often favoring jihadist propaganda and disinformation due to their emotional salience (Tufekci, 2019). This inadvertently facilitated the virality of harmful narratives during the conflict period, amplifying risks of radicalization and violence.
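The engagement-optimization dynamic described above can be illustrated with a deliberately simplified ranking function. Everything in this sketch is hypothetical — the signal weights, the `outrage_score` (imagined as the output of an emotion classifier), and the feed itself are illustrative assumptions, not any platform's actual objective, which remains proprietary. The point is only to show how a score that rewards re-share behavior and emotional salience can surface inflammatory content above factual content:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    outrage_score: float  # hypothetical classifier output in [0, 1]

def engagement_score(p: Post) -> float:
    # Toy objective: shares and comments are weighted above likes
    # (re-share actions drive virality), and emotionally charged
    # content receives a large bonus via the outrage term.
    return p.likes + 3 * p.shares + 2 * p.comments + 50 * p.outrage_score

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed purely by predicted engagement, descending.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Community health notice", likes=120, shares=4, comments=10, outrage_score=0.05),
    Post("Inflammatory rumor about a rival group", likes=40, shares=30, comments=25, outrage_score=0.9),
])
```

Under these assumed weights, the rumor (score 225.0) outranks the factual notice (score 154.5) despite having one third of the likes — a minimal version of the amplification pattern the literature attributes to engagement-optimized feeds.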
The ambivalent role of platforms emerges clearly: while they have taken some steps to moderate content, their corporate incentives often conflict with public safety goals. The limited transparency in content moderation practices and the opaque functioning of algorithms restrict accountability (Gillespie, 2018). This finding echoes global critiques of platform capitalism’s externalities, where private companies wield enormous influence over public discourse without commensurate oversight (Srnicek, 2017).
In the Bangladeshi context, platform compliance with government takedown requests has raised concerns about censorship and suppression of dissenting voices. This dynamic complicates the digital governance landscape, revealing tensions between state security priorities and digital rights. Such tensions align with reports from other authoritarian and semi-authoritarian contexts where digital repression coexists with attempts to control extremist content (MacKinnon, 2012).
14.3. The Role of State Surveillance and Counter-Narratives
The state’s increased surveillance and counter-narrative strategies during the July–August 2024 period represent a double-edged sword. On one hand, enhanced monitoring of digital spaces helped identify and disrupt extremist networks, potentially averting larger-scale violence. On the other hand, the pervasive use of broad and vaguely worded laws like the Digital Security Act has led to overreach, targeting political opponents, journalists, and minority groups (Rahman & Alam, 2020).
This overreach risks eroding public trust and may inadvertently drive vulnerable populations toward clandestine digital spaces less subject to state control, potentially intensifying radicalization (Weimann, 2015). Thus, counterterrorism efforts need to balance robust security responses with respect for civil liberties to avoid exacerbating grievances.
Counter-narrative campaigns, while important, have faced challenges in resonance and reach. Government-led messaging often suffers from credibility deficits, especially in marginalized communities skeptical of official sources (Zaman, 2024). Civil society and religious leaders, when engaged authentically, have shown promise in crafting effective counter-narratives that combine theological refutation with socio-economic empowerment narratives (Neumann, 2013). This suggests the value of pluralistic, community-based approaches.
14.4. Implications for Digital Governance and Policy
The findings have important implications for digital governance in Bangladesh. The current regulatory framework, dominated by the Digital Security Act and related laws, reveals significant limitations in addressing the nuances of online extremism and misinformation. While legal enforcement is necessary, it is insufficient without complementary measures such as algorithmic transparency, platform accountability, and digital literacy enhancement (Khan & Siddiqui, 2024).
A more holistic governance approach would integrate multi-stakeholder participation involving the state, platforms, civil society, academia, and users. This would facilitate the co-creation of contextually relevant policies that protect rights while enhancing security. For instance, platform companies could be incentivized to develop algorithmic audits and transparency reports specific to Bangladesh, enabling public scrutiny and adaptive regulation.
Digital literacy emerges as a critical pillar. Given the widespread vulnerability to misinformation due to low digital literacy and socio-economic precarity, investments in education and public awareness are imperative. These should be culturally sensitive and accessible across urban-rural divides (Islam & Chowdhury, 2023).
14.5. Societal and Psychological Dimensions
Beyond policy and technology, the study reveals underlying societal and psychological factors shaping the digital misinformation ecosystem. The emotional power of rumors and fake news—rooted in fear, uncertainty, and identity anxieties—fuels their diffusion and acceptance (Pennycook & Rand, 2018). The socio-political polarization amplified by disinformation further entrenches in-group/out-group dynamics, creating fertile ground for extremist narratives (Sunstein, 2018).
The July–August 2024 events showed how young people, navigating precarious livelihoods and contested identities, became particularly susceptible to online radicalization. This highlights the importance of addressing structural inequalities and providing alternative avenues for youth engagement and empowerment as part of a comprehensive counter-extremism strategy (Stern & Berger, 2015).
14.6. Limitations and Areas for Future Research
This study primarily relies on qualitative content analysis, case studies, and secondary data sources. While these methods provide rich contextual insights, they may not capture the full scope of algorithmic processes or the lived experiences of all affected communities. Future research could benefit from mixed-method approaches, including large-scale surveys, network analysis, and direct engagement with social media users.
Moreover, longitudinal studies tracking the evolution of misinformation and extremist mobilization over time would offer valuable perspectives on the long-term effects of digital governance interventions.
15. Conclusions and Recommendations
15.1. Conclusions
The research has investigated the dynamics of digital information ecosystems in the wake of the July–August 2024 youth jihadist hostilities in Bangladesh, revealing a deeply entangled web of misinformation, extremist content, algorithmic manipulation, propaganda, and state response. These phenomena collectively contributed to the radicalization and mobilization of youth, undermined social cohesion, and destabilized the national security apparatus. The digital realm has become a decisive battleground for ideological contestation, state legitimacy, and civil liberties.
The study’s core finding is that disinformation and radicalization processes are not isolated incidents but are systematically sustained by algorithmic platforms, political opportunism, socio-economic alienation, and digital illiteracy. The hybrid nature of the threats—blending online and offline activities—requires an integrated framework of analysis and response. This synthesis of propaganda strategies and computational logics suggests that digital platforms are not merely neutral infrastructures but active agents in the production and circulation of truth claims (Gillespie, 2018; Marwick & Lewis, 2017).
Equally critical is the observation that state responses, often veiled in the rhetoric of national security, have leaned heavily toward surveillance and punitive legal measures, such as the application of the Digital Security Act (2018). While these have led to certain tactical successes in dismantling jihadist networks, they have also raised concerns regarding freedom of expression, digital rights, and civic engagement (Rahman & Alam, 2020). Counter-narrative strategies, though conceptually promising, have lacked resonance and inclusivity, often alienating the very communities they intend to rehabilitate.
In Bangladesh, as in many Global South contexts, the challenge lies in crafting a digital governance model that is both technically robust and normatively democratic. A sustainable response must go beyond technical fixes and repressive laws, embedding accountability, community participation, digital education, and platform responsibility into the core of national policy frameworks.
In conclusion, the July–August 2024 youth jihadist hostilities in Bangladesh illustrate the profound challenges posed by the convergence of misinformation, propaganda, and algorithmic amplification within a complex socio-political context. Addressing these challenges requires a multidimensional approach—one that recognizes the technical mechanisms of digital platforms, the socio-political drivers of extremism, and the necessity of rights-based, inclusive governance frameworks.
Bangladesh’s experience offers broader lessons for similarly situated countries grappling with digital disinformation and violent extremism. Policies must be nuanced, adaptable, and participatory, balancing security imperatives with human rights, to foster resilient digital societies capable of withstanding the distortions of the digital age.
15.2. Recommendations
Based on the findings and insights from this study, several interlinked and actionable recommendations are proposed, grouped into five thematic areas: Policy and Regulation, Platform Responsibility, Digital Literacy and Education, Community-Centered Counter-Narratives, and Research and Capacity Building.
1. Reform the Digital Security Act (DSA):
While the DSA has been invoked as a tool for curbing jihadist propaganda and fake news, its vague definitions and discretionary enforcement have frequently violated human rights. The government should review and amend the DSA to include clearer definitions of harmful content, ensure judicial oversight, and incorporate safeguards for freedom of expression (Khan & Siddiqui, 2024).
2. Establish an Independent Digital Oversight Authority:
Bangladesh needs an independent, multi-stakeholder regulatory body that can oversee digital content governance, ensure platform accountability, and mediate disputes between the state, platforms, and users. This body should include civil society, legal experts, technologists, and representatives from affected communities.
3. Implement a National Digital Safety Strategy:
A holistic digital safety framework should be adopted to integrate cybercrime prevention, extremist content moderation, data protection, digital literacy, and user privacy under a unified strategic policy, in line with best practices from the Global Internet Forum to Counter Terrorism (GIFCT) and UNESCO guidelines on information integrity.
4. Increase Algorithmic Transparency:
Major platforms like Meta, YouTube, and TikTok should be mandated—either through law or soft regulatory frameworks—to disclose how their algorithms amplify content, especially during crises. Transparency reports should be localized for Bangladesh and include data on takedowns, engagement metrics, and content amplification trends (Srnicek, 2017; Tufekci, 2019).
5. Build Local Content Moderation and Language Capacity:
Global platforms must invest in Bengali language moderation, employing AI and human moderators with cultural and contextual expertise. Failure to do so has already led to misclassification of content and inadequate responses to dangerous material, as seen during the 2024 unrest (Zannettou et al., 2018).
6. Enforce Platform Due Diligence:
Platforms must demonstrate due diligence in identifying and removing harmful content under national legal frameworks. This includes the use of pre-warning systems, collaboration with local fact-checkers, and content labeling mechanisms to combat disinformation campaigns in real time.
7. National Digital Literacy Campaigns:
Vulnerability to misinformation is rooted in poor digital literacy, especially among youth and rural populations. The Ministry of Education and ICT Division should launch nationwide public education initiatives, integrating critical media literacy into school curricula and community-based learning programs (Islam & Chowdhury, 2023).
8. Youth Empowerment Through Digital Citizenship:
Given that youth are both targets and agents of digital misinformation and radicalization, programs should be developed that train young people in responsible content creation, fact-checking, and participatory governance. Civil society organizations and NGOs can serve as key intermediaries in these efforts.
9. Localize Counter-Narrative Production:
Counter-narratives must move beyond generic state messaging to reflect the lived realities, language, and grievances of communities. Partnerships with religious scholars, local influencers, and survivors of radicalization can produce more resonant content that challenges extremist interpretations (Neumann, 2013).
10. Establish Digital Peace Hubs:
Community-based ‘digital peace hubs’ could be established in vulnerable areas, serving as safe spaces for online dialogue, de-escalation training, and rehabilitation. These hubs could act as early warning systems for radicalization and misinformation trends, integrating offline intervention with digital monitoring.
11. Strengthen Academic and Policy Research:
There is a pressing need to build interdisciplinary research capacity in Bangladesh on digital extremism, information ecosystems, and algorithmic governance. Government and international donors should fund dedicated research centers and fellowships to study these issues in culturally relevant and policy-impactful ways (Zaman, 2024).
12. Develop Threat Intelligence Systems:
Security agencies should work collaboratively with academic institutions and private firms to develop real-time threat intelligence systems. These systems can map emerging jihadist networks, track disinformation trends, and offer predictive analytics to inform intervention.
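One building block of such a threat-intelligence system is burst detection over mention counts for tracked rumor keywords. The sketch below is an assumed, minimal design — the window size, threshold, and per-keyword hourly counts are all illustrative parameters, not a description of any deployed system — showing how a monitoring dashboard might flag hours where a keyword suddenly spikes above its recent baseline:

```python
from collections import deque

def spike_alerts(hourly_counts, window=24, threshold=3.0):
    """Flag hours where a keyword's mention count exceeds `threshold`
    times its trailing-window mean — a minimal burst detector of the
    kind a rumor-monitoring dashboard might run per tracked keyword."""
    history = deque(maxlen=window)  # rolling baseline of recent hours
    alerts = []
    for hour, count in enumerate(hourly_counts):
        if history:
            baseline = sum(history) / len(history)
            if baseline > 0 and count > threshold * baseline:
                alerts.append(hour)
        history.append(count)
    return alerts

# A quiet baseline of ~10 mentions/hour, then a sudden burst at hour 6.
counts = [10, 12, 9, 11, 10, 10, 80, 95]
print(spike_alerts(counts))  # → [6, 7]
```

Flagged hours would then be handed to human analysts and fact-checkers for verification rather than triggering automated enforcement, consistent with the rights-respecting posture the recommendations above call for.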
15.3. Final Reflections
The events of July–August 2024 served as a flashpoint that revealed the fragility of Bangladesh’s digital information architecture and the urgency of reform. While the digital sphere offers immense opportunities for civic participation and knowledge exchange, it also harbors risks that, if unregulated or mismanaged, can spiral into violence, distrust, and democratic decay.
The response to these challenges must be multi-sectoral, participatory, and rights-respecting. Governments, platforms, civil society, and users must collectively reshape the norms, infrastructures, and cultures of digital communication. Bangladesh stands at a crossroads: it can either continue with reactive, punitive policies or embrace a proactive, inclusive, and resilient digital governance model.
To chart this new course, evidence-based policymaking, ethical platform design, and socially grounded counter-narratives must be prioritized. Only then can the promise of digital media be realized—not as a tool of division and distortion—but as a vehicle for peace, empowerment, and democratic renewal.
References
- Access Now. (2024). Internet shutdowns in Bangladesh during July–August 2024. Available online: https://www.accessnow.org.
- Ahmed, F.; Rahman, M. Surveillance technologies in South Asia: A comparative study of state capacities. Asian Journal of Digital Governance 2022, 5, 55–78. [Google Scholar]
- Ahmed, S. Digital governance and law enforcement in Bangladesh: Challenges and prospects. Bangladesh Journal of Cybersecurity 2023, 4, 45–63. [Google Scholar]
- Ahmed, S. (2023). Memory and the Making of Political Islam in Bangladesh. Dhaka: UPL.
- Akhter, S. Refugee scapegoating and the politics of fear in South Asia. Contemporary South Asia 2021, 29, 312–326. [Google Scholar]
- Amnesty International. (2023). Bangladesh: Digital security law used to silence dissent. Available online: https://www.amnesty.org/en/latest/news/2023/02/bangladesh-digital-security-law.
- Arnaudo, D. (2017). Computational propaganda in Brazil: Social bots during elections. Oxford Internet Institute Working Paper Series.
- Awan, I. Cyber-extremism: Isis and the power of social media. Society 2017, 54, 138–149. [Google Scholar] [CrossRef]
- BBS. (2024). Statistical Yearbook of Bangladesh 2023. Bangladesh Bureau of Statistics.
- Bradshaw, S.; Howard, P.N. (2019). The global disinformation order: 2019 global inventory of organized social media manipulation. Oxford Internet Institute.
- Bradshaw, S.; Neudert, L.-M.; Howard, P.N. (2018). Government responses to organized disinformation campaigns. Council of Europe Report.
- Chakraborty, S.; Rahman, T. Digital authoritarianism in South Asia: Emerging patterns and resistance. Global Digital Rights Journal 2023, 7, 22–39. [Google Scholar]
- Chowdhury, M. Social media moderation and state influence in Bangladesh. Journal of Media Studies 2024, 11, 112–130. [Google Scholar]
- Cinelli, M.; Morales, G.D.F.; Galeazzi, A.; Quattrociocchi, W.; Starnini, M. The echo chamber effect on social media. Proceedings of the National Academy of Sciences 2021, 118, e2023301118. [Google Scholar] [CrossRef]
- Cinelli, M.; Quattrociocchi, W.; Galeazzi, A.; Valensise, C.M.; Brugnoli, E.; Schmidt, A.L.; Zola, P.; Zollo, F.; Scala, A. The COVID-19 social media infodemic. Scientific Reports 2020, 10, 16598. [Google Scholar] [CrossRef]
- Couldry, N.; Mejias, U.A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
- Daily Star. (2024, August 5). Teenage meme-maker arrested under DSA. The Daily Star. Available online: https://www.thedailystar.net.
- Dismislab. (2024). How disinformation stoked fear, panic and confusion during Bangladesh’s student-led revolution. Available online: https://en.dismislab.com/how-disinformation-stoked-fear-panic-and-confusion-during-bangladeshs-student-led-revolution/.
- Dismislab. (2024). Predictive Policing and Digital Profiling in Bangladesh: A Human Rights Assessment. Available online: https://www.dismislab.org.
- Douek, E. Content moderation as systems thinking. Harvard Journal of Law & Technology 2020, 33, 1–56. [Google Scholar]
- Feldstein, S. (2019). The global expansion of AI surveillance. Carnegie Endowment for International Peace. Available online: https://carnegieendowment.org.
- Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
- Gorwa, R.; Guilbeault, D. Unpacking the social media bot: A typology to guide research and policy. Policy & Internet 2020, 12, 225–248. [Google Scholar]
- Guess, A.M.; Nagler, J.; Tucker, J.A. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances 2019, 5, eaau4586. [Google Scholar] [CrossRef]
- Human Rights Watch. (2024). Internet Shutdowns and Information Suppression in Bangladesh’s 2024 Crisis. Available online: https://www.hrw.org.
- Human Rights Watch. (2023). Bangladesh: Free speech under threat. Available online: https://www.hrw.org/report/2023/01/bangladesh-free-speech.
- Islam, M.; Chowdhury, R. Digital literacy challenges in rural Bangladesh. International Journal of Media and Communication Studies 2023, 8, 23–40. [Google Scholar]
- Islam, R. Digital Security Act in Bangladesh: Tool for security or suppression? South Asian Law Review 2021, 6, 103–119. [Google Scholar]
- Jowett, G.S.; O’Donnell, V. (2018). Propaganda & Persuasion (7th ed.). SAGE Publications.
- Khan, S.; Siddiqui, A. Multi-stakeholder digital governance in South Asia: Lessons for Bangladesh. Asian Policy Review 2024, 6, 70–89. [Google Scholar]
- Lewandowsky, S.; Ecker, U.K.; Cook, J. Beyond misinformation: Understanding and coping with the ‘post-truth’ era. Journal of Applied Research in Memory and Cognition 2017, 6, 353–369. [Google Scholar] [CrossRef]
- MacKinnon, R. (2012). Consent of the networked: The worldwide struggle for internet freedom. Basic Books.
- Marwick, A.; Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute. Available online: https://datasociety.net/library/media-manipulation-and-disinfo-online/.
- Meta Transparency Report. (2024). Content removal requests in Bangladesh. Available online: https://transparency.fb.com/reports/2024/bangladesh.
- NetBlocks. (2024). Tracking digital disruptions in Bangladesh. Available online: https://netblocks.org.
- Neumann, P.R. The trouble with radicalization. International Affairs 2013, 89, 873–893. [Google Scholar] [CrossRef]
- Pennycook, G.; Rand, D.G. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 2018, 188, 39–50. [Google Scholar] [CrossRef] [PubMed]
- Rahim, N.; Hossain, J. Data protection policy landscape in Bangladesh: Emerging frameworks and challenges. Journal of Information Policy 2024, 14, 101–121. [Google Scholar]
- Rahman, T.; Alam, F. The Digital Security Act 2018 and implications for freedom of expression in Bangladesh. South Asian Journal of Law and Policy 2020, 2, 15–38. [Google Scholar]
- Reuters. (2024). Photo does not show protesters in Bangladeshi presidential residence. Available online: https://www.reuters.com/fact-check/photo-does-not-show-protesters-bangladeshi-presidential-residence-2024-08-09/.
- Ribeiro, M.H.; Ottoni, R.; West, R.; Almeida, V.A.; Meira, W. (2020). Auditing radicalization pathways on YouTube. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131–141.
- Rumor Scanner. (2024). Disinformation Tracking Reports: July–August 2024. Available online: https://rumorscanner.com.
- Rumor Scanner. (2024). Fact-checking archive: July–August disinformation. Available online: https://rumorscanner.com.
- Rumor Scanner. (2025). A Record-Breaking 2,919 Cases of Misinformation Identified. Available online: https://rumorscanner.com/en/statistics-2/rumors-data-2024/134624.
- Srnicek, N. (2017). Platform capitalism. Polity Press.
- Stern, J.; Berger, J.M. (2015). ISIS: The state of terror. HarperCollins.
- Sunstein, C. R. (2018). #Republic: Divided democracy in the age of social media. Princeton University Press.
- Time. (2024). Exclusive: Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say. Available online: https://time.com/6967334/ai-elections-disinformation-meta-tiktok/.
- Tufekci, Z. Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal 2015, 13, 203–218. [Google Scholar]
- Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times. Available online: https://www.nytimes.com.
- Tufekci, Z. (2019). Twitter and tear gas: The power and fragility of networked protest. Yale University Press.
- Ullah, A.; Ahmed, S. Content moderation in South Asia: The blind spots of AI and platform policy. Digital Rights South Asia Journal 2022, 4, 45–67. [Google Scholar]
- UNDP. (2024). Youth, Peace and Countering Violent Extremism in Bangladesh: Mid-Year Review. United Nations Development Programme. Available online: https://www.undp.org.
- UNICEF Bangladesh. (2024). Digital literacy and youth empowerment programs in Bangladesh. Available online: https://www.unicef.org/bangladesh/reports/digital-literacy.
- Wardle, C.; Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. Available online: https://rm.coe.int/information-disorder-report-february-2017/1680764666.
- Weimann, G. (2015). Terrorism in cyberspace: The next generation. Columbia University Press.
- Wikipedia. (2024). Chalaiden. Available online: https://en.wikipedia.org/wiki/Chalaiden.
- Winter, C. (2015). Documenting the Virtual ‘Caliphate’. Quilliam Foundation.
- World Bank. (2023). Bangladesh digital economy assessment: Bridging the divide. Available online: https://documents.worldbank.org/en/publication/documents-reports.
- Zaman, A. Counterterrorism and digital rights in Bangladesh: Striking a balance. Journal of Peace and Security Studies 2024, 5, 33–56. [Google Scholar]
- Zannettou, S.; Sirivianos, M.; Blackburn, J.; Kourtellis, N. The web of false information: Rumors, fake news, hoaxes, clickbait, and various other shenanigans. Journal of Data and Information Quality 2018, 11, 1–37. [Google Scholar] [CrossRef]
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).