Introduction
Framing theory, first conceptualized by Goffman
(1974) as a sociological construct and later adapted to communication studies
by Entman (1993), posits that the way information is presented—or
"framed"—shapes how audiences perceive and interpret reality. In
traditional media, framing is largely controlled by journalists and editors,
who act as gatekeepers, selecting the aspects of a story to emphasize
(Scheufele, 1999). However, the advent of digital media has fundamentally
altered this dynamic by introducing a participatory, multi-directional
communication environment in which users, algorithms, and influencers
co-construct narratives (Cacciatore et al., 2016). The digital media age,
characterized by platforms such as Twitter, Facebook, Instagram, and TikTok as
well as the proliferation of online news and blogs, has expanded the scope of
framing theory to include new actors, technologies, and challenges such as
misinformation and algorithmic bias.
This literature review synthesizes approximately 60
studies from the past two decades (2005-2025) to explore how framing theory has
adapted to the digital era. It examines key thematic developments, including
the role of social media in public opinion formation, the influence of
algorithmic framing, the intersection of framing with misinformation, and the
cultural and methodological complexities of studying framing in a fragmented
media landscape. By critically evaluating the existing research, identifying gaps,
and proposing future directions, this review aims to contribute to a deeper
understanding of framing theory in the digital age.
Methodology
This review was conducted through a systematic
search of academic databases, such as PubMed, JSTOR, Scopus, and Google
Scholar. Keywords used included "framing theory," "digital media
framing," "social media framing," "algorithmic
framing," and "misinformation framing." The search was limited
to peer-reviewed articles, books, and conference papers published between 2005
and 2025 to capture the evolution of the digital media era. Studies were
selected based on their relevance to framing theory and digital media,
resulting in a corpus of approximately 60 sources. The analysis focused on
thematic trends, theoretical advancements, methodological approaches, and
critical gaps in the literature. Qualitative and quantitative studies were included
to ensure a comprehensive perspective.
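To aid replication, the search strategy described above can also be approximated programmatically. The sketch below is purely illustrative: it queries the public Crossref REST API rather than the databases actually used (PubMed, JSTOR, Scopus, and Google Scholar), and it simply mirrors the keyword list and 2005-2025 date window reported in this section.

```python
# Illustrative sketch only: the review's searches were run manually in PubMed,
# JSTOR, Scopus, and Google Scholar; this shows how a comparable keyword and
# date-range query could be scripted against the public Crossref REST API.
import requests

KEYWORDS = [
    "framing theory",
    "digital media framing",
    "social media framing",
    "algorithmic framing",
    "misinformation framing",
]

def search_crossref(keyword: str, rows: int = 20) -> list[dict]:
    """Return journal-article records matching a keyword, limited to 2005-2025."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={
            "query": keyword,
            "filter": "type:journal-article,from-pub-date:2005-01-01,until-pub-date:2025-12-31",
            "rows": rows,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    for kw in KEYWORDS:
        items = search_crossref(kw)
        print(f"{kw}: {len(items)} records retrieved (first page)")
```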
Thematic Analysis
1. Framing Theory: Foundations and Evolution
Framing theory has emerged as a critical framework
for understanding media effects, with early work focusing on how traditional
media outlets shape public perception by emphasizing certain aspects of reality
over others (Entman, 1993). Entman (1993) defined framing as the process of
selecting "some aspects of a perceived reality and making them more
salient in a communicating text" (p. 52), thereby influencing audience
interpretation. Initial studies explored framing in contexts such as political
campaigns, war reporting, and health crises, often highlighting the power of
media elites to construct dominant narratives (Scheufele, 1999). For example,
Iyengar (1991) demonstrated how episodic versus thematic framing of poverty in
television news influences whether audiences attribute responsibility to
individuals or systemic factors. The transition to digital media has
necessitated the reconceptualization of framing theory to account for the
interactive and decentralized nature of online communication. Borah (2011)
argues that digital platforms introduce "frame multiplicity," where
multiple, often conflicting frames coexist and evolve in real time, challenging
the linear, top-down models of traditional framing. Unlike traditional media,
where frames are relatively stable and controlled by a limited number of
gatekeepers, digital media allow for rapid frame diffusion and contestation
(Cacciatore et al., 2016). This shift has prompted scholars to explore how
framing operates in a context where audiences are not just passive receivers
but also active participants in frame construction.
Moreover, the digital age has blurred the
boundaries between framing and related theories, such as agenda-setting and
priming. While agenda-setting focuses on what issues are deemed important and
priming on how prior exposure influences subsequent judgments, framing
emphasizes how issues are presented (Weaver, 2007). In digital media, these
processes often overlap, as algorithms and user interactions simultaneously
determine salience and interpretation (McCombs & Valenzuela, 2020). This
convergence underscores the need for an integrated theoretical framework that
captures the multifaceted nature of media effects in the digital era.
2. Social Media as a Framing Tool
Social media platforms have become central to
framing processes by transforming users into both consumers and producers of frames.
Platforms such as Twitter, Facebook, Instagram, and, more recently, TikTok
enable individuals and organizations to craft and disseminate frames at
unprecedented speed and scale (Meraz & Papacharissi, 2013). Research has
highlighted how social media facilitates participatory framing, where
collective action and user engagement shape public discourse on issues ranging
from climate change to social justice. For instance, Harlow and Harp (2012)
examined how activist movements in the United States and Latin America used
social media to frame issues such as gender equality and political reform,
finding that emotionally resonant frames—often amplified through hashtags and
viral content—significantly increased engagement and offline mobilization.
Specific events, such as the Black Lives Matter (BLM) movement, illustrate the
power of social media framing. Studies have shown that hashtags such as
#BlackLivesMatter serve as framing devices, encapsulating narratives of
systemic racism and police brutality while fostering a sense of global
solidarity (Freelon et al., 2016). Jackson and Foucault Welles (2016) further
noted that marginalized voices, often excluded from traditional media, gained
prominence through social media framing, challenging dominant narratives and
creating counter-frames. However, the echo chamber effect poses a significant
challenge, as users are often exposed to reinforcing frames that align with
their pre-existing beliefs, leading to polarization (Sunstein, 2017). Barberá
et al. (2015) found that political discussions on Twitter often cluster into
ideologically homogeneous groups, limiting exposure to diverse frames and exacerbating
societal divides.
Additionally, the roles of influencers and opinion
leaders in framing cannot be overlooked. Influencers, with their large
followings, often act as frame setters, leveraging personal branding to shape
narratives on issues such as health, politics, and consumer behavior (Abidin,
2018). For example, during the COVID-19 pandemic, influencers framed public
health measures in ways that either supported or undermined official
guidelines, demonstrating their significant impact on public perceptions
(Cinelli et al., 2020). This democratization of framing, while empowering, also
raises concerns about accountability and the potential for misinformation, as
discussed in later sections.
3. Algorithmic Framing and Personalization
The rise of algorithmic curation on digital
platforms has introduced a new dimension to framing theory, in which machine
learning systems play a central role in determining which frames users
encounter. Platforms such as Facebook, YouTube, and TikTok use algorithms to
prioritize content based on user preferences, engagement metrics, and
behavioral data, often amplifying sensational or emotionally charged frames to
maximize clicks and time spent (Bucher, 2018). Pariser (2011) introduced the
concept of the "filter bubble," arguing that algorithmic
personalization creates insular information environments where users are
exposed primarily to frames that reinforce their existing beliefs, thus
limiting cognitive diversity. Recent research has explored the opaque nature of
algorithmic framing, highlighting how a lack of transparency complicates
accountability. Diakopoulos (2019) argues that algorithms are not neutral; they
embed the biases of their creators and the data on which they are trained,
often perpetuating stereotypes or marginalizing certain perspectives. For
instance, studies have shown that YouTube’s recommendation algorithm
disproportionately promotes far-right content by framing it as
"alternative" or "controversial," thereby amplifying
divisive narratives (Lewis, 2018). Similarly, Tufekci (2018) warns of
"computational propaganda," where algorithms are exploited by
malicious actors to frame misinformation as credible, influencing public
opinion during critical events such as elections.
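To make the engagement-driven mechanism concrete, the following toy sketch ranks hypothetical posts by a weighted engagement score; the weights, fields, and posts are invented for illustration and do not represent any platform's actual ranking system.

```python
# Toy illustration of engagement-driven ranking; the weights and post fields
# are hypothetical and do not correspond to any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    watch_seconds: float
    outrage_score: float  # hypothetical classifier output in [0, 1]

def engagement_score(post: Post) -> float:
    """Weighted sum of engagement signals; emotionally charged posts get a boost."""
    return (
        1.0 * post.clicks
        + 3.0 * post.shares
        + 0.1 * post.watch_seconds
        + 50.0 * post.outrage_score
    )

feed = [
    Post("Measured policy explainer", clicks=120, shares=4, watch_seconds=300, outrage_score=0.1),
    Post("Sensational partisan claim", clicks=90, shares=40, watch_seconds=200, outrage_score=0.9),
]

# Because shares and emotional charge are weighted heavily, the sensational
# post outranks the explainer even with fewer clicks and less watch time.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```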
The ethical implications of algorithmic framing are
significant. Ward (2018) suggested that the lack of user control over
algorithmic curation raises questions about autonomy and informed
decision-making. Moreover, the personalization of frames can erode shared
public discourse as individuals encounter increasingly fragmented versions of
reality (Vaidhyanathan, 2018). While some scholars advocate greater algorithmic
transparency and regulation (Gillespie, 2018), others caution that such
interventions must balance innovation with user privacy (Zuboff, 2019). This
tension remains a critical topic for future research.
4. Misinformation and Framing in the Digital Age
The proliferation of misinformation on digital
platforms has become a pressing concern, and framing strategies are often
exploited to make false or misleading information more persuasive. Lewandowsky
et al. (2012) argued that misinformation is frequently framed in ways that
align with cognitive biases, such as confirmation bias, making it more likely
to be accepted and shared. During the COVID-19 pandemic, for example, competing
frames about vaccines, ranging from conspiracy theories to scientific endorsements,
circulated widely on social media, often undermining public health efforts
(Roozenbeek & van der Linden, 2020). Studies suggest that anti-vaccine
frames, often emphasizing personal freedom or distrust in institutions,
resonate strongly with certain demographics, illustrating the power of
emotionally charged framing (Kata, 2012). The emergence of
"deepfakes" and manipulated media further complicate the landscape of
digital framing. Paris and Donovan (2019) highlighted how fabricated audio and
visual content can be framed as authentic, deceiving audiences and shaping
perceptions of reality. For instance, fake videos of political figures have
been used to frame events in ways that incite outrage or confusion, raising
concerns about trust in digital media (Chesney & Citron, 2019). Addressing
misinformation requires not only technological solutions such as fact-checking
algorithms but also a deeper understanding of how framing influences belief
formation and persistence (Ecker et al., 2022).
Furthermore, the speed and scale of misinformation
dissemination on digital platforms exacerbate its impact. Vosoughi et al.
(2018) found that false news spreads six times faster than true news on
Twitter, often because of novelty and emotional framing. This phenomenon
underscores the need for framing theory to account for the viral nature of
digital content and the psychological mechanisms that drive sharing behaviors.
Interventions such as prebunking (exposing users to weakened versions of
misinformation frames to build resistance) show promise but require further
testing across diverse contexts (van der Linden, 2022).
5. Cross-Cultural and Global Perspectives on Digital Framing
While much of the research on digital framing
focuses on Western contexts, a growing body of literature emphasizes the role
of cultural factors in shaping framing processes. Lee and Oh (2013) found
that the framing of health-related issues on social media varies significantly
between collectivist and individualist cultures, with the former emphasizing
community responsibility and the latter personal choices. Similarly, Nisbet and
Kamenchuk (2019) highlight how cultural values influence the framing of climate
change, with Eastern audiences being more receptive to frames of collective
action compared to Western audiences. Global events, such as the Arab
Spring, demonstrate the transnational potential of digital framing, while also
revealing cultural nuances. Howard and Hussain (2013) argued that social media
platforms enabled activists to frame the uprisings as a fight for democracy,
resonating with global audiences and garnering international support. However,
local interpretations of these frames vary, shaped by historical, political,
and linguistic contexts (El-Nawawy & Khamis, 2014). This suggests that,
while digital media can transcend borders, cultural filters mediate the
reception and impact of frames.
Despite these insights, cross-cultural research on
digital framing has remained limited. Most studies have focused on
English-language content, neglecting non-Western platforms such as WeChat or
VKontakte, which host distinct framing dynamics (Yang, 2016). Additionally,
linguistic barriers and varying platform affordances complicate comparative
analyses (Highfield & Leaver, 2016). Future research should prioritize
multilingual and multiplatform studies to capture the global diversity of
digital framing practices.
6. Methodological Challenges and Innovations
Studying framing in the digital media age presents
unique methodological challenges owing to the volume, velocity, and variety of
data. Traditional content analysis, while effective for small-scale studies of
print or broadcast media, struggles to keep pace with the real-time, user-driven
nature of digital content (Baden & Lecheler, 2012). Consequently,
computational methods such as natural language processing (NLP), machine
learning, and network analysis have gained prominence in framing research. For
instance, Tsur and Rappoport (2015) used NLP to detect framing biases in
political tweets and identified the linguistic patterns associated with
partisan narratives. However, computational approaches have limitations.
Grimmer and Stewart (2013) note that automated content analysis often lacks the
nuance needed to capture contextual meaning or cultural subtleties, potentially
oversimplifying complex frames. Moreover, reliance on big data raises ethical
concerns regarding privacy and consent, particularly when analyzing
user-generated content (boyd & Crawford, 2012). To address these issues,
scholars advocate mixed-methods approaches that combine qualitative frame analysis
with quantitative data mining, allowing for both depth and scale in research
(Nisbet, 2010).
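As a minimal illustration of how automated methods can scale a qualitatively developed codebook, the sketch below applies a small, hypothetical keyword dictionary to short texts; the frame labels and keywords are invented, and published studies such as Tsur and Rappoport (2015) rely on far richer linguistic features than simple keyword matching.

```python
# Minimal dictionary-based frame coding; frame labels and keyword lists are
# hypothetical stand-ins for a qualitatively developed codebook.
import re
from collections import Counter

FRAME_DICTIONARY = {
    "personal_freedom": {"freedom", "choice", "liberty", "mandate"},
    "institutional_distrust": {"corrupt", "cover-up", "big pharma", "elites"},
    "collective_responsibility": {"community", "protect others", "together", "solidarity"},
}

def code_frames(text: str) -> Counter:
    """Count frame-indicator keywords found in a lower-cased text."""
    lowered = text.lower()
    counts = Counter()
    for frame, keywords in FRAME_DICTIONARY.items():
        for kw in keywords:
            counts[frame] += len(re.findall(re.escape(kw), lowered))
    return counts

sample_posts = [
    "Mask mandates take away our freedom of choice.",
    "Getting vaccinated is how we protect others in our community.",
]

for post in sample_posts:
    counts = code_frames(post)
    dominant = max(counts, key=counts.get) if any(counts.values()) else "none"
    print(f"{dominant:>25}: {post}")
```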
Case studies of specific events such as elections
or public health crises also offer valuable insights into digital framing. For
example, Kreiss et al. (2015) used a mixed-methods approach to study framing
during the 2012 U.S. presidential election, combining manual coding of campaign
messages with network analysis of Twitter interactions. Such studies have
highlighted the importance of triangulating methods to capture the multifaceted
nature of digital framing. Future methodological innovations should focus on
integrating emerging technologies such as artificial intelligence for real-time
frame detection while maintaining rigorous ethical standards.
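One possible realization of the quantitative, network-analysis component of such mixed-methods designs is sketched below using the networkx library; the accounts and retweet edges are fabricated for illustration, and in practice the most central accounts' messages would then be coded manually for frames.

```python
# Illustrative retweet-network analysis with networkx; accounts and retweet
# edges are fabricated examples, not data from any real study.
import networkx as nx

# Directed edges point from the retweeting account to the original poster.
retweets = [
    ("user_a", "campaign_x"), ("user_b", "campaign_x"), ("user_c", "campaign_x"),
    ("user_c", "journalist_y"), ("user_d", "journalist_y"), ("user_a", "activist_z"),
    ("user_e", "activist_z"), ("user_f", "activist_z"), ("user_b", "activist_z"),
]

graph = nx.DiGraph()
graph.add_edges_from(retweets)

# In-degree centrality highlights accounts whose messages are most amplified,
# i.e., likely frame-setters whose posts merit manual frame coding.
centrality = nx.in_degree_centrality(graph)
for account, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{account:>14}: in-degree centrality = {score:.2f}")
```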
Critical Evaluation
The reviewed literature demonstrates significant
advancements in framing theory, particularly in its adaptation to the
participatory and algorithmic nature of digital media. The shift from
elite-driven to user-driven framing, as seen on social media, has democratized
narrative construction, empowered marginalized voices, and fostered collective
action (Freelon et al., 2016). Simultaneously, algorithmic framing has
introduced new challenges, raising questions about bias, transparency, and
the erosion of shared discourse (Diakopoulos, 2019). The intersection of framing
with misinformation further complicates the field as digital platforms amplify
false narratives through viral and emotionally charged frames (Vosoughi et al.,
2018). Despite these contributions, several gaps remain in the literature.
First, the long-term behavioral impact of digital framing remains
underexplored. While studies often measure immediate attitudinal changes, few
examine how sustained exposure to certain frames influences actions over time
(Lecheler & de Vreese, 2019). Second, the ethical implications of
algorithmic framing, particularly its potential for manipulation, warrant
greater attention (Ward, 2018). Third, the predominance of Western-centric
research limits the generalizability of the findings, as cultural and
linguistic diversity shapes framing in ways that are not yet fully understood
(Yang, 2016). Finally, the rapid evolution of digital platforms, such as the
rise of TikTok and virtual reality, outpaces research, leaving emerging
framing dynamics understudied.
Discussion
The exploration of framing theory in the digital
media age, as synthesized from approximately 60 studies spanning the last two
decades, reveals a profound transformation in how information is constructed,
disseminated, and interpreted in contemporary society. This discussion
integrates the key themes—foundational evolution, social media dynamics,
algorithmic influences, misinformation challenges, cross-cultural variations,
and methodological innovations—to provide a holistic understanding of framing
theory’s current state and its implications for communication research. Framing
theory has evolved from a model centered on traditional media gatekeepers to
one that accounts for the participatory and decentralized nature of digital
environments (Entman, 1993; Borah, 2011). The shift from elite-driven to
user-driven framing, particularly on social media platforms, has democratized
narrative construction, empowering individuals and marginalized groups to shape
public discourse on critical issues, such as social justice and environmental
crises (Harlow & Harp, 2012; Freelon et al., 2016). However, this
empowerment comes with the risk of polarization, as echo chambers and selective
exposure reinforce pre-existing beliefs, often deepening societal divides
(Sunstein, 2017). The Black Lives Matter movement exemplifies this duality,
where social media framing amplified counter-narratives against systemic
racism, but also encountered resistance within ideologically homogeneous online
communities (Jackson & Foucault Welles, 2016).
The role of algorithms in digital framing
introduces another layer of complexity, as platforms such as Facebook and
YouTube curate content through opaque mechanisms that prioritize engagement
over diversity (Bucher, 2018; Pariser, 2011). Algorithmic framing, as seen in
the 2016 U.S. election with the Cambridge Analytica scandal, can manipulate
public opinion by tailoring frames to psychological profiles, raising ethical
questions about user autonomy and the integrity of democratic processes (Isaak
& Hanna, 2018; Diakopoulos, 2019). This underscores a critical tension:
while digital media expands access to information, it simultaneously fragments
shared reality through personalized content streams, challenging the notion of
a unified public sphere (Vaidhyanathan, 2018). Compounding these challenges is
the intersection of framing with misinformation, which is a pervasive issue in
the digital age. Misinformation often gains traction through emotionally
charged or culturally resonant frames, as evidenced during the COVID-19
pandemic and political events such as Brexit, where false narratives shaped
public behavior and eroded trust in institutions (Lewandowsky et al., 2012;
Roozenbeek & van der Linden, 2020; Allcott & Gentzkow, 2017). The
emergence of deepfakes and manipulated media further blurs the line between
truth and deception, necessitating new theoretical and practical approaches to
counter deceptive framing (Paris & Donovan, 2019). This issue highlights the
urgency of integrating psychological insights into framing research to
understand belief persistence and to design effective interventions (Ecker et
al., 2022). Cross-cultural perspectives reveal that digital framing is not a
universal phenomenon, but is deeply influenced by cultural, political, and
linguistic contexts (Lee & Oh, 2013). While global events such as the Arab
Spring demonstrate the transnational potential of digital framing, local
interpretations vary significantly, as seen in the contrasting narratives of
the Hong Kong protests on Western versus Chinese platforms (Howard & Hussain,
2013; Lee & Chan, 2020). The predominance of Western-centric research
limits the generalizability of findings, underscoring the need for studies that
capture framing dynamics on non-Western platforms, such as WeChat or VKontakte
(Yang, 2016). This gap points to a broader challenge in framing theory:
achieving a truly global understanding of how digital media shapes perceptions
across diverse societies.
Methodologically, the digital age demands
innovation in order to keep pace with the volume and ephemerality of online
content. Computational tools such as NLP and network analysis offer
scalability, yet they often lack the nuance to capture contextual meaning,
necessitating mixed-methods approaches that combine qualitative depth with
quantitative breadth (Grimmer & Stewart, 2013; Nisbet, 2010). The ethical
implications of big data research, such as privacy concerns, further
complicate the study of digital framing and require scholars to balance
technological advancement with responsible practices (boyd & Crawford,
2012). Synthesizing these themes, it is evident that framing theory remains a
vital lens for understanding communication in the digital era; however, it
faces significant challenges in adapting to rapid technological and societal
changes. The lack of longitudinal studies on behavioral impacts limits our
understanding of how sustained exposure to digital frames influences actions
over time, a gap that must be addressed to move beyond attitudinal effects
(Lecheler & de Vreese, 2019). Similarly, the ethical dimensions of
algorithmic framing, particularly the potential for manipulation, warrant
greater scrutiny, as does the exploration of emerging platforms such as TikTok,
where visual and immersive framing introduces novel dynamics (Ward, 2018; Zulli
& Zulli, 2022). Ultimately, the future of framing theory lies in
interdisciplinary collaboration, integrating insights from psychology, sociology,
and computer science to tackle these multifaceted issues while embracing a
global perspective that accounts for cultural diversity (Gillespie, 2018;
Nisbet & Kamenchuk, 2019).
Conclusion
Framing theory continues to be an essential
framework for understanding communication in the digital media age. Over the
past two decades, research has documented its evolution from focusing on
traditional media gatekeepers to exploring the complex dynamics of
participatory, algorithmic, and global framing. Social media has empowered
users to co-construct narratives, whereas algorithms have introduced new forms
of influence and bias. The challenges of misinformation and cultural diversity
underscore the complexity of digital framing, necessitating innovative
theoretical and methodological approaches. Addressing current gaps and
embracing interdisciplinary perspectives will allow framing theory to continue
illuminating the intricate interplay between media, technology, and society in
an ever-evolving digital landscape. Framing theory has evolved significantly,
from traditional media gatekeepers to the participatory dynamics of social
media, where users co-create narratives. This shift has democratized discourse,
empowering marginalized voices and fostering collective action. However, it has
also led to polarization, with echo chambers reinforcing pre-existing beliefs,
deepening societal divides. The role of algorithms adds another layer of
complexity, as platforms curate content to prioritize engagement, often at the
expense of diversity. Algorithmic framing can manipulate public opinion,
raising ethical questions about user autonomy and the integrity of democratic
processes. This tension highlights the dual nature of digital media: expanding
access to information while fragmenting shared reality through personalized
content streams.
Moreover, the intersection of framing with
misinformation presents a pervasive challenge in the digital age. False
narratives gain traction through emotionally charged frames, shaping public
behavior and eroding trust in institutions. The emergence of deepfakes further
blurs the line between truth and deception, necessitating new approaches to
counter deceptive framing. Cross-cultural perspectives reveal that digital
framing is influenced by cultural, political, and linguistic contexts. The
predominance of Western-centric research limits the generalizability of
findings, underscoring the need for studies capturing framing dynamics on
non-Western platforms. Methodologically, the digital age demands innovation to
keep pace with online content's volume and ephemerality. Computational tools
offer scalability but often lack nuance, necessitating mixed-methods approaches
combining qualitative depth with quantitative breadth. Ethical implications of
big data research require balancing technological advancement with responsible
practices. Ultimately, framing theory remains a vital lens for understanding
communication in the digital era. However, it faces significant challenges in
adapting to rapid technological and societal changes. By addressing gaps in
longitudinal studies, algorithmic manipulation, and cultural diversity, and
integrating interdisciplinary collaboration, framing theory can continue to
provide valuable insights into the digital media landscape.
Recommendations for Future Directions
To address these gaps, future research should
prioritize longitudinal studies to assess the sustained impact of digital
framing on behavior, moving beyond short-term attitudinal effects (Lecheler
& de Vreese, 2019). Interdisciplinary approaches, integrating insights from
psychology, sociology, and computer science, could deepen the understanding of
algorithmic framing and misinformation, particularly through the development of
ethical AI tools for frame detection (Gillespie, 2018). Cross-cultural and
multilingual studies are also essential to capture the global diversity of
framing practices, ensuring that research reflects non-Western perspectives
(Nisbet & Kamenchuk, 2019). Such studies could illuminate the nuances of
framing practices across different cultures and languages, providing a more
comprehensive understanding of how digital framing operates in various
contexts. Moreover, scholars should explore framing on emerging platforms and
technologies such as TikTok’s short-form video content and virtual reality
environments, where visual and immersive elements introduce novel framing
mechanisms (Bailenson, 2018). These platforms represent the next frontier in
digital media, and understanding their framing dynamics is crucial for future
communication strategies.
Finally, it is crucial to incorporate ethical
considerations into the research on digital framing. As the use of big data and
AI in framing analysis grows, addressing privacy concerns and ensuring the
responsible use of technology will become paramount (boyd & Crawford,
2012). By balancing technological advancement with ethical practices,
researchers can contribute to a more trustworthy and transparent digital media
landscape.
Funding
The study received no specific financial support.
Institutional Review Board Statement
Not applicable.
Transparency
The author confirms that the manuscript is an honest, accurate, and transparent account of the study; that no vital features of the study have been omitted; and that any discrepancies from the study as planned have been explained. This study followed all ethical practices during writing.
Conflict of Interest declaration
The authors declare that they have no affiliations with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript.
References
- Abidin, C. Internet celebrity: Understanding fame online; Emerald Publishing, 2018; ISBN 9781787560796. [Google Scholar]
- Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. Journal of Economic Perspectives 2017, 31(2), 211–236. [Google Scholar] [CrossRef]
- Baden, C.; Lecheler, S. Fleeting, fading, or far-reaching? A knowledge-based model of the persistence of framing effects. Communication Theory 2012, 22(4), 359–382. [Google Scholar] [CrossRef]
- Bailenson, J. N. Experience on demand: What virtual reality is, how it works, and what it can do; W.W. Norton & Company, 2018; ISBN 9780393253696. [Google Scholar]
- Barberá, P.; Jost, J. T.; Nagler, J.; Tucker, J. A.; Bonneau, R. Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science 2015, 26(10), 1531–1542. [Google Scholar] [CrossRef] [PubMed]
- Borah, P. Conceptual issues in framing theory: A systematic examination of a decade's literature. Journal of Communication 2011, 61(2), 246–263. [Google Scholar] [CrossRef]
- boyd, d.; Crawford, K. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 2012, 15(5), 662–679. [Google Scholar] [CrossRef]
- Bucher, T. If...Then: Algorithmic power and politics; Oxford University Press, 2018; ISBN 9780190493035. [Google Scholar]
- Cacciatore, M. A.; Scheufele, D. A.; Iyengar, S. The end of framing as we know it... and the future of media effects. Mass Communication and Society 2016, 19(1), 7–23. [Google Scholar] [CrossRef]
- Chesney, R.; Citron, D. Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review 2019, 107(6), 1753–1820. [Google Scholar] [CrossRef]
- Cinelli, M.; Quattrociocchi, W.; Galeazzi, A.; Valensise, C. M.; Brugnoli, E.; Schmidt, A. L.; Zola, P.; Zollo, F.; Scala, A. The COVID-19 social media infodemic. Scientific Reports 2020, 10(1), Article 16598. [Google Scholar] [CrossRef]
- Diakopoulos, N. Automating the news: How algorithms are rewriting the media; Harvard University Press, 2019; ISBN 9780674976986. [Google Scholar]
- Ecker, U. K. H.; Lewandowsky, S.; Cook, J.; Schmid, P.; Fazio, L. K.; Brashier, N.; Kendeou, P.; Vraga, E. K.; Amazeen, M. A. The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology 2022, 1(1), 13–29. [Google Scholar] [CrossRef]
- Entman, R. M. Framing: Toward clarification of a fractured paradigm. Journal of Communication 1993, 43(4), 51–58. [Google Scholar] [CrossRef]
- Freelon, D.; McIlwain, C. D.; Clark, M. D. Beyond the hashtags: #Ferguson, #BlackLivesMatter, and the online struggle for offline justice. Center for Media & Social Impact 2016. [Google Scholar] [CrossRef]
- Gillespie, T. Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media; Yale University Press, 2018; ISBN 9780300173130. [Google Scholar]
- Grimmer, J.; Stewart, B. M. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis 2013, 21(3), 267–297. [Google Scholar] [CrossRef]
- Howard, P. N.; Hussain, M. M. Democracy's fourth wave? Digital media and the Arab Spring; Oxford University Press, 2013; ISBN 9780199936977. [Google Scholar]
- Iyengar, S. Is anyone responsible? How television frames political issues; University of Chicago Press, 1991; ISBN 9780226388533. [Google Scholar]
- Jackson, S. J.; Foucault Welles, B. #Ferguson is everywhere: Initiators in emerging counterpublic networks. Information, Communication & Society 2016, 19(3), 397–418. [Google Scholar] [CrossRef]
- Kata, A. Anti-vaccine activists, Web 2.0, and the postmodern paradigm—An overview of tactics and tropes used online by the anti-vaccination movement. Vaccine 2012, 30(25), 3778–3789. [Google Scholar] [CrossRef] [PubMed]
- Lecheler, S.; de Vreese, C. H. News framing effects; Routledge, 2019; ISBN 9781138632707. [Google Scholar]
- Lewandowsky, S.; Ecker, U. K.; Seifert, C. M.; Schwarz, N.; Cook, J. Misinformation and its correction. Psychological Science in the Public Interest 2012, 13(3), 106–131. [Google Scholar] [CrossRef]
- McCombs, M.; Valenzuela, S. Setting the agenda: Mass media and public opinion, 3rd ed.; Polity Press, 2020; ISBN 9781509535812. [Google Scholar]
- Pariser, E. The filter bubble: What the internet is hiding from you; Penguin Press, 2011; ISBN 9781594203008. [Google Scholar]
- Roozenbeek, J.; van der Linden, S. Fake news game confers psychological resistance against online misinformation. Palgrave Communications 2020, 5(1), Article 65. [Google Scholar] [CrossRef]
- Scheufele, D. A. Framing as a theory of media effects. Journal of Communication 1999, 49(1), 103–122. [Google Scholar] [CrossRef]
- Sunstein, C. R. #Republic: Divided democracy in the age of social media; Princeton University Press, 2017; ISBN 9780691175515. [Google Scholar]
- Tufekci, Z. YouTube, the great radicalizer. The New York Times. 10 March 2018. Available online: https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
- Vaidhyanathan, S. Antisocial media: How Facebook disconnects us and undermines democracy; Oxford University Press, 2018; ISBN 9780190841164. [Google Scholar]
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359(6380), 1146–1151. [Google Scholar] [CrossRef] [PubMed]
- Ward, S. J. A. Ethics and the media: An introduction, 2nd ed.; Cambridge University Press, 2018; ISBN 9781107158699. [Google Scholar]
- Yang, G. Narrative agency in hashtag activism: The case of #BlackLivesMatter. Media and Communication 2016, 4(4), 13–17. [Google Scholar] [CrossRef]
- Zuboff, S. The age of surveillance capitalism: The fight for a human future at the new frontier of power; PublicAffairs, 2019; ISBN 9781610395694. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).