Preprint
Article

This version is not peer-reviewed.

Gatekeeping and Agenda-Setting Theories in the Age of Algorithmic Media: A Critical Synthesis and Theoretical Extension (2000–2025)

Submitted: 13 February 2026
Posted: 14 February 2026


Abstract
The parallel evolution of gatekeeping and agenda-setting theory constitutes one of the most consequential intellectual trajectories in communication studies, yet the two traditions have developed largely in isolation from one another despite their deep functional interdependence. This paper undertakes a critical, integrative review of both theoretical traditions across the period 2000 to 2025, a quarter-century defined by the migration of public discourse from institutionally controlled media environments to algorithmically mediated digital platforms. Drawing on a structured synthesis of peer-reviewed scholarship, the paper traces the transformation of gatekeeping from a process enacted by identifiable human decision-makers within institutional hierarchies to a distributed, computationally governed phenomenon in which algorithms, platform architectures, and user behaviors collectively determine the visibility of information. Simultaneously, it examines the reconfiguration of agenda-setting from a linear transfer of salience between media institutions and mass publics to a recursive, networked process shaped by personalization technologies, platform-specific affordances, and the fragmentation of formerly unified audiences into algorithmically constituted micro-publics. The paper introduces the concept of "salience agency" as an integrative meta-theoretical framework capable of capturing the convergence of gatekeeping and agenda-setting within contemporary communication systems. It argues that the algorithmic mediation of information visibility represents not merely a technological augmentation of existing communicative processes but a structural transformation that demands new theoretical vocabularies, new empirical methodologies, and new normative commitments. Implications for communication scholarship, media management, regulatory policy, and democratic governance are discussed, and priorities for future research are identified.

1. Introduction

The capacity of media systems to determine which issues, events, and interpretations achieve public visibility has long been recognized as among the most consequential dimensions of communicative power in democratic societies. Two theoretical traditions have provided the dominant analytical frameworks for understanding this capacity. Gatekeeping theory, originating in Kurt Lewin’s (1947) wartime studies of food distribution and subsequently adapted to communication research by David Manning White (1950), examines the processes, actors, and institutional structures through which information is selected, filtered, and channeled toward publics. Agenda-setting theory, inaugurated by McCombs and Shaw’s (1972) landmark study of the 1968 United States presidential campaign, investigates the mechanisms through which media attention to particular issues shapes public perceptions of their importance, establishing a correspondence between the “media agenda” and the “public agenda” that has been replicated across hundreds of empirical studies in diverse national and cultural contexts.
Despite their obvious functional relationship — gatekeeping determines what information passes through institutional filters, while agenda-setting describes the consequences of that filtering for public cognition — these two traditions have developed along largely separate intellectual trajectories. Gatekeeping research has tended to focus on the internal sociology of media organizations, examining the decision-making routines, professional norms, and institutional pressures that shape editorial selection. Agenda-setting research has tended to focus on the external effects of media coverage on public opinion, treating the media agenda as a largely exogenous variable whose formation is analytically prior to the question of its influence. The result has been what this paper characterizes as a disciplinary bifurcation: a division of labor that has generated important insights within each tradition but has simultaneously obscured the systemic connections between the processes of information selection and their consequences for public salience.
The digital transformation of the media landscape over the period 2000 to 2025 has rendered this bifurcation untenable. The rise of social media platforms, algorithmic content curation, and participatory digital culture has fundamentally altered the structures through which information achieves public visibility, collapsing the institutional boundaries that once separated the gatekeeping function from its agenda-setting consequences. In the algorithmic media environment, the act of filtering information and the act of conferring salience upon it are no longer sequential processes performed by distinct actors within distinct institutional settings; they are simultaneous, recursive, and frequently indistinguishable operations performed by computational systems whose decision-making logic is opaque to both the publics they serve and the scholars who study them (Chadwick, 2017; Shoemaker & Vos, 2009).
This paper undertakes a critical, integrative review of gatekeeping and agenda-setting theory across this transformative period, pursuing three interconnected objectives. The first is descriptive: to map the evolution of both theoretical traditions from their pre-digital foundations through their successive adaptations to the realities of digital and algorithmic media. The second is analytical: to identify and examine the points of theoretical convergence between gatekeeping and agenda-setting that the digital transformation has made visible, including the shared mechanisms of algorithmic filtering, platform governance, and networked information flow that now govern both processes simultaneously. The third is constructive: to propose the concept of “salience agency” as an integrative meta-theoretical framework that captures the distributed, recursive, and technically mediated character of information visibility in the contemporary communication ecosystem. The paper argues that salience agency — defined as the capacity of any actor, whether human, institutional, or algorithmic, to influence the visibility, prominence, and perceived importance of information within the public communication system — provides a vocabulary capable of bridging the analytical gap between gatekeeping and agenda-setting, accommodating the empirical complexity of the digital media landscape, and generating productive directions for future research.
The significance of this undertaking extends beyond the boundaries of academic communication theory. The processes through which information achieves public visibility are foundational to the functioning of democratic governance, the formation of public opinion, and the capacity of citizens to make informed judgements about matters of collective concern. When these processes are governed by opaque algorithmic systems optimized for commercial engagement rather than democratic deliberation, the implications for the quality of public discourse and the health of democratic institutions are profound (Sunstein, 2017). Understanding the convergence of gatekeeping and agenda-setting in the algorithmic age is therefore not merely a scholarly exercise but a civic imperative.

2. Methodological Approach

This paper employs a structured integrative review methodology designed to synthesize theoretical and empirical contributions from across communication studies literature, supplemented by relevant work in information science, political communication, computational social science, and platform governance studies. The integrative review method was selected for its capacity to accommodate both quantitative and qualitative research traditions, to identify theoretical convergences across heterogeneous bodies of literature, and to generate new conceptual frameworks through systematic synthesis rather than mere aggregation (Torraco, 2005).
The review encompasses peer-reviewed journal articles, scholarly monographs, edited volumes, and substantive institutional reports published between 2000 and 2025, with selective engagement with foundational texts from earlier periods where necessary for establishing theoretical lineage. Literature was identified through systematic searches of major academic databases — including Scopus, Web of Science, Communication Abstracts, and Google Scholar — using search terms encompassing “gatekeeping theory,” “agenda-setting,” “algorithmic curation,” “platform governance,” “digital media,” “networked gatekeeping,” and related constructs. Searches were supplemented by backward citation tracking from key recent publications and forward citation tracking from foundational texts in both theoretical traditions.
The scope of the review is deliberately broad, encompassing work from diverse national and cultural contexts, in recognition of the fact that the digital transformation of media systems is a global phenomenon whose dynamics vary significantly across geopolitical settings. Particular attention has been paid to scholarship examining platform governance in non-Western contexts, including work on media environments in sub-Saharan Africa, East Asia, South Asia, and the Middle East, to resist the universalization of theoretical models derived primarily from Western liberal-democratic media systems.
The analytical framework employed in the review is organized around three interconnected dimensions. The first is temporal, tracing the evolution of both theoretical traditions across the twenty-five-year period under examination and identifying key inflection points at which technological, institutional, or political developments necessitated theoretical revision. The second is structural, examining the architectural features of digital platforms that shape the processes of information filtering and salience conferral, including algorithmic recommendation systems, content moderation policies, and platform-specific affordances. The third is normative, assessing the democratic implications of the observed transformations and evaluating the adequacy of existing regulatory and governance frameworks for addressing the challenges they present.

3. Theoretical Foundations: Gatekeeping and Agenda-Setting Theories in the Pre-Digital Era

The study of how media systems shape public visibility of issues, events, and interpretations has been central to understanding communicative power in democratic societies. Two influential theoretical traditions—gatekeeping and agenda-setting—have provided the primary frameworks for analyzing these processes. Gatekeeping theory, rooted in the work of Kurt Lewin and later adapted to media studies, explores how information is selected and filtered through institutional structures and individual decision-makers before reaching the public. In contrast, agenda-setting theory, introduced by McCombs and Shaw, examines the impact of media coverage on public perceptions of issue importance, establishing a link between the priorities set by media organizations and the concerns of the wider public. In the pre-digital era, these theories developed along distinct paths: gatekeeping focused on the internal mechanics of media organizations, while agenda-setting emphasized the external influence of media on public opinion. Together, they laid the groundwork for understanding the flow and salience of information in traditional media environments, providing essential insights into the dynamics that shaped public discourse before the advent of digital and algorithmic platforms.

3.1. The Gatekeeping Tradition

Gatekeeping theory emerged from a fortuitous interdisciplinary transfer. Kurt Lewin’s (1947) observation that food items pass through a series of “channels” governed by “gates” — decision points at which items are either admitted or rejected — was adapted by White (1950) to the study of news selection in his celebrated case study of a wire editor, pseudonymously named “Mr. Gates.” White’s study demonstrated that the selection of news for publication was governed not by objective criteria of newsworthiness alone but by the subjective judgements, personal biases, and professional routines of the individual gatekeeper. This finding — that the construction of “the news” is an inherently selective and interpretive process — provided the foundational insight upon which subsequent gatekeeping research was built.
Over the following decades, gatekeeping theory underwent progressive theoretical elaboration. Shoemaker (1991) developed a multi-level model that situated the individual gatekeeper within a series of concentric analytical levels — individual, routine, organizational, extra-media, and ideological — each exerting constraining influence on the selection process. This model represented a decisive advance beyond White’s individualistic framework, demonstrating that gatekeeping is not reducible to the preferences of individual decision-makers but is systematically shaped by institutional routines, organizational hierarchies, economic pressures, and broader ideological formations. Shoemaker and Vos (2009) further refined this framework, introducing the concept of “audience gatekeeping” to acknowledge the growing capacity of media consumers to participate in the selection and dissemination of information, anticipating the participatory dynamics that would become central to the digital media environment.
A critical turning point in the evolution of gatekeeping theory was Barzilai-Nahon’s (2008) reconceptualization of gatekeeping as a networked phenomenon. Observing that the internet had introduced a communicative architecture fundamentally different from the one-to-many broadcast model assumed by classical gatekeeping theory, Barzilai-Nahon proposed a “Network Gatekeeping Theory” that foregrounded the role of network position, information control mechanisms, and the political relationship between gatekeepers and the “gated” — those whose access to information is controlled by gatekeeping processes. This reconceptualization was significant not only for its analytical innovation but for its normative implications: by identifying the “gated” as a distinct analytical category with interests potentially opposed to those of the gatekeeper, Network Gatekeeping Theory introduced questions of power, accountability, and legitimacy that had been relatively underdeveloped in earlier formulations. Subsequent work by Singer (2014) on “secondary gatekeeping” and by Bruns (2018) on “gate watching” further extended the theory to accommodate the participatory dynamics of social media, in which users select, share, and amplify content in ways that complement and contest institutional editorial authority.
Table 1. Key Milestones in the Evolution of Gatekeeping Theory (1947–2018).
Period | Scholar(s) | Key Contribution | Analytical Focus
1947 | Lewin | Channel-and-gate metaphor for information flow | Conceptual foundation
1950 | White | Application to news selection (“Mr. Gates” case study) | Individual gatekeeper
1991 | Shoemaker | Multi-level model of gatekeeping influences | Individual, routine, organizational, extra-media, ideological
2008 | Barzilai-Nahon | Network Gatekeeping Theory | Network position; gatekeeper–gated relations
2009 | Shoemaker & Vos | Audience gatekeeping; updated multilevel theory | Audience as active gatekeepers
2014 | Singer | Secondary gatekeeping in shared media spaces | User-generated visibility
2018 | Bruns | Gate watching and news curation | Participatory observation and redistribution
2018 | Bucher | Algorithmic power and programmed sociality | Computational/algorithmic gatekeeping
Note. Adapted and extended from Shoemaker and Vos (2009). Full citations for each entry appear in the reference list.

3.2. The Agenda-Setting Tradition

Agenda-setting theory was established by McCombs and Shaw’s (1972) study of undecided voters in Chapel Hill, North Carolina, during the 1968 presidential election. The study demonstrated a strong correlation between the issues emphasized in local news coverage and the issues that voters identified as most important, providing empirical support for the hypothesis that media do not tell people what to think but rather what to think about. This foundational insight — that media influence operates primarily through the conferral of salience rather than the direct persuasion of opinion — has been replicated in hundreds of subsequent studies and has become one of the most robust findings in communication research.
The evolution of agenda-setting theory has proceeded through several identifiable phases. The “first level” of agenda-setting, established by the original Chapel Hill study, concerns the transfer of object salience — the capacity of media to make certain issues more prominent in public consciousness. The “second level,” developed by McCombs and colleagues in the 1990s, extends the analysis to attribute salience — the capacity of media to shape not only which objects the public thinks about but which attributes of those objects are perceived as most important (McCombs & Shaw, 1993). A newspaper that covers a political candidate primarily in terms of leadership qualities, for example, transfers salience not only to the candidate as an object of attention but to leadership quality as the salient attribute through which the candidate is evaluated.
The emergence of digital media prompted the articulation of a “third level” of agenda-setting, known as Network Agenda Setting (NAS), which examines how media construct associative networks linking multiple objects and attributes, shaping not only what the public thinks about and the terms in which it thinks but the cognitive associations through which discrete issues are connected into coherent interpretive frameworks (Guo & McCombs, 2016). NAS represents a significant theoretical advance, moving agenda-setting from a dyadic model of salience transfer to a network model of cognitive architecture in which the relationships among agenda items are as consequential as the items themselves. More recently, Vargo and Guo (2017) extended the NAS model to the analysis of intermedia agenda-setting across digital platforms, demonstrating that the associative networks through which issues are cognitively linked are increasingly shaped by the algorithmic logic of platform-mediated information flows. In the digital environment, the “bundling” of issues — the cognitive association of immigration with crime, or climate change with economic loss, for example — is not merely a function of editorial framing but of algorithmic co-presentation, in which recommendation engines systematically present certain issues in conjunction with certain others based on engagement optimization rather than journalistic judgement.
Table 2. Levels of Agenda-Setting Theory.
Level | Core Concept | Key Scholars | Mechanism
First level | Object salience transfer | McCombs and Shaw (1972) | Media emphasis on issues → public perception of issue importance
Second level | Attribute salience transfer | McCombs and Shaw (1993) | Media emphasis on attributes → public perception of attribute importance
Third level (NAS) | Network agenda setting | Guo and McCombs (2016); Vargo and Guo (2017) | Media construction of associative issue–attribute networks → public cognitive networks
Note. Adapted from McCombs and Shaw (1993) and Guo and McCombs (2016).
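To make the network logic of third-level agenda setting concrete, the following minimal sketch (in Python, using invented issue tags that do not come from any of the cited studies) builds the kind of weighted issue co-occurrence network that NAS researchers compare against survey-derived public cognitive networks.

```python
from collections import Counter
from itertools import combinations

# Hypothetical issue tags attached to individual news items (illustrative only).
stories = [
    {"immigration", "crime"},
    {"immigration", "economy"},
    {"climate change", "economy"},
    {"immigration", "crime", "economy"},
    {"climate change", "energy"},
]

# First-level salience: how often each issue appears at all.
object_salience = Counter(issue for story in stories for issue in story)

# Third-level (NAS) salience: how often pairs of issues are co-presented,
# i.e. the weighted edges of the media's associative issue network.
edge_weights = Counter(
    pair
    for story in stories
    for pair in combinations(sorted(story), 2)
)

print("Object salience:", object_salience.most_common())
print("Issue co-occurrence network (edge weights):")
for (a, b), weight in edge_weights.most_common():
    print(f"  {a} / {b}: {weight}")
```

In published NAS work, a media network constructed along these lines is typically correlated against an equivalent network built from audience survey responses; the point of the sketch is simply that the unit of analysis shifts from issue frequencies to issue-to-issue ties.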

3.3. Points of Pre-Digital Convergence

Even before the digital transformation, perceptive scholars identified structural parallels between gatekeeping and agenda-setting that pointed toward their eventual convergence. Both traditions address the fundamental communicative process through which certain information achieves public visibility while other information remains obscure. Both recognize that this process is not neutral but systematically shaped by institutional structures, professional norms, and power relations. Both implicitly acknowledge that the selection of information (gatekeeping) and the conferral of importance upon it (agenda-setting) are functionally inseparable — an item that passes through the gate simultaneously acquires a degree of public salience by virtue of its selection.
What prevented the explicit integration of these traditions in the pre-digital era was the institutional architecture of the legacy media system. In a media environment characterized by a relatively small number of broadcasting and publishing institutions, each employing professional journalists governed by shared normative frameworks, the gatekeeping function was institutionally localized — it occurred within identifiable organizations whose decision-making processes could be studied through ethnographic, interview, and content-analytic methods. The agenda-setting function, by contrast, was diffused across the entire media system and manifested in its cumulative effects on public opinion. The institutional separation of these functions — one inside the newsroom, the other outside in the domain of public cognition — provided a pragmatic basis for their analytical separation, even as the theoretical grounds for that separation were questionable.
The digital transformation dissolved this pragmatic basis by dissolving the institutional boundaries that sustained it. When algorithms replace editors, when users become distributors, and when platform architecture determines the conditions of information visibility, the gatekeeping function is no longer institutionally localized, and its agenda-setting consequences are no longer analytically separable from the filtering processes that produce them. The remainder of this paper examines this dissolution and its implications.

4. The Digital Media Landscape: Key Transformations (2000–2025)

The early 21st century ushered in a period of rapid and profound transformation in the media environment, reshaping how information is produced, disseminated, and consumed. As digital technologies matured and platforms emerged to challenge the dominance of traditional media, the foundations of public communication were fundamentally altered. This section provides an overview of the pivotal changes that redefined the media landscape between 2000 and 2025, highlighting the shift from institutionally controlled channels to algorithmically mediated networks. By tracing these developments, we can better understand the evolving dynamics of gatekeeping and agenda-setting in a world where technology companies and digital platforms play an increasingly central role in shaping public discourse.

4.1. From Institutional to Algorithmic Mediation

The period under examination witnessed a transformation in the architecture of public communication that is without precedent in the history of media. At the turn of the millennium, the dominant intermediaries between information producers and publics were institutional media organizations — newspapers, television networks, radio stations — staffed by professional journalists whose gatekeeping decisions were governed by established norms of newsworthiness, editorial standards, and institutional accountability. By 2025, these organizations, while still significant participants in the information ecosystem, had been substantially displaced as the primary intermediaries of public discourse by a small number of technology companies whose platforms — Meta (formerly Facebook), X (formerly Twitter), Google (including YouTube), TikTok, and a handful of others — governed the visibility of information for billions of users through proprietary algorithmic systems (Napoli, 2019; Nechushtai, 2018).
Table 3. Phases of Digital Media Transformation (2000–2025).
Phase | Period | Defining Features | Predominant Gatekeeping Mode | Predominant Agenda-Setting Mode
Phase 1 | 2000–2008 | Digitization of legacy media; emergence of blogs and early social platforms | Primarily institutional, with emergent participatory elements | Predominantly unidirectional; legacy media dominant
Phase 2 | 2008–2016 | Ascendancy of social media; mobile internet proliferation; algorithmic news feeds | Migration from editorial to algorithmic selection; hybrid models | Reciprocal inter-media influence; platform-mediated salience
Phase 3 | 2016–2025 | Platform scrutiny; regulatory responses (DSA, AI Act); AI-driven curation | Algorithmic dominance with platform governance overlay | Fragmented micro-agendas; personalized salience hierarchies
Note. Synthesized from Chadwick (2017) and Napoli (2019).
The first phase, spanning roughly from 2000 to 2008, was characterized by the digitization of legacy media and the emergence of early participatory platforms, including blogs, forums, and early social networking sites. During this period, the internet was frequently conceptualized as a supplement to existing media structures rather than as a qualitatively new communicative environment. Legacy media organizations established online presences that largely reproduced their editorial hierarchies and professional norms in digital form, and audience participation, while technically enabled, remained marginal to the primary circuits of information production and dissemination (Singer, 2014).
The second phase, from approximately 2008 to 2016, witnessed the rapid ascendancy of social media platforms as major nodes in the information ecosystem. The proliferation of smartphones, the expansion of mobile internet access, and the development of algorithmically curated news feeds transformed platforms from social networking tools into primary channels of news consumption for large segments of the global population (Fletcher & Nielsen, 2018). During this period, the gatekeeping function began its migration from human editorial decision-makers to algorithmic systems, as platform companies discovered that algorithmically personalized content feeds generated substantially higher levels of user engagement — and therefore advertising revenue — than chronologically ordered or editorially curated alternatives (Pariser, 2011).
The third phase, from 2016 to the present, has been defined by the recognition — among scholars, policymakers, and publics — that the algorithmic mediation of information carries profound consequences for democratic governance, social cohesion, and epistemic integrity. The role of social media platforms in the 2016 United States presidential election, the Brexit referendum, and a series of subsequent political upheavals across the globe catalyzed critical scrutiny of the mechanisms through which algorithmic systems shape public discourse (Napoli, 2019). This scrutiny has generated a rapidly expanding body of scholarship on platform governance, algorithmic accountability, and the democratic implications of computationally mediated communication — scholarship that provides the empirical foundation for the theoretical synthesis undertaken in this paper.

4.2. Hybrid Media Dynamics and the Dissolution of Categorical Boundaries

The concept of the “hybrid media system,” introduced by Chadwick (2017), provides an essential analytical framework for understanding the contemporary information landscape. Chadwick argues that the media environment is characterized not by the simple displacement of “old” media by “new” media but by the ongoing interaction, competition, and mutual adaptation of diverse media logics — broadcast and networked, professional and participatory, institutional and algorithmic — within a single, complex communicative ecosystem. In this hybrid system, the boundaries between media production and consumption, between editorial selection and algorithmic curation, and between institutional authority and participatory influence are not fixed but fluid, contested, and continuously renegotiated.
The hybrid media framework has profound implications for both gatekeeping and agenda-setting theory. For gatekeeping, it suggests that the relevant unit of analysis is no longer the individual gatekeeper or the individual media organization but the entire assemblage of human and non-human actors — journalists, editors, algorithms, platform policies, user behaviors, regulatory frameworks — that collectively determine the conditions of information visibility. For agenda-setting, it suggests that the “media agenda” is no longer a singular, unified construct that can be measured through content analysis of a finite set of media outlets but a fragmented, platform-specific, and algorithmically personalized phenomenon that varies across audiences, platforms, and temporal contexts (Thorson & Wells, 2016).
The dissolution of categorical boundaries between media types has also generated new forms of inter-media agenda-setting. Research by Harder et al. (2017) demonstrates that the directional flow of agenda influence between legacy media and digital platforms is not unidirectional but reciprocal and dynamic, with issues originating on social media platforms increasingly setting the agenda for legacy media coverage and vice versa. The speed of this reciprocal influence has accelerated dramatically over the period under examination, with social media capable of elevating issues to national prominence within hours — a temporal compression that has fundamentally altered the rhythms of public discourse and the capacity of media professionals to exercise deliberative editorial judgement.

4.3. Platform Governance as Structural Gatekeeping

A critical analytical development of the post-2016 period has been the recognition that the governance structures of digital platforms — their terms of service, content moderation policies, and algorithmic design choices — function as structural gatekeeping mechanisms with consequences as far-reaching as those of any editorial decision (Gillespie, 2018). Platform governance determines not only what content is visible but the conditions under which visibility is possible: the criteria by which content is recommended or suppressed, the standards by which it is evaluated for compliance with community guidelines, and the mechanisms through which users can contest moderation decisions.
Empirical research has increasingly documented the consequential nature of these governance structures. Roberts (2019) demonstrates that content moderation on major platforms functions not merely as a technical mechanism for removing harmful material but as a labor-intensive and politically significant process that shapes the informational environment in accordance with platform priorities that are frequently opaque to external observers. Siapera (2014) documents how the platform-governed mediation of the Palestinian political situation reveals the ways in which ostensibly universal community standards may function as instruments of epistemic exclusion when applied to communications emerging from contexts of conflict and occupation. Similarly, Golovchenko et al. (2018) show how the information warfare over the Ukraine conflict was shaped by the interaction of state actors, media institutions, and citizen curators of digital disinformation operating within and against platform governance structures.
These cases underscore a broader analytical point: platform governance constitutes a form of gatekeeping that is structurally distinct from both traditional editorial gatekeeping and the emergent algorithmic gatekeeping discussed above. Where editorial gatekeeping operates through the professional judgement of identifiable individuals and algorithmic gatekeeping operates through computational processes optimized for engagement, platform governance gatekeeping operates through the establishment of systemic rules and norms that define the parameters within which both editorial and algorithmic gatekeeping occur. It is, in this sense, a form of meta-gatekeeping — gatekeeping over the conditions of gatekeeping itself (Gillespie, 2018).

5. Empirical Analysis and Contemporary Data Synthesis (2000–2025)

This section provides a comprehensive empirical analysis of the evolving mechanisms of information filtering, gatekeeping, and agenda-setting within the media ecosystem from 2000 to 2025. Drawing upon recent scholarship, industry data, and comparative trends across different platform architectures, the analysis synthesizes developments that have redefined the relationships among information producers, intermediaries, and audiences. The period under review encompasses the transition from the digitization of legacy media to the rise of algorithmically curated social platforms and the subsequent emergence of platform governance as a pivotal force in shaping public discourse. Through a combination of case studies, comparative tables, and critical review of the literature, this section illuminates the complex and rapidly changing dynamics that structure contemporary information visibility and salience.

5.1. Comparative Trends in Information Filtering Across Platform Architectures

Empirical investigation across the period 2000 to 2025 reveals a fundamental and accelerating transformation in how information is prioritized, filtered, and disseminated to publics. The shift was not merely quantitative — more content, more channels, more users — but qualitative, involving a structural reconfiguration of the relationships among information producers, intermediaries, and audiences. By the early 2010s, platforms such as Facebook and Twitter had established themselves as significant nodes in the information ecosystem, capable of shaping the visibility of content in ways that paralleled and increasingly rivalled the editorial authority of legacy media institutions (Nechushtai, 2018). By 2025, the algorithmic systems governing these platforms had assumed a gatekeeping function of historically unprecedented scale and complexity.
Platform architecture constrains the logic of information filtering in distinctive and consequential ways. Research into hybrid media platforms provides evidence that journalism in digital contexts faces a structural double bind: professional norms developed under conditions of institutional autonomy are forced to coexist with platform-based algorithmic constraints that operate according to fundamentally different logics of value (Nielsen & Ganter, 2018). The affordances of each platform type shape not only what content is visible but how that content is consumed, shared, and interpreted by audiences, creating platform-specific information ecologies that resist generalization.
Table 4. Comparative Gatekeeping Mechanisms Across Platform Types.
Platform Type | Primary Gatekeeping Mechanism | Filtering Logic | Salience Criteria | Illustrative Example
Legacy media (print/broadcast) | Editorial selection by professional journalists | Professional news values (newsworthiness) | Timeliness, impact, proximity, prominence | Newspaper front-page placement
Social media (Meta, X) | Content moderation + algorithmic ranking | Community standards + engagement optimization | Engagement signals, policy compliance | Facebook News Feed algorithm
Video platforms (YouTube, TikTok) | Recommendation engine | Behavioral prediction + personalization | Watch time, individual relevance, retention | TikTok “For You” page
Collaborative platforms (Wikipedia) | Community editing + consensus processes | Verifiability, notability, neutral point of view | Collective editorial judgement | Wikipedia article prominence and deletion policy
Search engines (Google) | Algorithmic indexing + ranking | Relevance scoring, authority signals | PageRank, content freshness, user signals | Google Search results ordering
Note. Adapted from Gillespie (2018) and Diakopoulos (2019).
In the case of Meta and X, gatekeeping is primarily enacted through content moderation policies and the selective algorithmic downgrading of content deemed to violate community standards or to be of low quality. Research on these platforms demonstrates that content moderation functions not merely as a technical intervention but as a consequential mechanism that promotes particular forms of engagement and reinforces platform governance objectives (Roberts, 2019). For algorithm-intensive platforms such as TikTok, the gatekeeping mechanism is embedded within the recommendation engine itself, which curates content aligned with deeply personal identities, preferences, and behavioral histories through what Bucher (2018) analyses as “programmed sociality”: the algorithmic shaping of social relations and information exposure according to computational logics that are largely invisible to users. This represents a qualitative departure from the gatekeeping logic of legacy media. Where traditional editors selected content based on professionally defined criteria of newsworthiness — timeliness, proximity, impact, prominence — the TikTok algorithm selects content based on predicted individual relevance, effectively replacing collective newsworthiness with individualized pertinence at the foundational level of information filtering.
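The substitution described above can be illustrated schematically. The sketch below is not any platform's actual ranking logic; it simply contrasts a newsworthiness score that is identical for every audience member with an engagement prediction that varies by user, using invented item attributes and weights.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A content item with hypothetical editorial and behavioral attributes."""
    headline: str
    timeliness: float            # 0-1, recency of the underlying event
    impact: float                # 0-1, how many people are affected
    proximity: float             # 0-1, geographic or cultural closeness to the audience
    prominence: float            # 0-1, prominence of the actors involved
    predicted_watch_time: float  # model-estimated seconds a given user would watch
    predicted_share_prob: float  # model-estimated probability the user shares it

def editorial_score(item: Item) -> float:
    """Collective newsworthiness: the same score for every member of the audience."""
    return item.timeliness + item.impact + item.proximity + item.prominence

def engagement_score(item: Item, user_affinity: float) -> float:
    """Individualized pertinence: varies with the behavioral profile of each user."""
    return user_affinity * (item.predicted_watch_time / 60 + item.predicted_share_prob)

items = [
    Item("City council passes budget", 0.9, 0.7, 0.9, 0.4, 12.0, 0.01),
    Item("Celebrity feud escalates", 0.8, 0.1, 0.3, 0.9, 55.0, 0.20),
]

# The same inventory yields different orderings under the two logics.
by_newsworthiness = sorted(items, key=editorial_score, reverse=True)
by_engagement = sorted(items, key=lambda i: engagement_score(i, user_affinity=0.8), reverse=True)
print("Editorial ranking: ", [i.headline for i in by_newsworthiness])
print("Engagement ranking:", [i.headline for i in by_engagement])
```

Under the first function every reader receives the same ordering; under the second, the ordering is a joint property of the item and the individual user.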
The agenda-setting implications of this substitution are profound. When the primary criterion for information visibility is individual relevance rather than collective importance, the structural conditions for the formation of a shared public agenda are undermined (Sunstein, 2017). The classical agenda-setting model presupposed a common informational environment in which diverse publics were exposed to a broadly similar set of issues and events, enabling the formation of collective judgements about matters of shared concern. The algorithmic personalization of information feeds fragments this common environment into millions of individualized micro-environments, each reflecting the preferences, behaviors, and demographic characteristics of its individual user (Pariser, 2011; Thorson & Wells, 2016). The public agenda, under these conditions, is no longer a collective construct but an aggregation of individual agendas whose points of convergence are incidental rather than structurally produced.

5.2. Algorithmic Bias, Salience Distortion, and Democratic Cognition

Algorithmic bias — defined as the systematic and inequitable distortion of information visibility resulting from the design, training, or deployment of automated systems — carries direct and measurable consequences for the construction of public salience hierarchies. The empirical literature demonstrates convincingly that algorithms do not filter information in a neutral or transparent manner; they prioritize information on the basis of features and optimization objectives that may diverge substantially from criteria of public interest, democratic relevance, or informational accuracy. Research on exposure to online political information provides a particularly instructive illustration of these dynamics. Guess et al. (2020) demonstrate that algorithmically mediated exposure to untrustworthy websites during the 2016 United States presidential election was concentrated among specific demographic groups, with older and ideologically extreme users disproportionately exposed to low-quality information. The mechanism is consequential: algorithms trained to maximize engagement metrics systematically favor content that provokes emotional responses over content that informs deliberative judgement, with cumulative consequences for the quality of democratic citizenship.
Table 5. Selected Empirical Evidence of Algorithmic Salience Distortion.
Source | Context | Key Finding | Implication for Salience
Guess et al. (2020) | 2016 US presidential election | Exposure to untrustworthy websites was concentrated among older, ideologically extreme users | Algorithmic exposure creates demographically stratified salience hierarchies
Bender et al. (2021) | Large language model deployment | LLMs amplify biases embedded in training data, including racial and gender biases | AI-mediated curation systematically reproduces inequitable visibility patterns
Benkler et al. (2018) | US political media ecosystem | Asymmetric polarization in media consumption driven by platform and network dynamics | Structural asymmetries in salience conferral across political orientations
Starbird et al. (2019) | Mass shooting events | Alternative media ecosystems produce coordinated counter-narratives through participatory disinformation | Strategic actors exploit algorithmic amplification for agenda manipulation
Cushion and Thomas (2018) | UK election coverage | Media ownership structures and platform logics shape candidate attribute visibility | Commercial and ownership interests distort political salience hierarchies
Note. Compiled by the author from the cited sources. Full references appear in the reference list.
Technical mechanisms through which algorithmic bias operates are increasingly being made visible through methodological innovation. Diakopoulos (2019) documents how algorithm auditing techniques — including sock-puppet audits, crowdsourced audits, and reverse-engineering approaches — have enabled researchers to identify the specific input variables that determine visibility outcomes in news recommendation systems. Large-scale analyses assessing the capabilities of large language models reveal that these systems can amplify pre-existing biases embedded in training data, including biases related to gender, race, institutional prestige, and political orientation, in ways that are both systematic and difficult to detect through conventional quality assurance (Bender et al., 2021). When such systems are deployed to curate information feeds for millions of users, the resulting distortion of salience acquires a structural character that transcends individual instances of bias and becomes a systemic feature of the information environment.
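The general logic of the sock-puppet approach can be sketched as follows. The recommender here is a stand-in function with invented behavior, since the point of a real audit is precisely that the platform system can only be observed from the outside; none of the cited audits are reproduced.

```python
import random
from collections import Counter

def blackbox_recommender(profile: dict, n: int = 10) -> list[str]:
    """Stand-in for an opaque platform recommender; in a real audit this would be
    live platform output collected through controlled accounts."""
    pool = ["policy analysis", "emotive outrage", "entertainment", "low-quality source"]
    weights = [1, 1 + 3 * profile["outrage_affinity"], 2, 1 + 2 * profile["outrage_affinity"]]
    return random.choices(pool, weights=weights, k=n)

# Sock puppets: synthetic user profiles that differ only on the attribute of interest.
personas = {
    "low_outrage_persona": {"outrage_affinity": 0.1},
    "high_outrage_persona": {"outrage_affinity": 0.9},
}

random.seed(42)  # make the illustrative run repeatable
for name, profile in personas.items():
    feed = blackbox_recommender(profile, n=200)
    exposure = Counter(feed)
    share_low_quality = exposure["low-quality source"] / len(feed)
    print(f"{name}: share of low-quality recommendations = {share_low_quality:.2%}")
```

Because the system under audit cannot be inspected directly, differences in aggregate exposure across otherwise identical personas are the principal evidence from which researchers infer how input variables map onto visibility outcomes.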
Analysis of media coverage patterns during recent United Kingdom general elections reveals that media ownership structures and platform-driven logics of content optimization materially influence the salience of political candidates, with certain attributes — leadership style, personal background, policy positions — systematically emphasized or diminished in accordance with the engagement characteristics of different platform environments (Cushion & Thomas, 2018). This evidence suggests that in the algorithmic media environment, the classical “transfer of salience” identified by McCombs and Shaw (1972) is simultaneously a “transfer of bias,” wherein the optimization objectives of the algorithmic gatekeeper become, through recursive feedback, the cognitive priorities of the public.
The broader American political media ecosystem provides a particularly well-documented case of these dynamics operating at scale. Benkler et al. (2018) demonstrate that the networked architecture of online political communication, combined with platform-based amplification dynamics and audience engagement patterns, facilitated the emergence of an asymmetrically polarized media ecosystem in which disinformation and strategically crafted political narratives achieved national salience. Starbird et al. (2019) document how the participatory nature of strategic information operations — in which peripheral actors coordinate the production and dissemination of alternative narratives around mass shooting events — exploits the amplification dynamics of algorithmic platforms to elevate fringe interpretations to the level of publicly salient counter-narratives. These cases demonstrate that algorithmically mediated salience distortion represents not merely a technical artefact but a systemic feature of contemporary public discourse with demonstrable consequences for democratic deliberation.

5.3. Citizen Journalism, Decentralized Networks, and Counter-Hegemonic Agenda-Setting

The emergence of citizen journalism and distributed news networks offers a counter-narrative to the concentration of gatekeeping power in algorithmic platforms, demonstrating that the digital media environment retains meaningful — if structurally constrained — possibilities for participatory and counter-hegemonic agenda-setting. In contexts where institutional media are constrained by state censorship, commercial capture, or platform governance policies, distributed gatekeeping has enabled what might be termed “insurgent” agendas to surface and achieve public visibility.
The empirical record provides numerous illustrations. Siapera (2014) documents how Palestinian communicators navigating platform community standards have developed adaptive strategies for maintaining informational presence despite moderation policies that frequently restrict content related to the Palestinian political situation. Golovchenko et al. (2018) demonstrate how the information environment surrounding the Ukraine conflict was shaped by distributed networks of citizen curators who selectively amplified, reframed, and contested the narratives produced by both state media and institutional journalism. In China during the early 2000s, netizens engaged in citizen journalism to construct an alternative information ecosystem that challenged the narrative monopoly of state-run media, demonstrating the capacity of decentralized communication to create spaces for political discourse even within authoritarian environments (Hassid, 2012).
Platforms such as Wikipedia have been shown to interact with global news media to generate what Mesgari et al. (2015) describe as a collaborative knowledge ecosystem — a form of decentralized agenda network in which the salience of topics is determined through collaborative editing processes that aggregate the judgements of thousands of contributors rather than the decisions of a small number of institutional gatekeepers. The Arab Spring uprisings provide a particularly instructive case: civil society actors deployed decentralized communication technologies — social media, mobile phones, encrypted messaging applications — to resist autocratic control over the information environment and construct political agendas independent of state-controlled channels (Howard & Hussain, 2013).
However, the empirical evidence also reveals significant limitations of citizen journalism as an alternative to institutional gatekeeping. During the COVID-19 pandemic, audiences demonstrated continued and, in many cases, increased reliance on traditional news media for fact verification and editorial safeguarding, suggesting that professional gatekeeping retains a form of public legitimacy that citizen journalism has not yet achieved (Newman et al., 2022). Moreover, even within the most structurally distributed networks, platforms exercise what might be termed “shadow gatekeeping.” Case studies of open online collectives demonstrate that while participation is actively encouraged through platform design and rhetorical framing, the salience of contributions is ultimately determined by structural features of the platform, including the visibility decisions of administrators and the algorithmic ordering of content (Shaw, 2012). Technology firms shape the parameters of political communication through infrastructure provision, data analytics, and partnership arrangements that concentrate consequential decision-making authority within platform companies even as the surface dynamics of participation appear decentralized (Kreiss & McGregor, 2018). The decentralized surface of participatory platforms frequently conceals concentrated decision-making authority, reproducing in new technological forms the gatekeeping hierarchies they ostensibly seek to displace (Zuboff, 2019).
These findings suggest that the digital media environment does not present a simple binary between centralized institutional gatekeeping and decentralized participatory openness but rather a complex ecology in which multiple modes of gatekeeping and agenda-setting coexist, interact, and compete. The challenge for theoretical models is to capture this complexity without either romanticizing the democratizing potential of decentralized networks or dismissing them as epiphenomenal to the dominant logic of algorithmic platform power.

6. Synthesizing Gatekeeping and Agenda-Setting Theories in the Algorithmic Ecosystem

In the rapidly evolving digital media landscape, the boundaries between gatekeeping and agenda-setting have become increasingly blurred. Traditional media models treated these processes as distinct: gatekeeping involved the selection and filtering of information by journalists and editors, while agenda-setting referred to the prioritization of issues and topics for public attention. However, the rise of algorithmic platforms has reconfigured these roles, embedding both selection and salience within complex, automated systems. This section explores how the convergence of gatekeeping and agenda-setting functions in the algorithmic ecosystem necessitates new theoretical frameworks and analytical tools. By examining the recursive, distributed, and technologically mediated mechanisms of information visibility and prominence, we aim to synthesize these foundational concepts and account for the new dynamics shaping public discourse in an era defined by computational curation.

6.1. The Convergence Thesis

The empirical patterns examined in the preceding sections collectively substantiate the central argument of this paper: that the digital transformation of the media landscape has produced a functional convergence of gatekeeping and agenda-setting that demands theoretical integration. In the algorithmic media environment, the act of filtering information and the act of conferring salience upon it are no longer sequential processes performed by distinct actors within distinct institutional settings. They are simultaneous, recursive, and frequently indistinguishable operations performed by computational systems whose design and optimization objectives embed specific — if often unacknowledged — assumptions about what information should be visible and to whom (Bucher, 2018).
This convergence is manifest across multiple analytical dimensions. At the level of mechanism, algorithmic recommendation systems simultaneously perform the gatekeeping function of selecting which content is presented to users and the agenda-setting function of determining the relative prominence and perceived importance of that content. At the level of structure, the platform architectures within which these algorithms operate constitute the institutional framework for both information filtering and salience conferral, replacing the newsroom as the primary site at which the conditions of public visibility are determined (Nechushtai, 2018). At the level of outcome, the personalized information environments produced by algorithmic curation simultaneously reflect the filtering decisions of the algorithm (gatekeeping) and shape the salience perceptions of the user (agenda-setting), in a recursive loop that defies the linear, sequential logic of classical communication models.
The implications of this convergence for existing theoretical frameworks are substantial. Classical gatekeeping theory assumed that the gate was a discrete decision point at which information was either admitted or rejected by an identifiable human agent. In the algorithmic environment, the “gate” is a continuous, probabilistic process in which information is not simply admitted or rejected but assigned a visibility score that determines its position within a personalized content feed — a position that may change dynamically in response to real-time user behavior and engagement signals (Diakopoulos, 2019). Classical agenda-setting theory assumed that salience was transferred from a unified media agenda to a mass public through repeated exposure to common content. In the algorithmic environment, there is no unified media agenda, no mass public, and no common content — only a multiplicity of algorithmically constituted micro-agendas tailored to individual users based on predicted relevance and engagement potential (Thorson & Wells, 2016).

6.2. Salience Agency: An Integrative Framework

The foregoing analysis motivates the introduction of “salience agency” as a unifying conceptual framework for analyzing the convergence of gatekeeping and agenda-setting in the algorithmic media environment. Salience agency is defined as the capacity of any actor — human, institutional, or algorithmic — to influence the visibility, prominence, and perceived importance of information within the public communication system. Unlike traditional conceptualizations of gatekeeping authority, which locate agency in identifiable decision-makers operating within institutional hierarchies, salience agency is distributed, recursive, and emergent. It encompasses the micro-gatekeeping acts of individual users (sharing, liking, commenting), the meso-level curation performed by platform algorithms (recommendation, moderation, ranking), and the macro-level agenda-setting influence of institutional media, regulatory bodies, and civil society organizations.
Table 6. Dimensions of Salience Agency: An Analytical Framework.
Dimension | Description | Guiding Analytical Question
Agent identity | Human individual, institutional actor, or algorithmic system | Who or what exercises influence over information visibility?
Mechanism of influence | Editorial selection, algorithmic ranking, social sharing, regulatory intervention | Through what process is visibility determined?
Scale of effect | Individual, community, national, global | At what level does the salience effect operate?
Degree of transparency | Fully transparent, partially opaque, entirely opaque | To what extent are the criteria of visibility accessible to external scrutiny?
Normative orientation | Public interest, commercial engagement, political control, community consensus | What values or objectives guide the exercise of salience agency?
Note. Proposed by the author as an integrative analytical framework synthesizing insight from Shoemaker and Vos (2009), Barzilai-Nahon (2008), and Bucher (2018).
The concept of salience agency addresses three analytical gaps in existing literature. First, it provides vocabulary for describing the functional equivalence of diverse gatekeeping acts that operate at different scales and through different mechanisms but produce converging effects on public salience hierarchies. A journalist’s editorial decision, an algorithm’s ranking calculation, and a user’s decision to share a post are ontologically distinct acts, yet each contributes to the same aggregate outcome: the construction of what the public perceives as important. Second, salience agency captures the recursive character of contemporary information flows, in which the outputs of gatekeeping and agenda-setting processes feed back into the inputs that drive subsequent iterations. The salience loop — content visibility generates engagement, engagement generates algorithmic amplification, amplification generates further visibility — is a defining feature of the algorithmic media environment that linear models of communication fail to represent adequately. Third, the framework accommodates the normative heterogeneity of the contemporary information ecosystem, in which professional journalistic ethics, platform terms of service, community norms, and individual preferences simultaneously govern — and compete over — the criteria by which information achieves visibility.
Operationally, salience agency can be analyzed across the five dimensions set out in Table 6. By mapping empirical cases across these dimensions, researchers can generate systematic comparative analyses of salience agency across platforms, political contexts, and historical periods, identifying patterns of concentration, contestation, and change that would remain invisible within the separate analytical frameworks of gatekeeping and agenda-setting theory alone.
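As a purely illustrative operationalization, not an established instrument, the five dimensions of Table 6 can be expressed as a simple coding scheme under which heterogeneous empirical cases become directly comparable. The case entries below are invented examples.

```python
from dataclasses import dataclass
from enum import Enum

class Transparency(Enum):
    TRANSPARENT = "fully transparent"
    PARTIAL = "partially opaque"
    OPAQUE = "entirely opaque"

@dataclass
class SalienceAgencyCase:
    """One empirical case coded on the five dimensions of Table 6 (illustrative)."""
    agent_identity: str         # human individual, institutional actor, or algorithmic system
    mechanism: str              # editorial selection, algorithmic ranking, social sharing, ...
    scale: str                  # individual, community, national, global
    transparency: Transparency
    normative_orientation: str  # public interest, commercial engagement, political control, ...

cases = [
    SalienceAgencyCase("newspaper editor", "editorial selection", "national",
                       Transparency.TRANSPARENT, "public interest"),
    SalienceAgencyCase("feed-ranking algorithm", "algorithmic ranking", "global",
                       Transparency.OPAQUE, "commercial engagement"),
    SalienceAgencyCase("citizen sharing a post", "social sharing", "community",
                       Transparency.PARTIAL, "community consensus"),
]

# A comparative question the framework makes askable: which normative orientations
# dominate at which degree of transparency?
for case in cases:
    print(f"{case.agent_identity:>25} | {case.transparency.value:>20} | {case.normative_orientation}")
```

Coding cases in this fashion is one way to generate the kind of systematic cross-platform and cross-context comparison described above.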

6.3. Recursive Dynamics: The Salience Loop

The recursive character of information flows in the algorithmic environment deserves particular theoretical emphasis, as it represents perhaps the most fundamental departure from the linear, sequential logic of classical communication models. In the legacy media environment, the flow of influence was predominantly unidirectional: media organizations selected and presented information (gatekeeping), audiences formed salience perceptions on the basis of that presentation (agenda-setting), and audience feedback — in the form of ratings, circulation figures, and letters to the editor — influenced subsequent editorial decisions only indirectly and with significant temporal delay. The information cycle was measured in days and weeks, and each cycle was largely independent of its predecessors.
In the algorithmic environment, this cycle has been compressed to seconds and made continuously recursive. An algorithm presents content to a user; the user’s engagement (or non-engagement) generates data that feeds back into the algorithm, modifying the criteria by which subsequent content is selected and ranked; the modified algorithm presents new content that reflects the user’s revealed preferences as interpreted by the system; and the cycle repeats, continuously and in real time (Bucher, 2018; Pariser, 2011). Each iteration simultaneously performs gatekeeping (selecting what the user sees) and agenda-setting (shaping what the user perceives as important), with the output of each cycle serving as the input to the next.
Table 7. Classical Versus Algorithmic Models of Communication.
Feature | Classical Model | Algorithmic Model
Gate structure | Discrete decision points (admit/reject) | Continuous probabilistic visibility scoring
Gatekeeper identity | Identifiable human agents (editors, producers) | Opaque computational systems and platform policies
Audience conception | Mass public with broadly shared exposure | Fragmented micro-publics with personalized exposure
Information flow direction | Linear, predominantly unidirectional | Recursive, self-reinforcing (salience loop)
Temporal dynamics | News cycles measured in days or weeks | Real-time, continuous algorithmic updating
Feedback mechanism | Indirect and delayed (ratings, circulation, letters) | Direct and instantaneous (engagement data, behavioral signals)
Accountability framework | Professional norms, editorial codes of ethics | Terms of service, algorithmic design choices, regulatory instruments
Agenda formation | Unified media agenda transferred to public agenda | Multiple platform-specific, personalized micro-agendas
Note. Synthesized by the author from the theoretical analysis presented in this paper, drawing on Shoemaker and Vos (2009), McCombs and Shaw (1972), Bucher (2018), and Thorson and Wells (2016).
This recursive dynamic has three consequential properties. The first is path dependence: because each iteration of the salience loop builds upon the outputs of previous iterations, initial conditions — including the user’s first interactions with the platform, the content available at the time of initial engagement, and the algorithmic parameters in effect at that time — exert disproportionate influence on subsequent information exposure. The second is convergence: the optimization objectives of algorithmic systems, which are designed to maximize engagement metrics, tend to drive the salience loop toward content that is emotionally arousing, ideologically confirming, or socially validating, regardless of its informational accuracy or democratic relevance (Sunstein, 2017; Zuboff, 2019). The third is opacity: because the recursive dynamics of the salience loop operate within proprietary algorithmic systems whose internal logic is not accessible to external scrutiny, the processes through which public salience is constructed are effectively invisible to the public whose cognitive priorities they shape (Diakopoulos, 2019).
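The following minimal Python sketch is offered purely as an illustration of the first two of these properties under strongly simplifying assumptions: a toy engagement model, an arbitrary scoring rule, and synthetic content items, none of which describe any actual platform. Re-running the sketch with a different random seed illustrates path dependence, while the concentration of exposure on high-arousal items illustrates engagement-driven convergence.

```python
# Toy, highly stylized simulation of the salience loop; illustrative assumptions only.
import random

random.seed(7)  # the "initial conditions": changing the seed changes which items come to dominate

# Hypothetical content items: each has an arousal level that raises engagement probability.
items = {f"item_{i}": {"arousal": random.uniform(0.1, 0.9), "score": 1.0} for i in range(10)}

def rank(items, k=3):
    """Gatekeeping step: select the k items with the highest visibility score."""
    return sorted(items, key=lambda name: items[name]["score"], reverse=True)[:k]

def engage(item):
    """User step: engagement is more likely for emotionally arousing content."""
    return random.random() < item["arousal"]

exposure_counts = {name: 0 for name in items}
for step in range(500):                      # each pass is one iteration of the salience loop
    for name in rank(items):                 # agenda-setting step: only ranked items are seen
        exposure_counts[name] += 1
        if engage(items[name]):              # feedback step: engagement data re-enters the ranker
            items[name]["score"] *= 1.05     # amplification
        else:
            items[name]["score"] *= 0.99     # decay

top = sorted(exposure_counts, key=exposure_counts.get, reverse=True)[:3]
print("Most-exposed items:", {name: exposure_counts[name] for name in top})
```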

7. Discussion: Implications for Theory, Practice, and Policy

This section synthesizes the key findings and conceptual advancements presented in the preceding analysis, examining their broader implications across three vital domains: communication theory, media management and professional journalism, and policy and regulatory frameworks. As gatekeeping and agenda-setting converge within algorithmic media environments, the recursive dynamics of salience agency reshape how information is filtered, prioritized, and made visible to the public. The discussion addresses how these changes demand new approaches to understanding, managing, and regulating the construction of public knowledge, highlighting both the challenges and opportunities that arise for scholars, practitioners, and policymakers in a rapidly evolving digital landscape.

7.1. Theoretical Implications

The convergence of gatekeeping and agenda-setting in the algorithmic media environment necessitates a fundamental reorientation of communication theory. The salience agency framework proposed in this paper represents one such reorientation, but its implications extend beyond the specific concepts it introduces. This analysis indicates that communication scholars should shift from examining individual media effects to studying broader systemic dynamics, including how information ecosystems are formed and how public knowledge and ignorance are collectively shaped.
This shift requires methodological innovation as well as theoretical revision. The recursive dynamics of the salience loop cannot be adequately captured by the cross-sectional survey designs and content analyses that have dominated agenda-setting research, nor by the ethnographic and interview methods that have characterized much gatekeeping research. Computational methods — including agent-based modelling, network analysis, algorithm auditing, and large-scale digital trace data analysis — are essential for tracking the real-time, recursive processes through which salience is constructed in the algorithmic environment (Diakopoulos, 2019; Vargo & Guo, 2017). At the same time, these computational methods must be integrated with qualitative approaches capable of capturing the interpretive, cultural, and political dimensions of information filtering that purely quantitative analyses risk obscuring. The future of communication research in this domain lies in mixed-method designs that combine computational rigor with interpretive depth.
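As one concrete, deliberately simplified illustration of the kind of computational work envisaged here, the following Python sketch outlines a sock-puppet-style audit design: synthetic user profiles are exposed to (here, simulated) ranked feeds, each feed is converted into an issue-salience distribution, and the overlap between the resulting micro-agendas is quantified. The function fetch_ranked_feed, the issue list, and all parameters are hypothetical placeholders for whatever data-collection interface and coding scheme a real audit would employ.

```python
# Sketch of a sock-puppet-style algorithm audit; all data and interfaces are simulated.
import random
from collections import Counter

ISSUES = ["economy", "health", "climate", "crime", "immigration", "sports"]

def fetch_ranked_feed(profile_seed, n_items=50):
    """Hypothetical stub: returns the issues attached to the top-n items shown to a profile."""
    rng = random.Random(profile_seed)
    weights = [rng.random() for _ in ISSUES]   # stands in for personalization effects
    return rng.choices(ISSUES, weights=weights, k=n_items)

def issue_agenda(feed):
    """Turn a feed into a relative issue-salience distribution (a 'micro-agenda')."""
    counts = Counter(feed)
    total = sum(counts.values())
    return {issue: counts.get(issue, 0) / total for issue in ISSUES}

def agenda_overlap(agenda_a, agenda_b, top_k=3):
    """Jaccard overlap of the top-k issues: 1.0 = identical agendas, 0.0 = disjoint."""
    top = lambda agenda: set(sorted(agenda, key=agenda.get, reverse=True)[:top_k])
    return len(top(agenda_a) & top(agenda_b)) / len(top(agenda_a) | top(agenda_b))

feeds = {f"profile_{seed}": issue_agenda(fetch_ranked_feed(seed)) for seed in (1, 2, 3)}
print("Overlap profile_1 vs profile_2:", agenda_overlap(feeds["profile_1"], feeds["profile_2"]))
print("Overlap profile_1 vs profile_3:", agenda_overlap(feeds["profile_1"], feeds["profile_3"]))
```

In a real audit, the stubbed feed-collection step would be replaced by actual data collection, and the overlap measure would be one of several indicators of agenda fragmentation across algorithmically constituted micro-publics.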
The analysis also highlights the need for greater attention to the normative dimensions of communication theory. Classical gatekeeping and agenda-setting research was predominantly descriptive and explanatory, seeking to identify the mechanisms through which information was filtered and salience was conferred, without necessarily evaluating the democratic adequacy of these mechanisms. In the algorithmic environment, where the processes governing public information are opaque, commercially motivated, and demonstrably capable of distorting democratic deliberation, purely descriptive approaches are insufficient. Communication theory must develop normative frameworks capable of evaluating the democratic legitimacy of different configurations of salience agency — frameworks that specify not only how salience is constructed but how it ought to be constructed in a democratic society committed to informed self-governance (Sunstein, 2017).

7.2. Implications for Media Management and Professional Practice

For media managers and professional journalists, the convergence of gatekeeping and agenda-setting presents both challenges and opportunities. The challenge is one of institutional adaptation: media organizations developed for a communicative environment in which editorial authority was institutionally concentrated must now operate within an ecosystem in which that authority is distributed across a complex assemblage of human and algorithmic actors. This requires not only the development of new technical competencies — particularly in relation to platform analytics, algorithmic optimization, and digital audience engagement — but the negotiation of profound tensions between journalistic norms of public service and the engagement-driven logics of platform-mediated distribution (Nielsen & Ganter, 2018).
The opportunity lies in the distinctive value that professional journalism can offer within the algorithmic ecosystem. In an information environment saturated with unverified, algorithmically amplified content, the capacity for independent verification, contextual analysis, and accountable editorial judgement constitutes a form of institutional legitimacy that algorithmic systems cannot replicate. The COVID-19 pandemic demonstrated that audiences value this capacity and turn to professional media institutions in moments of crisis when the stakes of informational accuracy are perceived as high (Newman et al., 2022). Media organizations that invest in these distinctive competencies — and that communicate their value to audiences — may be better positioned to maintain relevance and public trust within the algorithmic media ecosystem.
The concept of “entrepreneurial journalism” (Vos & Singer, 2016) provides a useful framework for understanding how media organizations might adapt to the hybridized information environment. By developing platform-literate editorial strategies that leverage algorithmic dynamics while maintaining professional commitments to accuracy, fairness, and public service, media organizations can position themselves as what might be termed “salience intermediaries”: actors that mediate between the engagement-driven logic of algorithmic platforms and the deliberative requirements of democratic publics.

7.3. Policy and Regulatory Implications

The analysis presented in this paper carries substantial implications for regulatory policy. The concentration of salience agency in a small number of technology companies whose algorithmic systems govern information visibility for billions of users represents a historically unprecedented concentration of communicative power (Zuboff, 2019). This concentration is not adequately addressed by existing regulatory frameworks, which were developed for a media environment characterised by institutional pluralism, editorial accountability, and relatively transparent decision-making processes.
The European Union’s Digital Services Act (2022) and the AI Act (2024) represent significant regulatory innovations, establishing requirements for algorithmic transparency, content moderation accountability, and the assessment of systemic risks posed by very large online platforms. However, the effectiveness of these regulatory instruments depends on the development of enforcement mechanisms capable of monitoring algorithmic behaviour at scale and of assessing the democratic consequences of platform governance decisions that are technically complex, commercially sensitive, and continuously evolving.
Research on the regulation of algorithmic systems in diverse national contexts suggests that regulatory effectiveness depends critically on the availability of technical expertise within regulatory agencies, the willingness of platform companies to cooperate with external auditing processes, and the existence of independent research infrastructure capable of conducting ongoing assessments of algorithmic impact on public discourse (Napoli, 2019; Nechushtai, 2018). The salience agency framework proposed in this paper may be of practical value to regulators by providing a common analytical vocabulary for assessing how different platform architectures, algorithmic designs, and governance policies distribute the capacity to influence information visibility across different actors — and whether the resulting distribution is compatible with democratic norms of communicative pluralism and informed self-governance.
The cultivation of algorithmic literacy among citizens represents a complementary policy priority. Just as media literacy education has sought to equip citizens with the capacity to critically evaluate media messages, algorithmic literacy education would equip citizens with the capacity to understand the mechanisms through which their information environments are constructed, the commercial and political interests that shape those mechanisms, and the strategies available for diversifying their information exposure. Research on the persistence of personalization effects suggests that without such literacy, the recursive dynamics of the salience loop will continue to narrow information exposure and reinforce existing beliefs and preferences in ways that undermine the epistemic foundations of democratic deliberation (Guess et al., 2020; Pariser, 2011).
In addition, the convergence of gatekeeping and agenda-setting under algorithmic conditions confronts all stakeholders—scholars, practitioners, policymakers, and citizens—with urgent questions about the future of public knowledge and democratic communication. For scholars, the challenge is to advance theoretical and methodological tools that match the complexity and opacity of algorithmic media systems, enabling both critical diagnosis and practical guidance. For media professionals, the task is to sustain editorial values while adapting to and, where possible, shaping the technical infrastructures that increasingly mediate information access and public deliberation. For policymakers and regulators, the imperative is to design governance mechanisms that ensure transparency, accountability, and pluralism in the distribution of salience agency—balancing innovation and freedom of expression with the need to safeguard the communicative foundations of democracy.
Looking ahead, the recursive, distributed, and technically mediated character of the salience loop means that interventions at any single point—whether technical, professional, regulatory, or educational—are unlikely to be sufficient on their own. Instead, sustained progress will require collaborative and cross-sectoral efforts that integrate expertise from computer science, social science, law, and media practice. Interdisciplinary dialogue will be essential to anticipate emerging risks, evaluate the effects of new platform designs, and articulate normative standards for the construction of public salience in ways that are both democratically legitimate and technologically feasible.
Moreover, as algorithmic systems continue to evolve and as new forms of media emerge, ongoing empirical research will be needed to map changing patterns of information visibility, identify new sites of agency and influence, and assess the real-world effects of policy interventions and professional innovations. This includes not only large-scale quantitative assessments of information flows but also qualitative studies of how individuals and communities experience, interpret, and contest the salience decisions made by algorithmic platforms. Only through such comprehensive and adaptive inquiry can the field hope to address the profound challenges—and seize the opportunities—of the algorithmic media environment.
Ultimately, the goal must be to support the development of information environments that empower citizens to make informed, autonomous judgments about matters of public concern, foster inclusive and pluralistic public spheres, and sustain the deliberative capacities upon which democratic self-governance depends. The salience agency framework, by foregrounding the complex interplay of technical, institutional, and normative forces in the construction of public knowledge, provides one foundation for this ongoing collective project. Its further refinement and application will be crucial as societies grapple with the communicative realities of the digital age.

8. Conclusion

The trajectory identified in this paper — from human-controlled gates to algorithmically modulated visibility, from collective public agendas to personalized micro-agendas, from institutional accountability to platform opacity — represents not the end of gatekeeping and agenda-setting as communicative phenomena but their transformation into forms that demand new theoretical tools, new empirical methodologies, and new normative commitments. The twenty-five-year period examined here has witnessed a structural reconfiguration of the processes through which democratic publics come to perceive certain issues as important and others as invisible, and this reconfiguration carries consequences that extend far beyond the boundaries of academic communication theory.
The salience agency framework proposed in this paper offers one integrative vocabulary for analyzing the distributed, recursive, and technically mediated processes that characterize the algorithmic media environment. By reconceptualizing gatekeeping and agenda-setting as dimensions of a single, unified process — the construction of information visibility — rather than as separate theoretical traditions addressing discrete communicative phenomena, the framework enables a more comprehensive understanding of how public salience is produced, contested, and transformed in the digital age. Its five analytical dimensions — agent identity, mechanism of influence, scale of effect, degree of transparency, and normative orientation — provide a structured basis for empirical investigation and comparative analysis across platforms, political contexts, and historical periods.
The empirical evidence reviewed in this paper demonstrates that the algorithmic mediation of information visibility is neither politically neutral nor democratically benign. Algorithmic gatekeeping systematically favors engagement over accuracy, emotional arousal over deliberative substance, and commercial profitability over public interest (Benkler et al., 2018; Zuboff, 2019). The recursive dynamics of the salience loop amplify these tendencies over time, creating self-reinforcing patterns of information exposure that narrow rather than broaden the epistemic horizons of democratic citizens. The concentration of salience agency in a small number of technology companies whose decision-making processes are opaque to external scrutiny represents a structural challenge to the communicative foundations of democratic governance.
Addressing these challenges will require sustained effort across multiple domains. Communication scholars must develop theoretical frameworks and empirical methodologies adequate to the complexity of the algorithmic media environment, integrating computational and interpretive approaches in ways that capture both the technical mechanisms and the political consequences of algorithmic information filtering. Media professionals must negotiate the tensions between platform-driven engagement logics and professional commitments to public service, developing editorial strategies that leverage the affordances of digital platforms without subordinating journalistic values to algorithmic imperatives. Policymakers and regulators must develop governance frameworks capable of ensuring algorithmic transparency, accountability, and democratic legitimacy, drawing on the technical expertise of the research community and the practical knowledge of media professionals. And citizens must be equipped with the algorithmic literacy necessary to understand, critically evaluate, and actively shape the information environments within which their political judgements are formed.
The convergence of gatekeeping and agenda-setting in the algorithmic age is, at its core, a question about the conditions under which democratic publics can form the informed, shared judgements upon which self-governance depends. The stakes of this question — for communication scholarship, for media practice, for regulatory policy, and for democratic governance itself — could scarcely be higher. The salience agency framework proposed here represents one contribution to the ongoing collective effort to understand and address these stakes. Its adequacy will be determined not by its theoretical elegance but by its capacity to generate empirical insights, inform practical interventions, and contribute to the construction of information environments worthy of the democratic publics they serve.
Looking forward, the evolution of algorithmic media systems will continue to present new challenges and opportunities for all stakeholders involved in shaping public discourse. As platforms refine their algorithms and adopt emerging technologies such as artificial intelligence and machine learning, the mechanisms influencing information visibility will become increasingly complex and dynamic. This ongoing transformation underscores the necessity for adaptive theoretical frameworks and flexible regulatory policies that can respond to rapid technological change while upholding democratic values.
In this context, interdisciplinary collaboration and sustained empirical research will be indispensable. Scholars from computer science, law, political science, and media studies must work together to develop robust methodologies for analyzing the impact of algorithmic curation on public knowledge and civic engagement, and to articulate normative standards and best practices for algorithmic transparency, accountability, and pluralism. Both quantitative analyses of information flows and qualitative studies of how individuals and communities experience, interpret, and contest algorithmic salience decisions will be needed to track shifts in information visibility and to assess the real-world effects of policy interventions and media innovations.
Ultimately, the collective goal should be to foster information ecosystems that empower citizens to make informed decisions, promote inclusive and pluralistic public spheres, and sustain the deliberative capacities essential for democratic self-governance. The continued refinement and application of the salience agency framework can help guide this process, keeping the construction of public knowledge responsive to technological change and aligned with democratic ideals.
As societies grapple with the realities of digital communication, it is clear that the future of public knowledge and democratic discourse depends on our ability to understand, critique, and shape the algorithmic systems that mediate our access to information. The stakes are high, but with sustained, collaborative effort across multiple domains, it is possible to build information environments that serve the needs and aspirations of democratic publics in the digital age.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2602).

Institutional Review Board Statement

Not applicable.

Transparency

The author confirms that the manuscript is an honest, accurate, and transparent account of the study, that no vital features of the study have been omitted, and that any discrepancies from the study as planned have been explained. This study followed ethical practices during the writing process.

Conflicts of Interest

The author declares no affiliations with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript.

References

1. Barzilai-Nahon, K. Toward a theory of network gatekeeping: A framework for exploring information control. Journal of the American Society for Information Science and Technology 2008, 59(9), 1493–1512.
2. Bender, E. M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021; Association for Computing Machinery; pp. 610–623.
3. Benkler, Y.; Faris, R.; Roberts, H. Network propaganda: Manipulation, disinformation, and radicalization in American politics; Oxford University Press, 2018.
4. Bruns, A. Gatewatching and news curation: Journalism, social media, and the public sphere; Peter Lang, 2018.
5. Bucher, T. If...then: Algorithmic power and politics; Oxford University Press, 2018.
6. Chadwick, A. The hybrid media system: Politics and power, 2nd ed.; Oxford University Press, 2017.
7. Cushion, S.; Thomas, R. Reporting elections: Rethinking the logic of campaign coverage; Polity Press, 2018.
8. Diakopoulos, N. Automating the news: How algorithms are rewriting the media; Harvard University Press, 2019.
9. Fletcher, R.; Nielsen, R. K. Are people incidentally exposed to news on social media? A comparative analysis. New Media & Society 2018, 20(7), 2450–2468.
10. Gillespie, T. Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media; Yale University Press, 2018.
11. Golovchenko, Y.; Hartmann, M.; Adler-Nissen, R. State, media and civil society in the information warfare over Ukraine: Citizen curators of digital disinformation. International Affairs 2018, 94(5), 975–994.
12. Guess, A. M.; Nyhan, B.; Reifler, J. Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour 2020, 4(5), 472–480.
13. The power of information networks: New directions for agenda setting; Guo, L., McCombs, M., Eds.; Routledge, 2016.
14. Harder, R. A.; Sevenans, J.; Van Aelst, P. Toward a more realistic model of agenda-setting? The reciprocal effects of media coverage and politics. Journalism & Mass Communication Quarterly 2017, 94(2), 576–596.
15. Hassid, J. Safety valve or pressure cooker? Blogs in Chinese political life. Journal of Communication 2012, 62(2), 212–230.
16. Howard, P. N.; Hussain, M. M. Democracy’s fourth wave? Digital media and the Arab Spring; Oxford University Press, 2013.
17. Kreiss, D.; McGregor, S. C. Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle. Political Communication 2018, 35(2), 155–177.
18. Lewin, K. Frontiers in group dynamics II: Channels of group life; social planning and action research. Human Relations 1947, 1(2), 143–153.
19. McCombs, M. E.; Shaw, D. L. The agenda-setting function of mass media. Public Opinion Quarterly 1972, 36(2), 176–187.
20. McCombs, M. E.; Shaw, D. L. The evolution of agenda-setting research: Twenty-five years in the marketplace of ideas. Journal of Communication 1993, 43(2), 58–67.
21. Mesgari, M.; Okoli, C.; Mehdi, M.; Nielsen, F. Å.; Lanamäki, A. “The sum of all human knowledge”: A systematic review of scholarly research on the content of Wikipedia. Journal of the Association for Information Science and Technology 2015, 66(2), 219–245.
22. Napoli, P. M. Social media and the public interest: Media regulation in the disinformation age; Columbia University Press, 2019.
23. Nechushtai, E. Could digital platforms capture the media through infrastructure? Journalism 2018, 19(8), 1043–1058.
24. Newman, N.; Fletcher, R.; Robertson, C. T.; Eddy, K.; Nielsen, R. K. Reuters Institute digital news report 2022. Reuters Institute for the Study of Journalism. 2022. Available online: https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2022.
25. Nielsen, R. K.; Ganter, S. A. Dealing with digital intermediaries: A case study of the relations between publishers and platforms. New Media & Society 2018, 20(4), 1600–1617.
26. Pariser, E. The filter bubble: What the internet is hiding from you; Penguin Press, 2011.
27. Roberts, S. T. Behind the screen: Content moderation in the shadows of social media; Yale University Press, 2019.
28. Shaw, A. Centralized and decentralized gatekeeping in an open online collective. Politics & Society 2012, 40(3), 349–388.
29. Shoemaker, P. J. Gatekeeping; Sage, 1991.
30. Shoemaker, P. J.; Vos, T. P. Gatekeeping theory; Routledge, 2009.
31. Siapera, E. Tweeting #Palestine: Twitter and the mediation of Palestine. International Journal of Cultural Studies 2014, 17(6), 539–555.
32. Singer, J. B. User-generated visibility: Secondary gatekeeping in a shared media space. New Media & Society 2014, 16(1), 55–73.
33. Starbird, K.; Arif, A.; Wilson, T. Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations. Proceedings of the ACM on Human-Computer Interaction 2019, 3(CSCW), 127.
34. Sunstein, C. R. #Republic: Divided democracy in the age of social media; Princeton University Press, 2017.
35. Thorson, K.; Wells, C. Curated flows: A framework for mapping media exposure in the digital age. Communication Theory 2016, 26(3), 309–328.
36. Torraco, R. J. Writing integrative literature reviews: Guidelines and examples. Human Resource Development Review 2005, 4(3), 356–367.
37. Vargo, C. J.; Guo, L. Networks, big data, and intermedia agenda setting: An analysis of traditional, partisan, and emerging online U.S. news. Journalism & Mass Communication Quarterly 2017, 94(4), 1031–1055.
38. Vos, T. P.; Singer, J. B. Media discourse about entrepreneurial journalism: Implications for journalistic capital. Journalism Practice 2016, 10(2), 143–159.
39. White, D. M. The “gate keeper”: A case study in the selection of news. Journalism Quarterly 1950, 27(4), 383–390.
40. Zuboff, S. The age of surveillance capitalism: The fight for a human future at the new frontier of power; PublicAffairs, 2019.

Author Bio

Dr. Safran Safar Almakaty is distinguished for his significant contributions to communication, media studies, and higher education within Saudi Arabia and the wider Middle East. As a Professor at Imam Mohammad ibn Saud Islamic University (IMSIU) in Riyadh, Dr. Almakaty has been instrumental in advancing scholarly dialogue on media transformation and international communication. He holds a Master of Arts from Michigan State University and a PhD from the University of Kentucky, offering a strong interdisciplinary approach to his research and instruction. His academic focus encompasses media evolution in the region, examining the influences of emerging technologies, global developments, and sociopolitical factors on public discourse and information dissemination.
In addition to his academic roles, Dr. Almakaty serves as a consultant in communication strategy, corporate communications, and international relations, providing guidance to government bodies, corporations, and non-profit organizations. His expertise extends to higher education policy development, particularly at the intersection of media literacy, digital transformation, and educational reform.
Dr. Almakaty’s research interests include the effectiveness of hybrid conference formats for diplomacy and the strategic use of conferences supporting Saudi Arabia’s Vision 2030 objectives. He has an extensive publication record in peer-reviewed journals, actively participates in international forums, and engages in cross-cultural research collaborations, strengthening connections between regional academia and global scholarship.
As an educator, Dr. Almakaty demonstrates a strong commitment to mentoring future scholars and professionals, promoting a culture of inquiry, innovation, and excellence. His ongoing work continues to shape media and communication landscapes, supporting initiatives that enhance international collaboration, public diplomacy, and the modernization of knowledge-based institutions across the Middle East.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.