1. Introduction
1.1. The Epistemological Crisis of the Digital Age
The architecture of human communication has undergone a phase transition. In the span of a few decades, the dominant topology of information exchange has shifted from hierarchical, broadcast-based systems to decentralized, scale-free networks mediated by digital platforms. This structural transformation has democratized expression but has also introduced profound vulnerabilities into the collective cognitive ecosystem. The most salient of these is the “infodemic”—the rapid, viral spread of misinformation, conspiracy theories, and radicalizing narratives that mirrors the transmission of infectious diseases [1].
However, the dynamics of information diffusion differ fundamentally from biological contagion. Biological viruses spread based on physical proximity and physiological susceptibility, factors that are relatively static and structurally constrained. Information viruses, by contrast, exploit cognitive vulnerabilities—confirmation bias, identity protection, and emotional reactivity—and are propelled by algorithmic engines designed to maximize engagement rather than veracity [2]. The interaction between human cognitive limitations and machine learning optimization creates a feedback loop that segregates users into “echo chambers”: bounded topological clusters where dissenting views are systematically excluded, and reinforcing narratives are amplified to the point of perceived consensus [3,4].
The persistence of these echo chambers suggests they are not merely transient anomalies but stable dynamical states—metastable equilibria in the complex system of public opinion. Understanding the stability conditions of these structures is paramount for maintaining the integrity of democratic discourse and public health [5].
1.2. Moving Beyond Classical Epidemiology
To study these phenomena, researchers have increasingly turned to the mathematical tools of epidemiology. The canonical Susceptible-Infected-Recovered (SIR) model, first proposed by Kermack and McKendrick in 1927, has served as the bedrock for this analysis [6]. In the context of information, “Susceptible” agents have not heard a rumor; “Infected” agents actively spread it; and “Recovered” agents have lost interest or stifled the rumor [7].
While robust, the standard SIR model makes simplifying assumptions that are increasingly untenable in the context of modern social media:
Homogeneous Mixing: Classical models often assume that any agent can contact any other agent with equal probability. Real-world social networks, however, are scale-free, characterized by a power-law degree distribution P(k) ∝ k^(−γ). This heterogeneity fundamentally alters spreading dynamics, often eliminating the epidemic threshold entirely in infinite networks [8].
Instantaneous Infection: Standard models assume that exposure leads to infection with a fixed per-contact probability β. This ignores the cognitive processing time—the moment of hesitation, verification, or skepticism—that occurs before a user decides to share a piece of content [9].
Neutral Substrate: Classical models assume the network structure is a passive substrate. They do not account for Algorithmic Bias, where the platform itself actively manipulates the contact rate between agents based on their infection status, creating artificial homophily that accelerates spread within clusters [10].
1.3. The Sociotechnical Gap: Cognitive Immunity and Algorithmic Bias
To bridge the gap between classical theory and digital reality, we must formalize the mechanisms of resistance and amplification.
Cognitive Immunity refers to the mental capacity to identify and reject misinformation. Drawing from Inoculation Theory in social psychology, this concept suggests that minds can be “immunized” against weak arguments or manipulative tropes (e.g., through “prebunking”), thereby reducing the likelihood of infection upon exposure [11,12]. In a compartmental model, this requires an intermediate state between “Susceptible” and “Infected”—a “Critical” state where the cognitive immune system is engaged. The outcome of this state (Infection vs. Recovery) depends on the rejection-rate parameter ω (Cognitive Immunity).
Algorithmic Bias represents the systemic distortion of information flow by recommendation engines. These systems prioritize content that elicits high engagement, often favoring sensationalist or polarizing material [13]. Mathematically, this acts as a force multiplier on the transmission rate β, particularly within local clusters. We introduce the parameter b (Algorithmic Bias) to quantify this platform-induced amplification.
1.4. Research Objectives and Contributions
This paper aims to provide a rigorous stability analysis of echo chambers by integrating these sociotechnical parameters into a unified mathematical framework. We propose the SCIR (Susceptible-Critical-Infected-Recovered) model on scale-free networks. Unlike previous extensions of the SIR model, our formulation explicitly links the microscopic probability of transition (governed by the cognitive-immunity rate ω) and the macroscopic contact rate (governed by the bias factor b and the network topology) to derive the stability criterion for information cascades.
The specific contributions of this work are:
Model Formulation: We develop a heterogeneous mean-field formulation of the SCIR model, accounting for degree-dependent transmission rates and the diverging connectivity of scale-free hubs.
Analytical Derivation of R0: Using the Next Generation Matrix (NGM) method, we derive an exact analytical expression for the Basic Reproduction Number (R0) as a function of the network moments (⟨k⟩, ⟨k²⟩), cognitive immunity (ω), and algorithmic bias (b).
Threshold Analysis: We demonstrate that while scale-free networks theoretically possess a vanishing epidemic threshold, the effective reproduction number can be suppressed below unity if ω exceeds a critical value ω_c. We further show that this critical value scales linearly with the bias b, quantifying the “arms race” between cognitive resilience and algorithmic amplification.
Stability of Echo Chambers: We analyze the endemic equilibrium to show how high b values stabilize misinformation pockets even when the global population has moderate immunity, providing a mathematical definition of an echo chamber as a localized, algorithmically-sustained endemic state.
The remainder of the paper is structured as follows:
Section 2 reviews existing literature on network epidemiology and opinion dynamics.
Section 3 outlines the methodology, defining the network properties and the SCIR transition rules.
Section 4 presents the mathematical derivation of the mean-field equations and the stability analysis.
Section 5 discusses the results in the context of digital governance and platform regulation.
Section 6 concludes the study.
2. Related Work
2.1. Compartmental Models of Information Diffusion
The translation of epidemic models to social dynamics began with the pioneering work of Daley and Kendall (DK model) and Maki and Thompson (MK model) in the 1960s [7,14]. These models established the analogy between the spread of a disease and the spread of a rumor. In these frameworks, the population is divided into “Ignorants” (Susceptible), “Spreaders” (Infected), and “Stiflers” (Recovered). A key distinction of rumor models is the mechanism of cessation: spreaders often become stiflers not just through time (recovery), but through contact with other spreaders (loss of novelty) or stiflers (social sanctioning) [15].
Contemporary research has refined these models to capture the nuances of online interaction. The SEIZ model (Susceptible-Exposed-Infected-Skeptic), proposed by Bettencourt et al. and applied to Twitter data by Jin et al., introduces a “Skeptic” compartment for users who reject the rumor after exposure [16]. Similarly, the SCIR model (Susceptible-Contacted-Infected-Refractory) described by Xiong et al. includes a “Contacted” state to represent the hesitation or latency period before a user decides to retweet [9].
Our work builds specifically on the SCIR framework but reinterprets the “Contacted” or “Critical” state. In previous iterations, this state often represented passive latency. In our model, the “Critical” compartment is an active filter where Cognitive Immunity is applied. The transition is not merely a delay; it is a bifurcation point determined by the agent’s psychological resilience.
2.2. Scale-Free Networks and the Vanishing Threshold
A limitation of early rumor models was the assumption of homogeneous mixing—the idea that every individual has an equal chance of encountering every other individual. Extensive empirical research has shown that real-world social networks (Twitter, Facebook, citation networks) are scale-free, characterized by a power-law degree distribution P(k) ∝ k^(−γ) [8,17].
Pastor-Satorras and Vespignani fundamentally altered the field by solving the SIS model on scale-free networks using Heterogeneous Mean-Field (HMF) theory [18]. They proved that for networks with 2 < γ ≤ 3 (which includes most social networks), the second moment of the degree distribution ⟨k²⟩ diverges as the network size N → ∞. Consequently, the epidemic threshold, which is proportional to ⟨k⟩/⟨k²⟩, vanishes. This implies that in a sufficiently large social network, misinformation can spread regardless of how low its intrinsic transmissibility is, provided it reaches the hubs [19].
This “vanishing threshold” presents a grim prognosis for combating misinformation. It suggests that structural interventions (cutting links to hubs) or massive reductions in transmissibility are required. Our work investigates whether Cognitive Immunity, effectively acting as a node-level filter, can restore a non-zero threshold even in the presence of scale-free topology.
2.3. Algorithmic Bias and Filter Bubbles
The passive topology of the network is only half the story. Digital platforms are active curators of connectivity. Recommender systems, driven by reinforcement learning objectives (e.g., maximizing watch time or clicks), systematically bias the flow of information [10]. This phenomenon, termed Algorithmic Bias, creates feedback loops where users are preferentially exposed to content that confirms their priors [13].
Theoretical models of algorithmic bias often employ bounded confidence dynamics (e.g., the Deffuant or Hegselmann-Krause models), where agents only interact if their opinions are already similar [13]. Sirbu et al. demonstrated that algorithmic bias amplifies opinion fragmentation, making consensus impossible and stabilizing polarized clusters [13].
In the context of compartmental models, algorithmic bias acts as an enhancer of the effective contact rate between specific subgroups. It creates “high-conductance” pathways for misinformation while insulating users from corrective “Recovered” nodes. Our model integrates this by modulating the transmission rate β with an amplification factor b, providing a mean-field approximation of this complex sorting process.
2.4. The Psychology of Resistance: Inoculation Theory
While algorithms accelerate spread, human cognition provides the braking mechanism. Inoculation Theory, developed by social psychologists, posits that resistance to persuasion can be cultivated [11]. Just as a vaccine triggers an immune response, pre-exposure to the strategies of misinformation (e.g., ad hominem attacks, false dichotomies) boosts an individual’s ability to identify and reject future falsehoods [12].
This “Cognitive Immunity” is not binary; it is a probabilistic rate of rejection. Empirical studies on “prebunking” games like Bad News have shown that these interventions can significantly reduce the perceived reliability of fake news [20]. Mathematical models have begun to incorporate these findings. For instance, the IPSR model (Ignorant-Prebunked-Spreader-Stifler) treats prebunking as a form of “weak vaccination” that decays over time [21].
Our SCIR model formalizes this by treating the “Critical” state as the locus of cognitive defense. The parameter ω aggregates the effects of media literacy, prebunking, and critical thinking skills into a single transition rate, allowing us to quantify exactly how much “immunity” is needed to counteract a specific level of viral amplification.
3. Methodology
3.1. Network Topology: The Scale-Free Substrate
We model the underlying social structure as a complex network G = (V, E) consisting of N nodes (individuals) and E edges (social ties). We assume the network exhibits the scale-free property, a ubiquitous feature of online social platforms [8]. The connectivity of the nodes follows a power-law degree distribution:

P(k) ∝ k^(−γ),

where P(k) is the probability that a randomly selected node has degree k, and γ is the degree exponent. For most social networks, 2 < γ ≤ 3, indicating a heavy-tailed distribution with significant heterogeneity [18]. The network possesses a natural cutoff k_c ∼ N^(1/(γ−1)) (or the structural cutoff k_c ∼ N^(1/2) for uncorrelated networks), which prevents the divergence of moments in finite systems [22].
The statistical properties of the network are characterized by the moments of the degree distribution:

⟨k⟩ = Σ_k k P(k),  ⟨k²⟩ = Σ_k k² P(k).

Crucially, for 2 < γ ≤ 3, the second moment diverges as ⟨k²⟩ ∼ k_c^(3−γ) when N → ∞. This divergence is the mathematical driver behind the extreme vulnerability of scale-free networks to spreading processes. The heterogeneity of the network means that highly connected “hubs” act as superspreaders, reducing the epidemic threshold to near zero in standard models.
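This divergence can be checked numerically. The following sketch (our own illustration, not part of the original analysis; `degree_moments` is a hypothetical helper) computes the moments of a discrete power law with γ = 2.5 and shows that ⟨k⟩ saturates while ⟨k²⟩/⟨k⟩ keeps growing with the cutoff k_max:

```python
def degree_moments(gamma, k_max):
    """First and second moments of a discrete power law P(k) ~ k^-gamma, k = 1..k_max."""
    ks = range(1, k_max + 1)
    weights = [k ** (-gamma) for k in ks]
    z = sum(weights)                                   # normalization constant
    m1 = sum(k * w for k, w in zip(ks, weights)) / z   # <k>
    m2 = sum(k * k * w for k, w in zip(ks, weights)) / z   # <k^2>
    return m1, m2

for k_max in (10**2, 10**3, 10**4):
    m1, m2 = degree_moments(2.5, k_max)
    print(f"k_max={k_max:>6}  <k>={m1:.3f}  <k^2>/<k>={m2 / m1:.1f}")
```

For γ = 2.5 the ratio ⟨k²⟩/⟨k⟩ roughly triples (∼√10) for every tenfold increase of the cutoff, matching the k_c^(3−γ) scaling stated above.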
3.2. The SCIR Compartmental Model
We propose a stochastic compartmental model that divides the population into four states: Susceptible (S), Critical (C), Infected (I), and Recovered (R).
Susceptible (S): Individuals who have not yet been exposed to the specific piece of misinformation. They are open to receiving information from their neighbors.
Critical (C): Individuals who have received the information but are currently in a state of cognitive processing. This compartment models the “hesitation” or “verification” period. In this state, the agent evaluates the veracity of the information based on their Cognitive Immunity. They do not yet spread the information.
Infected (I): Individuals who have accepted the misinformation and actively propagate it to their neighbors. This state represents the “believers” or “sharers” who drive the diffusion.
Recovered (R): Individuals who have either rejected the misinformation (transitioning from C) or have stopped spreading it (transitioning from I). These individuals are immune to re-infection in this model iteration (assuming the specific rumor loses novelty or has been debunked for them).
State Transitions and Dynamic Equations
The dynamics of the system are governed by the transitions between these states. We utilize the Heterogeneous Mean-Field (HMF) approach, which groups nodes by their degree k rather than assuming a uniform contact rate. Let S_k(t), C_k(t), I_k(t), and R_k(t) denote the densities of nodes of degree k in each respective state at time t.
Transition 1: Exposure (S → C). A susceptible node with degree k contacts its neighbors. The rate at which it encounters an Infected neighbor depends on the probability Θ(t) that a randomly chosen link points to an Infected node. In an uncorrelated network, this probability is weighted by the degree of the infected nodes:

Θ(t) = (1/⟨k⟩) Σ_k k P(k) I_k(t).

The transition rate is proportional to the node’s degree k, the transmission probability β, and the density of infected contacts. We introduce Algorithmic Bias (b ≥ 1) here as a multiplicative factor on β, so the total exposure rate for a susceptible node of degree k is b β k Θ(t). This parameter represents the platform’s tendency to increase the visibility of viral content or to preferentially connect susceptible users to infected sources (homophily amplification).
Transition 2: Resolution (C → I or C → R). From the Critical state, two outcomes compete:

Adoption (C → I): The agent accepts the information. This occurs at rate α.

Rejection (C → R): The agent identifies the information as false (via cognitive immunity mechanisms like lateral reading or prebunking awareness) and rejects it. This occurs at rate ω.

The total rate of leaving the Critical state is α + ω. The probability of becoming infected given exposure is effectively α/(α + ω).
Transition 3: Recovery/Cessation (I → R). Infected agents eventually stop spreading the rumor. This can be due to loss of interest, correction, or shifting attention. This occurs at a recovery rate μ.
3.3. System of Differential Equations
Combining these mechanisms, the time evolution of the system is described by the following set of coupled nonlinear differential equations for each degree class k:

dS_k/dt = −b β k S_k(t) Θ(t),
dC_k/dt = b β k S_k(t) Θ(t) − (α + ω) C_k(t),
dI_k/dt = α C_k(t) − μ I_k(t),
dR_k/dt = ω C_k(t) + μ I_k(t).
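As an illustrative check (our own sketch with arbitrary parameter values, not the paper's simulation code; symbols follow Table 1), the degree-class SCIR equations can be integrated with a simple forward-Euler scheme. Weak immunity produces a macroscopic cascade, while strong immunity extinguishes the rumor:

```python
def simulate_scir(gamma=2.5, k_max=100, b=1.5, beta=0.1,
                  alpha=0.5, omega=0.2, mu=0.3,
                  i0=1e-3, dt=0.05, steps=4000):
    """Forward-Euler integration of the degree-class SCIR mean-field equations.
    b = algorithmic bias, beta = transmission rate, alpha = adoption rate,
    omega = cognitive immunity, mu = recovery rate."""
    ks = range(1, k_max + 1)
    pk = [k ** (-gamma) for k in ks]
    z = sum(pk)
    pk = [p / z for p in pk]                      # normalized degree distribution
    mean_k = sum(k * p for k, p in zip(ks, pk))   # <k>
    S = [1.0 - i0] * k_max
    C = [0.0] * k_max
    I = [i0] * k_max
    R = [0.0] * k_max
    for _ in range(steps):
        # Theta: probability that a random link points to an infected node
        theta = sum(k * p * i for k, p, i in zip(ks, pk, I)) / mean_k
        for j, k in enumerate(ks):
            exposure = b * beta * k * S[j] * theta   # S -> C flux
            dS = -exposure
            dC = exposure - (alpha + omega) * C[j]   # C -> I at alpha, C -> R at omega
            dI = alpha * C[j] - mu * I[j]            # I -> R at mu
            dR = omega * C[j] + mu * I[j]
            S[j] += dt * dS
            C[j] += dt * dC
            I[j] += dt * dI
            R[j] += dt * dR
    return sum(p * r for p, r in zip(pk, R))         # final overall density of R

low_omega = simulate_scir(omega=0.2)   # weak immunity: supercritical for these values
high_omega = simulate_scir(omega=5.0)  # strong immunity: subcritical
print(f"final R fraction, omega=0.2: {low_omega:.3f}")
print(f"final R fraction, omega=5.0: {high_omega:.3f}")
```

Note that R here collects both rejectors (via ω) and exhausted spreaders (via μ), so the final R density measures the total population ever exposed.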
4. Stability Analysis and R0
The stability of the network against an “infodemic” is determined by the Basic Reproduction Number (R0). In this context, R0 represents the average number of secondary “Infections” (new spreaders) generated by a single “Infected” user introduced into a fully “Susceptible” scale-free network [23]. If R0 < 1, the information cascade dies out exponentially (the Information-Free Equilibrium is stable). If R0 > 1, the information spreads to a finite fraction of the network, forming a stable endemic state (Echo Chamber).
Table 1. Parameter Summary.

| Parameter | Symbol | Interpretation in Information Diffusion |
| --- | --- | --- |
| Algorithmic Bias | b | Amplification of contact rate due to recommender systems (b ≥ 1). High b implies echo chamber formation. |
| Transmission Rate | β | Intrinsic virality of the information (e.g., clickbait appeal, shock value). |
| Adoption Rate | α | Speed at which an uncritically thinking agent accepts and shares information. |
| Cognitive Immunity | ω | Rate of critical rejection. High ω implies strong media literacy or effective prebunking. |
| Recovery Rate | μ | Rate at which users lose interest or stop spreading the rumor. |
| Degree | k | Number of followers/connections a user has. |
4.1. The Next Generation Matrix (NGM) Method
To derive R0 for this heterogeneous multi-compartment model, we utilize the Next Generation Matrix method outlined by Diekmann et al. and Heffernan et al. [23,24].

We focus on the subsystems that drive the “infection”: the Critical (C_k) and Infected (I_k) compartments. The state of the system is linearized around the Information-Free Equilibrium (IFE), where S_k = 1 and C_k = I_k = R_k = 0.
The linearized system for the infection subsystem x_k = (C_k, I_k)ᵀ can be written as dx_k/dt = (F − V) x_k, where F is the transmission matrix (new infections) and V is the transition/removal matrix. However, because the infection force depends on the global average of infections (Θ), this is an infinite-dimensional system. We must solve for the stability condition of the aggregate variable Θ.
4.2. Analytical Derivation
Let us examine the growth of the variable Θ near the Information-Free Equilibrium. From the C_k equation, assuming S_k ≈ 1:

dC_k/dt = b β k Θ(t) − (α + ω) C_k(t),
dI_k/dt = α C_k(t) − μ I_k(t).

We assume solutions of the exponential form C_k(t) ∝ e^(Λt) and I_k(t) ∝ e^(Λt), where Λ is the growth rate. Substituting these into the linearized equations:

C_k = b β k Θ / (Λ + α + ω),
I_k = α C_k / (Λ + μ).

Combining these gives the relationship between the infected density I_k and the global field Θ:

I_k = α b β k Θ / [(Λ + α + ω)(Λ + μ)].

Recall the definition of Θ:

Θ = (1/⟨k⟩) Σ_k k P(k) I_k.

Substitute the expression for I_k into the definition of Θ:

Θ = [α b β / ((Λ + α + ω)(Λ + μ))] · (1/⟨k⟩) Σ_k k² P(k) · Θ.

Dividing both sides by Θ (assuming Θ ≠ 0 for an outbreak) and recognizing Σ_k k² P(k) = ⟨k²⟩:

(Λ + α + ω)(Λ + μ) = α b β ⟨k²⟩ / ⟨k⟩.

The epidemic threshold corresponds to the transition from a negative growth rate to a positive one, i.e., Λ = 0. Setting Λ = 0 gives us the critical condition:

1 = (b β / μ) · (α / (α + ω)) · (⟨k²⟩ / ⟨k⟩).

The Basic Reproduction Number is defined as the quantity on the right-hand side:

R0 = (b β / μ) · (α / (α + ω)) · (⟨k²⟩ / ⟨k⟩).
4.3. The R0 Expression for SCIR on Scale-Free Networks
This derived equation encapsulates the four pillars of digital information diffusion:

Algorithmic Bias (b): A linear multiplier. If the algorithm doubles the effective contact rate between like-minded users (b = 2), it doubles R0. This quantifies the “echo chamber” effect—the algorithmic concentration of susceptibility.

Transmissibility (β/μ): The standard epidemiological potential. High β (viral content) or low μ (long persistence) increases spread.

Cognitive Failure Rate (α/(α + ω)): This term represents the fraction of “Critical” exposures that result in “Infection”.
If ω = 0 (no immunity), the term becomes 1, and we revert to the standard SIR result.
If ω → ∞ (perfect immunity), the term → 0, driving R0 → 0.
This is the control parameter for educational interventions.

Network Topology (⟨k²⟩/⟨k⟩): The structural risk factor. For scale-free networks with 2 < γ ≤ 3, this term diverges as the network size grows.
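Collecting the four factors, R0 = (b β / μ) · (α / (α + ω)) · (⟨k²⟩/⟨k⟩) can be evaluated directly. The helper below is our own illustration with arbitrary numbers (parameter names follow Table 1), and exhibits the limits listed above:

```python
def r0(b, beta, alpha, omega, mu, k2_over_k):
    """R0 = (b*beta/mu) * (alpha/(alpha+omega)) * <k^2>/<k> (symbols as in Table 1)."""
    return (b * beta / mu) * (alpha / (alpha + omega)) * k2_over_k

# omega = 0 reverts to the SIR-like value; raising omega suppresses R0;
# the bias b enters as a plain linear multiplier.
print(r0(b=1.0, beta=0.2, alpha=0.5, omega=0.0, mu=0.25, k2_over_k=10.0))
print(r0(b=1.0, beta=0.2, alpha=0.5, omega=2.0, mu=0.25, k2_over_k=10.0))
```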
4.4. Stability and the Vanishing Threshold
The divergence of ⟨k²⟩ in large scale-free networks (2 < γ ≤ 3) has profound implications for stability: since R0 ∝ ⟨k²⟩/⟨k⟩, R0 grows without bound as N → ∞. This confirms the vanishing epidemic threshold result of Pastor-Satorras and Vespignani [18]. In an infinite, uncorrelated scale-free network, information will always spread, regardless of how small the transmission rate β is, provided it is non-zero. The hubs serve as reservoirs that sustain the cascade.
However, in the context of the SCIR model, we can analyze the effective reproduction number for finite networks or specific parameter regimes. For the network to be stable (information dies out), we require R0 < 1.
Rearranging the stability condition:

(b β / μ) · (α / (α + ω)) · (⟨k²⟩ / ⟨k⟩) < 1.

Solving for the Critical Cognitive Immunity (ω_c) required to stabilize the network:

ω > ω_c = α [ (b β ⟨k²⟩) / (μ ⟨k⟩) − 1 ].

This inequality reveals the precise trade-off. To prevent an infodemic, the rate of cognitive rejection (ω) must scale with the algorithmic bias (b) and the network heterogeneity (⟨k²⟩/⟨k⟩).
Key Insight: If algorithmic bias is high (strong echo chamber effects), the required cognitive immunity ω_c may become unrealistically high. This implies that individual-level interventions (like prebunking) have a saturation point. If the platform’s algorithm is sufficiently aggressive in connecting users (b ≫ 1), no realistically achievable level of human cognitive resilience can stabilize the network—the echo chamber becomes structurally inevitable.
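The critical-immunity condition ω_c = α (b β ⟨k²⟩/(μ ⟨k⟩) − 1) can be checked numerically (our own sketch; parameter values are arbitrary): at ω = ω_c the reproduction number returns exactly to 1, and ω_c grows linearly in b.

```python
def omega_critical(b, beta, alpha, mu, k2_over_k):
    """omega_c = alpha * (b*beta*<k^2>/(mu*<k>) - 1): the smallest rejection
    rate bringing R0 down to 1 (0 if the system is already subcritical)."""
    return max(0.0, alpha * (b * beta * k2_over_k / mu - 1.0))

beta, alpha, mu, ratio = 0.1, 0.5, 0.3, 8.0
for b in (1.0, 2.0, 4.0):
    wc = omega_critical(b, beta, alpha, mu, ratio)
    r0_at_wc = (b * beta / mu) * (alpha / (alpha + wc)) * ratio
    print(f"b={b}: omega_c={wc:.3f}  R0(omega_c)={r0_at_wc:.3f}")
```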
5. Discussion
5.1. The Instability of Truth in Echo Chambers
The mathematical derivation highlights a disturbing property of algorithmically curated scale-free networks: they are inherently unstable to viral misinformation. The product b · ⟨k²⟩/⟨k⟩ creates a “super-critical” environment. Unlike physical diseases, where the contact network is relatively static, the digital contact network is dynamic and optimized for transmission (b > 1).
Our model suggests that echo chambers are not merely social clusters but dynamical attractors. Once established, the high internal connectivity (modeled by high b) ensures that even if β drops (e.g., the news becomes stale) or ω increases (people become more skeptical), the product may still keep R0 > 1 locally. This aligns with empirical observations that debunking attempts often fail to penetrate established echo chambers because the structural reinforcement outweighs the cognitive correction [25].
5.2. Cognitive Immunity as a Policy Lever
Despite the grim topological prognosis, the parameter ω offers a path for intervention. The SCIR model formally distinguishes between “not seeing” misinformation (Susceptible) and “seeing but rejecting” it (Recovered). Interventions that boost ω—such as “prebunking” (inoculation) campaigns, accuracy nudges, or digital literacy training—directly reduce the Cognitive Failure Rate term α/(α + ω) [12].
Our threshold analysis (ω > ω_c) provides a metric for success. For a given network structure and algorithmic environment, there is a minimum level of cognitive immunity required to break the chain of transmission. This suggests that platforms could dynamically adjust b based on the estimated ω of the user base. If a topic is highly complex (low ω) or susceptible to manipulation, the algorithm should reduce amplification (lower b) to maintain stability.
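Inverting the same threshold gives the largest bias compatible with stability for a given level of immunity, b_max = μ ⟨k⟩ (α + ω) / (β α ⟨k²⟩). The sketch below is a hypothetical "stability budget" of our own construction (function name and numbers are not from the paper):

```python
def b_max(omega, beta, alpha, mu, k2_over_k):
    """Largest bias factor b keeping R0 <= 1 for a given immunity omega:
    b_max = mu*(alpha+omega) / (beta*alpha*<k^2>/<k>)."""
    return mu * (alpha + omega) / (beta * alpha * k2_over_k)

# A more skeptical user base (larger omega) tolerates more amplification.
# Note b_max < 1 for omega = 0 here: even a neutral algorithm is supercritical.
for omega in (0.0, 0.5, 2.0):
    print(f"omega={omega}: b_max={b_max(omega, 0.1, 0.5, 0.3, 8.0):.3f}")
```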
5.3. The Limitations of “Neutral” Algorithms
A crucial insight from the R0 equation is that even a “neutral” algorithm (b = 1) does not guarantee stability in a scale-free network due to the ⟨k²⟩ divergence [26]. The hubs will naturally propagate information. Therefore, to achieve stability against malicious disinformation, platforms may need to implement Algorithmic Damping (b < 1)—actively reducing the reach of hubs or breaking homophilic clusters—rather than simply striving for neutrality.
5.4. Model Limitations
While the HMF approach captures the essential role of degree heterogeneity, it assumes that degree correlations are purely random (configurational). Real-world networks exhibit assortativity (hubs connecting to hubs) and high clustering (triangles), which HMF neglects [26]. Clustering tends to localize epidemics, making them harder to start but also harder to eradicate once established within a cluster [27]. Furthermore, our model assumes ω is constant. In reality, cognitive immunity likely decays over time (waning immunity), necessitating an SCIRS model with an R → S transition, potentially leading to cyclical outbreaks of the same rumor [28].
6. Conclusions
This paper presented a stability analysis of information diffusion using a stochastic SCIR compartmental model. By integrating Algorithmic Bias (b) and Cognitive Immunity (ω) into the classical epidemiological framework, we derived a comprehensive expression for the basic reproduction number R0 on scale-free networks.
Our results demonstrate that the stability of the digital information ecosystem is determined by a tug-of-war between algorithmic amplification and cognitive resistance. While the scale-free topology of social networks creates a structural vulnerability to misinformation (a vanishing threshold), this can be counteracted if the rate of cognitive rejection ω exceeds a critical threshold ω_c. However, this threshold scales with the intensity of algorithmic bias. High levels of algorithmic curation (large b) can render the network unstable regardless of user skepticism, effectively locking the system into an endemic state of misinformation—an echo chamber.
These findings suggest that addressing the “infodemic” requires a dual strategy. “Soft” interventions that enhance cognitive immunity (prebunking, literacy) are essential but may be insufficient in the face of aggressive algorithmic sorting. “Hard” interventions that regulate the parameters of recommendation engines—specifically reducing b or capping the connectivity of hubs—are likely necessary to restore the global stability of the truth.
References
- Nesteruk, I. Mathematical modeling of the COVID-19 pandemic. Information 2020, 11, 302. [Google Scholar] [CrossRef]
- Latimore, E. The Echo Chamber of Social Media. 2020. Available online: https://edlatimore.com/echo-chamber-social-media/.
- Wikipedia. Echo Chamber (Media). Available online: https://en.wikipedia.org/wiki/Echo_chamber_(media).
- Cinelli, M.; De Francisci Morales, G.; Galeazzi, A.; Quattrociocchi, W.; Starnini, M. The echo chamber effect on social media. Proceedings of the National Academy of Sciences 2021, 118, e2023301118. [Google Scholar] [CrossRef] [PubMed]
- Bovet, A.; Makse, H.A. Influence of fake news in Twitter during the 2016 US presidential election. Nature Communications 2019, 10, 7. [Google Scholar] [CrossRef] [PubMed]
- Kermack, W.O.; McKendrick, A.G. A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London. Series A 1927, 115, 700–721. [Google Scholar] [CrossRef]
- Daley, D.J.; Kendall, D.G. Epidemics and rumours. Nature 1964, 204, 1118. [Google Scholar] [CrossRef]
- Barabási, A.-L. Network Science; Cambridge University Press, 2016; Available online: http://networksciencebook.com.
- Xiong, F.; Liu, Y.; Zhang, Z.J.; Zhu, J.; Zhang, Y. An information diffusion model based on retweeting mechanism for online social media. Physics Letters A 2012, 376, 2103–2108. [Google Scholar] [CrossRef]
- IBM. What Is Algorithmic Bias? Available online: https://www.ibm.com/think/topics/algorithmic-bias.
- Roozenbeek, J.; van der Linden, S. The fake news game: Actively inoculating against the risk of misinformation. Journal of Risk Research 2019, 22, 570–580. [Google Scholar] [CrossRef]
- Roozenbeek, J.; van der Linden, S.; Goldberg, B.; Rathje, S.; Lewandowsky, S. Psychological inoculation improves resilience against misinformation on social media. Science Advances 2022, 8, eabo6254. [Google Scholar] [CrossRef]
- Sirbu, A.; Pedreschi, D.; Giannotti, F.; Kertész, J. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PLoS ONE 2019, 14, e0213246. [Google Scholar] [CrossRef]
- Zhao, L.; Wang, J.; Chen, Y.; Wang, Q.; Cheng, J.; Cui, H. SIHR rumor spreading model in social networks. Physica A: Statistical Mechanics and its Applications 2012, 391, 2444–2453. [Google Scholar] [CrossRef]
- Nekovee, M.; Moreno, Y.; Bianconi, G.; Marsili, M. Theory of rumour spreading in complex social networks. Physica A: Statistical Mechanics and its Applications 2007, 374, 457–470. [Google Scholar] [CrossRef]
- Jin, F.; Dougherty, E.; Saraf, P.; Cao, Y.; Ramakrishnan, N. Epidemiological modeling of news and rumors on Twitter. In Proceedings of the 7th Workshop on Social Network Mining and Analysis, Chicago, IL, USA, 11 August 2013. Article 8. [Google Scholar] [CrossRef]
- Borge-Holthoefer, J.; Rivero, A.; García, I.; Cauhé, E.; Ferrer, A.; Ferrer, D.; Francos, D.; Iñiguez, D.; Pérez, M.P.; Ruiz, G.; et al. Structural and dynamical patterns on online social networks: The Spanish May 15th movement as a case study. PLoS ONE 2011, 6, e23883. [Google Scholar] [CrossRef]
- Pastor-Satorras, R.; Vespignani, A. Epidemic spreading in scale-free networks. Physical Review Letters 2001, 86, 3200. [Google Scholar] [CrossRef] [PubMed]
- Pastor-Satorras, R.; Vespignani, A. Epidemic dynamics in finite size scale-free networks. Physical Review E 2002, 65, 035108. [Google Scholar] [CrossRef] [PubMed]
- Roozenbeek, J.; van der Linden, S. Breaking Harmony Square: A game that inoculates against political misinformation. Harvard Kennedy School Misinformation Review 2020, 1. [Google Scholar] [CrossRef]
- Rai, R.; Sharma, R.; Meena, C. IPSR Model: Misinformation Intervention through Prebunking in Social Networks. arXiv 2025, arXiv:2502.12740. [Google Scholar] [CrossRef]
- Pastor-Satorras, R.; Vespignani, A. Epidemic dynamics in finite size scale-free networks. arXiv 2002, arXiv:cond-mat/0202298. Available online: https://arxiv.org/abs/cond-mat/0202298. [CrossRef]
- Diekmann, O.; Heesterbeek, J.A.P.; Metz, J.A.J. On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations. Journal of Mathematical Biology 1990, 28, 365–382. [Google Scholar] [CrossRef]
- Heffernan, J.M.; Smith, R.J.; Wahl, L.M. Perspectives on the basic reproductive ratio. Journal of The Royal Society Interface 2005, 2, 281–293. [Google Scholar] [CrossRef]
- Treen, K.M.; Williams, H.T.P.; O’Neill, S.J. Online misinformation about climate change. Wiley Interdisciplinary Reviews: Climate Change 2020, 11, e665. [Google Scholar] [CrossRef]
- Boguñá, M.; Pastor-Satorras, R.; Vespignani, A. Epidemic spreading in complex networks with degree correlations. In Statistical Mechanics of Complex Networks; Springer: Berlin/Heidelberg, Germany, 2003; pp. 127–147. [Google Scholar] [CrossRef]
- Del Vicario, M.; Bessi, A.; Zollo, F.; Petroni, F.; Scala, A.; Caldarelli, G.; Stanley, H.E.; Quattrociocchi, W. The spreading of misinformation online. Proceedings of the National Academy of Sciences 2016, 113, 554–559. [Google Scholar] [CrossRef] [PubMed]
- Wolfram Research. The SIR Model for Spread of Disease. Wolfram Cloud. Available online: https://www.wolframcloud.com/obj/covid-19/Published/The-SIR-Model-for-Spread-of-Disease.nb.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).