Preprint
Review

This version is not peer-reviewed.

Protocol Versus Narrative Review Versus Systematic Review Versus Umbrella Review: A Comparative Methodological Analysis for Evidence Synthesis

Submitted: 13 April 2026
Posted: 20 April 2026


Abstract

Background: The production of evidence syntheses has expanded substantially, yet confusion persists regarding the distinct roles, structures, and scientific validity of protocols, narrative reviews, systematic reviews, and umbrella reviews. Mislabeling or conflating these forms undermines research reproducibility and evidence-based decision-making. Objective: To provide a comprehensive, side-by-side methodological comparison of protocols, narrative reviews, systematic reviews, and umbrella reviews, including definitions, purposes, key methodological steps, strengths, limitations, and appropriate use cases. Methods: A structured comparative methodological analysis was conducted between February and March 2026. Authoritative guidance documents were identified through a targeted search of PubMed and Google Scholar using keywords “systematic review methodology,” “narrative review,” “umbrella review,” and “protocol registration.” Included sources were the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., 2023), PRISMA 2020 (Page et al., 2021), PRISMA-P (Shamseer et al., 2015), the PRIOR statement for overviews of reviews (Gates et al., 2022), SWiM guideline for narrative synthesis (Campbell et al., 2020), PRISMA-ScR for scoping reviews (Tricco et al., 2018), JBI methodology for umbrella reviews (Aromataris et al., 2015), PROSPERO registry standards, ROBIS (Whiting et al., 2016), AMSTAR 2 (Shea et al., 2017), RoB 2 (Sterne et al., 2019), and GRADE (Schünemann et al., 2011). Key methodological domains (research question formulation, search strategy, risk of bias assessment, synthesis methods, transparency, reproducibility) were extracted and synthesized for side-by-side comparison. Results: A protocol is a pre-registered plan, not a review. A systematic review is a reproducible, bias-minimizing synthesis of eligible primary studies on a focused question. A narrative review is a subjective, flexible summary of a broader topic. 
An umbrella review is a higher-order synthesis that systematically compiles, appraises, and synthesizes existing systematic reviews, extending the synthesis hierarchy to review-level evidence. Across all domains, systematic reviews and umbrella reviews demonstrated the highest methodological rigor, characterized by predefined protocols, comprehensive search strategies, and formal risk of bias assessment. Protocols functioned exclusively as methodological safeguards, while narrative reviews showed substantial variability and lack of reproducibility. Conclusion: Choosing among these four forms depends on the review question, available evidence base, resources, and intended use. Protocols should precede systematic reviews and umbrella reviews; narrative reviews serve complementary roles in education and hypothesis generation. Accurate differentiation is a prerequisite for maintaining the integrity of evidence-based healthcare.


1. Introduction

Evidence synthesis is the cornerstone of evidence-based practice across healthcare disciplines, including dentistry and orthodontics, where clinical decisions increasingly demand rigorous, transparent summaries of available research. However, the proliferation of review types has led to widespread misunderstanding among clinicians, researchers, and trainees. Four terms are particularly prone to misuse: protocol, narrative review, systematic review, and umbrella review.
A protocol is often mistaken for a review itself, while narrative reviews are frequently (and incorrectly) cited as systematic evidence. Umbrella reviews—a higher-order synthesis of existing systematic reviews—are sometimes conflated with standard systematic reviews or mistaken for scoping reviews. This confusion has tangible consequences: clinical guidelines based on mislabeled reviews may perpetuate bias, and researchers may invest substantial effort in reviews that lack methodological rigor or fail to meet journal standards for publication.
Recent methodological developments have further enriched the evidence synthesis landscape. These include the PRIOR statement for overviews of reviews (Gates et al., 2022), the SWiM guideline for narrative synthesis (Campbell et al., 2020), the PRISMA extension for scoping reviews (PRISMA-ScR) (Tricco et al., 2018), and the emergence of living systematic reviews (Elliott et al., 2014; Ioannidis, 2017). Despite these advances, clear, side-by-side comparisons of the fundamental review types remain scarce.
This manuscript aims to resolve these confusions by providing a definitive methodological comparison tailored to the needs of clinical researchers, particularly in fields such as dentistry, medicine, and allied health sciences.
The central thesis is that these four entities serve fundamentally different scientific functions:
  • A protocol is a plan for a future review.
  • A systematic review is a rigorous, reproducible study of primary studies.
  • A narrative review is an expert-led, interpretive overview.
  • An umbrella review is a systematic synthesis of existing systematic reviews.
Understanding these differences is essential for authors, peer reviewers, journal editors, and evidence users.

2. Methods

A structured comparative methodological analysis was conducted to evaluate protocols, narrative reviews, systematic reviews, and umbrella reviews across key methodological domains. The analysis was performed between February and March 2026.
Search and source identification: A targeted literature search was conducted in PubMed and Google Scholar using the following keywords: “systematic review methodology,” “narrative review,” “umbrella review,” “overview of reviews,” and “protocol registration.” The search was limited to English-language guidance documents published from 2010 onward. Search methods followed established principles (Lefebvre et al., 2023). No formal risk of bias assessment was performed, as this is a methodological comparison, not a systematic review of outcomes.
Included authoritative sources:
  • Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., 2023)
  • PRISMA 2020 Statement (Page et al., 2021) and its explanation and elaboration (Page et al., 2021b)
  • PRISMA-P 2015 Statement (Shamseer et al., 2015)
  • PRIOR statement for overviews of reviews (Gates et al., 2022)
  • SWiM guideline for narrative synthesis (Campbell et al., 2020)
  • PRISMA-ScR for scoping reviews (Tricco et al., 2018)
  • JBI Methodology for Umbrella Reviews (Aromataris et al., 2015)
  • PROSPERO registry standards (University of York)
  • ROBIS tool for risk of bias in systematic reviews (Whiting et al., 2016)
  • AMSTAR 2 (Shea et al., 2017)
  • RoB 2 for randomized trials (Sterne et al., 2019)
  • GRADE Working Group (Schünemann et al., 2011)
  • Methodological literature on living reviews (Elliott et al., 2014; Ioannidis, 2017)
  • Guidance on overlap management in umbrella reviews (Pieper et al., 2019; Lunny et al., 2020)
  • Additional methodological resources (Gough et al., 2012; Booth et al., 2021; Munn et al., 2018; Muka et al., 2020)
Domains of comparison were predefined as: purpose, research question structure, unit of analysis, search strategy, inclusion criteria, risk of bias assessment, data extraction, synthesis method, overlap management (for umbrella reviews), transparency, reproducibility, time requirements, and authorship considerations. Each domain was analyzed to produce the comparative findings presented in this manuscript.

3. Results

3.1. Definitions and Core Characteristics

3.1.1. Protocol

A protocol is a pre-specified, publicly registered document that outlines the rationale, objectives, and methodological plan for a proposed systematic review, umbrella review, or other evidence synthesis. Core characteristics include: written before conducting the review; registered in a public registry (e.g., PROSPERO, OSF, Cochrane); inclusion of research question, search strategy, eligibility criteria, data extraction items, risk of bias assessment plan, and synthesis methods; and the absence of results or conclusions.

3.1.2. Systematic Review

A systematic review is a formal, structured, and reproducible method to identify, appraise, and synthesize all empirical evidence from primary studies that meets pre-specified eligibility criteria to answer a focused research question. Core characteristics include: explicit, transparent methods; comprehensive, reproducible search (multiple databases, grey literature); formal assessment of risk of bias in included primary studies using tools such as RoB 2 (Sterne et al., 2019); quantitative or narrative synthesis of findings (with SWiM guidance when meta-analysis is not possible) (Campbell et al., 2020); and design to minimize bias and random error.

3.1.3. Narrative Review

A narrative (or traditional) review is a subjective summary of a topic based on the author’s selection and interpretation of the literature, without explicit, reproducible methods. Core characteristics include: no pre-specified protocol or registry; selective, convenience-based literature search; no formal quality appraisal of cited sources; author’s expertise driving inclusion, emphasis, and conclusions; and considerable flexibility.

3.1.4. Umbrella Review

An umbrella review (also known as an overview of reviews) is a higher-order evidence synthesis that systematically compiles, appraises, and synthesizes existing systematic reviews and meta-analyses on a broad topic. Core characteristics include: explicit, transparent methods; comprehensive search for existing systematic reviews; formal assessment of quality of included systematic reviews (e.g., AMSTAR 2, Shea et al., 2017; or ROBIS, Whiting et al., 2016); synthesis of findings from multiple reviews; and careful management of overlapping primary studies (Pieper et al., 2019; Lunny et al., 2020). Reporting should follow the PRIOR statement (Gates et al., 2022).

3.2. Comparative Methodological Characteristics

Table 1 presents a comprehensive point-by-point comparison across methodological dimensions for all four entities.

3.3. Synthesis of Findings

Across all domains examined, systematic reviews and umbrella reviews demonstrated the highest level of methodological rigor, characterized by predefined protocols, comprehensive search strategies, duplicate screening processes, and formal risk of bias assessment using validated tools—Cochrane RoB 2 for primary studies (Sterne et al., 2019) and AMSTAR 2 (Shea et al., 2017) or ROBIS (Whiting et al., 2016) for systematic reviews. Protocols functioned exclusively as methodological safeguards, serving to prevent selective outcome reporting and post-hoc modifications that might introduce bias.
Umbrella reviews occupy a distinct methodological niche: they represent a higher-order synthesis appropriate when multiple systematic reviews already exist on a topic. Unlike systematic reviews, which synthesize primary studies, umbrella reviews synthesize the findings of existing systematic reviews, requiring careful management of overlapping primary studies across included reviews to avoid double-counting (Pieper et al., 2019; Lunny et al., 2020).
Narrative reviews showed substantial variability and lack of reproducibility, reflecting their dependence on author-driven selection and interpretation without standardized methodological safeguards.
The emergence of living systematic reviews—a dynamic approach where evidence is continually updated as new research becomes available (Elliott et al., 2014; Ioannidis, 2017)—and living umbrella reviews further emphasizes the evolving nature of high-quality evidence synthesis.

4. Step-by-Step Process for Each

4.1. Protocol Development (Steps)

  • Identify a gap or need for a systematic review or umbrella review.
  • Formulate a research question (focused for systematic review; broader for umbrella review).
  • Draft eligibility criteria (inclusion/exclusion).
  • Develop a comprehensive search strategy (with librarian) (Lefebvre et al., 2023).
  • Plan study selection process (screening).
  • Design data extraction form.
  • Select risk of bias assessment tools (RoB 2 for primary studies; AMSTAR 2 or ROBIS for systematic reviews).
  • Plan synthesis (meta-analysis, narrative with SWiM guidance, or overview).
  • Register protocol (PROSPERO, OSF, Cochrane, or journal).
  • Publish protocol (optional but encouraged).

4.2. Conducting a Systematic Review (Steps)

  • (Prerequisite: Have or write a protocol.)
  • Execute search across multiple databases (e.g., PubMed, Embase, Web of Science).
  • Remove duplicates.
  • Screen titles/abstracts against inclusion criteria (two reviewers).
  • Retrieve full texts.
  • Screen full texts (two reviewers).
  • Extract data from included primary studies (standardized forms).
  • Assess risk of bias for each included primary study (Sterne et al., 2019).
  • Synthesize results (meta-analysis or narrative synthesis following SWiM guidance, Campbell et al., 2020).
  • Grade certainty of evidence (e.g., GRADE) (Schünemann et al., 2011).
  • Write report following PRISMA 2020 checklist (Page et al., 2021).
  • Interpret results, discuss limitations, conclude.
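When meta-analysis is feasible, the synthesis step above typically uses inverse-variance pooling. As a minimal illustration of the weighting logic (not part of any guideline cited here; the data and function name are hypothetical), the classic DerSimonian-Laird random-effects estimator can be sketched as:

```python
def dersimonian_laird(effects, variances):
    """Pool study effects with the DerSimonian-Laird random-effects model.

    `effects` are per-study effect estimates (e.g., log odds ratios);
    `variances` are their sampling variances. Returns the pooled
    estimate, its standard error, and the between-study variance tau^2.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                   # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2

# Hypothetical data: three trials reporting log odds ratios
pooled, se, tau2 = dersimonian_laird([0.42, 0.30, 0.55], [0.04, 0.09, 0.06])
```

In practice, reviewers use dedicated tools (e.g., RevMan or the R package metafor) rather than hand-rolled code; the sketch only makes the weighting logic concrete.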

4.3. Writing a Narrative Review (Typical Steps)

  • Choose a broad topic (e.g., “orthodontic management of impacted canines”).
  • Conduct informal literature search (PubMed, Google Scholar, personal files).
  • Select papers based on author’s judgment (seminal, recent, convenient).
  • Read and summarize findings thematically or chronologically.
  • Provide expert commentary, opinion, and future directions.
  • Write without formal methods section (often no search details).
  • Submit to journal (often invited by editor).

4.4. Conducting an Umbrella Review (Steps)

  • (Prerequisite: Have or write a protocol.)
  • Execute search for existing systematic reviews across multiple databases (e.g., PubMed, Embase, Epistemonikos, Cochrane Database of Systematic Reviews).
  • Remove duplicates.
  • Screen titles/abstracts against inclusion criteria (two reviewers).
  • Retrieve full texts of candidate systematic reviews.
  • Screen full texts (two reviewers).
  • Extract data from included systematic reviews (standardized forms).
  • Assess quality of included systematic reviews (e.g., AMSTAR 2, Shea et al., 2017; or ROBIS, Whiting et al., 2016).
  • Manage overlap: map primary studies across included reviews to avoid double-counting (Pieper et al., 2019).
  • Synthesize results narratively with evidence tables.
  • Grade overall certainty of evidence (often using GRADE with consideration of review quality).
  • Write report following PRIOR statement (Gates et al., 2022).
  • Interpret results, discuss limitations, conclude.
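The overlap-management step can be quantified with the corrected covered area (CCA) described by Pieper and colleagues, computed from a citation matrix of primary studies (rows) by included reviews (columns). A minimal sketch, with an invented matrix for illustration:

```python
def corrected_covered_area(matrix):
    """Corrected covered area (CCA) for primary-study overlap.

    `matrix` has one row per unique primary study and one column per
    included systematic review; an entry of 1 means that review
    includes that study. CCA = (N - r) / (r*c - r), where N is the
    total number of inclusions, r the number of unique primary
    studies, and c the number of included reviews.
    """
    r = len(matrix)                          # unique primary studies
    c = len(matrix[0])                       # included systematic reviews
    n = sum(sum(row) for row in matrix)      # total inclusion marks
    return (n - r) / (r * c - r)

# Invented example: 4 primary studies across 3 systematic reviews
citation_matrix = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
]
cca = corrected_covered_area(citation_matrix)  # → 0.5, i.e., 50% overlap
```

CCA values are commonly read as slight (0-5%), moderate (6-10%), high (11-15%), or very high (>15%) overlap.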

5. Strengths and Limitations

5.1. Protocol

Strengths:
  • Prevents selective outcome reporting
  • Increases transparency and credibility
  • Allows replication and updating
  • Facilitates peer review of methods
Limitations:
  • No findings on its own
  • Requires time before seeing results
  • Not suitable for narrative reviews
  • Registration may be mandatory for some journals

5.2. Systematic Review

Strengths:
  • Minimal bias; highest internal validity for primary studies
  • Reproducible and updateable (living reviews)
  • Essential for clinical guidelines
  • Allows meta-analysis for precise effect estimates
Limitations:
  • Very time- and resource-intensive
  • May be outdated quickly
  • Limited to narrow questions
  • Can be uninformative if primary studies are poor quality

5.3. Narrative Review

Strengths:
  • Relatively resource-efficient to produce
  • Considerable flexibility; can integrate disparate ideas
  • Useful for teaching, textbooks, introductions
  • Allows expert opinion and interpretation
Limitations:
  • High risk of bias and selective citation
  • Not reproducible
  • Cannot reliably inform policy or clinical practice
  • Often confused with systematic reviews

5.4. Umbrella Review

Strengths:
  • Provides comprehensive overview of a broad topic
  • More time-efficient than conducting multiple de novo systematic reviews
  • Enables comparison of findings across multiple systematic reviews
  • Ideal for topics with abundant existing reviews
  • High credibility when methodologically rigorous
Limitations:
  • Contingent upon the methodological quality of existing systematic reviews
  • Requires careful management of overlapping primary studies
  • May propagate errors if included reviews are flawed
  • Can be complex to interpret when reviews reach conflicting conclusions
  • Not appropriate when few or no systematic reviews exist

6. Appropriate Use Cases

When to write a PROTOCOL:
  • You plan to conduct a systematic review, umbrella review, scoping review, or network meta-analysis.
  • You seek funding or ethics approval for a synthesis project.
  • You want to avoid duplication (registry check).
  • You intend to publish the review in a high-impact journal (many require protocol registration).
When to conduct a SYSTEMATIC REVIEW:
  • You need to inform a clinical practice guideline.
  • A policy decision requires the best available evidence from primary studies.
  • You want to quantify an intervention’s effect (meta-analysis).
  • You are evaluating the consistency or inconsistency of primary evidence.
  • Your question is narrow and answerable (e.g., “Does X cause Y?”).
  • Few or no existing systematic reviews on the topic.
When to conduct an UMBRELLA REVIEW:
  • Multiple systematic reviews already exist on the same or related topics.
  • You need a comprehensive overview of a broad clinical area (e.g., “orthodontic interventions for malocclusion”).
  • You want to compare findings across different systematic reviews.
  • Guideline developers need a single source summarizing existing review-level evidence.
  • You wish to map the quality and consistency of evidence across a field.
When to write a NARRATIVE REVIEW:
  • You are writing a textbook chapter or background section of a primary research paper.
  • You are introducing a field to students or new researchers.
  • You want to offer a personal perspective or historical overview.
  • A systematic review or umbrella review is impractical (e.g., extremely broad topic, no eligible primary studies or systematic reviews).
  • You are generating hypotheses, not testing them.

7. Common Misconceptions and Pitfalls

Misconception → Correction
  • “A protocol is a type of review.” → Incorrect. A protocol constitutes a methodological plan rather than an evidence synthesis.
  • “A narrative review with a long reference list is systematic.” → Incorrect. “Systematic” refers to methods, not length; a long reference list does not equal a systematic search or quality appraisal.
  • “We did a systematic review without a protocol.” → That is still a systematic review, but it is unregistered and potentially at higher risk of post-hoc bias.
  • “We did a systematic review using narrative synthesis, so it’s a narrative review.” → Incorrect. The synthesis method (narrative vs. statistical) does not define the review type; a systematic review with narrative synthesis remains a systematic review.
  • “Umbrella reviews are just large systematic reviews.” → Incorrect. Umbrella reviews synthesize existing systematic reviews, not primary studies; the unit of analysis differs fundamentally.
  • “Any review that covers multiple topics is an umbrella review.” → Incorrect. Umbrella reviews require systematic methods, quality appraisal of included systematic reviews, and management of overlapping primary studies.
  • “Narrative reviews are worthless.” → Incorrect. Narrative reviews play important roles in education, hypothesis generation, and historical context, but they cannot replace systematic reviews or umbrella reviews for guiding practice.

8. Reporting Standards and Registries

Entity, key reporting guideline, and key registry:
  • Protocol: PRISMA-P 2015 (Shamseer et al., 2015); registries: PROSPERO (for health topics), OSF, Cochrane, Figshare.
  • Systematic review: PRISMA 2020 (Page et al., 2021); no registry for the review itself, but its protocol should be registered.
  • Umbrella review: PRIOR statement (Gates et al., 2022); registry: PROSPERO (increasingly required).
  • Narrative review: no widely accepted reporting guideline (some journals suggest a non-systematic review checklist); should not be registered as a systematic or umbrella review.
Critical note: Journals increasingly reject manuscripts that claim to be “systematic reviews” or “umbrella reviews” but lack a registered protocol, PRISMA/PRIOR flow diagram, or formal risk of bias assessment.

9. Illustrative Examples

Example 1: Protocol
Title: “Clear aligners versus fixed appliances for orthodontic treatment outcomes: A systematic review and meta-analysis protocol”
Registry: PROSPERO CRD42023400001
Content: Search strategy, PICO (Population: orthodontic patients; Intervention: clear aligners; Comparison: fixed appliances; Outcome: treatment duration, occlusal outcomes, patient-reported outcomes), inclusion criteria (RCTs and prospective cohort studies), risk of bias (RoB 2, Sterne et al., 2019), planned meta-analysis.
Example 2: Systematic Review
Title: “Clear aligners versus fixed appliances for orthodontic treatment: A systematic review and meta-analysis of 12 controlled studies”
Methods: Searched 5 databases from inception to December 2025, two independent reviewers, RoB 2 assessment (Sterne et al., 2019), random-effects meta-analysis, GRADE certainty (Schünemann et al., 2011), reported following PRISMA 2020 (Page et al., 2021).
Conclusion: Moderate-certainty evidence shows aligners are associated with shorter treatment duration but no significant difference in occlusal outcome quality compared to fixed appliances.
Example 3: Umbrella Review
Title: “Orthodontic interventions for malocclusion: An umbrella review of systematic reviews”
Methods: Searched PubMed, Embase, Epistemonikos, and Cochrane Database of Systematic Reviews from inception to March 2026, two independent reviewers, AMSTAR 2 quality assessment (Shea et al., 2017), narrative synthesis of 18 systematic reviews, management of overlapping primary studies using matrix approach (Pieper et al., 2019), reported following PRIOR (Gates et al., 2022).
Conclusion: High-quality systematic reviews consistently support the effectiveness of fixed appliances for moderate to severe malocclusion, while evidence for clear aligners remains moderate with heterogeneity in outcome reporting.
Example 4: Narrative Review
Title: “The evolution of clear aligner therapy in orthodontics: A narrative review”
Methods section (often brief or absent):
“The author searched PubMed and Google Scholar using keywords ‘clear aligners,’ ‘Invisalign,’ and ‘orthodontic treatment’ and selected key papers based on clinical relevance and historical impact.”
Content: Covers development from initial cases to current digital workflows, biomechanical considerations, limitations, and author’s clinical recommendations.

10. Decision Algorithm

Figure 1 presents a decision algorithm to guide authors in selecting the appropriate evidence synthesis approach based on their objectives and resources.
The figure presents a decision tree with four branches:
  • Branch 1 (Plan a future systematic or umbrella review) → Write a PROTOCOL → Register in PROSPERO/OSF
  • Branch 2 (Answer a focused question from primary studies) → Conduct a SYSTEMATIC REVIEW → Follow PRISMA
  • Branch 3 (Synthesize existing systematic reviews) → Conduct an UMBRELLA REVIEW → Follow PRIOR
  • Branch 4 (Provide broad overview or expert perspective) → Write a NARRATIVE REVIEW → No registration required
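The four branches above can be expressed as a simple lookup, e.g. for teaching materials or screening checklists. The objective labels below are our own shorthand, not standardized terms from any guideline:

```python
def choose_synthesis(objective):
    """Return the synthesis type suggested for a stated objective.

    The objective keys are invented shorthand for the four branches
    of the decision tree; they are illustrative only.
    """
    mapping = {
        "plan_future_review": "Write a PROTOCOL and register it (PROSPERO/OSF)",
        "focused_question_primary_studies": "Conduct a SYSTEMATIC REVIEW (follow PRISMA 2020)",
        "synthesize_existing_reviews": "Conduct an UMBRELLA REVIEW (follow PRIOR)",
        "broad_overview_or_perspective": "Write a NARRATIVE REVIEW (no registration required)",
    }
    return mapping.get(objective, "Unclear objective; revisit Figure 1")

print(choose_synthesis("synthesize_existing_reviews"))
# prints "Conduct an UMBRELLA REVIEW (follow PRIOR)"
```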

11. Discussion

The findings of this comparative analysis reveal fundamental distinctions among protocols, narrative reviews, systematic reviews, and umbrella reviews that extend beyond mere terminology to encompass deep methodological differences with significant implications for evidence-based practice.

11.1. Interpretation of Differences

Systematic reviews represent the gold standard for minimizing bias when answering focused clinical questions based on primary studies. Their methodological rigor—including protocol registration (Shamseer et al., 2015), comprehensive searching (Lefebvre et al., 2023), duplicate screening, and formal risk of bias assessment (Sterne et al., 2019)—ensures that findings are reproducible and reliable. This level of rigor is essential when reviews inform clinical guidelines or policy decisions.
Umbrella reviews occupy a distinct and increasingly important niche in the evidence synthesis landscape. They represent a higher-order synthesis appropriate when multiple systematic reviews already exist on a topic. By systematically compiling, appraising, and synthesizing existing reviews, umbrella reviews provide clinicians and guideline developers with a comprehensive overview of evidence across broad clinical areas. The methodological challenges unique to umbrella reviews—particularly the management of overlapping primary studies across included systematic reviews—require specialized expertise and transparent reporting (Aromataris et al., 2015; Pieper et al., 2019; Gates et al., 2022).
Protocols serve as methodological safeguards, preventing post-hoc changes that might introduce bias. The requirement by leading journals and organizations for protocol registration (e.g., PROSPERO, Cochrane) reflects recognition that pre-specification is critical to maintaining the integrity of both systematic reviews and umbrella reviews.
Narrative reviews, while lacking methodological rigor, fulfill distinct and valuable functions. Their flexibility allows integration of diverse sources, historical perspectives, and clinical expertise that cannot be captured through rigid systematic methods. They serve as essential educational tools and hypothesis-generating resources.

11.2. Why Mislabeling Occurs

Misclassification of narrative reviews as systematic reviews occurs frequently in the literature. Umbrella reviews are sometimes incorrectly labeled as systematic reviews or scoping reviews. Several factors contribute to this problem: (1) authors may lack awareness of methodological standards; (2) the term “systematic” is sometimes used loosely to imply thoroughness rather than adherence to specific methodological criteria; (3) journals may not enforce rigorous methodological expectations; (4) there is perceived prestige in labeling a review as systematic or as an umbrella review, even when methods do not meet established standards; and (5) confusion persists regarding the appropriate unit of analysis (primary studies versus systematic reviews).
Scoping reviews represent another important synthesis type, designed to map the breadth of evidence without formal risk of bias assessment (Tricco et al., 2018; Munn et al., 2018), further highlighting the need for precise methodological classification across the evidence synthesis landscape.

11.3. Consequences in Clinical Research

Misclassification represents not merely a semantic issue, but a methodological error with potential consequences for clinical decision-making and guideline development. A narrative review erroneously labeled as systematic may be cited as high-level evidence, potentially leading to biased clinical recommendations. Conversely, a properly conducted systematic review labeled as a narrative review may be undervalued and overlooked by guideline developers. An umbrella review that fails to manage overlapping primary studies appropriately may produce misleading conclusions through double-counting of evidence (Lunny et al., 2020).

11.4. The Evolving Landscape

As noted in the Results, living systematic reviews and living umbrella reviews continually incorporate new evidence as it becomes available (Elliott et al., 2014; Ioannidis, 2017). This approach maintains the methodological rigor of traditional reviews while addressing the challenge of rapid obsolescence in fast-moving fields.
The increasing use of artificial intelligence in literature screening and synthesis further underscores the importance of clearly defined methodological frameworks to ensure transparency and reproducibility. As AI tools become more integrated into evidence synthesis workflows, adherence to established standards for protocols, systematic reviews, and umbrella reviews will be essential to maintain scientific integrity.

11.5. Limitations

This analysis is limited by its reliance on established methodological guidance rather than empirical evaluation of published reviews, and by the absence of quantitative assessment of misclassification prevalence in the literature. Future research could empirically examine the frequency and characteristics of mislabeled reviews across disciplines to better understand the scope of this issue.

12. Conclusion

Protocols, narrative reviews, systematic reviews, and umbrella reviews are distinct scientific products with different purposes, methods, strengths, and limitations. A protocol is a pre-registered plan, not a review. A systematic review is the gold standard for minimizing bias and answering focused questions from primary studies. An umbrella review provides a higher-order synthesis of existing systematic reviews, offering comprehensive overviews of broad topics. A narrative review offers breadth and flexibility but cannot provide reproducible, bias-minimized evidence for decision-making.
Authors must accurately label their work and follow appropriate methodological guidelines. Journals, peer reviewers, and evidence users must critically appraise which type of synthesis is presented and interpret conclusions accordingly. Mislabeling—particularly calling a narrative review a systematic review, or conflating umbrella reviews with systematic reviews—misleads readers and undermines evidence-based practice. Accurate differentiation among these review types is not merely methodological precision, but a prerequisite for maintaining the integrity of evidence-based healthcare.
For clinicians and researchers in orthodontics and dentistry, understanding these distinctions is essential for producing high-quality evidence syntheses, critically appraising published literature, and ultimately delivering better patient care based on reliable evidence.
Final take-home message:
Plan with a protocol. Answer focused questions with a systematic review. Synthesize existing reviews with an umbrella review. Contextualize with a narrative review. Never confuse one with another.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable (no human or animal subjects).

Data Availability Statement

No new data were created or analyzed in this study. All referenced guidance documents are publicly available.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Higgins, J.P.T.; Thomas, J.; Chandler, J.; Cumpston, M.; Li, T.; Page, M.J.; Welch, V.A. (Eds.) Cochrane Handbook for Systematic Reviews of Interventions. Version 6.4; Cochrane, 2023. [Google Scholar]
  2. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  3. Page, M.J.; Moher, D.; Bossuyt, P.M.; et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ 2021, 372, n160. [Google Scholar] [CrossRef]
  4. Shamseer, L.; Moher, D.; Clarke, M.; et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015, 350, g7647. [Google Scholar] [CrossRef]
  5. Gates, M.; Gates, A.; Pieper, D.; et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ 2022, 378, e070849. [Google Scholar] [CrossRef]
  6. Campbell, M.; McKenzie, J.E.; Sowden, A.; et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020, 368, l6890. [Google Scholar] [CrossRef]
  7. Tricco, A.C.; Lillie, E.; Zarin, W.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018, 169(7), 467–473. [Google Scholar] [CrossRef] [PubMed]
  8. Aromataris, E.; Fernandez, R.; Godfrey, C.M.; Holly, C.; Khalil, H.; Tungpunkom, P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc. 2015, 13(3), 132–40. [Google Scholar] [CrossRef] [PubMed]
  9. Shea, B.J.; Reeves, B.C.; Wells, G.; et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017, 358, j4008. [Google Scholar] [CrossRef] [PubMed]
  10. Whiting, P.; Savović, J.; Higgins, J.P.T.; et al. ROBIS: A new tool to assess risk of bias in systematic reviews. J Clin Epidemiol. 2016, 69, 225–234. [Google Scholar] [CrossRef]
  11. Sterne, J.A.C.; Savović, J.; Page, M.J.; et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019, 366, l4898. [Google Scholar] [CrossRef]
  12. Schünemann, H.J.; Brennan, S.; Akl, E.A.; et al. The development of the GRADE handbook. J Clin Epidemiol. 2011, 64(11), 1281–5. [Google Scholar]
  13. Lefebvre, C.; Glanville, J.; Briscoe, S.; et al. Searching for and selecting studies. In Cochrane Handbook for Systematic Reviews of Interventions. Version 6.4; Higgins, J.P.T., Thomas, J., Chandler, J., et al., Eds.; Cochrane, 2023. [Google Scholar]
  14. Elliott, J.H.; Turner, T.; Clavisi, O.; et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014, 11(2), e1001603. [Google Scholar] [CrossRef] [PubMed]
  15. Ioannidis, J.P.A. Next-generation systematic reviews: prospective meta-analysis, individual patient data, and living reviews. J Clin Epidemiol. 2017, 83, 1–3. [Google Scholar]
  16. Pieper, D.; Antoine, S.L.; Mathes, T.; Neugebauer, E.A.M.; Eikermann, M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol. 2019, 116, 1–9. [Google Scholar] [CrossRef]
  17. Lunny, C.; Brennan, S.E.; McDonald, S.; et al. Overviews of reviews often have limited rigorous reporting and methodological quality. J Clin Epidemiol. 2020, 124, 64–75. [Google Scholar]
  18. Gough, D.; Thomas, J.; Oliver, S. Clarifying differences between review designs and methods. Syst Rev. 2012, 1, 28. [Google Scholar] [CrossRef]
  19. Booth, A.; Sutton, A.; Papaioannou, D. Systematic Approaches to a Successful Literature Review, 3rd ed.; Sage, 2021. [Google Scholar]
  20. Munn, Z.; Peters, M.D.J.; Stern, C.; Tufanaru, C.; McArthur, A.; Aromataris, E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018, 18(1), 143. [Google Scholar] [CrossRef] [PubMed]
  21. Muka, T.; Glisic, M.; Milic, J.; et al. A 24-step guide on how to design, conduct, and successfully publish a systematic review. Eur J Epidemiol. 2020, 35(1), 49–60. [Google Scholar] [CrossRef]
  22. Pae, C.U. Why systematic review rather than narrative review? Psychiatry Investig. 2015, 12(3), 417–9. [Google Scholar] [CrossRef]
  23. PROSPERO International prospective register of systematic reviews. Available online: https://www.crd.york.ac.uk/prospero/ (accessed on 13 April 2026).
  24. Moher, D.; Shamseer, L.; Clarke, M.; et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015, 4, 1. [Google Scholar] [CrossRef]
  25. Pollock, M.; Fernandes, R.M.; Becker, L.A.; Pieper, D.; Hartling, L. Chapter V: Overviews of Reviews. In Cochrane Handbook for Systematic Reviews of Interventions. Version 6.4; Higgins, J.P.T., Thomas, J., Chandler, J., et al., Eds.; Cochrane, 2023. [Google Scholar]
  26. Rethlefsen, M.L.; Kirtley, S.; Waffenschmidt, S.; et al. PRISMA-S: an extension to the PRISMA statement for searching. J Med Libr Assoc. 2021, 109(2), 174–180. [Google Scholar] [CrossRef]
  27. McGuinness, L.A.; Higgins, J.P.T. Risk-of-bias VISualization (robvis): An R package and Shiny web app for visualizing risk-of-bias assessments. Res Synth Methods 2021, 12(1), 55–61. [Google Scholar] [CrossRef]
  28. McKenzie, J.E.; Brennan, S.E. Synthesizing and presenting findings using other methods. In Cochrane Handbook for Systematic Reviews of Interventions; Higgins, J.P.T., Thomas, J., Chandler, J., et al., Eds.; Cochrane, 2023; Volume Version 6.4. [Google Scholar]
  29. Peters, M.D.J.; Godfrey, C.M.; Khalil, H.; McInerney, P.; Parker, D.; Soares, C.B. Best practice guidance and reporting items for the development of scoping review protocols. JBI Evid Implement. 2022, 20(4), 261–272. [Google Scholar] [CrossRef] [PubMed]
  30. Pollock, M.; Fernandes, R.M.; Becker, L.A.; Pieper, D.; Hartling, L. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. 2023, 21(3), 520–532. [Google Scholar] [CrossRef] [PubMed]
  31. Page, M.J.; Moher, D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement. Syst Rev. 2018, 7(1), 132. [Google Scholar]
  32. Khamis, A.M.; et al. A scoping review of scoping reviews: advancing the methodology. J Clin Epidemiol. 2022, 145, 15–27. [Google Scholar]
Figure 1. Decision Algorithm for Selecting Protocol, Systematic Review, Narrative Review, or Umbrella Review.
Table 1. Comparative Methodological Characteristics of Protocols, Systematic Reviews, Narrative Reviews, and Umbrella Reviews.
| Dimension | Protocol | Systematic Review | Narrative Review | Umbrella Review |
|---|---|---|---|---|
| Primary purpose | To pre-specify methods and prevent post-hoc bias | To answer a focused question by synthesizing primary studies with minimal bias | To provide broad overview, perspective, or historical context | To synthesize existing systematic reviews on a broad topic |
| Unit of analysis | Not applicable (plan) | Primary studies (RCTs, cohort, case-control, etc.) | Any literature (primary studies, reviews, opinion) | Systematic reviews and meta-analyses |
| Presence of results | No results (plan only) | Yes (synthesized findings from primary studies) | Yes (summarized findings) | Yes (synthesized findings from multiple reviews) |
| Research question | Stated but not answered | Narrow, precise (often PICO/PEO format) | Broad, often evolving or ill-defined | Broad to moderately broad (encompassing multiple related PICO questions) |
| Search strategy | Specified in detail (databases, terms, limits) | Comprehensive, reproducible, documented (Lefebvre et al., 2023) | Not specified; selective, based on author knowledge | Comprehensive for systematic reviews (multiple databases, search for existing reviews) |
| Inclusion criteria | Explicitly defined (PICO, study designs, language) | Strictly applied to all retrieved primary studies | Vague or absent | Explicitly defined (types of systematic reviews, overlapping PICO questions) |
| Risk of bias assessment | Planned (e.g., RoB 2, AMSTAR 2) | Performed on included primary studies (Sterne et al., 2019) | Not performed | Performed on included systematic reviews (e.g., AMSTAR 2, ROBIS) |
| Data extraction | Forms designed in advance | Standardized, duplicate extraction from primary studies | Informal note-taking | Standardized extraction from systematic reviews |
| Synthesis method | Not applicable (planned) | Meta-analysis (if possible) or narrative synthesis (SWiM guidance; Campbell et al., 2020) | Thematic, chronological, or conceptual summary | Narrative synthesis, tabulation of findings, and, where appropriate, comparison of effect estimates |
| Overlap management | Not applicable | Not applicable | Not applicable | Critical (must avoid double-counting overlapping primary studies; see Pieper et al., 2019) |
| Transparency | Full (public registry) | Full (methods reported in detail) | Low (methods rarely reported) | Full (protocol recommended; PRIOR reporting, Gates et al., 2022) |
| Reproducibility | High (others could follow same plan) | High (study could be repeated) | Very low (different authors would produce different reviews) | High (study could be repeated) |
| Time to complete | Weeks | Months to years | Days to weeks | Months (may be shorter than de novo systematic reviews depending on scope) |
| Number of authors | Usually 2+ (often the same as the review team) | 2+ (ideally including a librarian or methodologist) | 1–2 (frequently single-authored) | 2+ (often including a methodologist with expertise in overview methods) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.