1. Introduction
The EU Health Technology Assessment (HTA) Regulation was adopted on 15 December 2021, came into force on 11 January 2022, and applies from 12 January 2025. The EU-HTA Regulation establishes a support framework and procedures for cooperation among Member States on health technology assessment at EU level. It creates an EU HTA Member State Coordination Group with two main tasks: conducting Joint Clinical Assessments (JCA) and offering Joint Scientific Consultations (JSC) on medicinal products and medical devices.
The JCA aims to assess the level of certainty of the comparative effectiveness of medicinal products. The PICO framework (Population, Intervention, Comparator, Outcome) is the pivotal tool for defining the evidence requirements of the clinical evidence assessment.
While the systematic literature review (SLR) will be central to the JCA, little attention has been given to this topic and no guidance has been developed for SLRs. A recent study simulating the evidence-generation requirements under the JCA predicted that between 8 and 31 PICO frameworks would be necessary for two anticancer investigational medicinal products, depending on the treatment landscape and the specific requirements of each member state. This indicates that health technology developers (HTDs) may be required to submit up to 31 distinct comparative effectiveness assessments [1]. The JCA subgroup recently published a PICO exercise for two oncology products, identifying 13 PICOs for each [2].
At the time of submission of the JCA dossier, only a limited number of comparative clinical trials will be available: typically one phase II and one phase III trial, a single phase II/III trial, a single phase II trial, or even only non-comparative single-arm trials. When comparative, these trials often use placebo as the reference comparator. To address the multiplicity of PICOs and generate comparative effectiveness evidence, direct comparisons will be insufficient and may inform only a minority of PICOs. The majority of PICOs will require SLRs followed by indirect treatment comparisons (ITCs). The quality of the SLRs will therefore be crucial for the comparative effectiveness assessment, and both SLRs and ITCs will play a vital role in documenting comparative effectiveness for EU-JCA purposes.
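For illustration only (the JCA guidance does not prescribe a particular ITC method), the simplest anchored ITC is the Bucher adjusted indirect comparison: when the intervention A and a comparator B have each been compared with a common reference C (for example, placebo) in separate randomized trials, the indirect estimate and its uncertainty are obtained as

```latex
% Bucher adjusted indirect comparison of A versus B via a common comparator C
% (illustrative; shown for log hazard ratios, but any additive effect measure applies)
\log \widehat{HR}_{AB} = \log \widehat{HR}_{AC} - \log \widehat{HR}_{BC},
\qquad
\operatorname{Var}\left(\log \widehat{HR}_{AB}\right)
  = \operatorname{Var}\left(\log \widehat{HR}_{AC}\right)
  + \operatorname{Var}\left(\log \widehat{HR}_{BC}\right)
```

The validity of such an estimate depends directly on the completeness and quality of the SLR that identifies the trials contributing to each side of the comparison.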
While SLRs are critical to the JCA comparative effectiveness assessment, it is unclear why no dedicated guidance document was developed to address methodological recommendations for conducting SLRs. Moreover, the JCA subgroup does not recommend a specific guideline for conducting SLRs, but emphasizes that the submission dossier should adhere to the principles of evidence-based medicine (EBM), with no further specifications [3,4]. This remains a vague, non-actionable recommendation. Nevertheless, through the JCA guidance documents and reporting templates, several recommendations are offered that outline the expectations of the JCA subgroup regarding SLR generation.
While the JCA subgroup offers well-established recommendations for study classification/hierarchy, reporting of reasons for exclusion, and PRISMA diagram generation, several requirements and recommendations are not aligned with the state of the art.
This manuscript discusses the requirements for SLR development for JCA that deviate from the state of the art.
2. Scope of Literature Search
No requirement for Embase
The guidance on filling in the JCA dossier template for medicinal products specifies: "The search in bibliographic databases at least is to be conducted in MEDLINE (inclusive 'in-process & other non-indexed citations') and the 'Cochrane Central Registry of Controlled Trials' database" [5].
This is not aligned with the state of the art: the Cochrane Handbook advises that, at a minimum, systematic reviews should search MEDLINE and Embase, along with the Cochrane Central Register of Controlled Trials (CENTRAL) [6].
Embase includes all items indexed in MEDLINE, along with an additional 3,000 journals (over 7 million records), featuring a significant number of European and non-English publications [7,8,9]. Some analyses have demonstrated that Embase provides more comprehensive coverage of controlled clinical trials than MEDLINE; for instance, Embase indexes 16% more studies on conditions such as rheumatoid arthritis, low back pain, and osteoporosis [7]. Its broader scope also extends to research on pharmacological treatments and drug-related adverse events [7]. Moreover, Embase includes around 5 million conference abstracts from around 16,000 conferences, which helps reduce publication bias. The database also frequently offers broader access to European and international clinical practice guidelines, which play a key role in preparing global regulatory dossiers and documentation for multinational clinical trials [8,10].
For European-focused projects - whether regulatory, epidemiological, or clinical - Embase is the better choice. The two databases are complementary, and for comprehensive coverage, especially for HTA or evidence synthesis projects, it is still standard practice to search both [7,9,10].
Not including Embase can lead to missing relevant studies and may bias the review's conclusions.
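As a minimal sketch of how this gap can be quantified in practice (assuming hypothetical export files `medline.csv` and `embase.csv` with `doi` and `title` columns; the file names and layout are illustrative and not part of any JCA template), the following counts the records retrievable only through Embase after a simple deduplication:

```python
import csv

def load_keys(path):
    """Read a database export and build a set of normalised record keys.

    Records are keyed by DOI when available, otherwise by lower-cased title,
    a crude but common first-pass deduplication strategy.
    """
    keys = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            doi = (row.get("doi") or "").strip().lower()
            title = (row.get("title") or "").strip().lower()
            keys.add(doi if doi else title)
    keys.discard("")
    return keys

# Hypothetical exports of the same search strategy run in both databases.
medline = load_keys("medline.csv")
embase = load_keys("embase.csv")

embase_only = embase - medline
total = len(embase | medline)
print(f"Records retrieved only via Embase: {len(embase_only)} "
      f"({len(embase_only) / max(total, 1):.0%} of the deduplicated set)")
```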
Excluding Abstracts
The JCA report template specifies that studies must have enough documentation to assess their methods and results, and therefore recommends excluding studies for which only abstracts or posters are available when setting inclusion and exclusion criteria [5].
Including conference abstracts is crucial for reducing publication bias, ensuring that the most recent evidence is considered, and guaranteeing comprehensive EBM practice. Excluding them may result in missing relevant studies and biasing the review's conclusions. Exhaustiveness of evidence is fundamental to EBM [6,11,12].
Omitting abstracts also diverges from the recommendations set forth by EUnetHTA [13,14]. According to EUnetHTA guidance, only around half of all studies initially presented as abstracts are eventually published in full, and abstracts are more likely to be published as full articles if they demonstrate a favorable treatment outcome or statistically significant findings [13,14,15]. Conference abstracts typically offer minimal information regarding study design and often report limited outcome data [16]. EUnetHTA advises against routinely including abstracts in evidence searches and recommends that reviewers make every effort to obtain the complete study report or additional study details before deciding whether to incorporate the findings into the assessment [13,14,16]. Nevertheless, in situations where systematic searches of published literature return few or no relevant records, or when existing data are inconsistent, searching conference abstracts and proceedings may be justified to uncover supplementary studies [13,14,16]. These abstracts and proceedings can be located through bibliographic databases that catalogue conference materials [17], such as Embase, BIOSIS Previews, and Scopus, as well as through manual searches of conference booklets, journal supplements, and event websites [16].
Thus, even though EUnetHTA's position is more nuanced, it does not systematically exclude abstracts from the search.
Excluding abstracts and Embase from the SLR sources goes against the widely accepted standards in evidence synthesis and EBM. It should be reconsidered.
3. Using Risk of Bias Version 1 Tool (RoB 1) for Clinical Trial Evidence Quality Assessment
The JCA guidance recommends using the RoB 1 instrument to assess risk of bias, even though the updated Risk of Bias version 2 (RoB 2) tool was developed in 2019 to replace it [18]. RoB 2 addresses previous limitations and improves the standardisation, transparency, and reproducibility of risk of bias assessments in randomized controlled trials (RCTs) [6,19].
RoB 1 assesses bias in six areas: selection, performance, detection, attrition, reporting, and a general "other bias" category. Judgments are classified as low, unclear, or high risk [20,21].
RoB 2 focuses on five domains: the randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result. RoB 2 removes the "other bias" domain and introduces an overall risk of bias judgment. Additionally, RoB 2 uses signaling questions to guide reviewers toward judgments of low risk, some concerns, or high risk. The signaling questions allow responses of yes, probably yes, no, probably no, and no information, leading to a classification of low risk, some concerns, or high risk for each domain, which can enhance the precision of this revised version compared with the RoB 1 tool [6,22].
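As a simplified sketch of how the RoB 2 domain-level judgments roll up into the overall judgment (our reading of the published algorithm, not an official implementation; the handling of multiple "some concerns" domains remains a reviewer decision and is only flagged here):

```python
from typing import Dict

ROB2_DOMAINS = [
    "randomisation process",
    "deviations from intended interventions",
    "missing outcome data",
    "measurement of the outcome",
    "selection of the reported result",
]

def overall_rob2(judgements: Dict[str, str]) -> str:
    """Derive the overall RoB 2 judgement for one trial result (simplified).

    'low' only if every domain is judged low; 'high' if any domain is high;
    otherwise 'some concerns'. Several 'some concerns' domains may also be
    judged 'high' overall if they substantially lower confidence in the
    result -- that is a reviewer decision, so this sketch only flags it.
    """
    levels = [judgements[d] for d in ROB2_DOMAINS]
    if any(level == "high" for level in levels):
        return "high"
    if all(level == "low" for level in levels):
        return "low"
    if sum(level == "some concerns" for level in levels) > 1:
        print("note: multiple 'some concerns' domains - consider 'high' overall")
    return "some concerns"

# Example: one domain raises some concerns for this result.
example = {d: "low" for d in ROB2_DOMAINS}
example["missing outcome data"] = "some concerns"
print(overall_rob2(example))  # -> "some concerns"
```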
RoB 1 offers a general evaluation of each study considering all outcomes, whereas RoB 2 performs a result-level assessment for specific outcomes. RoB 2 enables reviewers to evaluate context and endpoints with greater accuracy, thereby enhancing the precision of their judgments compared with RoB 1 [6].
RoB 2 has several advantages over RoB 1. It offers clearer guidance through structured signaling questions and focuses on result-level assessments, enhancing its relevance for SLRs. Transparency is improved by removing ambiguous categories such as "other bias" and "unclear risk". Interestingly, the analysis conducted by Babic et al. showed that, among 768 analyzed Cochrane reviews, 78% (602 reviews) contained the domain "other bias" in the risk of bias tool, covering 7811 RCTs [23]. The authors observed a broad variety of sources categorized as "other bias" in the RoB tool applied by researchers, along with variability in the interpretation of identical supporting justifications. Such inconsistency in evaluating the risk of other bias with RoB 1 undermines the reliability and comparability of SLR findings [23].
The JCA recommends RoB 1 primarily because of its shorter operation time compared with the upgraded version [5,18]. This recommendation raises two important questions. First, given that the JCA report focuses solely on clinical evidence, should assessors and co-assessors not employ the highest-quality tool endorsed by a reputable institution such as Cochrane? Second, is the difference in operating time significant enough to justify using a second-best tool that has been replaced by a more effective one?
Numerous factors influence the time required to apply RoB 2, with experience being a significant one. While its added complexity is generally linked to a longer assessment duration, one outlier publication indicates that mean assessment times are similar for both tools [22]. The decision to use RoB 1 therefore warrants further consideration, including potentially piloting RoB 2 with trained reviewers. The guidance document indirectly acknowledges the limitations of RoB 1 and the value of RoB 2 by specifying: "It is nevertheless advisable that assessors familiarise themselves with RoB 2, so they can pay attention to more subtle mechanisms of bias and describe these in a JCA, if required" [18]. This raises the question of why RoB 2 is not used directly, with adequate training provided to assessors.
The JCA reporting template requests RoB assessments for each outcome, which contradicts the purpose of the RoB 1 tool, designed for study-level bias assessment [18]. Although technically possible, using RoB 1 for outcome-level assessments is not recommended by Cochrane: the tool lacks the structure for reliable outcome-level evaluations and can lead to inconsistency and subjectivity. Cochrane discourages routine outcome-level use of RoB 1 because of its limited suitability for outcome-specific issues such as differential blinding or differential missing data rates [6].
Using RoB 2 in the JCA dossier would help ensure that RoB evaluations adhere to contemporary standards and are applied uniformly across outcomes.
Using RoB 1 goes against the widely accepted standards in evidence synthesis and EBM. It should be reconsidered.
Although the recommendation to use RoB concerns the clinical validity of submitted studies, it also applies to indirect treatment comparisons and the underlying clinical study evidence, which form the basis of most PICO assessments.
4. Quality Assessment of the Evidence Was Overlooked
Although RoB tools assess the quality of individual clinical trials, they are not sufficient to evaluate the overall quality of a body of evidence for each outcome. To do so, researchers need a dedicated tool for grading the certainty of the evidence gathered in the SLR.
Tools such as the Grading of Recommendations Assessment, Development and Evaluation (GRADE) provide a structured, transparent, and reproducible approach for evaluating the quality of evidence [6]. For example, Cochrane [6], the National Institute for Health and Care Excellence (NICE) [24], the World Health Organization (WHO) [25], and the Centers for Disease Control and Prevention (CDC) [26] use GRADE. The absence of an endorsed tool can result in inconsistency in the assessment of clinical trial evidence across different JCAs, or even within the same JCA across the evidence related to different PICOs. Without a standardized and widely accepted framework, assessors may employ diverse methodologies, potentially leading to discrepancies in the evaluation of evidence quality. Such inconsistencies could compromise the reliability and reproducibility of JCA reports, which is critical when these evaluations are used to guide policy decisions and healthcare strategies at the European level.
Leading organizations such as NICE and Cochrane have developed clear methodologies for assessing evidence quality [6,24]. The JCA's decision not to recommend a standardized tool like GRADE may result in misalignment with these best practices. Adopting such a tool would align the JCA with international standards and enhance the credibility and acceptance of its findings.
GRADE is recommended for use after the assessment of individual studies (with RoB tools), when judging how trustworthy the pooled evidence is for decision-making [6].
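As a hedged sketch of that logic (the start levels, the five downgrading domains, and the three upgrading domains follow the GRADE approach; the numeric bookkeeping is purely illustrative):

```python
def grade_certainty(randomised: bool, downgrades: int, upgrades: int = 0) -> str:
    """Rate the certainty of a body of evidence for one outcome (simplified GRADE).

    Evidence from randomised trials starts at 'high' and observational evidence
    at 'low'. The rating is then moved down for risk of bias, inconsistency,
    indirectness, imprecision, or publication bias, and (mainly for
    observational evidence) moved up for a large effect, a dose-response
    gradient, or plausible residual confounding working against the effect.
    """
    levels = ["very low", "low", "moderate", "high"]
    start = 3 if randomised else 1  # 'high' for RCTs, 'low' for observational studies
    score = max(0, min(3, start - downgrades + upgrades))
    return levels[score]

# Example: RCT evidence downgraded once for risk of bias and once for imprecision.
print(grade_certainty(randomised=True, downgrades=2))  # -> "low"
```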
Not recommending a tool for grading the quality of the body of evidence also goes against the widely accepted standards in evidence synthesis and EBM. It should be reconsidered.
5. Time Pressure Will Affect Quality
HTDs have 90 days to develop the JCA dossier [4,5]. Considering this requirement, the fact that the dossier must be submitted 45 days before the Committee for Medicinal Products for Human Use (CHMP) opinion, and the median duration of the first regulatory clock stop, it can be concluded that for half of the dossiers the time available for dossier development will be less than 90 days [27]. Defining the PICO scope takes up to 100 days, which may be reduced to 90 days. Developing a submission dossier from scratch within 90 days is technically impossible. Therefore, HTDs must anticipate the scope and prepare the dossier in advance; once the final scope is available, they will need to adjust the dossier accordingly.
Guidelines advise that groups conducting SLRs should involve individuals with subject-matter expertise relevant to the area under investigation, alongside specialists in SLR methods and statistical analysis. The composition of the review team, the opportunity to discuss doubts between team members, and the time allowed for reaching consensus can notably influence the quality, credibility, and outcomes of the review. Ensuring openness and clarity regarding team composition is essential to minimize the risk of bias and maintain methodological integrity [6,28]. Forming such a team and facilitating information exchange between its members requires adequate time.
This approach poses challenges when considering the multiple PICOs and submission timelines for JCA dossiers.
The 3-month window for evidence to be considered up to date is too short compared with, for example, NICE's 6-month window [29]. Not knowing the PICOs in advance exacerbates this problem. Updates of searches, extractions, quality control, and ITCs must be performed in a timely manner, and last-minute report updates can be disruptive.
Similarly, EUnetHTA advises that information specialists should be an essential component of the SLR or HTA team from the project's outset. Search strategies ought to be subjected to peer review to guarantee their robustness and quality, and search activities should be documented contemporaneously and communicated in a clear and transparent way [13].
The current approach proposed for JCA submission and reporting disrupts time management and raises the risk of errors due to time pressure and the complexity of updating SLRs when PICOs are revised. Handling multiple PICOs under time pressure demands significant resources, which is not compatible with best practices.
Not providing reasonable time to conduct SLRs goes against the widely accepted standards in evidence synthesis and EBM. It should be reconsidered.
6. Other HTA Agencies' SLR Requirements and EUnetHTA Recommendations on SLR
HTA bodies worldwide, at both national and regional levels, offer guidance on healthcare interventions that may be eligible for public funding or reimbursement. These entities base their decisions on comprehensive dossiers submitted by manufacturers, with each organization providing its own templates or criteria to support the preparation of the required materials. Evidence from clinical studies - particularly RCTs - forms the cornerstone of the economic analyses that enable HTA institutions to deliver decisions grounded in scientific evidence. Across the reviewed agencies from various global regions (see Table 1), there is a shared understanding that clinical evidence must be gathered systematically, from multiple sources, based on a predefined PICO, and that the quality of the collected data must be critically appraised to maximise bias reduction. Guidelines from HTA agencies and EBM organizations were taken into account by EUnetHTA and reflected in detailed recommendations published in 2015 [14] and 2019 [13], outlining the specific steps for conducting SLRs, which can then serve as a solid foundation for making health-related decisions in an international context. The recommendations and templates for JCA dossier submission and reporting do not align with EUnetHTA's earlier SLR guidelines. It is recommended that the JCA adopt the Cochrane guidelines to ensure state-of-the-art EBM standards.
7. Conclusions
It is to be expected that the majority of PICOs will be addressed through SLRs followed by ITCs when feasible, while direct evidence will document only a small number of PICOs. However, the guidance documents disproportionately address direct comparison. The SLR is central to most PICOs and merits dedicated guidance. While no specific guidance for SLRs has been developed, the topic is mentioned in various places. Adhering to the available EUnetHTA guidelines for SLRs would have provided clear, concise guidance aligned with the state of the art [13,14]. This manuscript summarizes the essential requirements for performing SLRs for JCA submission; an SLR on how to perform SLRs was evidently not conducted, and as a consequence the recommendations deviate from best practices as outlined in the Cochrane and EUnetHTA guidelines. For example, when EUnetHTA developed its guidelines for conducting SLRs, it reflected the state-of-the-art references available at that time [6,17,37,38,39,40,41]. Guidance documents are intended to reflect the state of the art in EBM, as specified in the EU-HTA Regulation [3], but the recommendations in the JCA documents deviate significantly from this standard. This may expose the JCA to issues such as unreliability, errors, lack of comprehensiveness, and lack of reproducibility. Adhering to the Cochrane guidelines, regarded as the highest standard in the field for SLRs, is a simple way to address these concerns and avoid inappropriate recommendations. Recommending the EUnetHTA SLR guidelines would also be advisable, as they have been recognised for their high quality.
Author Contributions
B.S. and M. T.: Conceptualized the content and wrote the first draft of the manuscript. The co-authors: Challenged the concept, edited the manuscript, and refined arguments for clarity and coherence. All authors reviewed and approved the final version for submission.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| Abbreviation | Full Form |
| --- | --- |
| CADTH | Canadian Agency for Drugs and Technologies in Health |
| CDC | Centers for Disease Control and Prevention |
| CENTRAL | Cochrane Central Register of Controlled Trials |
| CHMP | Committee for Medicinal Products for Human Use |
| EBM | Evidence-Based Medicine |
| EU-HTA | European Health Technology Assessment |
| EUnetHTA | European Network for Health Technology Assessment |
| GRADE | Grading of Recommendations Assessment, Development and Evaluation |
| HAS | Haute Autorité de Santé |
| HTDs | Health Technology Developers |
| IQWIG | Institute for Quality and Efficiency in Health Care |
| ITCs | Indirect Treatment Comparisons |
| JCA | Joint Clinical Assessment |
| JSC | Joint Scientific Consultations |
| NICE | National Institute for Health and Care Excellence |
| PBAC | Pharmaceutical Benefits Advisory Committee |
| PICO | Population, Intervention, Comparator, Outcome |
| PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
| RoB 1 | Risk of Bias version 1 tool |
| RoB 2 | Risk of Bias version 2 tool |
| SLRs | Systematic Literature Reviews |
| WHO | World Health Organization |
References
1. Anastasaki E, Dirnberger F, van Engen A, Majer I, Oswald C, Szawara P, Takundwa R, Ubi S. Predicting Evidence Requirement Implications of the EU Joint Clinical Assessment (JCA) Through a Population, Comparator, Intervention, Outcome (PICO) Simulation for Two Anticancer Investigational Medicinal Products (IMPs). ISPOR Europe 2024.
2. Directorate-General for Health and Food Safety. PICO exercises: Exercise 1 and 3. 2025. Available: https://health.ec.europa.eu/publications/pico-exercises_en.
3. European Commission. Commission Implementing Regulation (EU) 2024/1381 of 16 May 2024 laying down rules for the application of Regulation (EU) 2023/2411 of the European Parliament and of the Council as regards the template for the statement of reasons for refusals and withdrawals of authorisation of certain products. Official Journal of the European Union, Brussels, 2024. Available: https://eur-lex.europa.eu/eli/reg_impl/2024/1381/oj.
4. Directorate-General for Health and Food Safety. Procedural guidance for JCA medicinal products. Brussels, 2024. Available: https://health.ec.europa.eu/publications/procedural-guidance-jca-medicinal-products_en.
5. Directorate-General for Health and Food Safety. Guidance on filling in the joint clinical assessment (JCA) dossier template – Medicinal products. Brussels, 2024. Available: https://health.ec.europa.eu/publications/guidance-filling-joint-clinical-assessment-jca-dossier-template-medicinal-products_en.
6. Higgins JPT, et al. Cochrane Handbook for Systematic Reviews of Interventions, version 6.5 (updated August 2024). Cochrane, 2024.
7. Gasparyan AY, Ayvazyan L, Kitas GD. Multidisciplinary Bibliographic Databases. J Korean Med Sci 2013;28(9):1270-1275.
8. Elsevier. Embase Content Coverage. 2024. Available: https://www.elsevier.com/products/embase/content.
9. Clarivate. Embase vs MEDLINE FAQ. 2021.
10. Kapadia. PubMed vs. Embase in Literature Review and Analysis: A Medical Communications Perspective. Enago Life Sciences, 2024. Available: https://lifesciences.enago.com/blogs/pubmed-vs-embase-in-literature-review-and-analysis.
11. Scherer RW, Saldanha IJ. How should systematic reviewers handle conference abstracts? A view from the trenches. Syst Rev 2019;8(1):264.
12. Hackenbroich S, Kranke P, Meybohm P, Weibel S. Include or not to include conference abstracts in systematic reviews? Lessons learned from a large Cochrane network meta-analysis including 585 trials. Syst Rev 2022;11(1):178.
13. EUnetHTA. Methodological guidelines: Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness. Version 2.0, December 2019. European network for Health Technology Assessment, JA3 2016-2020. Available: www.eunethta.eu.
14. EUnetHTA. Guideline: Levels of evidence – Internal validity of randomised controlled trials. Adapted version (2015), based on "Levels of evidence: Internal validity of randomised controlled trials", February 2013.
15. Scherer RW, Meerpohl JJ, Pfeifer N, Schmucker C, Schwarzer G, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev 2018;(11):MR000005.
16. Dundar Y, Dodd S, Dickson R, Walley T, Haycox A, Williamson P. Comparison of conference abstracts and presentations with full-text articles in the health technology assessments of rapidly evolving technologies. Health Technol Assess 2006;10(5):iii-iv, ix-145.
17. Relevo R, Balshem H. Finding evidence for comparing medical interventions: methods guide for comparative effectiveness reviews. AHRQ publication no. 11-EHC021-EF [internet].
18. Directorate-General for Health and Food Safety. Guidance on the validity of clinical studies for joint clinical assessments. Brussels, 2024. Available: https://health.ec.europa.eu/publications/guidance-validity-clinical-studies-joint-clinical-assessments_en.
19. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898.
20. Higgins JPT, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928.
21. Savović J, et al. Evaluation of the Cochrane Collaboration's tool for assessing the risk of bias in randomized trials: focus groups, online survey, proposed recommendations and their implementation. Syst Rev 2014;3(1):37.
22. Brennan R, Kremer K. Comparison of risk of bias assessments with RoB 1 and RoB 2 in Cochrane intervention reviews: an evaluation study. Cochrane Methods, 2022. Available: https://community.cochrane.org/sites/default/files/uploads/inlinefiles/RoB1_2_project_220529_BR%20KK%20formatted.pdf.
23. Babic A, et al. The judgement of biases included in the category "other bias" in Cochrane systematic reviews of interventions: a systematic survey. BMC Med Res Methodol 2019;19(1):77.
24. National Institute for Health and Care Excellence. The guidelines manual. NICE, 2012.
25. World Health Organization. WHO Handbook for Guideline Development, 2nd edition. WHO, 2014.
26. Advisory Committee on Immunization Practices. ACIP GRADE Handbook for Developing Evidence-based Recommendations. Centers for Disease Control and Prevention (CDC), 2024.
27. Gillard S. EU Joint Clinical Assessment (JCA) – Implications for Pharma and Medtech. Mtech Access, 2024. Available: https://mtechaccess.co.uk/eu-jca/.
28. Uttley L, Montgomery P. The influence of the team in conducting a systematic review. Syst Rev 2017;6(1):149.
29. Munro I, Webb N, et al. Comparison of Systematic Literature Review Requirements Among Health Technology Assessment Agencies in the UK and Ireland. ISPOR Europe 2024.
30. NICE. NICE health technology evaluations: the manual. 2022.
31. Haute Autorité de Santé. Choices in Methods for Economic Evaluation. 2020.
32. IQWIG. General Methods. 2022.
33. CADTH. Guidelines for the Economic Evaluation of Health Technologies: Canada (4th edition). 2017.
34. CADTH. Drug Reimbursement Review Procedures. 2023.
35. CADTH. CADTH Reimbursement Review Sponsor Summary of Clinical Evidence Template. 2023.
36. Pharmaceutical Benefits Advisory Committee. Guidelines for preparing a submission to the Pharmaceutical Benefits Advisory Committee (version 5.0). 2016.
37. Centre for Reviews and Dissemination. CRD's guidance for undertaking reviews in health care. York: CRD; 2009.
38. Eden J, Levit L, Berg A, Morton S (eds). Finding what works in health care: standards for systematic reviews. Washington: National Academies Press; 2011.
39. Balshem H, Stevens A, Ansari M, Norris S, Kansagara D, Shamliyan T, et al. Finding grey literature evidence and assessing for outcome and analysis reporting biases when comparing medical interventions: AHRQ and the Effective Health Care Program. Methods guide for comparative effectiveness reviews; AHRQ publication no. 13(14)-EHC096-EF [internet].
40. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS: Peer Review of Electronic Search Strategies; 2015 guideline explanation and elaboration (PRESS E&E) [internet].
41. Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. PRESS: Peer Review of Electronic Search Strategies. Ottawa: Canadian Agency for Drugs and Technologies in Health; 2008.
Table 1. Requirements for SLR process.
| Agency | SLR needed | Necessity of defining PICOS criteria | Databases | Other specified sources | Selection process | PRISMA required | Critical appraisal |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NICE [30] | Yes | Yes | Medline, Embase, and Cochrane | Unpublished data, reference searching, citation searching, inclusion list of systematic reviews, websites | Double-screening done by independent reviewers | Yes | Validated tools specific to the study design and use case |
| HAS [31] | Yes | HAS recommends following the Cochrane Handbook | HAS recommends following the Cochrane Handbook | Relevant websites (government agencies, learned societies, conferences), legislative and regulatory texts | HAS recommends following the Cochrane Handbook | Yes | HAS recommends following the Cochrane Handbook |
| IQWIG [32] | Yes | Yes | Medline, Embase, Cochrane | Trial registries, manufacturer data, HTA agency websites, PROSPERO, Dynamed, UpToDate | Double-screening done by independent reviewers | Yes | Defined by the basic principles of good clinical practice |
| CADTH [33,34,35] | Yes | Yes | Medline, Cochrane | Trial registries, websites of INAHTA agencies, manufacturers' websites, internet search tools, consultation with experts and agencies | No information | No information | Not reported |
| PBAC [36] | Yes | Yes | Medline, Embase, Cochrane | Trial registries, reference lists, marketing approval dossiers, company databases | No information | Yes | Cochrane RoB 2 |
| EUnetHTA [13,14] | Yes | Yes | Medline, Embase, Cochrane | SLR should regularly include a search for unpublished literature to identify both unpublished studies and unpublished data from published studies | Double-screening done by independent reviewers | Yes | Use the RoB concept of the Cochrane Collaboration to assess the internal validity of RCTs (the guideline on internal validity of RCTs was published in 2015 [14] and therefore could not contain information on RoB 2) |