Preprint
Essay

This version is not peer-reviewed.

Epistemological Tensions in the EU HTA Joint Clinical Assessment: The Illusion of Judgment-Free Evaluation

Submitted: 30 July 2025
Posted: 26 September 2025


Abstract

Health Technology Assessment (HTA) is shaped by moral (what makes a health technology desirable or acceptable, e.g., beneficence, justice), epistemological (how reliable knowledge is obtained and validated, e.g., evidence standards, uncertainty handling), and ontological (what effects and outcomes are considered real and relevant for assessment) commitments. The EU Joint Clinical Assessment (JCA), intended to harmonize relative effectiveness evaluation, embeds these commitments implicitly rather than explicitly, creating transparency and consistency issues. Critical concerns include: rigid reliance on the PICO framework privileging RCTs and quantifiable outcomes, proliferation of PICOs leading to infeasible evidence requirements, inadequate handling of multiplicity and post-hoc analyses, conflation of certainty and uncertainty, and exclusion of qualitative evidence and stakeholder input. Guidance documents, criticized for methodological weaknesses (e.g., outdated tools, poor external validity assessment, acceptance of nominal p-values), further undermine epistemic robustness. These structural flaws risk producing assessments that are non-inclusive, reductionist, and epistemologically inconsistent. Future EU HTA frameworks should explicitly align on foundational commitments and integrate stakeholder perspectives to ensure transparency, scientific credibility, and ethical legitimacy.


Introduction

The practice of Health Technology Assessment (HTA) inherently involves moral, epistemological, and ontological commitments, which shape how assessments are conducted, interpreted, and used for decision-making [1,2]. Moral commitments concern what makes a health technology desirable or acceptable—for example, considerations of beneficence (doing good), nonmaleficence (avoiding harm), autonomy (respecting patient choices), and justice (fair distribution of resources) [1-3].
Epistemological commitments in HTA relate to how reliable knowledge about health technologies is obtained and validated [4,5]. HTA practitioners commit to specific norms about evidence quality, study design, and methods of data analysis and synthesis, which influence what counts as valid or trustworthy information. This includes assumptions about which types of evidence are prioritized (e.g., randomized controlled trials), how uncertainties are handled, and how evidence translates from clinical trials to real-world settings [4,5].
Ontological commitments concern what effects and outcomes of health technologies are considered real, relevant, or conceivable in the assessment [1]. This involves decisions about which health states, patient experiences, or societal impacts are included or excluded from evaluation. For example, HTA must decide whether to consider only direct clinical outcomes or also broader social, psychological, or environmental effects. These commitments define the “reality” that HTA engages with and delimit the phenomena that can be meaningfully assessed and acted upon [6].
In summary, HTA rests on three types of commitments: moral commitments about what is good, just, and acceptable in health care technologies; epistemological commitments about how to generate and interpret reliable evidence; and ontological commitments about what effects and outcomes are real and relevant to assess. Together, these commitments shape the normative foundation of HTA and highlight the importance of explicitly addressing values alongside empirical evidence in health technology decision-making [1-3].
These three types of commitments are intertwined and normative, meaning HTA is not a purely objective or technical exercise but one that involves value-laden decisions at every stage [1-3,7]. Recognizing these commitments supports the integration of normative analysis and stakeholder participation, fostering transparency and shared understanding among assessors, decision-makers, and affected populations [1-3]. This approach helps ensure that HTA outcomes are both scientifically credible and ethically justified, promoting equitable and socially relevant health policies. This is why these three commitments must be considered explicitly when developing a new HTA decision-analysis framework: analyzing them is essential to understanding the framework's underlying foundation and its consistency with the objectives it pursues.
The recent European Union (EU) regulation on HTA introduces the Joint Clinical Assessment (JCA) as a harmonized, collaborative approach to assessing the relative effectiveness of health technologies across Member States (MS). A core body, the Member State Coordination Group on Health Technology Assessment (HTACG), was established to implement the EU HTA Regulation. While the JCA is designed to be factual and free of value judgments, its epistemological foundations raise significant concerns [8].
No foundational epistemological, moral, or ontological commitments have been made publicly available, an important issue for both the transparency and the consistency of the proposed JCA model. These commitments remain implicit in the EU HTA Regulation, its implementing act, and documents from the HTACG. This paper critically examines the JCA process from an epistemological perspective, highlighting the decision-making and ontological implications and their impact on the selected design. It argues that the methodological framework underpinning the JCA, which stems from its historical design, does not recognize epistemological principles, thereby compromising the transparency and consistency of its outputs.
For clarity, the various epistemological topics are presented separately, although they are interconnected and frequently influence one another.
1. The PICO framework (Population, Intervention, Comparator, and Outcome)
The PICO framework is central to JCA scoping and outcomes [9]. It fundamentally shapes the epistemological framework of the EU HTA assessment, revealing the HTACG's preference for a positivist, experimental approach over a constructivist, ethical one, as seen in the PICO-generation exercise by the JCA subgroup. This is a consequential epistemological choice.
PICO rigidly prespecifies research questions, limiting evidence synthesis to predefined parameters. This excludes outcomes, populations, comparators, or modes of using the intervention that were not articulated in the initial scope, potentially omitting real-world clinical nuances. Within the EU HTA process, the PICO framework prioritizes quantifiable outcomes (e.g., mortality and morbidity rates) over qualitative patient experiences [10].
Through its link to the hierarchy of evidence, the framework implicitly elevates comparative effectiveness data as the primary epistemic authority [4]. It mandates direct comparisons with existing alternatives, thereby reinforcing a rigid hierarchy of evidence. While indirect treatment comparisons are possible, they are placed low in the hierarchy. Furthermore, PICO introduces a ranking by study design that favors randomized controlled trials (RCTs), which may not reflect the potential contribution of real-world evidence.
Translation challenges arise as PICO clashes with clinical trial frameworks such as the estimand framework [11]. While the estimand framework is clinical-trial focused and aims to inform internal validity, external validity, and causal inference, PICO is HTA focused and aims to inform relative effectiveness. RCTs are mainly constructed for regulatory purposes and may be misaligned with the PICO framework [11-13]. While the guidance on statistical analysis recommends sensitivity analyses [14], it is unclear how intercurrent events can be integrated within the PICO framework unless a new PICO is developed.
There is also a risk of reductionism. Fragmenting assessments into discrete PICO components may obscure complex interactions, such as comorbidities and social factors, hinder the understanding of unanticipated findings, and oversimplify multidimensional health science into binary comparisons.
Moreover, the PICO framework limits the ability of patients to contribute their experiences and preferences during the assessment phase [15,16]. It offers a rigid structure that conflicts with the EU HTA Regulation, which mandates broader methodological approaches beyond RCTs, especially for novel therapies like advanced therapy medicinal products (ATMPs) and vaccines.
Despite these epistemological limitations, PICO provides a highly standardized framework and remains a key feature of the JCA process. Most HTA agencies use the PICO framework explicitly or implicitly. While NICE has a formal scoping process [17], France, Germany, Italy, and Spain do not. In Germany, early consultation with the GBA allows the population, relevant comparator(s), outcomes, and subgroups to be defined, although this is not a mandatory step. As a result, in these countries any relevant evidence meeting general HTA requirements can be submitted, regardless of predefined PICOs. Predefined PICOs limit the flexibility of evidence submission.
2. Multiplicity of PICOs and Epistemic Uncertainty
The JCA scoping process consolidates the individual MS PICOs (Population, Intervention, Comparator, Outcome), often resulting in an excessive number [9]. Recent exercises by the JCA subgroup identified at least 13 PICOs for each of two oncology products, with the comparator being the most frequent source of differences between PICOs [18]. It is unlikely that this number of PICOs can be addressed through direct randomized evidence: typically, only one phase III clinical trial, with at best one active comparator, is available at launch [19]. As a result, indirect treatment comparisons (ITC) are unavoidable, as illustrated in the sketch below. The HTACG ranked non-randomized evidence, particularly ITC, low in the evidence hierarchy [4,20]. Consequently, many PICOs will have the lowest certainty of relative effectiveness due to the epistemological standards set by the HTACG.
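To make concrete why ITCs mechanically carry more uncertainty than direct comparisons, the following minimal sketch implements the classical Bucher adjusted indirect comparison with hypothetical inputs (the trial estimates and standard errors are invented for illustration): the variance of the indirect estimate is the sum of the variances of the two direct estimates, so its confidence interval is necessarily wider.

```python
import math

def bucher_itc(d_ab, se_ab, d_cb, se_cb, z=1.96):
    """Bucher adjusted indirect comparison of A vs C via common comparator B.

    d_ab, d_cb: direct effect estimates (e.g., log hazard ratios) of A vs B
    and C vs B; se_ab, se_cb: their standard errors.
    """
    d_ac = d_ab - d_cb                      # indirect point estimate
    se_ac = math.sqrt(se_ab**2 + se_cb**2)  # variances add, widening the CI
    return d_ac, se_ac, (d_ac - z * se_ac, d_ac + z * se_ac)

# Hypothetical phase III results: A vs B and C vs B (log hazard ratios).
d_ac, se_ac, ci = bucher_itc(d_ab=-0.30, se_ab=0.12, d_cb=-0.10, se_cb=0.15)
print(f"indirect logHR A vs C = {d_ac:.2f}, SE = {se_ac:.3f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Even with reasonably precise direct estimates, the indirect standard error here (about 0.19) exceeds either input, which is one reason ITC-based PICOs end up at the low end of the certainty scale.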
3. Multiplicity of hypotheses and type I errors
The hypothetico-deductive model adopted by the HTACG is also known as the Neyman-Pearson testing framework, after the authors who proposed it [14]. It is designed to control the Type I error (false positives) while maximizing test power for a single hypothesis test [21]. However, when multiple hypotheses or endpoints are tested simultaneously, this framework faces the problem of multiplicity [22]. The JCA guidance requires testing all endpoints (e.g., organ-level adverse events regardless of frequency, seriousness, or severity), leading to hundreds of hypothesis tests per PICO [14]. This inflates the Type I error [23], as the short calculation below illustrates. While the guidance mandates reporting the 1-alpha confidence interval and an estimate for each endpoint, it does not address how multiplicity should be accounted for, presenting an epistemological difficulty for interpreting the findings. There is therefore an epistemological contradiction between adopting the hypothetico-deductive model, the Neyman-Pearson testing framework, and requiring multiple outcomes to be tested without clearly addressing alpha inflation and multiplicity in the guidance.
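Under the standard independence assumption, the family-wise error rate after m tests at level alpha is 1 - (1 - alpha)^m. A minimal sketch (the endpoint counts are hypothetical) shows how quickly false positives become near-certain at the scale of testing the guidance requires:

```python
alpha = 0.05
for m in (1, 10, 50, 100, 300):
    # Probability of at least one false positive among m independent tests
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:4d} tests at alpha={alpha}: family-wise error rate = {fwer:.3f}")
```

At 100 independent endpoints the family-wise error rate already exceeds 99%, so some "significant" findings are expected by chance alone.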
4. Post-Hoc Analysis and Nominal p-values
As PICOs will be defined after the evidence has been analyzed and submitted to the JCA [9], all PICOs not predefined in the statistical analysis plan will be considered post-hoc analyses [14]. This will apply to the vast majority of PICOs. The hypothetico-deductive model prohibits calculating or reporting p-values for post-hoc analyses [24]. However, on the assumption that post-hoc analyses may matter more to some MS than pre-specified ones, the HTACG proposes calculating the p-value and labeling it “nominal,” without reference to a commonly accepted definition [14], further undermining the epistemic rigor of the process.
In statistics, the term “nominal p-value” has emerged to describe a p-value uncorrected for multiple comparisons, although reporting such values is generally considered inappropriate or misleading in standard statistical textbooks. Indeed, according to statistical principles, p-values unadjusted for multiple testing should not be reported, and p-values for post-hoc analyses violate the hypothetico-deductive model, which holds that hypothesis tests must be defined a priori, not computed retrospectively [25,26]. The French National Authority for Health (HAS) is very strict about the requirement to adjust for multiplicity in its doctrine [27].
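By contrast, standard corrections that preserve the Neyman-Pearson error guarantee are simple to apply. A minimal sketch of the Holm step-down procedure (pure Python, with hypothetical p-values; this is not a method prescribed by the JCA guidance) shows what adjusted, rather than nominal, p-values would look like:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values; controls the family-wise error rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0  # enforces monotonicity of the adjusted p-values
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Hypothetical nominal p-values from several endpoint tests.
nominal = [0.001, 0.012, 0.030, 0.045, 0.200]
print(holm_adjust(nominal))  # approximately [0.005, 0.048, 0.09, 0.09, 0.2]
```

Only the corrected values support claims at the stated alpha level; reporting the nominal ones invites exactly the overinterpretation the hypothetico-deductive model is meant to prevent.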
5. Ontic component and judgment-free assessment
The JCA equates judgment-free with decontextualized assessment, aiming to evaluate the ontic component of clinical evidence. The ontic component refers to the objective, mind-independent aspects of the experimental system, such as real features, structures, and causal mechanisms that exist independently of observers or interpretations [28-30]. Assessing the ontic component requires a reference model or structure against which evidence is compared. Models and structures simplify reality and evolve with knowledge development [31,32]. The outcome of such assessment is binary: the evidence either matches or does not match the model or structure [33]. However, assessing ontic components without any judgment is impossible because ontic assessment inherently involves interpretive and normative elements [28,34-36].
The JCA relies on a combination of methodological guidelines (rules on how to conduct assessments) and guidance documents (non-binding expert advice) as its reference framework. This hybrid fails to provide a coherent ontic structure, as guidance documents lack enforceability and often rest on arbitrary consensus rather than empirical validation or systematic literature review. Additionally, these guidance documents and guidelines have already been challenged elsewhere [37-40].
Moreover, ontic assessment cannot be judgment-free: interpreting evidence within any model necessarily involves normative choices. Thus, the JCA's ambition to produce a purely factual, decontextualized output without normative evaluation is untenable on both epistemological and ontological grounds.
The closest HTA process to the JCA is that of the German independent Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG's guidelines are detailed, providing a clear reference for assessing deviations [41]. Although IQWiG does not perform appraisals, it issues recommendations on additional benefit to the GBA, which is prohibited within the JCA [8,41].
6. Certainty, uncertainty, and epistemological confusion
Scientific empirical reasoning operates predominantly under uncertainty, with certainty being a rare ideal. This asymmetry means uncertainty is the default epistemic condition, while certainty is exceptional [42]. When concluding that an intervention is superior to the comparator, we accept a 5% probability of being wrong, so the conclusion is not certain. In epistemology, certainty and uncertainty are not symmetrical concepts [43,44]; rather, they stand in an asymmetrical relationship grounded in how knowledge and doubt function. Yet the JCA treats certainty and uncertainty as symmetrical, converting probabilities into a quasi-continuum of degrees of certainty. This conflation is evident in the European Commission publication on food safety assessment, which advocates a 0-100% scale to communicate certainty, implying that 100% equates to absolute certainty; the publication explains how to measure uncertainty but ultimately reports a level of certainty, assuming the two concepts are symmetrical properties. This notion is indefensible from an epistemological standpoint in the empirical sciences [45].
The JCA aims to report the “degree of certainty” in relative effectiveness. This leads to overconfidence in conclusions derived from an uncertainty assessment and incorrectly assumes symmetry with certainty, as the short formalization below makes explicit.
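A minimal formalization of the asymmetry in standard Neyman-Pearson terms: the 5% error rate is conditional on the null hypothesis being true, whereas the "certainty" the JCA wants to report is the converse conditional, which also depends on power and on the prior plausibility of the hypotheses, and is not 1 - alpha in general.

```latex
% Neyman-Pearson controls an error rate conditional on H0:
%   P(reject H0 | H0 true) <= alpha = 0.05
% The quantity a "degree of certainty" would need is the converse,
% which, by Bayes' rule, also depends on power (1 - beta) and the
% prior probabilities of the hypotheses:
\[
P(H_1 \mid \text{reject } H_0)
  = \frac{(1-\beta)\,P(H_1)}{(1-\beta)\,P(H_1) + \alpha\,P(H_0)}
  \;\neq\; 1-\alpha \quad \text{in general.}
\]
```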
7. Exclusionary and authoritative nature of the EU HTA process
The JCA process evaluates whether the evidence submitted by the health technology developer (HTD) aligns with guidelines and guidance documents [8]. In effect, JCA assessors will measure any deviation from the methodological guidance and guidelines. As a factual exercise, the assessment remains, according to the guidance documents, decontextualized and judgment-free, leaving little room for input from the experience of patients, clinicians, or experts. HTDs cannot participate in any part of the process [9,46]. They may only report factual errors and are not allowed to challenge the JCA scoping or reports. As long as the guidance and guidelines remain unchanged, there is little to no opportunity for third parties to meaningfully influence the outcome.
The only opportunity for patients or clinical experts to influence the JCA is through the PICO. However, they are not part of the PICO discussions at the different stages of development [9]; they are merely presented with the final conclusions and allowed to comment, with little chance of affecting the final PICOs.
A stakeholder network has been established to provide input and recommendations on the process and the guidelines/guidance documents. However, early feedback indicates the process is slow-moving and that even consensual recommendations are difficult to implement.
Focusing heavily on internal validity and RCTs can result in the exclusion of certain patient subgroups from clinical trials, which may unintentionally bias assessments against marginalized or vulnerable populations, potentially conflicting with HTA's moral commitments [4,20,47].
The guidance demonstrates a strong commitment to empirical rigor, yet it overlooks essential qualitative evidence such as patient preferences, exit interviews from clinical trials, patient experiences, and real-world evidence, which are crucial in specific situations [48,49]. Prioritizing quantifiable metrics overlooks qualitative outcomes such as dignity and caregiver burden, indicating an ontological preference for measurable phenomena.
By design, the features described above make the JCA process non-inclusive and authoritative in several ways.
8. Consolidation of assessment
While there are recommendations for assessing individual outcomes, it is unclear how the certainty of relative effectiveness at the outcome level will be consolidated at the PICO level. Only basic information is provided for consolidating risk of bias at the PICO level. Ranking outcomes is prohibited by the EU HTA Regulation [8]. Thus, the JCA must treat an extremely rare, non-severe, non-serious adverse event with the same importance as mortality.
The absence of a normative ontological commitment, that is, a clear stance on the desirability or relevance of outcomes, undermines one of the three epistemological pillars of HTA. It also makes the assessment process burdensome, as all outcomes and endpoints must receive equal attention. Notably, adverse events will be reported in large numbers in the JCA submission and will be subject to statistical testing. Given the volume of data, limited resources, and time constraints, some reported events are likely to go unassessed. This inevitably leads to the omission of certain endpoints, despite their formal inclusion in the submission.
9. The degree of certainty
The degree of certainty of relative effectiveness is generally not high, owing to the inherent imperfections of experiments in the life sciences [42,50]. Researchers strive to balance complexity, feasibility, cost, time, and reliability of outcomes, often making compromises to attain empirically valid results. While HTDs focus on the key endpoints, the JCA will treat all endpoints equally, irrespective of their importance. This approach disregards the HTD's rationale of minimizing bias on key endpoints while accepting, as a trade-off, higher uncertainty on less critical ones. In practice, a perfect experiment is unattainable [50,51]. Given the multiplicity of outcomes and assessments at the outcome level, combined with impractical, high-bar methodological requirements:
  • High certainty of relative effectiveness will be exceptional.
  • Medium certainty may sometimes be attainable.
  • Most PICOs will have a low certainty rating.
This presumes that the guidance and the scope of the JCA will be adhered to and not revised. It points to a floor effect for the degree of certainty of relative effectiveness, leaving the HTACG focused on the tables of results with limited interest in the degree of certainty.
The decision to include ATMPs and approximately 50% of orphan-designated products, many in oncology or emerging therapeutic classes, further contributes to low certainty. These are areas where clinical knowledge is still evolving or where the rarity of the condition makes the usual large RCTs infeasible, inherently limiting the degree of certainty achievable in assessments.
If the vast majority of PICOs are rated as having a low degree of certainty of relative effectiveness, the usefulness of this rating will be limited, as it will not be discriminating.
10. Accelerated versus standard procedure
The HTACG distinguishes between two timelines: the accelerated procedure and the standard procedure. The accelerated process applies to variations such as extensions of indication and aligns with the European Medicines Agency (EMA) timelines [8,46]. From a regulatory standpoint, shortening timelines in these cases is logical because certain aspects—such as quality, toxicology, pharmacology, and phase I studies—may not require a new review, thereby reducing the assessment workload significantly.
However, from an HTA perspective, the clinical evaluation for an indication extension is just as rigorous and demanding as that of the initial application. This raises an epistemological question about the rationale behind having two different timelines for essentially the same process and workload. Since the standard timeline is already considered tight, it is important to clarify whether the accelerated process is truly realistic and feasible in practice [52].
From an epistemological standpoint, having two separate timelines is unjustified. This dual timeline arrangement arises from an arbitrary attempt to align regulatory and JCA deadlines, without adequately considering feasibility.
11. The hierarchy of evidence
The hierarchy of evidence, though not explicitly addressed, is clear in the guidance documents [4,20]. Randomized clinical trials sit at the top, with other types of evidence at the bottom [4,20]. Non-randomized evidence, despite some nuances such as anchored versus unanchored indirect treatment comparisons, generally occupies the lower end of the scale [4,20]. In the JCA guidance documents, the hierarchy appears dichotomous rather than a continuum.
The guidance documents recommend prioritizing robust evidence and ignoring less robust sources when both coexist for the same PICO, and they favor direct evidence over combining direct and indirect evidence [4,20,47]. This is inconsistent with the philosophy of evidence-based medicine (EBM), which defines different levels of evidence, ranks them along a continuum, and considers all evidence [53].
RCTs are positioned at the top of the evidence hierarchy, which aligns with EBM and the majority of HTA guidelines. However, the epistemic basis for this placement and its broader implications have not been addressed.
12. Epistemological perspective of guidance documents
The guidance documents have been heavily criticized [37-40,54], prompting European Access Academy (EAA) faculty members to address an open letter to the European Commission. The letter raised the HTACG's failure to apply to its own work the EBM principles it imposes on MS and concluded: “Considering the current status of adopted methodological guidance documents an additional major aspect of uncertainty needs to be addressed: the uncertainty of validity of recommended assessment methods. The EAA Faculty would like to express our profound concern that the evidence base for the currently adopted EU HTA methodological framework does not reach beyond an informal consensus among the involved members of the CG. Major uncertainty regarding the applicability, robustness and validity of those methods prevails” [55].
Critics, including the European Access Academy, have flagged serious epistemological flaws in JCA guidance documents:
  • Erroneous definitions of internal and external validity [4,40]
  • Oversimplified external validity assessment [4,56].
  • Inadequate handling of multiplicity [14,37]
  • Acceptance of nominal p-values in post hoc contexts [14,37]
  • Poor management of missing data [14,37]
  • Minimum Clinically Important Difference (MCID) is not addressed within the JCA [4], although it should be reported; it may be reasonable to ask HTDs to document the MCID for critical endpoints.
  • Continued use of the outdated Risk of Bias (RoB) 1 tool rather than the updated RoB 2 [57].
Additional arbitrary practices include:
  • Excluding Embase (a medical literature database) and conference abstracts from systematic literature reviews (SLRs) [57]
  • Requiring a shifted null hypothesis (altering the baseline assumption) in population-adjusted indirect treatment comparisons (PAICs) [38]
  • Relying solely on narrative assessments for external validity [40]
Therefore, using scientifically inadequate methodological guidance documents and guidelines as the reference against which to assess how far submitted evidence deviates is epistemologically invalid.
Although the guidance documents generally avoid defining norms, two quantitative norms are specified: a correlation threshold of 0.85 for surrogate endpoint validation, and a “large effect size” defined as five times the effect of the reference health technology. Both are unrealistic in the medical sciences (the sketch below illustrates why the correlation threshold is so demanding). This has made the guidance documents challenging to implement and has raised doubts about the reliability of the JCA report.
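To see why a fixed 0.85 correlation threshold is so demanding, the following sketch (the sample size and observed correlation are hypothetical) computes the Fisher z confidence interval for a trial-level correlation; with the handful of trials typically available for surrogate validation, even an observed correlation at the threshold has a lower confidence bound far below it.

```python
import math

def pearson_ci(r, n, z=1.96):
    """Approximate 95% CI for a correlation via the Fisher z-transform."""
    zr = math.atanh(r)            # Fisher z of the observed correlation
    se = 1 / math.sqrt(n - 3)     # approximate standard error on the z scale
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# Hypothetical surrogate validation based on 10 trials with observed r = 0.85.
lo, hi = pearson_ci(0.85, 10)
print(f"95% CI for r: ({lo:.2f}, {hi:.2f})")  # roughly (0.47, 0.96)
```

Demonstrating that the true correlation meets the threshold, rather than merely observing 0.85, would require far more trials than surrogate-validation analyses usually contain.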
13. The source of HTA epistemological contradictions
This epistemological contradiction is seen in HTA across countries. HTA must deal with regulatory outputs for marketing authorization as well as considerations for reimbursement and pricing, each following a different epistemological framework. Marketing authorization draws on broad evidence beyond clinical trials, spanning biological and medical disciplines, including animal models, in vitro models, pharmacokinetics/pharmacodynamics, and mechanisms of action. Reimbursement and pricing fall under the social sciences [58]. HTA sits at the intersection of these two fields and faces challenges in determining its alignment [59,60]. Many HTA assessors have a background in the experimental life sciences and may not consider the social sciences, which provide essential information for decision sciences.
Epistemological awareness is crucial for methodological choices and understanding HTA’s policy role. HTA should support reimbursement and pricing decisions by considering funding community expectations and patient preferences, rather than repeating the marketing authorization clinical assessment process. The aim is to satisfy population needs with the most appropriate epistemological model.
14. The way forward to circumvent epistemological contradictions
Epistemological and ontological decisions guide the assessment of evidence in the HTA process [1,60]. It is important to acknowledge that, while peer review is generally more thorough than a single evaluation, assessors may also have cognitive biases and conflicts of interest that are not always fully disclosed, whether intentionally or unconsciously. This may further confound the process.
To ensure a transparent evaluation, HTDs, patients, and clinical experts should play a larger role in the JCA so that the process considers multiple perspectives. The results of such evaluations should offer a balanced account of differing viewpoints and judgments, enabling decision-makers (MS) to make informed decisions based on their own criteria, which theoretically reflect a political and democratic process.
To ensure alignment among the 27 MS and a consistent epistemological HTA assessment framework, there is a need to align a priori on the purpose of HTA and on the three commitments (moral, epistemological, and ontological) that drive the development of guidance and guidelines. This would ensure a higher level of consistency and greater awareness of the implications of the ultimate methodological choices driving the JCA. To date, the JCA's guidance documents and guidelines have been developed by seeking minimum consensus while ignoring the underlying epistemological, moral, and ontological commitments.
The JCA's design contains multiple epistemological contradictions, such that any assessment of relative effectiveness cannot be properly judged, as is clearly described in the context of risk-of-bias assessment. The JCA attempts to evaluate the degree of certainty in relative effectiveness through a process that:
  • Multiplies PICOs beyond reasonable empirical feasibility,
  • Fails to adequately address type I errors due to multiplicity,
  • Treats post-hoc analyses as hypothesis-driven evidence,
  • Relies on non-enforceable and epistemically weak, imprecise and inconsistent guidance documents,
  • Excludes normative, moral, and contextual judgments,
  • Operates under a misconceived notion of certainty.
These foundational issues render the JCA process unfit for its intended purpose. Rather than promoting harmonization and robust decision-making, it introduces new layers of epistemological, ontological, and ethical confusion. Future development of the EU HTA methodological framework should aim primarily to align with the foundational epistemological, ontological, and ethical principles of science, medicine, and public health policy. This requires a comprehensive understanding of epistemology and a clear alignment on the societal purposes of HTA. Such a process would help to reach a robust, stable, and consensual methodological framework.

Author Contributions

M.T.: Conceptualized the content. The co-authors: B.F., A.J., I.S., S.S., M.P., L.B., B.B., R.B., S.C., D.D., C.D., and P.A., challenged the concept, edited the manuscript, and refined arguments for clarity and coherence. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

None.

Conflicts of Interest

M.T. is a current employee of Inovintell. A.J. and I.S. are current employees of Clever Access. D.D. is a current employee of Ideas & Solutions (I&S).

Abbreviations

The following abbreviations are used in this manuscript:
EAA European Access Academy
EBM Evidence Based Medicine
EU European Union
HAS French National Health Authority
HTA Health Technology Assessment
HTACG Health Technology Assessment Coordination Group (of the EU)
HTD Health Technology Developer
IQWiG Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen
ITC Indirect Treatment Comparison
JCA Joint Clinical Assessment
MCID Minimum Clinically Important Difference
MS Member States (of the EU)
PAIC Population-Adjusted Indirect Treatment Comparisons
PICO Population, Intervention, Comparator, Outcome
RCT Randomized Controlled Trial
RoB Risk of Bias
SLR Systematic Literature Review

References

  1. Bloemen, B., W. Oortwijn, and G.J. van der Wilt. Understanding the Normativity of Health Technology Assessment: Ontological, Moral, and Epistemological Commitments. Health Care Anal. 2024. [CrossRef]
  2. Refolo, P., K. Duthie, B. Hofmann, M. Stanak, N. Bertelsen, B. Bloemen, R. Di Bidino, W. Oortwijn, C. Raimondi, D. Sacchini, G.J. van der Wilt, and K. Bond. Ethical challenges for Health Technology Assessment (HTA) in the evolving evidence landscape. Int. J. Technol. Assess. Health Care 2024, 40, e39. [CrossRef]
  3. van der Wilt, G.J., B. Bloemen, J. Grin, I. Gutierrez-Ibarluzea, L. Sampietro-Colom, P. Refolo, D. Sacchini, B. Hofmann, L. Sandman, and W. Oortwijn. Integrating Empirical Analysis and Normative Inquiry in Health Technology Assessment: The Values in Doing Assessments of Health Technologies Approach. Int. J. Technol. Assess. Health Care 2022, 38, e52. [CrossRef]
  4. The Member State Coordination Group on Health Technology Assessment (HTACG). Guidance on the validity of clinical studies for joint clinical assessments. V1.0. Available online: https://health.ec.europa.eu/document/download/9f9dbfe4-078b-4959-9a07-df9167258772_en?filename=hta_clinical-studies-validity_guidance_en.pdf (accessed on 30 December 2024).
  5. Schünemann, H.J., R.A. Mustafa, J. Brozek, K.R. Steingart, M. Leeflang, M.H. Murad, P. Bossuyt, P. Glasziou, R. Jaeschke, and S. Lange. GRADE guidelines: 21 part 1. Study design, risk of bias, and indirectness in rating the certainty across a body of evidence for test accuracy. J. Clin. Epidemiol. 2020, 122, 129-141.
  6. Hoaglin, D.C., N. Hawkins, J.P. Jansen, D.A. Scott, R. Itzler, J.C. Cappelleri, C. Boersma, D. Thompson, K.M. Larholt, M. Diaz, and A. Barrett. Conducting Indirect-Treatment-Comparison and Network-Meta-Analysis Studies: Report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: Part 2. Value Health 2011, 14, 429-437. [CrossRef]
  7. Charlton, V., M. DiStefano, P. Mitchell, L. Morrell, L. Rand, G. Badano, R. Baker, M. Calnan, K. Chalkidou, and A. Culyer. We need to talk about values: a proposed framework for the articulation of normative reasoning in health technology assessment. Health Economics, Policy and Law 2024, 19, 153-173.
  8. European Commission. Regulation (EU) 2021/2282 of the European Parliament and of the Council of 15 December 2021 on health technology assessment and amending Directive 2011/24/EU. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32021R2282 (accessed on 18 December 2024).
  9. The Member State Coordination Group on Health Technology Assessment (HTACG). Guidance on the scoping process. Available online: https://health.ec.europa.eu/document/download/7be11d76-9a78-426c-8e32-79d30a115a64_en?filename=hta_jca_scoping-process_en.pdf (accessed on 2 May 2025).
  10. The Member State Coordination Group on Health Technology Assessment (HTACG). Guidance on outcomes for joint clinical assessments. Available online: https://health.ec.europa.eu/document/download/a70a62c7-325c-401e-ba42-66174b656ab8_en?filename=hta_outcomes_jca_guidance_en.pdf (accessed on 11 July 2025).
  11. Sieverding, M. and A. Allignol. HTA71 Bridging the Gap: Exploring the PICO Vs Estimand Frameworks in EU Health Technology Assessment (HTA). Value Health 2023, 26, S332. [CrossRef]
  12. European Medicines Agency (EMA). ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials. Available online: https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e9-r1-addendum-estimands-and-sensitivity-analysis-clinical-trials-guideline-statistical-principles-clinical-trials-step-5_en.pdf (accessed on 7 January 2025).
  13. The Institute for Quality and Efficiency in Healthcare (IQWiG). 2018: Do the new “estimand” strategies compromise the standards of benefit assessments? Available online: https://www.iqwig.de/en/events/iqwig-in-dialogue/2018-do-the-new-estimand-strategies-compromise-the-standards-of-benefit-assessments.html (accessed on 16 January 2025).
  14. The Member State Coordination Group on Health Technology Assessment (HTACG). Guidance on reporting requirements for multiplicity issues and subgroup, sensitivity and post hoc analyses in joint clinical assessments. Available online: https://health.ec.europa.eu/document/download/f2f00444-2427-4db9-8370-d984b7148653_en?filename=hta_multiplicity_jca_guidance_en.pdf (accessed on 8 January 2025).
  15. Świder, A., A.S.S. González, D. Lucion, S. Hall, A. Creasey, J. Wright, C. Steeds, and F. Torelli. HTA291 How Are Patients Involved in Health Technology Assessment? A Comparative Analysis of European Markets for Advanced Therapies and Implications for Joint Clinical Assessment. Value Health 2023, 26, S376.
  16. European patient organisations. 10 Key Recommendations from Patient Organisations on Joint Clinical Assessments under the EU HTA Regulation. Available online: https://ehc.eu/wp-content/uploads/2024/10/10-Key-Recomendations-from-Patient-Organisations-on-JCAs_-20240610.pdf (accessed on 11 July 2025).
  17. NICE health technology evaluations: the manual. Last updated: 14 July 2025. Available online: https://www.nice.org.uk/process/pmg36/chapter/the-scope-2 (accessed on 22 September 2025).
  18. The Member State Coordination Group on Health Technology Assessment (HTACG). PICO exercises. Available online: https://health.ec.europa.eu/publications/pico-exercises_en (accessed on 26 February 2025).
  19. Morant, A.V., V. Jagalski, and H.T. Vestergaard. Characteristics of Single Pivotal Trials Supporting Regulatory Approvals of Novel Non-orphan, Non-oncology Drugs in the European Union and United States from 2012-2016. Clin. Transl. Sci. 2019, 12, 361-370. [CrossRef]
  20. The Member State Coordination Group on Health Technology Assessment (HTACG). Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons. Available online: https://health.ec.europa.eu/document/download/4ec8288e-6d15-49c5-a490-d8ad7748578f_en?filename=hta_methodological-guideline_direct-indirect-comparisons_en.pdf (accessed on 8 March 2025).
  21. Perezgonzalez, J.D. Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing. Front. Psychol. 2015, 6, 223. [CrossRef]
  22. Cook, R.J. and V.T. Farewell. Multiplicity considerations in the design and analysis of clinical trials. Journal of the Royal Statistical Society: Series A (Statistics in Society) 1996, 159, 93-110.
  23. Dmitrienko, A. and R. D'Agostino, Sr. Traditional multiplicity adjustment methods in clinical trials. Stat. Med. 2013, 32, 5172-218.
  24. Levine, M. and M.H. Ensom. Post hoc power analysis: an idea whose time has passed? Pharmacotherapy: The Journal of Human Pharmacology and Drug Therapy 2001, 21, 405-409. [CrossRef]
  25. Bays, H.E., Alirocumab, Decreased Mortality, Nominal Significance, P Values, Bayesian Statistics, and the Duplicity of Multiplicity: A Bays-on-Bayes Editorial. 2019, Lippincott Williams & Wilkins Hagerstown, MD. p. 113-116.
  26. Boscardin, C.K., J.L. Sewell, M.G. Tolsgaard, and M.V. Pusic. How to Use and Report on p-values. Perspectives on Medical Education 2024, 13, 250.
  27. French National Authority for Health (HAS). Doctrine of the Commission de la Transparence (CT). Available online: https://www.has-sante.fr/upload/docs/application/pdf/2021-03/doctrine_ct.pdf (accessed on 26 February 2025).
  28. Craver, C.F. The ontic account of scientific explanation. Explanation in the special sciences: The case of biology and history 2014, 27-52.
  29. Waters, C.K. Causes that make a difference. The Journal of Philosophy 2007, 104, 551-579.
  30. Woodward, J. Causation in biology: stability, specificity, and the choice of levels of explanation. Biol. Philos. 2010, 25, 287-318. [CrossRef]
  31. Weisberg, M. Three kinds of idealization. The journal of Philosophy 2007, 104, 639-659.
  32. Winsberg, E., Science in the age of computer simulation. 2019: University of Chicago Press.
  33. Craver, C.F. When mechanistic models explain. Synthese 2006, 153, 355-376.
  34. Kohár, M. and B. Krickel. Compare and contrast: how to assess the completeness of mechanistic explanation. Neural Mechanisms: New Challenges in the Philosophy of Neuroscience 2021, 395-424.
  35. Povich, M.A., Model and World: Generalizing the Ontic Conception of Scientific Explanation. 2017: Washington University in St. Louis.
  36. Matthewson, J. Trade-offs in model-building: A more target-oriented approach. Studies in History and Philosophy of Science Part A 2011, 42, 324-333. [CrossRef]
  37. Toumi, M., B. Fallissard, A. Jouini, P. Auquier, C. Dussart, and L. Boyer. Guidance on Multiplicity Analysis in Single-Trial Assessments: A No-Solution Equation. 2025.
  38. Aballéa, S., M. Toumi, P. Wojciechowski, E. Clay, B. Falissard, S. Simoens, P. Auquier, S. Capri, J. Ruof, and F.-U. Fricke. Between Rigor and Relevance: Why the EU HTA Guidelines on Indirect Comparisons Miss the Mark. 2025.
  39. Smela, B., M. Toumi, S. Aballéa, S. Simoens, L. Boyer, B. Falissard, R. Bernardini, S. Capri, and P. Auquier. The EU-Joint Clinical Assessment Guidance Documents Fail to Address the Significance of Systematic Literature Reviews and Deviate from the State of the Art. 2025.
  40. Toumi, M., B. Falissard, A. Jouini, S. Aballéa, and L. Boyer. Clinical Trial Validity Guidance from the HTACG: Looking for Chicken Teeth. Journal of Market Access & Health Policy 2025, 13, 15.
  41. The independent Institute for Quality and Efficiency in Health Care (IQWiG). General Methods version 8.0. Available online: https://www.iqwig.de/methoden/allgemeine-methoden_entwurf-fuer-version-8-0.pdf (accessed on 26 February 2025).
  42. Kaiser, M. Uncertainty and Precaution 1: Certainty and uncertainty in science. Global bioethics 2004, 17, 71-80. [CrossRef]
  43. Wittgenstein, L., G. Anscombe, and G. Von Wright, On Certainty/Über Gewissheit. 1986, Harper Collins.
  44. Bäumel, M. 'On Certainty' and Formal Epistemology. 2015.
  45. European Food Safety Authority (EFSA), U. Sahlin, A. Hart, and J. Zilliacus. Degree of certainty in scientific advice: implications for risk management and communication: Event report on a training workshop on uncertainty with risk managers. EFSA Supporting Publications 2022, 19, 7377E.
  46. The Member State Coordination Group on Health Technology Assessment (HTACG). Procedural guidance for JCA medicinal products. Available online: https://health.ec.europa.eu/document/download/0929cd01-619d-4456-a1c4-d8e33f9e36bf_en?filename=hta_jca_mp_procedural-guidance_en.pdf (accessed on 11 July 2025).
  47. The Member State Coordination Group on Health Technology Assessment (HTACG). Practical Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons. Available online: https://health.ec.europa.eu/document/download/1f6b8a70-5ce0-404e-9066-120dc9a8df75_en?filename=hta_practical-guideline_direct-and-indirect-comparisons_en.pdf (accessed on 8 March 2025).
  48. Szabo, S.M., N.S. Hawkins, and E. Germeni. The extent and quality of qualitative evidence included in health technology assessments: a review of submissions to NICE and CADTH. Int. J. Technol. Assess. Health Care 2023, 40, e6. [CrossRef]
  49. Germeni, E., I. Vallini, M.G. Bianchetti, and P.J. Schulz. Reconstructing normality following the diagnosis of a childhood chronic disease: does "rare" make a difference? Eur. J. Pediatr. 2018, 177, 489-495.
  50. Cartwright, N. and J. Hardie, Evidence-based policy: A practical guide to doing it better. 2012: Oxford University Press.
  51. Glasziou, P. and I. Chalmers. Research waste is still a scandal—an essay by Paul Glasziou and Iain Chalmers. BMJ 2018, 363. [CrossRef]
  52. Desmet, T., M. Brijs, F. Vanderdonck, S. Tops, S. Simoens, and I. Huys. Implementing the EU HTA Regulation: Insights from semi-structured interviews on patient expectations, Belgian and European institutional perspectives, and industry outlooks. Front. Pharmacol. 2024, 15, 1369508.
  53. Burns, P.B., R.J. Rohrich, and K.C. Chung. The levels of evidence and their role in evidence-based medicine. Plast. Reconstr. Surg. 2011, 128, 305-310.
  54. European Federation of Pharmaceutical Industries and Associations (EFPIA). EFPIA response to ‘Guidance on outcomes for joint clinical assessments’. Available online: https://www.efpia.eu/news-events/the-efpia-view/blog-articles/efpia-response-to-guidance-on-outcomes-for-joint-clinical-assessments/ (accessed on 11 June 2025).
  55. European Access Academy (EAA). Open Letter to DG Santé and the Member State Coordination Group on HTA. Available online: https://irp.cdn-website.com/e52b6f19/files/uploaded/Open_Letter_Methods_EU_HTA.pdf (accessed on 11 June 2025).
  56. Roe, B.E. and D.R. Just. Internal and external validity in economics research: Tradeoffs between experiments, field experiments, natural experiments, and field data. Am J Agric Econ 2009, 91, 1266-1271.
  57. The Member State Coordination Group on Health Technology Assessment (HTACG). Guidance on filling in the joint clinical assessment (JCA) dossier template – Medicinal products. Available online: https://health.ec.europa.eu/document/download/3943ae6a-1bca-4afd-b05c-fd9a54a252d7_en?filename=hta_jca_mp_dossier-template_guidance_en.pdf (accessed on 11 June 2025).
  58. Marsh, K., J.A. van Til, E. Molsen-David, C. Juhnke, N. Hawken, E.M. Oehrlein, Y.C. Choi, A. Duenas, W. Greiner, and K. Haas. Health preference research in Europe: a review of its use in marketing authorization, reimbursement, and pricing decisions—report of the ISPOR stated preference research special interest group. Value Health 2020, 23, 831-841. [CrossRef]
  59. van der Wilt, G.J. and W. Oortwijn. Health technology assessment: A matter of facts and values. Int. J. Technol. Assess. Health Care 2022, 38, e53.
  60. Refolo, P., D. Sacchini, L. Brereton, A. Gerhardus, B. Hofmann, K. Lysdahl, K. Mozygemba, W. Oortwijn, M. Tummers, and G. Van Der Wilt. Why is it so difficult to integrate ethics in Health Technology Assessment (HTA)? The epistemological viewpoint. 2016.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.