Preprint
Concept Paper

This version is not peer-reviewed.

The qPCR Standard Curve

A peer-reviewed version of this preprint was published in:
International Journal of Molecular Sciences 2026, 27(6), 2904. https://doi.org/10.3390/ijms27062904

Submitted: 26 February 2026
Posted: 28 February 2026


Abstract
The quantitative PCR standard curve is the central analytical tool for validating qPCR assays and can also be used to estimate target concentrations in test samples. This review explains how qPCR standard curves are constructed, validated, and analysed for different purposes. We first examine an idealised standard curve generated using an exceptionally high number of replicates, far exceeding typical routine use. This approach clearly illustrates fundamental qPCR characteristics and provides an educational framework for defining and estimating PCR efficiency, limit of detection, and limit of quantification. Furthermore, we demonstrate that, in theory, variation in threshold crossing points across replicates can be used to estimate the number of target molecules in a sample. This method, which we term variance PCR, could complement digital PCR and potentially extend the dynamic range of absolute quantification. We also analyse a representative standard curve as typically processed in routine qPCR workflows. This includes validating its dynamic range, assessing the impact of outliers, estimating PCR efficiency and precision, and finally applying the curve to determine the concentration of test samples.

1. Introduction

Real-time quantitative PCR (qPCR) is arguably the predominant technology for nucleic acid quantification and serves as a benchmark for validating other methods. Reliable qPCR, however, requires rigorous assay validation, as outlined by the ‘Minimum Information for Publication of Quantitative Real-Time PCR Experiments’ (MIQE) guidelines, originally published in 2009 [1], and recently updated [2]. The MIQE guidelines served as the basis for the standard ISO 20395:2019, “Biotechnology — Requirements for evaluating the performance of quantification methods for nucleic acid target sequences — qPCR and dPCR”, in diagnostics [3], and also for the white paper “Recommendations for Method Development and Validation of qPCR and dPCR Assays in Support of Cell and Gene Therapy Drug Development” [4].
Despite these frameworks, erroneous qPCR results, often arising from flawed experimental design, inadequate validation, and incorrect data analysis, remain common in the literature, sometimes with serious consequences. A recent comprehensive review of qPCR methodological standards and reporting practices has shown that many of these deficiencies are still widespread, underscoring the gap between published guidance and routine practice [5].
The standard curve is the central tool for proper qPCR analysis, serving both to validate assay performance and to quantify target molecules in test samples. This paper aims to clarify the construction, interpretation, and validation of qPCR standard curves, with a particular focus on distinguishing their distinct roles in assay validation versus sample quantification, a distinction that is frequently blurred in practice.
We begin by analysing a deliberately idealised standard curve constructed with an extreme number of replicates. This didactic example makes fundamental qPCR properties, such as sampling uncertainty, limits of detection and quantification, and the relationship between the variation of the quantification cycle (Cq) across replicate amplification curves, explicitly visible. We then examine standard curves under typical, replication-limited conditions, demonstrating how to evaluate dynamic range, outliers, efficiency, and prediction uncertainty in routine practice.
The insights from the extreme case led to a conceptual extension we term variance PCR (vPCR), which clarifies the theoretical relationship between Cq variation and absolute target copy number. This framework is intended to elucidate the fundamental limits of quantification rather than to propose a new routine method. Throughout, the emphasis is on understanding what a standard curve can and cannot reveal, thereby helping to prevent common interpretative errors.

2. Constructing the qPCR Standard Curve

A qPCR standard curve is constructed by plotting the Cq-value against the logarithm of the target amount, expressed as copy number, concentration, or sometimes dilution factor (Figure 1).
The Cq values, also known in older literature as ‘cycle threshold’ (Ct), ‘crossing point’ (Cp), or ‘take-off point’ (TOP), are extracted from the qPCR amplification curves and reflect the amount of target in the samples. There are several strategies for extracting Cq values. The most common is to read them out as the crossing points of the amplification curves with a threshold level. Although the choice of threshold influences the Cq values, as long as the amplification curves are parallel at the threshold level, the relative spacing of the curves is preserved and comparisons of Cq values across samples remain valid.
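As a minimal sketch of the threshold-crossing read-out, assuming simple linear interpolation between the two cycles that bracket the threshold (the function and the sigmoid-shaped curve below are illustrative, not any instrument's actual algorithm):

```python
import numpy as np

def cq_from_threshold(fluorescence, threshold):
    """Cq as the fractional cycle where the amplification curve first
    crosses the threshold. fluorescence[0] is cycle 1, and so on."""
    f = np.asarray(fluorescence, dtype=float)
    above = np.nonzero(f >= threshold)[0]
    if above.size == 0 or above[0] == 0:
        return None  # negative reaction, or already above at cycle 1
    i = above[0]     # f[i - 1] < threshold <= f[i]
    frac = (threshold - f[i - 1]) / (f[i] - f[i - 1])
    return i + frac  # crossing between cycle i (index i - 1) and cycle i + 1

# Hypothetical sigmoid-like amplification curve over 40 cycles
cycles = np.arange(1, 41)
curve = 1.0 / (1.0 + np.exp(-(cycles - 25) / 1.5))
cq = cq_from_threshold(curve, threshold=0.2)
```

Because the curves are roughly parallel near the threshold, shifting the threshold moves all Cq values by a similar amount, which is why relative comparisons survive the threshold choice.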
The standard curve data are fitted to a straight line using the linear regression of Equation 1:
Cq(N) = k × log(N) + m
where N is the expected number of target molecules, optionally normalized to volume to give a concentration. For samples with high target concentrations, this expected value is what we typically call the concentration: the average number of target molecules per analyzed volume. However, as we shall discuss, when analyzing aliquots (also called samplings) that contain only a few target molecules, variation across them is expected, and this variation affects the measured Cq. In this work we therefore use N for the expected number of target molecules per sampling and x for the actual number of target molecules in a particular sampling; the average of x across samplings equals N. k is the slope of the standard curve. m is the intercept and corresponds to the Cq of a sample containing a single (in our case double-stranded) target molecule, m = Cq(1) [6].
Alternatively, the Cq values are taken as the second derivative maximum of the amplification curve, which is based on a smoothed sigmoidal or logistic mathematical model.
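The fit of Equation 1 can be sketched with ordinary least squares; the dilution series, slope (k ≈ −3.32, corresponding to 100 % efficiency), intercept, and noise level below are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical standard-curve data: ten-fold dilutions, Cq following
# Equation 1 with small normally distributed measurement noise.
rng = np.random.default_rng(1)
N = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # expected targets per reaction
k_true, m_true = -3.32, 34.1              # illustrative slope and intercept
cq = k_true * np.log10(N) + m_true + rng.normal(0, 0.05, size=N.size)

# Ordinary least squares fit of Cq against log10(N)  (Equation 1)
k, m = np.polyfit(np.log10(N), cq, deg=1)
```

The fitted slope and intercept are then the inputs to efficiency estimation and to concentration prediction, as discussed below.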
Standard curves serve two distinct purposes: (1) validation of assays, and (2) estimation of target concentrations in test samples by comparison with standards. The design and interpretation of the standard curve differ depending on its intended use.
For every PCR assay, performance assessment is a critical component of assay validation. This should be conducted under optimal conditions, i.e., in the absence of interfering substances, like inhibitors and enhancers. For most targets, a PCR efficiency of 90% - 100% should be achievable.
Standard samples used for constructing a standard curve are typically prepared by serial dilution of a concentrated stock solution with a known target concentration. Accurately determining the concentration of this stock solution represents the first challenge. Note that the concentration of synthetic oligonucleotides provided by oligo manufacturers is usually not precise enough for qPCR, and validation is needed.
One strategy is to design synthetic templates to include both the target sequence and a reference sequence, for which there is a well characterized high performing qPCR assay, in a 1:1 ratio (Figure 2). One such reference sequence and assay for it is discussed below. It was originally designed as a means to assess the genomic DNA background in expression analysis [7] and is one of the most well-characterized qPCR reference assays [8]. The concentration of the stock solution can then be determined with high precision by targeting the reference sequence using digital PCR (dPCR).
Once an optimized qPCR assay has been established, it is advisable to evaluate its performance with a sample matrix, referred to as a blank or procedural blank, that is processed according to the established protocol. The processed matrix is then spiked with reference standard DNA of known concentration, and a new standard curve is generated. PCR efficiency is re-estimated under these conditions to assess the impact of matrix-related interference. Hence, it is worth noting that PCR efficiency is sample-type dependent in the absence of other dominating factors. Optimized assays are generally robust, and matrix effects are typically limited. However, if substantial effects are observed, the protocol may require additional optimization or, if assay robustness is in question, redesign.
The second application of a standard curve is the quantification of targets in test samples. This is performed by comparison with standards of known concentrations within a validated dynamic range. In this case, the standard samples should be constructed differently. They should be independent of one another to capture the variability inherent in independent test samples and should reflect as much of the analytical workflow as possible. Ideal standards consist of a naïve sample matrix spiked with a known amount of target. If the target in test samples is expected to be contained or protected, such as a nucleic acid encapsulated in lipid nanoparticles or adeno-associated viruses, the standards should mimic this protection [9]. The standard samples are then processed identically to the test samples prior to construction of the standard curve.

3. Learnings from an Extreme Standard Curve

We begin by examining an extreme standard curve constructed using a very large number of replicates per concentration and small concentration increments. This example is included as a deliberately idealised, illustrative case to highlight fundamental properties of qPCR behavior that are not readily visible in routine standard curves. It represents a validation experiment for the Reference Sequence (Figure 1) [7] performed using the IntelliQube instrument that uses very small reaction volumes, enabling high throughput [8]. The example data with tutorials are available [10].

3.1. The Reference Sequence

The Reference Sequences were originally proposed as a cost-efficient strategy to correct for background genomic DNA (gDNA) in gene expression analysis [7]. The Reference Sequences are species-specific, and each targets a conserved, non-transcribed genomic sequence present in exactly one copy per haploid genome. This allows for accurate quantification of the gDNA background in complementary DNA (cDNA) samples [7].
Standard samples were prepared from a stock solution of purified synthetic DNA containing a single Reference Sequence per double-stranded molecule. The concentration of the stock solution was determined using dPCR with the reference sequence assay. Solutions covering the range of 2048 down to 1 target molecule on average per reaction were prepared using 2-fold dilution steps. At the lowest concentration, and for the non-template control, 128 replicates were assessed. Higher concentrations were replicated 64 times and the highest 32 times.
The data, after removal of a few obviously failed reactions, were fitted to a straight line using linear regression (ordinary least squares, OLS) and are shown in Figure 3.

3.2. Residual Plot

A general good practice is to study the differences between measured Cq values and those predicted by the linear regression, i.e., the residuals, using a residuals plot. Either the combined data at all levels (concentrations) are considered, as in Figure 4, or each level is considered separately [11].
From these illustrative data with the very large number of replicates, we see in Figure 3 and Figure 4 that at lower N, the variation, which can be quantified as the standard deviation (SD), becomes higher. This violates one of the assumptions of the OLS criterion. Either weighted least squares should be applied, or, as we will do below, only samples above the lower limit of quantification (LLOQ) will be considered.

3.3. Relative Standard Deviation and Limit of Quantification

The variation across replicates can be quantified by means of the SD, where SD can be calculated either for the measured Cq-values or for concentrations derived from the Cq-values. These SDs are very different because of the difference in scale. While concentrations are in a linear scale, Cq-values are in a log scale. For comparison with other bioanalytical methods, performance parameters are preferably presented in linear scale.
SD, like the mean, increases with the expected number of target molecules per sample. For comparison across concentrations, the relative standard deviation (RSD), also referred to as the coefficient of variation, is the most widely used measure of imprecision. In linear scale, RSD is obtained from the SD of the Cq values as depicted in Equation 2, where E is the PCR efficiency [8]:
RSD(%) = 100 × √((1 + E)^(ln(1 + E) × SD(Cq)²) − 1)
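As a minimal sketch, this conversion can be implemented as below, assuming the lognormal relation RSD = √(e^σ² − 1) with σ = ln(1 + E) × SD(Cq); the example values are illustrative:

```python
import math

def rsd_percent(sd_cq, efficiency):
    """Linear-scale RSD (%) from the SD of Cq values (Equation 2),
    assuming the lognormal conversion sigma = ln(1 + E) * SD(Cq)."""
    sigma2 = (math.log(1.0 + efficiency) * sd_cq) ** 2
    # (1 + E)^(ln(1 + E) * SD^2) is identical to e^(sigma^2)
    return 100.0 * math.sqrt(math.exp(sigma2) - 1.0)

# e.g. SD(Cq) = 0.25 cycles at E = 1 (100 % efficiency) -> ~17.5 %
rsd = rsd_percent(0.25, 1.0)
```

Note that an SD of one full cycle at E = 1 already corresponds to an RSD well above the 25 % quantification criterion discussed below.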
The RSD for the data in Figure 4 is plotted as a function of the number of target molecules per sample in Figure 5.
The RSD levels off and stabilizes at 32 molecules, consistent with what can be seen in Figure 4. At these concentrations, various sources of imprecision, such as sample handling, pipetting, and measurement error, become significant. Although these data were generated with the rather unique IntelliQube instrument, repeatability is in line with previous reports using standard qPCR instrumentation [12].
For bioanalytical methods, we need to estimate important performance parameters. The two most important are the limit of detection (LOD) and LOQ. LOD is the lowest analyte concentration likely to be reliably distinguished from the blank and at which detection is feasible, while LOQ is the limiting concentration at which the analyte cannot only be reliably detected but at which some predefined goals for bias and imprecision are met [13]. The LOQ may be equivalent to the LOD but is usually higher.
For qPCR, there is no single universal recommendation for LOQ. It depends on the assay, sample type, matrix, and purpose. But to be in line with other bioanalytical methods, a maximum RSD of 25 % in linear scale is commonly used. Sometimes the RSD increases towards very low and also towards very high concentrations. In those cases, we refer to the lower limit of quantification (LLOQ) and the upper limit of quantification (ULOQ). The LLOQ is then the lowest concentration where RSD ≤ 25 %, and ULOQ is the highest concentration where RSD ≤ 25 %.
In Figure 5 we see that 32 target molecules and higher per sample meet this criterion. Thus, the LLOQ of the reference assay, when analyzing purified gDNA standard using the IntelliQube qPCR instrument, is 32 molecules. There is no ULOQ within the concentration range studied.

3.4. Sampling Uncertainty

PCR with an optimized assay can amplify a single target molecule, as routinely demonstrated in dPCR. If we take a cell containing a single DNA copy, place it into a qPCR tube so that we are sure it is there, and add PCR reagents and an optimized assay for a target sequence in the DNA, we will detect it.
For most samples, however, we don’t control the number of target molecules. Consider a 1 mL homogeneous sample containing 1,000 lysed cells. From this solution, transfer 1 µL, i.e., 1/1000th of the volume, to a tube for analysis with qPCR targeting the DNA. The sampling can, of course, contain one DNA molecule. However, it may also contain two, perhaps three, or even more DNA molecules. These cases will produce a product, and the PCR will be positive. But the sampling (the pipetted sample aliquot) may contain no DNA, in which case the PCR will be negative. Since the original sample did contain DNA, this is a false negative.
Stochastic variation in the number of targets across sampling replicates gives rise to sampling uncertainty. Sampling uncertainty becomes important when few targets are expected (small N).
The probability that a sampling contains x target molecules, P ( x ) , is given by the Poisson distribution shown in Equation 3:
P(x) = e^(−N) × N^x / x!
where N is the expectation value, which is equivalent to the average number of target molecules per sampled volume, x is the actual number of targets in a particular sampling, and ! indicates the factorial of the number (x! = 1 × 2 × 3 × … × x). Figure 6 shows Poisson distributions for selected expectation values.
Let us inspect the Poisson distributions in Figure 6. For an expectation value of one molecule per sampling (N = 1), the probability that a sampling contains exactly one molecule (x = 1) is 37 %. The probability that it contains two target molecules (x = 2) is 18 %, three (x = 3) is 6 %, and four (x = 4) is 1.5 %. There is also a substantial 37 % probability that a sampling is negative (x = 0).
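Equation 3 is straightforward to evaluate; the probabilities quoted above for N = 1 can be reproduced as follows:

```python
from math import exp, factorial

def poisson_p(x, n_expected):
    """P(x) = e^(-N) * N^x / x!  (Equation 3)"""
    return exp(-n_expected) * n_expected ** x / factorial(x)

# Expectation value N = 1 target molecule per sampling
probs = {x: poisson_p(x, 1.0) for x in range(5)}
# P(0) and P(1) are both e^(-1), i.e. about 37 %
```

For N = 1 the zero and one terms are equal, which is why a sample averaging one molecule per sampling is negative as often as it contains exactly one molecule.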
Figure 7 shows the probability that a sampling is positive P(x>0), as a function of the average number of target molecules per sampling (N).

3.5. Limit of Detection (LOD)

When analyzing test samples, we want to be reasonably confident that we obtain a positive test result when the sample is expected to contain even very few target molecules per analyzed volume (N). The challenge here is often to decide when the signal obtained cannot be statistically confounded with the signal from a negative sample, i.e., matrix. While buffer only shall not produce a positive PCR, a negative sample may contain similar nucleic acids that amplify. Such a background signal must then be taken into account, limiting the LOD.
LOD can be limited by many factors related to the sampling and processing of the samples, instrumental noise, quality of reagents, assay performance, etc. Under ideal conditions, when limited by the sampling uncertainty, LOD can be predicted. Working at a 95 % confidence level means at least 95 % of replicate samplings shall be qPCR positive when the sample contains targets. In Figure 7 the dashed line indicates that an average of 3 molecules per analyzed volume (N) is required for a sampling to be positive in 95 % of cases. Hence, the theoretical LOD is N=3 molecules when sampling uncertainty dominates. This theoretical limit is independent of the analytical method.
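The theoretical LOD follows directly from the Poisson zero term: a sampling is positive with probability 1 − e^(−N), so requiring 95 % positives gives N = −ln(0.05) ≈ 3:

```python
from math import exp, log

# P(positive) = 1 - P(0) = 1 - e^(-N) >= 0.95   =>   N >= -ln(0.05)
n_lod = -log(0.05)         # ~3.0 molecules per sampling
p_positive = 1 - exp(-3)   # probability of a positive sampling at N = 3
```

Because this limit arises from sampling statistics alone, no detection chemistry, qPCR or dPCR, can push below it without analyzing a larger volume or concentrating the sample.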
In the standard curve in Figure 3 and its corresponding residuals plot in Figure 4 we see the spread of replicates increases towards lower concentrations, which is due to increasing sampling uncertainty. The effect is visualized and quantified in Figure 5.
The mean Cq of the replicates at the lowest concentrations (N≤2), where some samplings are negative, is lower than that predicted by the linear regression (Figure 3, Figure 4). This is because the average is calculated only for the positive samples that have Cq values, which introduces bias. Negative samplings are also the reason RSD is lower for N=1 than for N=2 (Figure 5).
The fraction of samplings that were positive as a function of N for the data in Figure 3 is shown in Figure 8. The data are fitted with a sigmoidal function to interpolate the expectation value at 95 % probability for samplings to be positive. The confidence area of the fitted curve is also estimated, from which the confidence range of the LOD is obtained [8].
For the reference assay, targeting purified synthetic DNA measured using the IntelliQube, the estimated LOD at 95 % confidence is 2.6 target molecules per sampling, with a 95 % confidence interval (CI) of 2.0–3.7 target molecules. The CI encompasses the theoretical limit of three molecules, and we conclude that the reference assay, when applied to purified synthetic targets, reaches this limit. Switching to dPCR will not improve the sensitivity, since it is limited by the sampling uncertainty. To achieve higher sensitivity, larger sampling volumes should be used, or the sample may be concentrated.
It should be noted that the approaches currently used to estimate LOD and LOQ are based on definitions from 1975 and 1980, respectively. These have issues. LOD can be calculated differently depending on the setup, particularly how the ‘blank’ is defined, and suffers from a high false negative rate (often around 50 %), and the LOQ threshold, here set at 25 %, is arbitrary. In 1993, IUPAC, ISO, and the European Union initiated harmonization of criteria and agreed on new LOD and LOQ definitions that are derived from the straight-line fit and error propagation [14,15,16]. This resolves the issues of high false negative rates and the ambiguous LOQ [17]. A general-purpose introduction is found elsewhere [18]. When these new definitions will be introduced into PCR analytics remains to be seen.

3.6. Expected Imprecision

With the Poisson model (Equation 3), the expected imprecision in logarithmic scale due to sampling uncertainty expressed as either SD or RSD can be calculated using (Equation 2). Figure 9 shows the predicted RSD as a function of the expected number of target molecules per sampling (N).
From Figure 9 it follows that under conditions when other contributions than sampling uncertainty to imprecision are negligible, RSD reaches 25 %, which is a common criterion for LOQ, around N=26 target molecules per sampling. This is the theoretical LOQ at 25 % RSD.
For the reference assay in Figure 5 RSD drops below 25 % at N = 32 target molecules, which is consistent with the theoretical expectations.

3.7. Variance PCR for Absolute Quantification

Note the resemblance of the theoretical curve in Figure 9 with the experimentally determined RSD of the reference assay in Figure 5. This suggests that when sampling uncertainty dominates variation across sampling aliquots, the expected number of target molecules per sampling can be predicted. The concept of this approach, which we term variance PCR (vPCR), is illustrated in Figure 10. This is intended as a conceptual extension to illuminate the fundamental concept of the limit of quantification.
While the number of targets clearly varies across samplings when the average is one target molecule per sampling, one hardly notices variation among samplings at N = 100 molecules per sampling. Although the SD increases with N, the RSD decreases. This dependence is precisely what was presented in Figure 9.
An apparent ambiguity is that the same RSD is obtained at two different expectation values (N), one at each side of the maximum RSD (Figure 9). This, however, is not an issue. At low concentration, the drop in RSD is due to some replicates becoming negative, while at high concentration, all replicates are positive.
A comparison between experimentally measured RSD (Figure 5) and theoretically predicted RSD assuming sampling uncertainty only (Figure 9) is provided in Table 1.
Given that measurements are performed on a logarithmic scale, the agreement between the predicted average number of targets per sampling derived from the measured RSD and the expected number of targets based on the dilutions is remarkably good. For all samples, the predicted values exceed the expected values somewhat, which may indicate that the expectation values based on dilutions of the stock solution, whose concentration was determined by dPCR, could be slightly underestimated.
At conditions where sampling uncertainty dominates, N can be determined directly from the SD (or RSD) of the Cq values of replicate samplings. This represents an alternative approach to absolute quantification that differs fundamentally from dPCR.
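A minimal Monte Carlo sketch of the vPCR principle, assuming E = 100 %, no technical noise, and the delta-method approximation Var(ln X) ≈ 1/N (the intercept and replicate count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n_true = 100.0                       # expected targets per sampling
x = rng.poisson(n_true, size=5000)   # actual targets in each sampling
x = x[x > 0]                         # only positive samplings yield a Cq

# Cq of each sampling, assuming E = 100 % and no technical noise
k, m = -1.0 / np.log10(2.0), 34.1    # slope ~ -3.32; intercept illustrative
cq = m + k * np.log10(x)

# Invert: SD(Cq) -> SD(ln X) -> N, using Var(ln X) ~ 1/N (delta method)
sd_ln_x = np.std(cq, ddof=1) * abs(np.log(10.0) / k)
n_est = 1.0 / sd_ln_x ** 2           # recovered expectation value
```

In practice, technical variation adds to SD(Cq), so a real vPCR estimate would first need the technical component subtracted, which is why the approach is presented here as a conceptual limit rather than a routine method.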
A potential application of vPCR is as a complement to dPCR, particularly on dPCR platforms capable of real-time fluorescence detection that generate Cq values. At low target loading, where a significant fraction of reaction partitions (which are equivalent to replicate samplings) are negative, standard dPCR readout provides optimal precision. At higher target loading, when most or all partitions are positive and classical dPCR loses resolving power, target concentration can instead be estimated by the vPCR principle based on the SD of the Cq values. Combining these approaches would dramatically expand the dynamic range of absolute quantification achievable with dPCR platforms.

3.8. Dynamic Range

The dynamic range of a qPCR method extends from LLOQ to ULOQ, defined as the concentration range over which the RSD remains below a specified threshold [14], 25 % in our example. For the reference assay, RSD does not exceed 25 % at high concentrations. Figure 11 shows the standard curve above the LLOQ, and Figure 12 shows the corresponding standardized residuals, i.e., residuals scaled by the SD, which is a suitable means of testing for the presence of outliers. Considering each level separately, data points outside ± 3 are considered to show outlying behaviour. Alternatively, the traditional Grubbs’ test can be applied to the overall set of residuals, which assumes they follow a normal distribution, as is also required for OLS [19].
Within this range, replicate variability is effectively independent of concentration. This is referred to as homoscedasticity in statistics and is an assumption behind linear regression when based on the standard least squares criterion.
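A sketch of the standardized-residuals screen, scaling residuals by their pooled SD rather than per level (the data below are synthetic, with one injected outlier):

```python
import numpy as np

def standardized_residuals(log_n, cq):
    """Residuals of the OLS fit scaled by their SD (ddof=2 accounts
    for the two fitted parameters); |z| > 3 suggests an outlier."""
    k, m = np.polyfit(log_n, cq, 1)
    resid = cq - (k * log_n + m)
    return resid / np.std(resid, ddof=2)

# Synthetic data: six replicates at six ten-fold levels, one gross outlier
rng = np.random.default_rng(3)
log_n = np.repeat(np.arange(1, 7), 6).astype(float)
cq = 34.1 - 3.33 * log_n + rng.normal(0, 0.05, size=36)
cq[10] += 2.0        # inject a +2-cycle deviation at one replicate
z = standardized_residuals(log_n, cq)
```

With homoscedastic replicates, the injected point stands out far beyond ±3 while all genuine points stay well inside the band.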

3.9. PCR Efficiency

Linear regression of the data in Figure 11 yields the slope and intercept, along with the Working-Hotelling 95 % confidence band, shown with red dashed lines. The confidence region is derived from error propagation and, for this example, is exceedingly small, which is a positive and direct consequence of the large number of replicates. The PCR efficiency (E) is estimated as in Equation 4 [6]:
E = 10^(−1/k) − 1
The standard error (SE) of the mean of the PCR efficiency can be estimated using Equation 5 [6]:
SE(E) = SE(k) × (1 + E) × ln(10) / k²
for which the Student’s 95 % CI, considering the t factor at 95 % confidence level and n-2 degrees of freedom, with n being the total number of samples used in the curve, is calculated using Equation 6 [20]:
E ± t(95 %, n−2) × SE(E)
For the data in Figure 11 we obtain:
  • Slope (k):
−3.33 [-3.37, -3.29]
  • Intercept (m):
34.1 [34.0, 34.2]
  • Efficiency (E):
0.996 [0.978, 1.014]
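Equations 4–6 applied to the reported slope reproduce these numbers; the standard error of the slope (SE(k) ≈ 0.02, inferred here from the reported CI) and the large-n Student factor t ≈ 1.97 are assumptions for this sketch:

```python
import math

def efficiency(k):
    """PCR efficiency from the standard-curve slope (Equation 4)."""
    return 10.0 ** (-1.0 / k) - 1.0

def se_efficiency(k, se_k):
    """Error propagation for E (Equation 5)."""
    e = efficiency(k)
    return se_k * (1.0 + e) * math.log(10.0) / k ** 2

k, se_k = -3.33, 0.02          # reported slope; SE(k) inferred from its CI
e = efficiency(k)              # ~0.996, i.e. ~100 % efficiency
se_e = se_efficiency(k, se_k)
# 95 % CI per Equation 6, with t(95 %, n-2) ~ 1.97 for large n
ci = (e - 1.97 * se_e, e + 1.97 * se_e)
```

The derivative dE/dk = (1 + E) × ln(10)/k² follows directly from Equation 4, which is where the factor in Equation 5 comes from.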

3.10. Real-Life Standard Curves

Typical standard curves have fewer replicates. Still, the standard samples should be independent and handled the same way as future test samples, including preanalytics [10]. For the example in Figure 13, standard samples were measured in triplicate in ten-fold dilution steps covering six logs of concentration.
The data are fitted using OLS, which yields the following regression parameter estimates along with their 95 % CIs:
  • Slope (k):
−3.42 [-3.50, -3.34]
  • Intercept (m):
31.58 [31.21, 31.96]
  • Efficiency (E):
0.960 [0.928, 0.992]
The PCR efficiency, estimated as 96 ± 3 %, is well within acceptable limits. Also shown in Figure 13 is the Working–Hotelling area illustrating the precision of the fit. The standardized residuals plot is shown in Figure 14.

3.11. Validation of the Linear Dynamic Range

While the dynamic range extends from LLOQ to ULOQ, the linear range, which is usually narrower [21], corresponds to the interval where Cq is proportional to log(N). The linear range (Equation 7) can be assessed by fitting the data to higher-order polynomial models and testing whether the k1 and k2 terms in Equations 8 and 9 are statistically significant [13]:
y = k × x + m
y = k × x + k1 × x² + m
y = k × x + k1 × x² + k2 × x³ + m
The second-order polynomial accounts for curvature at one end of the concentration range, while the third-order model takes into account curvature at both ends. An F-test is used to determine whether the inclusion of higher-order terms significantly improves the fit compared with the linear model. We will refer to this as ‘polynomial test’ [13,22].
The full dataset spanning six logs, when fitted by OLS regression, fails the polynomial test for the overall linear dynamic range, i.e., the distribution of the standards does not follow a straight line. This result may initially appear surprising, as no other indicators suggest a problem. However, closer inspection of the residuals plot reveals that all three replicates at the highest concentration exhibit positive residuals, while all three replicates at the second-highest concentration exhibit negative residuals. This systematic pattern indicates curvature at the upper end of the range.
After removing the three highest-concentration samples, the revised standard curve is shown in Figure 15.
Fit by linear regression yields the following parameters:
  • Slope (k):
−3.50 [-3.59, -3.41]
  • Intercept (m):
31.80 [31.44, 32.16]
  • Efficiency (E):
0.930 [0.897, 0.964]
The corresponding standardized residuals plot is shown in Figure 16.
Although a slight curvature at high concentration may still be observed, it is not statistically significant. The data now pass the polynomial test, and we conclude the method exhibits a linear dynamic range of five orders of magnitude. The PCR efficiency estimate is somewhat reduced to 93 ± 3 % but remains within acceptable limits.
The polynomial test is very sensitive to deviations and must be used thoughtfully. When the standard samples fall very precisely on a straight line, the test may indicate deviations that are purely noise dependent and may suggest removing good data points. We therefore recommend considering also the “allowable deviation from linearity”, as stated elsewhere [13]. If it is below 20 %, we would typically accept the standard curve.
Deviations from linearity at high concentrations can arise from several factors, including inhibition in undiluted samples, errors in baseline subtraction when fluorescence accumulates at very early cycles, or incorrect dilutions. In many cases, the exact cause remains unidentified. If uncorrected, the too-high Cq value of the most concentrated sample tilts the fitted straight line, making the slope shallower. A shallower slope implies a higher PCR efficiency, which is incorrect. The impact can be large due to the lever effect and may result in PCR efficiency estimates above 100 %.
The true PCR efficiency cannot exceed 100 %, which corresponds to copying all molecules in the sample every amplification cycle. An estimate, though, may be higher due to experimental error and imprecision. This is reflected by the CI of the PCR efficiency estimate, which then should encompass 100 %.
If the lower confidence range of the PCR efficiency estimate is above 100 % the result should not be accepted as, most likely, concentrations outside the linear dynamic range are included, causing an artificial increase in the efficiency estimate.

3.12. Impact of outliers

In Figure 17 we have included an outlier sample in the standard curve in Figure 13 for illustrative purposes.
The outlier is readily identified with a statistical test such as Grubbs’ and would normally be removed. However, here we keep it to illustrate its impact.
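A minimal implementation of the two-sided Grubbs’ test, with the critical value taken from the Student t quantile (the residuals below are synthetic, with one gross deviation):

```python
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.05):
    """Two-sided Grubbs' test: return (index, is_outlier) for the
    value furthest from the mean (assumes normally distributed data)."""
    x = np.asarray(values, dtype=float)
    n = x.size
    g = np.abs(x - x.mean()) / x.std(ddof=1)
    i = int(np.argmax(g))
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
    return i, g[i] > g_crit

# Synthetic residuals (cycles) with one gross deviation at index 8
resid = np.array([0.05, -0.10, 0.02, 0.08, -0.04, 0.01,
                  -0.07, 0.03, 1.60, -0.06, 0.04, -0.02])
idx, is_out = grubbs_outlier(resid)
```

As noted above, the test presumes normal residuals; applying it to heteroscedastic data below the LLOQ would not be valid.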
The linear regression yields the following estimates (95 % CI):
  • Slope (k):
-3.61 [-3.88, -3.33 ]
  • Intercept (m):
32.57 [31.34, 33.79]
  • Efficiency (E):
0.894 [0.800, 0.988]
The estimated PCR efficiency is 89.4 %, close to the commonly accepted threshold of 90 %. This illustrates that PCR efficiency alone is not a sensitive indicator of problems in standard curve data.
In contrast, the CI of the PCR efficiency is highly informative. Although 89.4 % is the point estimate, the 95 % confidence range spans from 80 % to 99 %, which is excessively wide and provides little certainty about the actual PCR efficiency. Indeed, the width of the CI is a more meaningful quality indicator than the point estimate itself.
When applying the polynomial test to study the goodness of the linear fit, the presence of the outlier prevents the detection of the deviation from linearity that was clearly visible in Figure 17. This occurs because the outlier introduces substantial imprecision and biases the least squares fit to itself. This not only reduces the precision of the estimated PCR efficiency, but also affects the entire linear regression, as reflected by the very broad Working–Hotelling confidence band. Due to this imprecision, the triplicate samples at the second-highest and highest concentrations are no longer significantly below and above the fitted regression line, and the linear range would erroneously be accepted.
This example highlights the critical importance of carefully assessing the reliability of standard data when constructing qPCR standard curves. A single outlier that is not removed can have a profound impact on the regression results, particularly if it lies at an extremely low or high concentration, and can lead to misleading conclusions.

3.13. Prediction of Concentrations of Test Samples

Figure 18 uses the standard curve established in Figure 15 to predict the concentrations of test samples.
For accurate prediction, the standard samples used to construct the standard curve must be handled in exactly the same manner as the test samples and must be subject to exactly the same matrix effects [4]. This ensures that the same sources of confounding variation are maintained. For example, if the test samples are analyzed as technical replicates that are subsequently averaged, the standard samples must be analyzed and averaged in the same way.
Test samples may be available as biological replicates. In such cases, two approaches are possible. The biological replicates can be analyzed separately, yielding independent readouts that may be averaged if a single estimate is desired. Alternatively, the Cq values of the biological replicates can be averaged before estimation, resulting in a single concentration estimate. In the statistical analysis, it is important to account for this averaging, as it reduces variation and therefore improves precision.
The concentration of a test sample, in logarithmic scale, is estimated by applying Equation 10, using the measured Cq and the slope (k) and intercept (m) of the standard curve:
$$\log(N) = \frac{C_q - m}{k}$$
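Equation 10 is a one-line inversion of the standard curve. A minimal sketch with hypothetical slope and intercept values:

```python
def predict_log_n(cq, k, m):
    """Invert the standard curve Cq = k * log10(N) + m (Equation 10)."""
    return (cq - m) / k

# Hypothetical slope and intercept:
log_n = predict_log_n(cq=25.0, k=-3.45, m=38.0)  # (25 - 38) / -3.45, ~3.77
n_molecules = 10 ** log_n                         # ~5.9e3 molecules
```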
The standard curve is typically validated in each sample run using at least two quality control samples, at high and low concentrations, with three (high, medium, and low) preferred. Intra- and inter-assay accuracy of ±1 cycle (corresponding to 50 to 100 % relative error (RE) in linear scale) is a common validation criterion [4].
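The correspondence between a ±1-cycle deviation and 50 to 100 % relative error can be verified directly, assuming 100 % PCR efficiency so that one cycle corresponds to a factor of two in concentration:

```python
def relative_error_percent(d_cq):
    """Relative error (%) implied by a Cq deviation d_cq, assuming
    E = 100 % (one cycle = a factor of two in concentration).
    A positive d_cq (late crossing) under-estimates the concentration."""
    return 100 * abs(2 ** (-d_cq) - 1)

relative_error_percent(+1)   # half the true concentration -> 50 % RE
relative_error_percent(-1)   # double the true concentration -> 100 % RE
```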
The Working–Hotelling prediction band is wider than the corresponding confidence band for the fit because the former includes three sources of uncertainty. In addition to the imprecision of the slope (k) and the imprecision of the intercept (m), the imprecision of the measured Cq of the test sample also contributes to the imprecision of predictions.
The standard error, or uncertainty, of the estimated concentration in logarithmic scale of a test sample is given by Equation 11 [23,24].
$$SE_{\log N} = \frac{SE_{y.x}}{|k|}\sqrt{\frac{1}{b} + \frac{1}{n} + \frac{\left(C_q - \overline{C_q}\right)^2}{k^2\sum_{i=1}^{n}\left(\log N_i - \overline{\log N}\right)^2}}$$
The equation looks complex but can readily be implemented in a spreadsheet. It is worth examining the parameters it contains to understand its essentials.
SEy.x is the standard error of the least squares fit, i.e., the average error of the standard curve. It reflects the average spread of the data in the residuals plot in Figure 16. The lower the SEy.x, the better the experimental points fit the line.
b is the number of test sample replicates. The more replicates, the higher the precision of the predictions; when measurement error dominates, precision scales with the square root of b.
n is the total number of standard samples used to construct the standard curve: n = the number of levels ✕ the number of replicates (or samplings) at each level. For method development, a minimum of nine levels, each in at least four replicates, is recommended [13].
$\left(C_q - \overline{C_q}\right)^2$ is the squared difference between the measured Cq and the average Cq of all the standard samples. Its effect is that precision is highest at the center of the standard curve and decreases towards the edges, as reflected by the Working–Hotelling band.
$\sum_{i=1}^{n}\left(\log N_i - \overline{\log N}\right)^2$ is the sum of squared differences between the standards' concentrations and the mean concentration of all the standards, in logarithmic scale. It reflects the analytical measuring interval of the standard curve, which should lie within the linear dynamic range. The wider the interval, the greater the precision of the predicted concentrations.
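Equation 11 combines the parameters above. The following is a minimal spreadsheet-style sketch; the standard-curve values (five levels in triplicate, the slope, SEy.x, and the measured Cq) are hypothetical, chosen only to show the mechanics:

```python
import math

def se_log_n(cq, k, se_yx, b, cq_mean, log_n_standards):
    """Standard error of a predicted log10 concentration (Equation 11).

    cq               -- measured (averaged) Cq of the test sample
    k                -- slope of the standard curve
    se_yx            -- standard error of the regression (SEy.x)
    b                -- number of test-sample replicates averaged into cq
    cq_mean          -- mean Cq of all standard samples
    log_n_standards  -- log10 concentrations of all n standard samples
    """
    n = len(log_n_standards)
    log_mean = sum(log_n_standards) / n
    ss_x = sum((x - log_mean) ** 2 for x in log_n_standards)
    term = 1 / b + 1 / n + (cq - cq_mean) ** 2 / (k ** 2 * ss_x)
    return (se_yx / abs(k)) * math.sqrt(term)

# Hypothetical example: five levels (log N = 2..6) in triplicate,
# k = -3.45, SEy.x = 0.20, test sample measured in triplicate (b = 3).
standards = [level for level in (2, 3, 4, 5, 6) for _ in range(3)]
se = se_log_n(cq=25.0, k=-3.45, se_yx=0.20, b=3,
              cq_mean=24.2, log_n_standards=standards)
```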
From the SE of the interpolated value, the CI is obtained using Equation 12:
$$\log N \pm t_{95\,\%,\,2\text{ tails},\,n-2} \times SE_{\log N}$$
This is on a logarithmic scale, and the CI is symmetric around the point estimate (Table 2).
For example, the estimated concentration of sample 5, with the corresponding 95 % CI, is $\log N_5 = 4.91\;[4.73,\,5.09]$.
Note that the CI is symmetric around the point estimate, with a relative error of about 7.3 % (= 100 × (5.09 − 4.73)/4.91). In linear scale the CI is asymmetric: the best estimate is $N_5 = 81{,}200\;[54{,}100,\,121{,}900]$ molecules, with an RE of 83.5 % (= 100 × (121,900 − 54,100)/81,200).
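The back-transformation from the symmetric logarithmic CI to the asymmetric linear-scale CI can be sketched as follows (values rounded from the sample 5 example):

```python
# Point estimate and half-width (t * SE) on the log10 scale,
# rounded from the sample 5 example.
log_n, half_width = 4.91, 0.18
log_ci = (log_n - half_width, log_n + half_width)  # (4.73, 5.09): symmetric
linear_est = 10 ** log_n                           # ~81,000 molecules
linear_ci = (10 ** log_ci[0], 10 ** log_ci[1])     # asymmetric around the estimate
```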

4. Conclusions

qPCR standard curves remain a key analytical approach for assay validation and target concentration estimation, but their interpretation depends strongly on whether they are constructed correctly and on the purpose for which they are used. Here we show that standard curves generated with very large numbers of replicates reveal features of qPCR behaviour that are not readily apparent under routine conditions, particularly the relationship between replicate variation, detection limits, and target copy number. These features are always present but are usually masked by limited replication.
Standard curves constructed under typical laboratory conditions, with relatively few replicates and independently prepared standards, remain appropriate and informative for routine assay validation and quantification, provided their limitations are recognised. In such cases, careful assessment of dynamic range, outliers, efficiency estimates, and prediction uncertainty is essential for obtaining reliable results.
We introduce a novel concept termed vPCR to formalise the theoretical relationship between Cq variation and absolute target copy number. This concept is intended to clarify fundamental constraints on quantification rather than to propose a new routine methodology.
The examples presented here highlight that standard curves are purpose-dependent analytical tools. Understanding what information they can provide, and equally what they cannot, is critical for avoiding common analytical errors and for ensuring robust interpretation of qPCR data.

Author Contributions

All authors contributed to the writing.

Funding

MK was supported by 86652036 from RVO and MULTIOMICS_CZ (Programme Johannes Amos Comenius, Ministry of Education, Youth and Sports of the Czech Republic, ID Project CZ.02.01.01/00/23_020/0008540) – Co-funded by the European Union, and the Swedish Foundation for Strategic Research SM23-0033. AF was supported by Swedish Vinnova project 2024-03260 ("AI-Based solution for advanced digitalization of ATMP quality control") and Swedish Tillväxtverket project 20370391, 20371194 (“Streamlined technology transfer from research to clinic to strengthen the innovation process in precision medicine.”). AS was funded by Region Västra Götaland; Swedish Cancer Society [25-4425]; Swedish Childhood Cancer Foundation [MT2024-0004 and PR2025-0064]; Swedish Research Council [2021-01008]; Swedish governmental funding of clinical research (ALF) [1006009]; the Swedish Agency for Economic and Regional Growth [20370391]; the Sjöberg Foundation [2025-1219].

Data Availability Statement

The data used in the examples, including guidance, are available at https://www.multid.se/qpcrstandardcurve/.

Acknowledgments

The authors acknowledge their mothers for giving birth to them.

Conflicts of Interest

Amin Forootan, Björn Sjögreen and Mikael Kubista are board members with interest in MultiD Analyses AB.

Abbreviations

The following abbreviations are used in this manuscript:
b number of test sample replicates
cDNA complementary DNA
CI confidence interval
Cq quantification cycle
dPCR digital PCR
E PCR efficiency
gDNA genomic DNA
ISO International Organization for Standardization
IUPAC International Union of Pure and Applied Chemistry
k slope of the standard curve
LLOQ lower limit of quantification
LOD limit of detection
LOQ limit of quantification
m intercept of the standard curve
MIQE Minimum Information for Publication of Quantitative Real-Time PCR Experiments
n total number of standard samples
N expected (and average) number of target molecules per analyzed volume.
OLS ordinary least squares regression
P(x) the probability that a sampling contains exactly x target molecules
P(x>0) the probability a sampling is positive
qPCR quantitative PCR
RSD relative standard deviation
SD standard deviation
SE standard error
SEy.x average standard error of the regression
t95%,n-2 the critical value from the Student's t-distribution required to calculate a 95% confidence interval when the degrees of freedom are n-2
ULOQ upper limit of quantification
vPCR variance PCR
x the actual number of target molecules in a sampling

References

  1. Bustin, SA; Benes, V; Garson, JA; Hellemans, J; Huggett, J; Kubista, M; Mueller, R; Nolan, T; Pfaffl, MW; Shipley, GL; Vandesompele, J; et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem 2009, 55, 611–622.
  2. Bustin, SA; Ruijter, JM; van den Hoff, MJB; Kubista, M; Pfaffl, MW; Shipley, GL; Tran, N; Rödiger, S; Untergasser, A; Mueller, R; Nolan, T; et al. MIQE 2.0: Revision of the Minimum Information for Publication of Quantitative Real-Time PCR Experiments Guidelines. Clin Chem 2025, hvaf043.
  3. ISO 20395:2019; Biotechnology — Requirements for evaluating the performance of quantification methods for nucleic acid target sequences — qPCR and dPCR. International Organization for Standardization: Geneva, Switzerland, 2019.
  4. Hays, A; Wissel, M; Colletti, K; et al. Recommendations for Method Development and Validation of qPCR and dPCR Assays in Support of Cell and Gene Therapy Drug Development. AAPS J 2024, 26, 24.
  5. Bustin, SA; Wittwer, CT. Real-Time Reverse Transcription Quantitative PCR (RT-qPCR) Methodological Standards and Reporting Practices. Clin Chem 2026, hvaf176.
  6. Kubista, M; Andrade, JM; Bengtsson, M; Forootan, A; Jonak, J; Lind, K; Sindelka, R; Sjoback, R; Sjogreen, B; Strombom, L; Stahlberg, A; et al. The real-time polymerase chain reaction. Mol Aspects Med 2006, 27, 95–125.
  7. Laurell, H; Iacovoni, JS; Abot, A; Svec, D; Maoret, J-J; Arnal, J-F; Kubista, M. Correction of RT-qPCR data for genomic DNA-derived signals with ValidPrime. Nucleic Acids Res 2012, 40(7), e51.
  8. Forootan, A; Sjöback, R; Björkman, J; Sjögreen, B; Linz, L; Kubista, M. Methods to determine limit of detection and limit of quantification in quantitative real-time PCR (qPCR). Biomol Detect Quantif 2017, 12, 1–6.
  9. Pennucci, J; Hays, A; Adamowicz, W; et al. Technical Considerations of Pharmacokinetic Assays for LNP-mRNA Drug Products by RT-qPCR. AAPS J 2025, 27, 144.
  10. CLSI. EP06: Evaluation of Linearity of Quantitative Measurement Procedures, 2nd ed. Available online: https://clsi.org/shop/standards/ep06/.
  11. Ståhlberg, A; Håkansson, J; Xian, X; Semb, H; Kubista, M. Properties of the Reverse Transcription Reaction in mRNA Quantification. Clin Chem 2004, 50(3), 509–515.
  12. Armbruster, DA; Pry, T. Limit of blank, limit of detection and limit of quantitation. Clin Biochem Rev 2008, 29(Suppl 1), S49–S52.
  13. Currie, LA. Nomenclature in evaluation of analytical methods including detection and quantification capabilities. Pure Appl Chem 1995, 67(10), 1699–1723.
  14. ISO 11843-2:2000; Capability of detection — Part 2: Methodology in the linear calibration case (Corrigendum 2007). International Organization for Standardization: Geneva, Switzerland, 2000.
  15. Commission Decision of 12 August 2002, implementing Council Directive 96/23/EC concerning the performance of analytical methods and the interpretation of results. Official Journal of the European Communities 2002, 8–36.
  16. Currie, LA. Detection: international update, and some emerging dilemmas involving calibration, the blank, and multiple detection decisions. Chemometr Intell Lab Syst 1997, 37, 151–181.
  17. Andrade-Garda, JM. Basic data analysis. In Problems of Instrumental Analytical Chemistry: A Hands-On Guide, 2nd ed.; World Scientific Publishing Europe, 2025.
  18. Grubbs, FE. Sample Criteria for Testing Outlying Observations. Ann Math Stat 1950, 21(1), 27–58.
  19. Svec, D; Tichopad, A; Novosadova, V; Pfaffl, MW; Kubista, M. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments. Biomol Detect Quantif 2015, 3, 9–16.
  20. Eurachem Guide: The Fitness for Purpose of Analytical Methods – A Laboratory Guide to Method Validation and Related Topics, 3rd ed.; Cantwell, H, Ed.; 2025. Available online: http://www.eurachem.org.
  21. Kroll, MH; Emancipator, K. A theoretical evaluation of linearity. Clin Chem 1993.
  22. Andrade-Garda, J. The basics of univariate and multivariate calibration. In Basic Chemometric Techniques for Analytical Chemists; Leardi, R; Andrade-Garda, J, Eds.; World Scientific Publishing: London, 2025.
  23. NIST/SEMATECH. Engineering Statistics Handbook. Available online: https://www.itl.nist.gov/div898/handbook/.
Figure 1. The Cq values, extracted from the amplification curves (left, RFU = relative fluorescence units), are plotted versus the starting amount of target molecules on a logarithmic scale of the standard samples (right).
Figure 2. Design of a synthetic reference material comprising a target and a reference sequence in the same DNA molecule, guaranteeing a 1:1 ratio.
Figure 3. Standard samples containing on average (N): 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and 2048 target molecules per sampling analyzed in replicates using the Reference Sequence assay. The number of replicates was: 128 (N=1), 64 (2 ≤ N ≤512), and 32 (1024 ≤ N ≤ 2048). Data fitted to a straight line with linear regression.
Figure 4. Residual plot of the data in Figure 3 expressed in standard deviations.
Figure 5. RSD in linear scale expressed in percentage as a function of the expected number of target molecules per sampling (N). Red data points indicate that some of the replicates in those samplings were negative.
Figure 6. Poisson distributions showing the frequencies of samplings containing different numbers of target molecules (x) for expectation values N = 1, 2, 3, 5, and 10 molecules per sampling.
Figure 7. The probability that a sampling is positive, P(x>0), as a function of the expected average number of target molecules per sampling N. The dashed line is drawn at P(x>0) = 95 %.
Figure 8. Fraction of replicate samplings that were positive as a function of N. Data are fitted to a sigmoidal curve to estimate LOD as the concentration that produces 95 % positive replicates (LOD = 2.5 molecules, blue vertical dashed line). Confidence band of the fit (red lines) is used to estimate the confidence interval (CI) of LOD (CI = [2.0,3.7]).
Figure 9. RSD due to sampling uncertainty calculated from the Poisson distribution for different expected numbers of target molecules per sampling (N). Red line is at RSD = 25 %. The green arrow indicates the expected number of target molecules (N) at which RSD is 25 %, which is 26 molecules. This defines the theoretical LOQ.
Figure 10. Illustration of expected distributions of target molecules when performing 100 subsamplings with N = 1 (left), 10 (middle), and 100 (right) target molecules on average per volume.
Figure 11. Standard curve based on Cq values vs. N at concentrations above LLOQ. Data fitted to a straight line (blue) with linear regression. Uncertainties of the fit are reflected by the Working-Hotelling confidence area indicated with red dashed lines.
Figure 12. Standardized residuals of the data in Figure 11.
Figure 13. Standard samples were measured in triplicate in dilution steps of ten, covering six logs in concentration. Data are fitted to the blue line using linear regression. Red dashed lines show the Working-Hotelling confidence area, illustrating the precision of the fit.
Figure 14. Standardized residuals of the data in Figure 13.
Figure 15. Standard curve using the same data as Figure 13, but leaving out the most concentrated sample, covering only 5 logs of concentration.
Figure 16. Standardized residuals of the data in Figure 15.
Figure 17. The data in Figure 13 with an outlier sample (blue color) at the lowest concentration.
Figure 18. Prediction of the concentrations of test samples (red) using the standard curve established in Figure 15. (blue line, standard samples not shown). Red dashed lines indicate the Working-Hotelling prediction band.
Table 1. Comparison of predicted and measured RSD for the expected number of target molecules below and around LOQ.
Expected number of targets (N)    Predicted RSD    Measured RSD    Predicted number of targets (N)
 1                                0.581            0.554            2
 2                                0.668            0.703            4.5
 4                                0.738            0.581            6.5
 8                                0.514            0.408           11.5
16                                0.327            0.316           17
32                                0.221            0.201           53
64                                0.153            0.167           75
Table 2. Estimated concentrations with 95% CIs in logarithmic and linear scales for the test samples in Figure 18.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.