CONCEPT PAPER | doi:10.20944/preprints202204.0104.v1
Subject: Computer Science And Mathematics, Mathematical And Computational Biology Keywords: Method validation; droplet digital PCR; orthogonal factorial design; variance components; Poisson assumption; cloglog model; target DNA copies per droplet; Monte Carlo; prediction interval
Online: 12 April 2022 (08:46:49 CEST)
For the in-house validation of a droplet digital PCR method, a factorial experimental design was implemented. This design serves two purposes. On the one hand, it is efficient with respect to the workload required to achieve a desired level of reliability for the variance estimates. On the other hand, it allows the total variance to be partitioned into different components, thus providing information regarding the dominant sources of random variation. The statistical modelling reflects the actual measurement mechanism, establishing relationships between nominal target DNA copies per well, the range of variation of copy numbers per droplet, probability of detection values, and estimated numbers of copies.
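The Poisson/cloglog relationship between the probability of detection and the mean number of target copies per droplet can be sketched as follows. This is a minimal illustration of the standard ddPCR counting argument, not the paper's full model; the droplet counts are invented for the example.

```python
import math

def estimate_copies_per_droplet(n_droplets: int, n_positive: int) -> float:
    """Estimate the mean number of target copies per droplet from the
    fraction of positive droplets, assuming copies are Poisson-distributed
    across droplets."""
    p = n_positive / n_droplets          # observed probability of detection
    # Under the Poisson assumption, p = 1 - exp(-lam), so
    # lam = -ln(1 - p); this is the cloglog relationship.
    return -math.log(1.0 - p)

# Hypothetical well: 20,000 droplets, of which 4,000 are positive
lam = estimate_copies_per_droplet(20000, 4000)
total_copies = lam * 20000               # estimated target copies in the well
```

Note that the estimate depends only on the fraction of positive droplets, not on the fluorescence amplitudes, which is why the Poisson assumption is central to the method's validity.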
COMMUNICATION | doi:10.20944/preprints202112.0420.v3
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Non-targeted methods; method validation; food fraud; food authenticity; mass spectrometry; spectroscopy; NGS; NMR
Online: 23 May 2022 (11:10:00 CEST)
As their name suggests, non-targeted methods (NTMs) do not aim at a predefined "needle in the haystack". Instead, they exploit all the constituents of the haystack. This new form of analytical method is increasingly finding applications in food and feed testing. However, the concepts, terms, and considerations related to this burgeoning field of analytical testing need to be propagated for the benefit of those engaged in academic research, commercial development, and official control. This paper addresses the frequently asked questions around notations and terminologies surrounding NTMs. The widespread development and adoption of these methods also creates a need for approaches to NTM validation, i.e., evaluating the performance characteristics of a method to determine whether it is fit for purpose. This work aims to provide a roadmap for approaching NTM validation. In doing so, the paper deliberates on the different considerations that influence the approach to validation and provides suggestions thereof.
SHORT NOTE | doi:10.20944/preprints202206.0032.v1
Subject: Biology And Life Sciences, Other Keywords: Conformity assessment; lot inspection; acceptance sampling; Quality level; sample size; Bayesian statistics; prior distribution; posterior distribution; consumer risk; producer risk
Online: 2 June 2022 (10:59:47 CEST)
The ISO 2859 and ISO 3951 series provide acceptance sampling procedures for lot inspection, allowing both the sample size and the acceptance rule to be determined, starting from a specified value for either the consumer or the producer risk. However, insufficient resources often make it difficult to implement "ISO sampling plans." In cases where the sample size is determined by external constraints, the focus shifts from determining the sample size to determining the consumer and producer risks. Moreover, if the sample size is very low (e.g., a single item), prior information should be included in the statistical analysis. For this reason, it makes sense to work within a Bayesian theoretical framework, such as that described in JCGM 106. Accordingly, the approach from JCGM 106 is adopted and broadened so as to allow application to lot inspection. The discussion is based on a "real-life" example of lot inspection on the basis of a single item. Starting from simple assumptions, expressions for both the prior and posterior distributions are worked out, and it is shown how the concepts from JCGM 106 can be reinterpreted in the context of lot inspection. Conceptual differences regarding the definition of consumer and producer risks in JCGM 106 and in the ISO acceptance sampling standards are elucidated and a numerical example is provided.
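The Bayesian updating step for a single-item inspection can be sketched as follows. The Beta prior, the acceptance quality limit, and the interpretation of the tail probability as a consumer risk are all illustrative assumptions here, not the paper's actual choices.

```python
import math

def beta_pdf(x: float, a: float, b: float) -> float:
    """Density of the Beta(a, b) distribution (stdlib only, no SciPy)."""
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_c + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def upper_tail(a: float, b: float, q: float, n: int = 100000) -> float:
    """P(theta > q) under Beta(a, b), by trapezoidal integration.
    Requires b > 1 so the density vanishes at x = 1."""
    h = (1.0 - q) / n
    total = 0.5 * beta_pdf(q, a, b)      # endpoint at x = 1 contributes 0
    for i in range(1, n):
        total += beta_pdf(q + i * h, a, b)
    return total * h

# Hypothetical prior on the nonconforming fraction theta: Beta(1, 9).
# Observing one conforming item updates this to the posterior Beta(1, 10).
a_post, b_post = 1.0, 10.0
limit = 0.10                             # hypothetical acceptance quality limit
consumer_risk = upper_tail(a_post, b_post, limit)
# posterior probability that theta exceeds the limit despite acceptance
```

With a Beta(1, b) posterior the tail probability has the closed form (1 - q)^b, which makes the numerical result easy to check against the exact value.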
ARTICLE | doi:10.20944/preprints202208.0179.v1
Subject: Computer Science And Mathematics, Probability And Statistics Keywords: In-house validation study; reproducibility precision; measurement uncertainty; prediction interval; uncertainty interval
Online: 9 August 2022 (10:56:40 CEST)
Measurement uncertainty is typically expressed in terms of a symmetric interval y ± U, where y denotes the measurement result and U the expanded uncertainty. However, in the case of heteroscedasticity, symmetric uncertainty intervals can be misleading. In this paper, a different approach for the calculation of uncertainty intervals is introduced. This approach is applicable when a validation study has been conducted with samples with known concentrations. It will be shown how, under certain circumstances, asymmetric uncertainty intervals arise quite naturally and lead to more reliable uncertainty intervals.
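How asymmetry arises under heteroscedasticity can be illustrated with a simple case in which the standard deviation is proportional to the true concentration (constant coefficient of variation). This is a generic sketch of the inversion idea, not the paper's actual procedure; the numbers are invented.

```python
def asymmetric_interval(y: float, cv: float, k: float = 2.0):
    """Uncertainty interval for the true value x given a result y, when
    the standard deviation is proportional to x (constant CV).  The
    interval is the set of x satisfying |y - x| <= k * cv * x; inverting
    this condition yields an interval that is asymmetric around y
    whenever cv > 0."""
    assert k * cv < 1.0, "interval is unbounded otherwise"
    lower = y / (1.0 + k * cv)
    upper = y / (1.0 - k * cv)
    return lower, upper

# Hypothetical result y = 100 with a 10 % CV and coverage factor k = 2
lo, hi = asymmetric_interval(y=100.0, cv=0.10, k=2.0)
```

Here the upper arm (hi - y) is longer than the lower arm (y - lo), because larger true values come with larger dispersion: a naive symmetric interval would understate the upward uncertainty.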
ARTICLE | doi:10.20944/preprints202211.0460.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Fundamental variability; homogeneity; variance; method validation; proficiency testing; measurement uncertainty
Online: 24 November 2022 (14:33:00 CET)
The question whether a given set of test items can be considered "identical" is often addressed in terms of the homogeneity of the test material from which said items were taken. However, for some types of matrices – in particular, for matrices consisting of minute separate particles, only some of which carry the analyte under consideration – even in the case of homogeneous test material, an irreducible source of variability between test items may remain: the fundamental variability. In this paper, the concept of fundamental variability is explained, and procedures for reducing and characterizing it are described.
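The counting argument behind fundamental variability can be sketched as follows, assuming the analyte resides in discrete carrier particles drawn at random into each test item (a Poisson approximation). The particle counts are invented for the example and the function name is illustrative.

```python
import math

def fundamental_rel_sd(n_particles: float, carrier_fraction: float) -> float:
    """Relative standard deviation of the analyte amount per test item,
    assuming a Poisson-distributed number of analyte-carrying particles:
    RSD = 1 / sqrt(m), where m is the expected number of carrier
    particles per item."""
    m = n_particles * carrier_fraction
    return 1.0 / math.sqrt(m)

# Example: 1e6 particles per test item, 1 in 10,000 carries the analyte,
# so each item contains on average m = 100 carrier particles.
rsd = fundamental_rel_sd(1e6, 1e-4)
```

Even with perfectly homogenized material, this variability persists; it can only be reduced by increasing the expected number of carrier particles per item, e.g. by grinding the material finer or enlarging the test portion.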