1. Introduction
Environmental RNA (eRNA), broadly defined as RNA recovered from environmental matrices (e.g., water, sediments, biofilms, or suspended particles), has been increasingly positioned as a next-step molecular readout for ecological monitoring and environmental assessment (Glover, Veilleux, & Misutka, 2025; Veilleux, Misutka, & Glover, 2021; Yates, Derry, & Cristescu, 2021; Zou et al., 2025). By contrast to environmental DNA, which is commonly interpreted as evidence of organismal presence and community composition, eRNA has been discussed as offering a route to short-timescale biological information, including transcription-linked signals that are more directly connected to organismal state and ecosystem functioning (Glover et al., 2025; Veilleux et al., 2021; Yates et al., 2021; Zou et al., 2025). In recent syntheses, the scope of eRNA research has also been argued to extend beyond mRNA abundance toward other RNA classes and RNA-mediated processes that respond to stress, development, and adaptation, thereby widening the space of candidate biomarkers (Ahi & Schenekar, 2025; Hechler & Cristescu, 2024). A systematic review of environmental nucleic acids in aquatic community ecology has also highlighted substantial heterogeneity in eRNA workflows and reporting, strengthening the case for harmonized minimum reporting elements (Bunholi, Foster, & Casey, 2023). In parallel, environmental RNA has a longer history in microbial ecology via metatranscriptomic and rRNA-based profiling of “active” communities (e.g., environmental RNA 16S rRNA amplicons and rRNA-depleted community sequencing) (Wemheuer et al., 2014).
In parallel with these expectations, eRNA has already been applied for biodiversity inference using workflows closely related to eDNA metabarcoding, including paired DNA/RNA analyses in impact assessment and monitoring contexts (Laroche et al., 2017). In pollution and bioassessment settings, comparative eRNA/eDNA metabarcoding has sometimes shown stronger associations between community signal and stressor gradients (e.g., heavy-metal spiking) and/or higher positive predictivity than eDNA in specific taxa, albeit with sensitivity and performance remaining marker-, matrix-, and reference-dependent (Greco et al., 2022; Miyata et al., 2022). In surveillance and biosecurity applications, eRNA has been evaluated as a complementary signal when relatively recent and/or more labile contributions are of interest, and legacy DNA is expected to bias interpretation (von Ammon et al., 2019). However, eRNA should not be treated as a definitive “viability” or “living-only” marker without study-specific justification and controls, because detections can persist for hours and can be stabilized in retained matrices (e.g., biofilms/particles) (Wood et al., 2020; Zaiko, Pochon, Garcia-Vazquez, Olenin, & Wood, 2018). Under field conditions, eRNA from macroeukaryotes has been shown to be recoverable and informative for community profiling, and direct comparisons with eDNA have been reported across seasons and across multiple taxonomic groups (Littlefair, Rennie, & Cristescu, 2022; Macher et al., 2024). In marine systems, differences between eRNA- and eDNA-derived community signals have also been documented, supporting the view that RNA templates can sometimes shift inference toward metabolically active or recently contributed fractions under specific conditions (Giroux, Reichman, Langknecht, Burgess, & Ho, 2022).
The strongest motivation for stricter reporting has emerged when eRNA has been used for physiological inference rather than species detection. In early proof-of-concept work, tissue-biased mRNA targets have been detected from aquarium water, supporting the feasibility of taxon-specific transcript assays from environmental media (Tsuri et al., 2021). More recently, extra-organismal eRNA has been sequenced from tank water to recover macroorganism heat-stress responses alongside community-wide functional shifts, demonstrating that environmental transcriptomics can be made measurable without direct tissue sampling (Hechler, Yates, Chain, & Cristescu, 2023). In fish toxicology, differential expression in water eRNA has been detected after sublethal chemical exposure, while very low host-genome mapping fractions, strong degradation of host nuclear transcripts, and dominance of non-target microbial reads have been reported as limiting factors that can materially affect inference if left undocumented (Hiki, Yamagishi, & Yamamoto, 2023). Dose-dependent toxicant responses in fish water eRNA have also been reported, indicating that exposure gradients can be reflected in environmental transcript profiles and strengthening the case for biomarker-oriented applications in ecotoxicology and early-warning monitoring (Gou et al., 2025; He et al., 2025). Because candidate eRNA targets are now being proposed explicitly for management and conservation contexts, transparent reporting of target rationale, context of use, and confounders becomes especially important when moving from “proof-of-concept” to biomarker-style claims (Stevens & Parsley, 2023).
However, the same features that make eRNA attractive for near-real-time biomonitoring also increase vulnerability to noise. Faster decay of eRNA relative to eDNA has been quantified under controlled conditions, including across broad pH gradients, indicating that detectability is time-sensitive and that physicochemical context and handling can materially affect apparent signal recency (Kagzi, Hechler, Fussmann, & Cristescu, 2022). In a marine aquarium experiment, eRNA persisted longer than expected (detections up to ~13 h after organism removal) and both eDNA and eRNA were detected in biofilms, consistent with stabilization in retained matrices that can complicate simple assumptions about rapid turnover (Wood et al., 2020). In freshwater mesocosms, distinct decay behavior across RNA types and genomic origins has been documented, and the eRNA:eDNA ratio has been proposed as an additional interpretive axis for discriminating recent from older genetic material (Marshall, Vanderploeg, & Chaganti, 2021; Morgado-Gamero, Tournayre, & Cristescu, 2025). Because eRNA is physicochemically unstable and is readily affected by collection-to-stabilization time, preservative choice, extraction conditions, and inhibition, methodological sensitivity has been emphasized as a central barrier to reproducible application (T. S. Jo, 2023). Temperature and alkalinity effects on eRNA degradation have also been demonstrated experimentally, reinforcing that pre-analytical handling and environmental context must be reported with sufficient granularity when expression-linked conclusions are drawn (T. Jo, Tsuri, Hirohara, & Yamanaka, 2023).
Minimum-information standards have already improved reproducibility in adjacent domains, including metabarcoding reporting checklists that explicitly cover eDNA/eRNA workflows for biodiversity applications and technical reporting standards developed for environmental microbiome studies (Kelliher et al., 2025; Klymus et al., 2024). In functional genomics, longstanding minimum-information frameworks have been used to support interpretability and reuse of high-throughput expression datasets across studies and repositories (Brazma et al., 2001; Füllgrabe et al., 2020; Su et al., 2014). Those resources, however, have not been tailored to environmental transcript inference where (i) mixed-origin RNA, (ii) rapid and condition-dependent decay, and (iii) biomarker-style claims about stress or exposure are central sources of variability and potential overinterpretation (Ahi & Schenekar, 2025; T. S. Jo, 2023). For these reasons, eRNA-Min has been introduced in this article as a minimum information standard for environmental RNA reporting, intended to support comparable interpretation from species-specific assays to community-scale profiles and to make biomarker claims traceable to documented context, controls, and analytical provenance.
2. Scope and Definitions
Clear scope and stable terminology have been treated as prerequisites for any minimum-information standard, because checklist compliance cannot be evaluated consistently when key terms are used differently across studies. In this manuscript, “eRNA” refers to environmental RNA and should not be confused with “enhancer RNA” usage in functional genomics.
Scope of eRNA-Min: In eRNA-Min, environmental nucleic acids have been treated as nucleic acids recovered from an environmental medium, including both extra-organismal molecules and intracellular nucleic acids from organisms captured in the sampled material (Yates et al., 2021). Environmental RNA (eRNA) has been used in a broad operational sense as RNA extracted from environmental matrices, with inclusion of (i) extra-organismal RNA released by macroorganisms (e.g., as free fragments, vesicle-associated, or cell-associated material) and (ii) organismal (intracellular) RNA contributed by microorganisms (and other small organisms) captured within the same samples (Hechler et al., 2023; Yates et al., 2021). Environmental matrices have been considered in scope when sampling has been performed from natural or managed systems (e.g., water, sediments, soils, air/bioaerosols, and related biofilms/particulate fractions), and when downstream interpretation has been intended for organismal or community inference rather than solely for laboratory model characterization (Hechler et al., 2023; Zou et al., 2025). To support interoperability with sequence repositories and cross-study synthesis, environmental context fields can be mapped to Genomic Standards Consortium checklists (e.g., MIxS/MIMARKS environmental packages) and controlled vocabularies such as ENVO when applicable (Buttigieg et al., 2016; Yilmaz et al., 2011).
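To make this mapping concrete, a minimal sketch is given below in which a few hypothetical eRNA-Min context fields are renamed to MIxS-style terms (env_medium, env_broad_scale, env_local_scale, geo_loc_name, collection_date); the eRNA-Min field names on the left are illustrative assumptions rather than fields defined by the standard.

```python
# Minimal sketch (not part of eRNA-Min itself): renaming hypothetical eRNA-Min context
# fields to MIxS-style term names so records can travel with sequence deposits.
# Left-hand field names are illustrative assumptions; right-hand names are GSC MIxS terms.
ERNA_MIN_TO_MIXS = {
    "sampling_matrix": "env_medium",
    "broad_environment": "env_broad_scale",
    "local_environment": "env_local_scale",
    "collection_date": "collection_date",
    "geographic_location": "geo_loc_name",
}

def to_mixs(record: dict) -> dict:
    """Rename mapped keys to MIxS-style terms; keep unmapped keys unchanged."""
    return {ERNA_MIN_TO_MIXS.get(key, key): value for key, value in record.items()}

if __name__ == "__main__":
    sample = {
        "sampling_matrix": "freshwater",        # an ENVO term/ID could be attached here when applicable
        "collection_date": "2024-06-15",
        "geographic_location": "Canada: Quebec",
    }
    print(to_mixs(sample))
```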
Study types covered: Three common eRNA study types have been included. First, eRNA metabarcoding has been considered in scope when taxonomic inference has been generated from RNA templates (typically rRNA-anchored or amplicon-style targets) using workflows analogous to eDNA metabarcoding (Cagney et al., 2018; Giroux et al., 2022; Klymus et al., 2024; Laroche et al., 2017). Second, metatranscriptomics has been considered in scope when shotgun RNA sequencing has been used to infer functional activity within mixed microbial (and potentially multi-kingdom) assemblages captured from environmental samples (Ahi & Schenekar, 2025; Hechler et al., 2023). Third, targeted transcript assays (e.g., RT-qPCR/RT-dPCR or related approaches for selected genes or tissues) have been considered in scope when transcript presence or abundance has been used for physiological inference beyond presence/absence (Tsuri et al., 2021). For targeted RT-qPCR/RT-dPCR studies, eRNA-Min is intended to complement (not replace) assay-focused reporting frameworks such as MIQE/dMIQE and environment-focused qPCR/dPCR reporting guidance such as EMMI (Borchardt et al., 2021; Bustin et al., 2009; Whale et al., 2020). Because eRNA research has been increasingly framed as extending beyond mRNA to include regulatory and structural RNA types and RNA processes (including post-transcriptional regulation and epitranscriptomic RNA modifications, e.g., m6A) that respond to environmental cues, those RNA classes have been treated as eligible targets under eRNA-Min when sufficient reporting has been provided to support interpretation (Ahi & Schenekar, 2025; Ahi & Singh, 2024).
Levels of inference and intended claims: Two main inference levels have been distinguished. Species-resolved inference has been defined as analysis in which reads (or assay signals) have been linked to one or more focal taxa using explicit reference resources (genome/transcriptome/marker references), with conclusions drawn about organismal state, stress responses, or exposure signatures (Hechler et al., 2023; Tsuri et al., 2021). Community-level inference has been defined as analysis in which functional profiles, expression signatures, or differential patterns have been derived from mixed assemblages, with attention required to taxonomic attribution limits and reference completeness (Ahi & Schenekar, 2025; Hechler et al., 2023). Claims framed as biomarkers have been treated as a special case and have been defined using the BEST glossary as “a defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes, or biological responses to an exposure or intervention” (Cagney et al., 2018; FDA-NIH Biomarker Working Group, 2016). For biomarker claims, a stated context of use has been treated as required because interpretability has been understood to depend on the specified organism(s), matrix, exposure/stressor class, and decision context (Cagney et al., 2018).
What has not been covered: eRNA-Min has not been intended to prescribe laboratory protocols or bioinformatic pipelines; only the minimum reporting elements needed to interpret and reproduce results have been targeted (Harrill et al., 2021). Tissue-derived organismal RNA datasets have been considered out of scope unless they have been used as paired references or controls for interpreting environmental samples (Hechler et al., 2023). For studies limited to biodiversity inference via eRNA metabarcoding, MIEM has been treated as the primary minimum reporting resource, with eRNA-Min intended to be used as an add-on when expression-linked or biomarker-style conclusions have been pursued (Cagney et al., 2018). For targeted RT-qPCR/RT-dPCR studies, MIQE/dMIQE (and, where relevant, EMMI) remain the primary assay-level reporting resources, with eRNA-Min focused on environmental sampling, preservation, and interpretation of environmental transcript signals (Bustin et al., 2009; Whale et al., 2020). For studies centered on environmental microbiome reporting, STREAMS has been treated as the closest general checklist framework, with eRNA-Min positioned to specify eRNA-specific reporting needs where exposure-response or cross-taxa interpretation has been emphasized (Kelliher et al., 2025).
3. The eRNA-Min Checklist
Minimum-information checklists have been adopted in several molecular ecology and omics domains because missing metadata has repeatedly been shown to be a primary barrier to interpretation, replication, and reuse of high-throughput datasets (Brazma et al., 2001; Harrill et al., 2021; Kelliher et al., 2025; Klymus et al., 2024; Wilkinson et al., 2016). A brief audit of published eRNA studies illustrating the frequency of missing pre-analytical, control, and provenance fields is provided in Supplementary Table S1. Adjacent efforts in environmental nucleic acids have also emphasized that “minimum information” becomes substantially more reusable when checklist fields are structured for deposition and validation (e.g., the FAIR eDNA “FAIRe” metadata checklist and formatting guidance) (Takahashi et al., 2025). In environmental RNA work, that barrier has often been especially acute compared with eDNA-based species detection, because biological conclusions are often drawn from expression-linked patterns that can be shifted by pre-analytical handling and by analytical provenance (T. S. Jo, 2023; T. Jo et al., 2023; Kagzi et al., 2022; Wood et al., 2020). Faster and condition-dependent decay of eRNA relative to eDNA has been documented under controlled settings, and substantial variability in recoverable signal has been associated with physicochemical context and sample handling, which makes the reporting of “what happened to the sample” central rather than peripheral (T. Jo et al., 2023; Kagzi et al., 2022; Wood et al., 2020). For these reasons, eRNA-Min has been framed as a reporting baseline that captures the small set of decisions and measurements that most strongly determine whether an eRNA result can be compared across studies or re-analysed by others (Harrill et al., 2021; T. S. Jo, 2023; Kelliher et al., 2025; Klymus et al., 2024).
Two reporting tiers have been used to keep the standard workable. The required tier has been defined as the minimum needed to interpret the study design, evaluate the credibility of the signal (including contamination control), and reproduce the main analyses (Gant et al., 2017; Harrill et al., 2021; Kelliher et al., 2025; Klymus et al., 2024; Saarimäki, Melagraki, Afantitis, Lynch, & Greco, 2021). This “required vs recommended” separation is consistent with broader transcriptomics reporting initiatives that distinguish essential reporting elements from higher-value context fields (e.g., the OECD Omics Reporting Framework) (Harrill et al., 2021). The recommended tier has been defined as additions that measurably improve comparability across ecosystems, seasons, taxa, and laboratories, particularly when eRNA is used for physiological inference, stress biomonitoring, or exposure-signature development (Gant et al., 2017; T. S. Jo, 2023; Saarimäki et al., 2021). The standard has been intended to be implemented as a single supplementary checklist table (one row per sample, plus a small workflow/provenance block per pipeline and sequencing run), while Table 1 has been provided as a compact overview for rapid scanning. For sequencing-based studies, eRNA-Min can also be cross-walked to “minimum information” expectations for high-throughput sequencing experiments (MINSEQE) to support unambiguous interpretation and reuse (Brazma et al., 2012).
Table 1.
Overview of eRNA-Min domains and reporting tiers. Reporting domains are summarized for quick scanning, while item-level fields are intended to be provided in the supplementary eRNA-Min checklist. “Required” denotes the minimum information needed for interpretation and re-analysis, whereas “recommended” denotes additions that improve cross-study comparability and transferability of biomarker-style claims.
| Domain | Required (minimum) | Recommended (strengthening) |
| --- | --- | --- |
| Study context and sampling metadata | Study aim and inference level; matrix and sampling unit; capture method; time to stabilization and storage; replication plan; blanks; exposure definition and comparator (when relevant) | Key covariates (e.g., temperature/pH as relevant); field randomization/batch log; deviations log (e.g., clogging/volume loss) |
| Preservation, extraction, and control handling | Preservation method and timing; extraction chemistry; DNase strategy (if used); inhibition steps (if used); negatives carried through workflow; QC thresholds and exclusion rules | Inhibition test evidence (spike/dilution); replicate type declared (true biological vs split/technical) |
| Library preparation and sequencing descriptors | Target RNA class; enrichment/depletion steps; library chemistry and cycles; sequencing platform/config; achieved depth; run/batch structure | Read composition summary (rRNA fraction; microbial dominance; mapping yield to target references) |
| Bioinformatics and statistical provenance | Workflow name/version; software versions; key parameters; reference databases & versions; normalization/filtering; multiple-testing control; effect sizes | Executable provenance (container/environment); sensitivity checks for decisions driving key claims |
| Results reporting and biomarker claim format | Discovery vs verification/validation separated; claim card (context of use, comparator, confounders, evidence tier, performance metric) | External validation across sites/seasons/labs when deployment is implied |
Table 2.
Practical placement of eRNA-Min elements. Placement has been structured to keep the main text readable while ensuring that sample-level detail, provenance, and accession identifiers remain available for review and reuse.
| Element | Where it should be reported | Primary purpose |
| --- | --- | --- |
| Study design, sampling frame, exposure/comparator | Main methods (summary) + checklist (detail) | Interpretability of inference and comparability |
| Capture, stabilization, preservation, extraction | Main methods (summary) + checklist (timestamps/thresholds) | Assessment of pre-analytical sensitivity |
| Controls and QC decision rules | Main methods (overview) + checklist (IDs, results, pass/fail) | Evaluation of contamination and exclusions |
| Library strategy and sequencing descriptors | Main methods (overview) + checklist (run structure) + repository | Reproducibility and cross-study alignment |
| Pipeline and reference provenance | Main methods (overview) + repository/archived workflow record | Re-execution and version traceability |
| Processed outputs used for conclusions | Repository + accession-linked supplement | Re-analysis and meta-analysis readiness |
| Biomarker claim card | Results (or supplement if many) | Portability and testability of claims |
Table 3.
Proposed adoption and governance model for eRNA-Min. Adoption is treated as a workflow problem: checklist completion, claim framing, and deposition are made routine at submission; discoverability and versioning are maintained after publication; and extensions are handled through a controlled pathway to avoid fragmentation while allowing ecosystem- or application-specific growth.
| Stakeholder (touchpoint) | Expected action | Minimum deliverable | What can be audited |
| --- | --- | --- | --- |
| Authors (study execution) | Checklist fields are recorded during sampling/processing, not reconstructed at submission | Draft eRNA-Min checklist populated during data generation | Presence of timestamps, batch logs, and control IDs rather than post hoc summaries |
| Authors (submission) | Checklist and claim cards are submitted with the manuscript; accessions are provided or reserved | Supplementary checklist table; biomarker claim card(s); data availability statement with accessions/placeholders | Checklist completeness; claim cards match conclusions; accession placeholders present |
| Editors/journals (initial screening) | Checklist submission is requested as part of completeness checks | Author instructions + submission portal requirement | Checklist submission rate; consistent enforcement across handling editors |
| Reviewers (peer review) | Checklist is used as a structured prompt for missing metadata/provenance | Reviewer-facing short form mapped to eRNA-Min domains | Reviewer comments track missing fields rather than general style preferences |
| Repositories (data deposition) | Raw reads, negative controls, and processed outputs used in figures are deposited with linked sample metadata | Accessioned raw data + processed tables used for conclusions + sample metadata package | Key figures can be reproduced from accessions; controls are present and labeled |
| Standards registries (discoverability) | eRNA-Min is indexed and cross-linked to related standards and repositories | Registry entry with scope, version, and deposition targets | Visibility in registry searches; version record kept current |
| eRNA-Min maintainers (maintenance) | Versioned releases are issued; change logs are maintained; extensions are governed | Versioned checklist + change log + extension pathway | Traceable differences between versions; backward compatibility of required core fields |
| Community contributors (between releases) | Issues and extension proposals are submitted via a standard template | Public issue/extension proposal record | Decisions are documented; rationale recorded; extensions do not fragment the core |
| Applied users (routine monitoring) | Required fields are embedded into SOPs and data capture | Local SOP mapped to eRNA-Min required fields | Field completeness in monitoring datasets; inter-lab comparability over time |
The checklist has been organised into five domains, each reflecting a frequent point of failure in environmental transcript inference. First, study context and sampling metadata have been required because capture design, filtration/capture materials, and time to stabilization are known to influence what fraction of the environmental RNA pool is observed and therefore what can be inferred (T. S. Jo, 2023; T. Jo et al., 2023; Kagzi et al., 2022; Wood et al., 2020). Second, preservation, extraction, and control handling have been required because inhibition, degradation, and contamination can generate false absences or spurious differential patterns if controls and decision thresholds are not disclosed (Dahl et al., 2025; Goldberg et al., 2016; T. S. Jo, 2023; Klymus et al., 2024; Sepulveda, Hutchins, Forstchen, Mckeefry, & Swigris, 2020). Because eRNA workflows include reverse transcription (and, often, DNase treatment), eRNA-Min should explicitly capture where relevant the presence/absence and results of “no-RT” and other negative controls used to rule out DNA carryover and to contextualize low-level signals. Third, library preparation and sequencing descriptors have been required because enrichment/depletion choices (e.g., total RNA vs poly(A), rRNA depletion) largely determine the balance between informative organismal transcripts and background RNA, and therefore shape both detectability and interpretability (Ahi & Schenekar, 2025; Saarimäki et al., 2021; Wang, Xiong, Huang, & Zhan, 2025; Wilkinson et al., 2016; Zhao, Zhang, Gamini, Zhang, & Von Schack, 2018). Fourth, bioinformatics and statistical provenance have been required because eRNA results are sensitive to reference resource versions, mapping/counting settings, filtering rules, and normalization choices; reproducibility cannot be evaluated without explicit provenance (Gant et al., 2017; Saarimäki et al., 2021; Sandve, Nekrutenko, Taylor, & Hovig, 2013). Because RNA-seq analysis choices can materially change quantification and differential results, citing a “best-practice” overview can help justify why explicit parameter and QC reporting is not optional (Conesa et al., 2016). Finally, results reporting and biomarker claim format have been required because biomarker-style statements are often made without a declared context of use, comparator definition, or evidence tier; structured claim reporting has been used to make such statements testable and transferable (Bossuyt et al., 2015; Cagney et al., 2018; Moons et al., 2015; Wolff et al., 2019). For targeted quantitative eRNA assays, “claim cards” should also encourage reporting of assay detection/quantification limits and validation maturity where relevant (e.g., LOD/LOQ reporting and assay readiness scales used in environmental molecular detection) (Klymus et al., 2020; Thalinger et al., 2021).
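To illustrate how such a claim card could be captured in a structured, machine-readable form, a minimal sketch is shown below; the class name, field names, and example values are assumptions for demonstration and not a prescribed eRNA-Min schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class BiomarkerClaimCard:
    """Minimal sketch of a biomarker 'claim card'; field names and values are illustrative."""
    claim: str                      # the biomarker-style statement being advanced
    context_of_use: str             # organism(s), matrix, stressor class, decision context
    comparator: str                 # what the eRNA signal is compared against
    confounders: List[str] = field(default_factory=list)
    evidence_tier: str = "discovery"          # e.g., discovery / verification / external validation
    performance_metric: Optional[str] = None  # metric matched to the claim (e.g., LOD/LOQ, accuracy)

# Hypothetical example for a targeted heat-stress transcript assay
card = BiomarkerClaimCard(
    claim="Elevated hsp70-like transcript signal in water eRNA indicates recent heat stress in the focal species",
    context_of_use="single fish species; lake surface water; thermal stress; screening-level monitoring",
    comparator="paired unexposed mesocosms sampled at the same time points",
    confounders=["time to stabilization", "temperature-dependent eRNA decay", "microbial background reads"],
    evidence_tier="verification",
    performance_metric="assay LOD/LOQ reported; classification accuracy on held-out mesocosms",
)
print(asdict(card))
```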
4. How to Report eRNA-Min in a Manuscript
Reporting standards have been adopted most readily when they have been designed to fit existing manuscript workflows, with checklists used as compact supplements that support peer review without expanding the main text (Kelliher et al., 2025; Klymus et al., 2024; Mirzayi et al., 2021). In eRNA-Min, the same implementation logic has been retained: the core methods narrative has been kept readable, while the item-level metadata needed for interpretation and reuse has been shifted into a structured checklist table that can be reviewed quickly and deposited alongside the data (Kelliher et al., 2025; Klymus et al., 2024; Mirzayi et al., 2021). To make reuse practical, the checklist is best provided in a machine-readable tabular format (e.g., CSV/TSV) in addition to any formatted PDF supplement. To support machine-actionable capture and validation, a fillable eRNA-Min checklist template is provided as Supplementary File S1 (CSV; with optional schema/validation rules).
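As a sketch of how such a machine-readable checklist could be checked for completeness before submission, the example below reads a CSV and flags missing required columns or empty required cells; the column names and the file name are hypothetical placeholders rather than the official eRNA-Min field list.

```python
import csv

# Hypothetical required columns; the official eRNA-Min field list would replace these names.
REQUIRED_FIELDS = [
    "sample_id", "matrix", "collection_date", "time_to_stabilization_min",
    "preservation_method", "extraction_kit", "negative_control_type",
]

def check_checklist(path: str) -> list:
    """Return human-readable problems found in a checklist CSV (missing columns or empty cells)."""
    problems = []
    with open(path, newline="", encoding="utf-8") as handle:
        reader = csv.DictReader(handle)
        missing_cols = [name for name in REQUIRED_FIELDS if name not in (reader.fieldnames or [])]
        if missing_cols:
            return ["missing required columns: " + ", ".join(missing_cols)]
        for row_number, row in enumerate(reader, start=2):  # row 1 is the header
            for name in REQUIRED_FIELDS:
                if not (row.get(name) or "").strip():
                    problems.append(f"row {row_number}: empty required field '{name}'")
    return problems

if __name__ == "__main__":
    for problem in check_checklist("erna_min_checklist.csv"):  # hypothetical file name
        print(problem)
```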
Three components have been treated as sufficient for routine use. First, the main methods section has been used to describe the study design, sampling strategy, preservation and extraction approach, library strategy, and analysis overview at a level that allows the experimental intent to be understood without consulting supplementary files (Kelliher et al., 2025; Klymus et al., 2024; Mirzayi et al., 2021). Second, a supplementary eRNA-Min checklist table has been used to capture sample-by-sample and run-by-run fields (including negative-control identifiers, time-to-stabilization, batch structure, and explicit QC thresholds), because such details are rarely accommodated cleanly in narrative form yet are repeatedly requested during review (Brazma et al., 2001; Kelliher et al., 2025; Klymus et al., 2024; Mirzayi et al., 2021). A worked example demonstrating checklist completion (including biological samples and negative controls) is provided in Supplementary Table S2. Third, a data availability statement has been used to provide accession identifiers for raw reads and processed outputs, allowing immediate verification and re-analysis (Clough et al., 2024; Katz et al., 2022). Where journal policy permits, accession placeholders (or private access tokens/keys) should be included at submission so that reviewers can evaluate data and metadata during peer review.
Because eRNA studies frequently include multiple sample types, control streams, and processing batches, cross-referencing has been treated as a practical requirement rather than a stylistic preference. A single sample identifier has been expected to link the checklist, the repository sample record, and the sequencing record (BioSample/BioProject and run accessions where applicable), with negative controls included as first-class entries rather than being described only in prose (Barrett et al., 2012; Katz et al., 2022). Metadata that anchor the environmental context (notably collection date and location) have been expected to be retained in the repository record as well as in the checklist, consistent with INSDC spatiotemporal minimum standards introduced in 2023 that require at least “country” (geo_loc_name) and “collection date” for new BioSamples linked to INSDC sequence records unless a valid exemption/missingness reason is declared (Barrett et al., 2012; Karsch-Mizrachi et al., 2025; O’Cathail et al., 2025).
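The sketch below illustrates this cross-referencing idea under assumed column names: every checklist row is expected to carry a repository accession, and negative controls appear as their own rows rather than only in prose; the sample identifiers, sample-type labels, and accessions shown are dummy values.

```python
# Minimal sketch of a cross-reference check; column names, sample types, and accessions are
# illustrative dummy values, not required eRNA-Min vocabulary.
checklist_rows = [
    {"sample_id": "S01",  "sample_type": "environmental", "biosample_accession": "SAMN00000001"},
    {"sample_id": "S02",  "sample_type": "environmental", "biosample_accession": ""},  # missing link
    {"sample_id": "NC01", "sample_type": "field_blank",   "biosample_accession": "SAMN00000003"},
]

def cross_reference_report(rows):
    """Flag samples without accessions and confirm negative controls appear as first-class rows."""
    missing_accessions = [r["sample_id"] for r in rows if not r["biosample_accession"].strip()]
    negative_controls = [r["sample_id"] for r in rows if r["sample_type"] != "environmental"]
    return {
        "samples_missing_accession": missing_accessions,
        "negative_controls_listed": negative_controls,
        "controls_are_first_class": bool(negative_controls),
    }

print(cross_reference_report(checklist_rows))
```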
Depositions have been treated as part of “reporting,” because accession numbers are often requested during editorial processing and review, and reviewer-access mechanisms exist in some major omics repositories (for example, GEO provides secure reviewer tokens for private records) (Brazma et al., 2001; Clough et al., 2024). For expression-oriented studies, MIAME and MINSEQE expectations have been reflected in repository guidance that emphasizes availability of raw data, processed data used for conclusions (e.g., count matrices), essential sample annotation, and sufficient protocol description to interpret processing and normalization choices (Clough et al., 2024). GEO explicitly supports MIAME- and MINSEQE-compliant submissions and allows records to remain private until publication while still enabling reviewer access via token. For sequencing reads and environmental surveys, deposition in SRA/ENA/DRA has been treated as the default route, with BioProject and BioSample records required (or created during submission) to capture project-level and sample-level metadata (Barrett et al., 2012; Katz et al., 2022). For SRA submissions specifically, submitters can set or update a release date so that data remain private/embargoed until a chosen public-release timepoint (e.g., aligned to publication) (Katz et al., 2022). If reviewers need access before public release, authors should ensure that a reviewer-access route is available (e.g., repository token/key where supported, or a journal-approved confidential transfer mechanism), rather than assuming private sequencing-archive accessions are reviewable by default.
To avoid ambiguity in physiological or exposure-signature claims, a short “claim card” format has been treated as publishable content rather than optional annotation. Alignment has been maintained with established biomarker terminology in which the intended context of use is treated as central (FDA-NIH Biomarker Working Group, 2016), and with biomarker-reporting extensions of observational reporting practice that emphasize transparent description of measurement and analytical choices (Cagney et al., 2018; Cevallos & Egger, 2014; von Elm et al., 2007). When predictive or diagnostic-style performance has been reported, the metric has been expected to be matched to the claim (e.g., classification performance for a classifier, detection limits for a targeted assay), with the evidence tier stated explicitly (STARD 2015; TRIPOD; PROBAST). For targeted assays, reporting should make explicit whether limits of detection/quantification and validation status support the intended claim context (e.g., “screening,” “site comparison,” “deployment”).
5. Adoption Pathway and Governance
Broad uptake of minimum-information standards has typically been driven less by their technical merit than by how consistently they have been requested and enforced at submission and review (Heus et al., 2024; Simera et al., 2010). In a longitudinal survey of high-impact journals, endorsement of reporting guidelines in “instructions to authors” was shown to increase over time, while the way endorsement was operationalized (recommendation vs requirement; checklist requested or not) was found to vary substantially, with implications for compliance (Heus et al., 2024). Empirical evaluations have also reported that journal endorsement of reporting guidelines can be associated with more complete reporting, supporting the practical value of explicit endorsement plus implementation mechanisms (e.g., checklist submission) (Shamseer et al., 2012). Reporting guidelines have also been framed as consensus tools that remain effective only when periodic updates and post-publication implementation strategies are maintained, rather than being treated as static outputs (Simera et al., 2010).
In closely related environmental sequencing domains, a pattern has been established in which the standard has been packaged as a checklist that can accompany a published study and has been explicitly linked to FAIR-oriented reporting of methods, metadata, and archiving (Klymus et al., 2024). For environmental microbiome studies, a complementary model has been used in which broad community input has been coupled to a “living” resource that is expected to be updated by a consortium with continued consensus-building, and in which machine-actionable structure has been treated as part of the solution rather than an optional add-on (Kelliher et al., 2025). These precedents have suggested that eRNA-Min is most likely to be adopted when it is made easy to request (for journals and reviewers), easy to complete (for authors), and easy to audit (for readers and downstream re-users) (Kelliher et al., 2025; Klymus et al., 2024). In practice, uptake is accelerated when journals explicitly request completed checklists at submission (rather than only “encouraging” guideline consultation) and when tools reduce author burden by turning guidelines into structured, fillable checklists (The PLOS Medicine Editors, 2015).
Discoverability has also been treated as a practical determinant of adoption. Reporting guidelines have been curated and searchable through established libraries, and implementation examples have been maintained as editor-facing toolkits, which has reduced the effort required to integrate checklists into journal workflows (e.g., EQUATOR resources and checklist-to-tool initiatives) (Catalá-López et al., 2019). In parallel, registries that connect standards, repositories, and data policies have been used to make community standards visible to journals, funders, repositories, and researchers, and to track their adoption and recommendation status (McQuilton et al., 2020; Rinck, 2025; Sansone et al., 2019). In publisher-facing policy, checklist submission has been explicitly normalized in some venues, where supporting documentation has been requested at submission and authors have been directed to standard registries to locate specialized guidelines (Federer et al., 2018). These registry functions are especially relevant for a new domain standard (like eRNA-Min), because they provide a stable index entry, scope description, version record, and cross-links to repositories and related standards that journals can point to directly (Sansone et al., 2019). Under these conditions, eRNA-Min is expected to be adopted most rapidly when it is registered in common standards catalogues, linked from author instructions, and provided as a fillable supplement template that can be uploaded alongside the manuscript.
Governance has been treated as necessary because eRNA methods and use cases are changing quickly, and because uncontrolled proliferation of “local variants” would undermine comparability (Kelliher et al., 2025; Simera et al., 2010). Earlier coordination efforts for minimum-information checklists (e.g., MIBBI) were explicitly motivated by the same problem—many parallel “minimum information” initiatives emerging without coordination—highlighting the value of a shared governance model and cross-walks to neighboring standards (Taylor et al., 2008). A versioned release model has therefore been favored, in which each public release is citable and changes can be tracked transparently across versions (Peters, Kraker, Lex, Gumpenberger, & Gorraiz, 2017). DOI versioning has been used in general research infrastructure to support this approach, with a concept DOI representing the record as a whole and version-specific DOIs representing individual releases, enabling stable citation while still allowing iterative refinement (Peters et al., 2017). At the content level, the checklist itself has been treated as the stable “core”, while ecosystem- or application-specific additions have been treated as modular extensions that can be proposed, reviewed, and released without redefining the baseline (Kelliher et al., 2025; Simera et al., 2010).
Because “universal language” has been a central objective for cross-taxa and cross-ecosystem synthesis, machine-actionable metadata has been treated as part of governance rather than as an implementation detail. Minimum-information narratives have been recognized as human-readable but difficult to validate computationally, and the need for modular, reusable machine-readable models has been emphasized to support assessment of metadata completeness and standards-driven authoring (Batista, Gonzalez-Beltran, Sansone, & Rocca-Serra, 2022). Template-based approaches have also been proposed as a practical way for communities to encode domain standards into structured forms that can be reused across tools and repositories, thereby operationalizing FAIR expectations around rich metadata and domain-relevant standards (Jacobsen et al., 2020; Musen et al., 2022). In eRNA-Min, alignment with controlled vocabularies and identifiers has therefore been treated as a governance concern, because interoperability depends on stable, shared terms for environments, exposures, and taxa (Buttigieg, Morrison, Smith, Mungall, & Lewis, 2013; Jacobsen et al., 2020; Musen et al., 2022; Schoch et al., 2020). As a practical extension of this governance principle, providing an official machine-actionable template (e.g., a CEDAR template and/or a JSON/CSV schema with validation rules) helps prevent “drift” in field names and categories across labs while keeping the baseline checklist stable (Musen et al., 2022). A crosswalk between eRNA-Min fields and related reporting frameworks is provided in Table 4.
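To make the idea of schema-level control over field names and categories concrete, the sketch below validates one hypothetical checklist record against a small JSON-Schema-style definition using the third-party jsonschema package; the fields and controlled vocabularies shown are illustrative assumptions, not an official eRNA-Min schema.

```python
from jsonschema import ValidationError, validate  # third-party package: jsonschema

# Hypothetical schema fragment; an official eRNA-Min template would define the real fields and vocabularies.
ERNA_MIN_SCHEMA = {
    "type": "object",
    "required": ["sample_id", "matrix", "rna_class"],
    "properties": {
        "sample_id": {"type": "string", "minLength": 1},
        "matrix": {"enum": ["water", "sediment", "soil", "air", "biofilm"]},     # controlled categories
        "rna_class": {"enum": ["total_RNA", "mRNA_enriched", "rRNA_depleted"]},  # prevents label drift
    },
}

record = {"sample_id": "S01", "matrix": "water", "rna_class": "total_RNA"}

try:
    validate(instance=record, schema=ERNA_MIN_SCHEMA)
    print("record conforms to the (illustrative) schema")
except ValidationError as err:
    print("schema violation:", err.message)
```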
6. Conclusion
Environmental RNA has moved quickly from proof-of-concept work toward routine use in ecological assessment, where signals closer to biological activity and stress responses are often sought in addition to presence-based inference. At the same time, eRNA results remain unusually sensitive to pre-analytical handling, mixed-origin templates, and analytical provenance, which can blur the line between biological change and technical variation when reporting is incomplete. eRNA-Min has been presented as a minimum information standard intended to make environmental RNA findings interpretable, comparable, and reusable across organisms, communities, and ecosystems. The standard has been designed to fit common publication workflows by shifting item-level detail into a checklist while keeping the main text focused on study intent and core methodological choices. Compatibility with existing reporting resources has been treated as a design requirement, so that eRNA-Min can be applied where expression-linked interpretation and exposure or stress signatures are advanced, without duplicating established biodiversity- or microbiome-focused guidance. Adoption of eRNA-Min is expected to improve cross-study synthesis and independent validation, and to reduce ambiguity in biomarker-style claims by making context of use, controls, and provenance explicit.
Funding
The author received no specific funding for this work.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Acknowledgments
I sincerely thank Dr. Tamara Schenekar for the kind support with extensive knowledge sharing about the topic of environmental RNA research.
Conflicts of Interest
The author declares that he has no competing interests.
References
- Ahi, E. P.; Schenekar, T. The Promise of Environmental RNA Research Beyond mRNA. Molecular Ecology 2025, 34(12), e17787. [Google Scholar] [CrossRef]
- Ahi, E. P.; Singh, P. An emerging orchestrator of ecological adaptation: m6A regulation of post-transcriptional mechanisms. Molecular Ecology 2024, 17545. [Google Scholar] [CrossRef]
- Barrett, T.; Clark, K.; Gevorgyan, R.; Gorelenkov, V.; Gribov, E.; Karsch-Mizrachi, I.; Ostell, J. BioProject and BioSample databases at NCBI: facilitating capture and organization of metadata. Nucleic Acids Research 2012, 40(D1), D57–D63. [Google Scholar] [CrossRef]
- Batista, D.; Gonzalez-Beltran, A.; Sansone, S. A.; Rocca-Serra, P. Machine actionable metadata models. Scientific Data 2022, 9(1), 592. [Google Scholar] [CrossRef]
- Borchardt, M. A.; Boehm, A. B.; Salit, M.; Spencer, S. K.; Wigginton, K. R.; Noble, R. T. The Environmental Microbiology Minimum Information (EMMI) Guidelines: QPCR and dPCR Quality and Reporting for Environmental Microbiology. Environmental Science and Technology 2021, 55(15), 10210–10223. [Google Scholar] [CrossRef]
- Bossuyt, P. M.; Reitsma, J. B.; Bruns, D. E.; Gatsonis, C. A.; Glasziou, P. P.; Irwig, L.; Cohen, J. F. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ (Clinical Research Ed.) 2015, 351. [Google Scholar] [CrossRef]
- Brazma, A.; Ball, C.; Bumgarner, R.; Furlanello, C.; Miller, M.; Quackenbush, J.; Taylor, R. C. MINSEQE: Minimum Information about a high-throughput Nucleotide SeQuencing Experiment - a proposal for standards in functional genomic data reporting; Zenodo, 2012. [Google Scholar] [CrossRef]
- Brazma, A.; Hingamp, P.; Quackenbush, J.; Sherlock, G.; Spellman, P.; Stoeckert, C.; Vingron, M. Minimum information about a microarray experiment (MIAME)—toward standards for microarray data. Nature Genetics 2001, 29(4), 365–371. [Google Scholar] [CrossRef] [PubMed]
- Bunholi, I. V.; Foster, N. R.; Casey, J. M. Environmental DNA and RNA in aquatic community ecology: Toward methodological standardization. Environmental DNA 2023, 5(6), 1133–1147. [Google Scholar] [CrossRef]
- Bustin, S. A.; Benes, V.; Garson, J. A.; Hellemans, J.; Huggett, J.; Kubista, M.; Wittwer, C. T. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clinical Chemistry 2009, 55(4), 611–622. [Google Scholar] [CrossRef]
- Buttigieg, P. L.; Morrison, N.; Smith, B.; Mungall, C. J.; Lewis, S. E. The environment ontology: Contextualising biological and biomedical entities. Journal of Biomedical Semantics 2013, 4(1), 43. [Google Scholar] [CrossRef] [PubMed]
- Buttigieg, P. L.; Pafilis, E.; Lewis, S. E.; Schildhauer, M. P.; Walls, R. L.; Mungall, C. J. The environment ontology in 2016: Bridging domains with increased scope, semantic density, and interoperation. Journal of Biomedical Semantics 2016, 7(1), 1–12. [Google Scholar] [CrossRef]
- Cagney, D. N.; Sul, J.; Huang, R. Y.; Ligon, K. L.; Wen, P. Y.; Alexander, B. M. The FDA NIH Biomarkers, EndpointS, and other Tools (BEST) resource in neuro-oncology. Neuro-Oncology 2018, 20(9), 1162–1172. [Google Scholar] [CrossRef]
- Catalá-López, F.; Alonso-Arroyo, A.; Page, M. J.; Hutton, B.; Ridao, M.; Tabarés-Seisdedos, R.; Moher, D. Reporting guidelines for health research: protocol for a cross-sectional analysis of the EQUATOR Network Library. BMJ Open 2019, 9(3), e022769. [Google Scholar] [CrossRef]
- Cevallos, M.; Egger, M. STROBE (STrengthening the Reporting of Observational studies in Epidemiology); Guidelines for Reporting Health Research: A User’s Manual, 2014; pp. 169–179. [Google Scholar] [CrossRef]
- Clough, E.; Barrett, T.; Wilhite, S. E.; Ledoux, P.; Evangelista, C.; Kim, I. F.; Soboleva, A. NCBI GEO: archive for gene expression and epigenomics data sets: 23-year update. Nucleic Acids Research 2024, 52(D1), D138–D144. [Google Scholar] [CrossRef]
- Conesa, A.; Madrigal, P.; Tarazona, S.; Gomez-Cabrero, D.; Cervera, A.; McPherson, A.; Mortazavi, A. A survey of best practices for RNA-seq data analysis. Genome Biology 2016, 17(1), 13. [Google Scholar] [CrossRef] [PubMed]
- Dahl, M. B.; Brachmann, S.; Söllinger, A.; Schnell, M.; Ahlers, L.; Wutkowska, M.; Urich, T. Quantifying Soil Microbiome Abundance by Metatranscriptomics and Complementary Molecular Techniques—Cross-Validation and Perspectives. Molecular Ecology Resources 2025, 25(7), e14130. [Google Scholar] [CrossRef] [PubMed]
- FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) Resource. Food and Drug Administration (US), 2016. Available online: https://www.ncbi.nlm.nih.gov/books/NBK326791/.
- Federer, L. M.; Belter, C. W.; Joubert, D. J.; Livinski, A.; Lu, Y. L.; Snyders, L. N.; Thompson, H. Data sharing in PLOS ONE: An analysis of Data Availability Statements. PLOS ONE 2018, 13(5), e0194768. [Google Scholar] [CrossRef] [PubMed]
- Füllgrabe, A.; George, N.; Green, M.; Nejad, P.; Aronow, B.; Fexova, S. K.; Papatheodorou, I. Guidelines for reporting single-cell RNA-seq experiments. Nature Biotechnology 2020, 38(12), 1384–1386. [Google Scholar] [CrossRef]
- Gant, T. W.; Sauer, U. G.; Zhang, S. D.; Chorley, B. N.; Hackermüller, J.; Perdichizzi, S.; Poole, A. A generic Transcriptomics Reporting Framework (TRF) for ‘omics data processing and analysis. Regulatory Toxicology and Pharmacology 2017, 91, S36–S45. [Google Scholar] [CrossRef]
- Giroux, M. S.; Reichman, J. R.; Langknecht, T.; Burgess, R. M.; Ho, K. T. Environmental RNA as a Tool for Marine Community Biodiversity Assessments. Scientific Reports 2022, 12(1), 1–13. [Google Scholar] [CrossRef]
- Glover, C. N.; Veilleux, H. D.; Misutka, M. D. Commentary: Environmental RNA and the assessment of organismal function in the field. Comparative Biochemistry and Physiology Part B: Biochemistry and Molecular Biology 2025, 275, 111036. [Google Scholar] [CrossRef]
- Goldberg, C. S.; Turner, C. R.; Deiner, K.; Klymus, K. E.; Thomsen, P. F.; Murphy, M. A.; Taberlet, P. Critical considerations for the application of environmental DNA methods to detect aquatic species. Methods in Ecology and Evolution 2016, 7(11), 1299–1307. [Google Scholar] [CrossRef]
- Gou, X.; Liu, X.; Su, X.; Ji, H.; Wang, Q.; Zhang, X. Water Environmental RNA Reveals Dose-Dependent Toxicant Responses in Fish. Environmental Science and Technology 2025, 59(46), 24717–24727. [Google Scholar] [CrossRef]
- Greco, M.; Lejzerowicz, F.; Reo, E.; Caruso, A.; Maccotta, A.; Coccioni, R.; Frontalini, F. Environmental RNA outperforms eDNA metabarcoding in assessing impact of marine pollution: A chromium-spiked mesocosm test. Chemosphere 2022, 298, 134239. [Google Scholar] [CrossRef]
- Harrill, J. A.; Viant, M. R.; Yauk, C. L.; Sachana, M.; Gant, T. W.; Auerbach, S. S.; Whelan, M. Progress towards an OECD reporting framework for transcriptomics and metabolomics in regulatory toxicology. Regulatory Toxicology and Pharmacology 2021, 125, 105020. [Google Scholar] [CrossRef]
- He, X.; Maruki, T.; Morgado-Gamero, W. B.; Barrett, R. D. H.; Fugère, V.; Fussmann, G. F.; Cristescu, M. E. Environmental RNA-Based Metatranscriptomics as a Novel Biomonitoring Tool: A Case Study of Glyphosate-Based Herbicide Effects on Freshwater Eukaryotic Communities. Molecular Ecology 2025, 34(22), e70164. [Google Scholar] [CrossRef]
- Hechler, R. M.; Cristescu, M. E. Revealing population demographics with environmental RNA. Molecular Ecology Resources 2024, 24(4), e13951. [Google Scholar] [CrossRef] [PubMed]
- Hechler, R. M.; Yates, M. C.; Chain, F. J. J.; Cristescu, M. E. Environmental transcriptomics under heat stress: Can environmental RNA reveal changes in gene expression of aquatic organisms? Molecular Ecology 2023, 00, 1–15. [Google Scholar] [CrossRef] [PubMed]
- Heus, P.; Idema, D. L.; Kruithof, E.; Damen, J. A. A. G.; Verhoef-Jurgens, M. S.; Reitsma, J. B.; Hooft, L. Increased endorsement of TRIPOD and other reporting guidelines by high impact factor journals: survey of instructions to authors. Journal of Clinical Epidemiology 2024, 165, 111188. [Google Scholar] [CrossRef] [PubMed]
- Hiki, K.; Yamagishi, T.; Yamamoto, H. Environmental RNA as a Noninvasive Tool for Assessing Toxic Effects in Fish: A Proof-of-concept Study Using Japanese Medaka Exposed to Pyrene. Environmental Science and Technology 2023, 57(34), 12654–12662. [Google Scholar] [CrossRef]
- Jacobsen, A.; Azevedo, R. de M.; Juty, N.; Batista, D.; Coles, S.; Cornet, R.; Schultes, E. FAIR Principles: Interpretations and Implementation Considerations. Data Intelligence 2020, 2(1–2), 10–29. [Google Scholar] [CrossRef]
- Jo, T. S. Methodological considerations for aqueous environmental RNA collection, preservation, and extraction. Analytical Sciences 2023, 39(10), 1711–1718. [Google Scholar] [CrossRef]
- Jo, T.; Tsuri, K.; Hirohara, T.; Yamanaka, H. Warm temperature and alkaline conditions accelerate environmental RNA degradation. Environmental DNA 2023, 5(5), 836–848. [Google Scholar] [CrossRef]
- Kagzi, K.; Hechler, R. M.; Fussmann, G. F.; Cristescu, M. E. Environmental RNA degrades more rapidly than environmental DNA across a broad range of pH conditions. Molecular Ecology Resources 2022, 22(7), 2640–2650. [Google Scholar] [CrossRef] [PubMed]
- Karsch-Mizrachi, I.; Arita, M.; Burdett, T.; Cochrane, G.; Nakamura, Y.; Pruitt, K. D.; Schneider, V. A. The international nucleotide sequence database collaboration (INSDC): enhancing global participation. Nucleic Acids Research 2025, 53(D1), D62–D66. [Google Scholar] [CrossRef]
- Katz, K.; Shutov, O.; Lapoint, R.; Kimelman, M.; Rodney Brister, J.; O’Sullivan, C. The Sequence Read Archive: a decade more of explosive growth. Nucleic Acids Research 2022, 50(D1), D387–D390. [Google Scholar] [CrossRef]
- Kelliher, J. M.; Mirzayi, C.; Bordenstein, S. R.; Oliver, A.; Kellogg, C. A.; Hatcher, E. L.; Eloe-Fadrosh, E. A. STREAMS guidelines: standards for technical reporting in environmental and host-associated microbiome studies. Nature Microbiology 2025, 10(12), 1–10. [Google Scholar] [CrossRef]
- Klymus, K. E.; Baker, J. D.; Abbott, C. L.; Brown, R. J.; Craine, J. M.; Gold, Z.; Theroux, S. The MIEM guidelines: Minimum information for reporting of environmental metabarcoding data. Metabarcoding and Metagenomics 2024, 8, 489–518. [Google Scholar] [CrossRef]
- Klymus, K. E.; Merkes, C. M.; Allison, M. J.; Goldberg, C. S.; Helbing, C. C.; Hunter, M. E.; Richter, C. A. Reporting the limits of detection and quantification for environmental DNA assays. Environmental DNA 2020, 2(3), 271–282. [Google Scholar] [CrossRef]
- Laroche, O.; Wood, S. A.; Tremblay, L. A.; Lear, G.; Ellis, J. I.; Pochon, X. Metabarcoding monitoring analysis: The pros and cons of using co-extracted environmental DNA and RNA data to assess offshore oil production impacts on benthic communities. PeerJ 2017, 2017(5), e3347. [Google Scholar] [CrossRef] [PubMed]
- Littlefair, J. E.; Rennie, M. D.; Cristescu, M. E. Environmental nucleic acids: A field-based comparison for monitoring freshwater habitats using eDNA and eRNA. Molecular Ecology Resources 2022, 22(8), 2928. [Google Scholar] [CrossRef]
- Macher, T. H.; Arle, J.; Beermann, A. J.; Frank, L.; Hupało, K.; Koschorreck, J.; Leese, F. Is it worth the extra mile? Comparing environmental DNA and RNA metabarcoding for vertebrate and invertebrate biodiversity surveys in a lowland stream. PeerJ 2024, 12(10), e18016. [Google Scholar] [CrossRef]
- Marshall, N. T.; Vanderploeg, H. A.; Chaganti, S. R. Environmental (e)RNA advances the reliability of eDNA by predicting its age. Scientific Reports 2021, 11(1), 1–11. [Google Scholar] [CrossRef]
- McQuilton, P.; Batista, D.; Beyan, O.; Granell, R.; Coles, S.; Izzo, M.; Sansone, S. A. Helping the Consumers and Producers of Standards, Repositories and Policies to Enable FAIR Data. Data Intelligence 2020, 2(1–2), 151–157. [Google Scholar] [CrossRef]
- Mirzayi, C.; Renson, A.; Furlanello, C.; Sansone, S. A.; Zohra, F.; Elsafoury, S.; Waldron, L. Reporting guidelines for human microbiome research: the STORMS checklist. Nature Medicine 2021, 27(11), 1885–1892. [Google Scholar] [CrossRef] [PubMed]
- Miyata, K.; Inoue, Y.; Amano, Y.; Nishioka, T.; Nagaike, T.; Kawaguchi, T.; Honda, H. Comparative environmental RNA and DNA metabarcoding analysis of river algae and arthropods for ecological surveys and water quality assessment. Scientific Reports 2022, 12(1), 19828. [Google Scholar] [CrossRef]
- Moons, K. G. M.; Altman, D. G.; Reitsma, J. B.; Ioannidis, J. P. A.; Macaskill, P.; Steyerberg, E. W.; Collins, G. S. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration. Annals of Internal Medicine 2015, 162(1), W1–W73. [Google Scholar] [CrossRef] [PubMed]
- Morgado-Gamero, W. B.; Tournayre, O.; Cristescu, M. E. Comparative Decay Dynamics and Detectability of eDNA and eRNA in Connected and Isolated Freshwater Mesocosms Using Digital PCR. Molecular Ecology Resources 2025, 25(8), e70028. [Google Scholar] [CrossRef] [PubMed]
- Musen, M. A.; O’Connor, M. J.; Schultes, E.; Martínez-Romero, M.; Hardi, J.; Graybeal, J. Modeling community standards for metadata as templates makes data FAIR. Scientific Data 2022, 9(1), 696. [Google Scholar] [CrossRef]
- O’Cathail, C.; Ahamed, A.; Burgin, J.; Cummins, C.; Devaraj, R.; Gueye, K.; Cochrane, G. The European Nucleotide Archive in 2024. Nucleic Acids Research 2025, 53(D1), D49–D55. [Google Scholar] [CrossRef]
- Peters, I.; Kraker, P.; Lex, E.; Gumpenberger, C.; Gorraiz, J. I. Zenodo in the Spotlight of Traditional and New Metrics. Frontiers in Research Metrics and Analytics 2017, 2, 318745. [Google Scholar] [CrossRef]
- Rinck, G. Models of Data Sharing and Best Practices. Data Privacy, Data Property, and Data Sharing 2025, 157–171. [Google Scholar] [CrossRef]
- Saarimäki, L. A.; Melagraki, G.; Afantitis, A.; Lynch, I.; Greco, D. Prospects and challenges for FAIR toxicogenomics data. Nature Nanotechnology 2021, 17(1), 17–18. [Google Scholar] [CrossRef]
- Sandve, G. K.; Nekrutenko, A.; Taylor, J.; Hovig, E. Ten Simple Rules for Reproducible Computational Research. PLOS Computational Biology 2013, 9(10), e1003285. [Google Scholar] [CrossRef] [PubMed]
- Sansone, S. A.; McQuilton, P.; Rocca-Serra, P.; Gonzalez-Beltran, A.; Izzo, M.; Lister, A. L.; Thurston, M. FAIRsharing as a community approach to standards, repositories and policies. Nature Biotechnology 2019, 37(4), 358–367. [Google Scholar] [CrossRef] [PubMed]
- Schoch, C. L.; Ciufo, S.; Domrachev, M.; Hotton, C. L.; Kannan, S.; Khovanskaya, R.; Karsch-Mizrachi, I. NCBI Taxonomy: a comprehensive update on curation, resources and tools; Database: The Journal of Biological Databases and Curation, 2020. [Google Scholar] [CrossRef]
- Sepulveda, A.; Hutchins, P. R.; Forstchen, M.; Mckeefry, M.; Swigris, A. M. The elephant in the lab (and field): Contamination in aquatic environmental DNA studies. Frontiers in Ecology and Evolution 2020, 8. [Google Scholar] [CrossRef]
- Shamseer, L.; Stevens, A.; Skidmore, B.; Turner, L.; Altman, D. G.; Hirst, A.; Moher, D. Does journal endorsement of reporting guidelines influence the completeness of reporting of health research? A systematic review protocol. Systematic Reviews 2012, 1(1), 24. [Google Scholar] [CrossRef] [PubMed]
- Simera, I.; Moher, D.; Hirst, A.; Hoey, J.; Schulz, K. F.; Altman, D. G. Transparent and accurate reporting increases reliability, utility, and impact of your research: Reporting guidelines and the EQUATOR Network. BMC Medicine 2010, 8(1), 24. [Google Scholar] [CrossRef]
- Stevens, J. D.; Parsley, M. B. Environmental RNA applications and their associated gene targets for management and conservation. Environmental DNA 2023, 5(2), 227–239. [Google Scholar] [CrossRef]
- Su, Z.; Łabaj, P. P.; Li, S.; Thierry-Mieg, J.; Thierry-Mieg, D.; Shi, W.; Shi, L. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control Consortium. Nature Biotechnology 2014, 32(9), 903–914. [Google Scholar] [CrossRef]
- Takahashi, M.; Frøslev, T. G.; Paupério, J.; Thalinger, B.; Klymus, K.; Helbing, C. C.; Berry, O. A Metadata Checklist and Data Formatting Guidelines to Make eDNA FAIR (Findable, Accessible, Interoperable, and Reusable). Environmental DNA 2025, 7(3), e70100. [Google Scholar] [CrossRef]
- Taylor, C. F.; Field, D.; Sansone, S. A.; Aerts, J.; Apweiler, R.; Ashburner, M.; Wiemann, S. Promoting coherent minimum reporting guidelines for biological and biomedical investigations: the MIBBI project. Nature Biotechnology 2008, 26(8), 889–896. [Google Scholar] [CrossRef] [PubMed]
- Thalinger, B.; Deiner, K.; Harper, L. R.; Rees, H. C.; Blackman, R. C.; Sint, D.; Bruce, K. A validation scale to determine the readiness of environmental DNA assays for routine species monitoring. Environmental DNA 2021, 3(4), 823–836. [Google Scholar] [CrossRef]
- The PLOS Medicine Editors. From Checklists to Tools: Lowering the Barrier to Better Research Reporting. PLOS Medicine 2015, 12(11), e1001910. [Google Scholar] [CrossRef]
- Tsuri, K.; Ikeda, S.; Hirohara, T.; Shimada, Y.; Minamoto, T.; Yamanaka, H. Messenger RNA typing of environmental RNA (eRNA): A case study on zebrafish tank water with perspectives for the future development of eRNA analysis on aquatic vertebrates. Environmental DNA 2021, 3(1), 14–21. [Google Scholar] [CrossRef]
- Veilleux, H. D.; Misutka, M. D.; Glover, C. N. Environmental DNA and environmental RNA: Current and prospective applications for biological monitoring. Science of The Total Environment 2021, 782, 146891. [Google Scholar] [CrossRef] [PubMed]
- von Ammon, U.; Wood, S. A.; Laroche, O.; Zaiko, A.; Lavery, S. D.; Inglis, G. J.; Pochon, X. Linking Environmental DNA and RNA for Improved Detection of the Marine Invasive Fanworm Sabella spallanzanii. Frontiers in Marine Science 2019, 6, 483759. [Google Scholar] [CrossRef]
- von Elm, E.; Altman, D. G.; Egger, M.; Pocock, S. J.; Gøtzsche, P. C.; Vandenbroucke, J. P. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 2007, 370(9596), 1453–1457. [Google Scholar] [CrossRef]
- Wang, F.; Xiong, W.; Huang, X.; Zhan, A. Selecting Competent Reverse Transcription Strategies to Maximise Biodiversity Recovery With eRNA Metabarcoding. Molecular Ecology Resources 2025, 25(6), e14092. [Google Scholar] [CrossRef]
- Wemheuer, B.; Güllert, S.; Billerbeck, S.; Giebel, H. A.; Voget, S.; Simon, M.; Daniel, R. Impact of a phytoplankton bloom on the diversity of the active bacterial community in the southern North Sea as revealed by metatranscriptomic approaches. FEMS Microbiology Ecology 2014, 87(2), 378–389. [Google Scholar] [CrossRef]
- Whale, A. S.; De Spiegelaere, W.; Trypsteen, W.; Nour, A. A.; Bae, Y. K.; Benes, V.; Huggett, J. F. The Digital MIQE Guidelines Update: Minimum Information for Publication of Quantitative Digital PCR Experiments for 2020. Clinical Chemistry 2020, 66(8), 1012–1029. [Google Scholar] [CrossRef]
- Wilkinson, M. D.; Dumontier, M.; Aalbersberg, Ij. J.; Appleton, G.; Axton, M.; Baak, A.; Mons, B. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 2016, 3(1), 1–9. [Google Scholar] [CrossRef] [PubMed]
- Wolff, R. F.; Moons, K. G. M.; Riley, R. D.; Whiting, P. F.; Westwood, M.; Collins, G. S.; Mallett, S. PROBAST: A tool to assess the risk of bias and applicability of prediction model studies. Annals of Internal Medicine 2019, 170(1), 51–58. [Google Scholar] [CrossRef]
- Wood, S. A.; Biessy, L.; Latchford, J. L.; Zaiko, A.; von Ammon, U.; Audrezet, F.; Pochon, X. Release and degradation of environmental DNA and RNA in a marine system. Science of The Total Environment 2020, 704, 135314. [Google Scholar] [CrossRef]
- Yates, M. C.; Derry, A. M.; Cristescu, M. E. Environmental RNA: A Revolution in Ecological Resolution? Trends in Ecology and Evolution 2021, 36(7), 601–609. [Google Scholar] [CrossRef]
- Yilmaz, P.; Kottmann, R.; Field, D.; Knight, R.; Cole, J. R.; Amaral-Zettler, L.; Glöckner, F. O. Minimum information about a marker gene sequence (MIMARKS) and minimum information about any (x) sequence (MIxS) specifications. Nature Biotechnology 2011, 29(5), 415–420. [Google Scholar] [CrossRef]
- Zaiko, A.; Pochon, X.; Garcia-Vazquez, E.; Olenin, S.; Wood, S. A. Advantages and limitations of environmental DNA/RNA tools for marine biosecurity: Management and surveillance of non-indigenous species. Frontiers in Marine Science 2018, 5, 400559. [Google Scholar] [CrossRef]
- Zhao, S.; Zhang, Y.; Gamini, R.; Zhang, B.; Von Schack, D. Evaluation of two main RNA-seq approaches for gene quantification in clinical RNA sequencing: polyA+ selection versus rRNA depletion. Scientific Reports 2018, 8(1), 4781. [Google Scholar] [CrossRef] [PubMed]
- Zou, N.; Wang, S.; Qiu, W.; Kong, W.; Wang, G.; Wang, S. Environmental RNA as a transformative tool for aquatic ecosystem health assessment: progress and challenges. Ecological Indicators 2025, 180, 114328. [Google Scholar] [CrossRef]
Table 4.
Crosswalk: eRNA-Min vs examples of existing reporting frameworks. Legend: ✓ = explicitly covered, ◐ = partially/indirectly covered, — = not addressed / out of scope.
| eRNA-Min element (selected) | MIEM (metabarcoding) | STREAMS (microbiome) | MIxS/MIMARKS (contextual metadata) | MIQE family (MIQE/dMIQE/EMMI; PCR reporting) | MINSEQE (HTS minimum info) | Claim/performance frameworks (BEST glossary; STARD/TRIPOD reporting; PROBAST) | eRNA-Min unique emphasis |
|---|---|---|---|---|---|---|---|
| Study aim + inference level (species-resolved vs community-level) | ◐ | ◐ | — | ◐ | ✓ | — | Forces match between design and claim level |
| Context of use declared (taxa, matrix, stressor/exposure, decision use) | ◐ | ◐ | ◐ | — | ◐ | ✓ (intended use / model role analogues) | Makes claims portable/testable |
| Exposure/comparator definition + confounders considered | — | ◐ | — | ◐ | ◐ | ◐ | Prevents “DE = biomarker” overreach |
| Matrix + sampling unit + volume/mass | ✓ | ◐ | ✓ | ✓ | ◐ | — | Baseline comparability |
| Capture method/materials (incl. filtration steps where used) | ✓ | ◐ | ◐ | — | ◐ | — | eRNA recovery is capture-dependent |
| Preservation + storage conditions + storage duration | ✓ | ◐ | — | ✓ | ◐ | — | Central eRNA vulnerability |
| Time-to-stabilization (collection→stabilization timestamp) | — | — | — | — | — | — | Key eRNA-specific “missing” field |
| Environmental covariates (temp/pH etc. when relevant) | ✓ | ◐ | ✓ | — | — | — | Needed to interpret decay/turnover |
| DNase strategy + DNA carryover evaluation + no-RT controls (where applicable) | ✓ (for eRNA) | — | — | ✓ | — | — | Critical in RT workflows |
| Workflow-wide negatives (field/site/process; extraction; PCR) as first-class samples | ✓ | ◐ | — | ✓ | — | — | Contamination-aware interpretation |
| Inhibition testing/mitigation (and decision rule) | ◐ | — | — | ✓ | — | — | Avoids false absences / biased quantification |
| Target RNA class (total RNA vs poly(A) vs rRNA/small/viral etc.) | ◐ | ◐ | — | — | ◐ | — | Makes “what was measured” explicit |
| Enrichment/depletion strategy (rRNA depletion, poly(A), etc.) | ✓ (RNA enrichment methods) | ◐ | — | — | ◐ | — | Drives interpretability (host vs background) |
| Sequencing platform/config + reads produced (depth proxy) | ✓ | ✓ | ◐ | — | ✓ | — | Reproducibility |
| Read composition summary (rRNA %, host %, mapping yield) | — | — | — | — | — | — | eRNA-specific QC/interpretability axis |
| Software versions + key parameters + reference DB versions | ✓ | ◐ | — | ◐ | ✓ | — | Prevents “same name, different results” |
| Executable provenance (container/commit) + sensitivity checks | ◐ (code archiving) | ◐ | — | — | ◐ | — | Enables re-execution and robustness |
| Raw reads + processed outputs used for conclusions deposited | ✓ | ◐ | — | — | ✓ | — | Makes study re-analysable |
| Discovery vs verification/validation separated + claim card (evidence tier + matched metric) | — | — | — | ◐ (assay validation norms) | — | ◐ (strong analogues in STARD/TRIPOD) | Core novelty: structured claim reporting |
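To illustrate how the elements crosswalked in Table 4 could be operationalized in practice, the sketch below shows one possible machine-readable form of an eRNA-Min record together with a trivial completeness check. The field names, the REQUIRED_ELEMENTS list, and the example values are illustrative assumptions only; they are not part of any published eRNA-Min schema or of the existing standards cited above.

```python
# Minimal, hypothetical sketch of an eRNA-Min-style metadata record.
# Field names are illustrative only and do not correspond to any published schema.

REQUIRED_ELEMENTS = [
    "study_aim_inference_level",          # species-resolved vs community-level
    "context_of_use",                     # taxa, matrix, stressor/exposure, decision use
    "matrix_sampling_unit_volume",
    "capture_method",
    "preservation_storage",
    "time_to_stabilization_min",          # collection→stabilization interval
    "dnase_and_no_rt_controls",
    "workflow_negative_controls",
    "target_rna_class",                   # total RNA, poly(A), rRNA, small, viral, ...
    "enrichment_depletion_strategy",
    "sequencing_platform_and_depth",
    "read_composition_summary",           # rRNA %, host %, mapping yield
    "software_and_reference_db_versions",
    "raw_and_processed_data_accessions",
    "claim_card",                         # evidence tier + matched performance metric
]


def missing_elements(record: dict) -> list:
    """Return the eRNA-Min elements that are absent or left empty in a record."""
    return [key for key in REQUIRED_ELEMENTS if not record.get(key)]


# Deliberately incomplete example record (hypothetical values).
example_record = {
    "study_aim_inference_level": "community-level functional response",
    "context_of_use": "freshwater mesocosm; thermal stress; early-warning screening",
    "time_to_stabilization_min": 12,
    "read_composition_summary": {"rRNA_pct": 83.4, "host_pct": 0.7, "mapped_pct": 41.2},
    "claim_card": {"evidence_tier": "discovery", "metric": "none (hypothesis-generating)"},
}

if __name__ == "__main__":
    # Reports which checklist elements still need to be documented before submission.
    print("Unreported elements:", missing_elements(example_record))
```

Such a record could, in principle, be deposited alongside raw reads and processed outputs so that completeness against the checklist can be verified automatically rather than by manual inspection.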
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).