Preprint
Article

This version is not peer-reviewed.

A Foundational Framework and Benchmarking Methodology for Observer-Dependent Entropy Retrieval in Climate Science

Submitted:

28 March 2025

Posted:

31 March 2025

Read the latest preprint version here

Abstract
Climate forecasting models typically assume homogeneous information processing across all stakeholders, neglecting critical real-world retrieval asymmetries. We introduce Observer-Dependent Entropy Retrieval (ODER), a mathematically rigorous framework that redefines climate forecasting not solely as a predictive exercise but as an observer-dependent retrieval process, reframing uncertainty and risk as functions of latency, hierarchy, and actor-specific access. Using a Bayesian–Markovian formulation, ODER provides a structure for incorpo- rating observer-specific entropy retrieval into forecasting. This paper presents a foundational framework with conceptual demonstrations and proposed validation pathways. We outline a benchmarking methodology and define proxies for empirical calibration that could connect theory to practice. ODER transforms climate forecasting by making retrievability—not just predictive accuracy—central to early warning and decision-making.
Keywords: 
;  ;  ;  ;  ;  ;  ;  ;  ;  

1. Introduction

Climate forecasting faces significant uncertainty due to tipping points, delayed responses, and adaptation failures. Standard models employ data assimilation techniques such as Ensemble Kalman Filters (EnKF) and variational methods (e.g., 4D-Var), implicitly assuming all observers (scientists, policymakers, communities) process and update information identically. This assumption critically undermines forecast utility, as demonstrated by [1], who found significant gaps between forecast quality and decision-making efficacy.
Recent developments in information theory and non-equilibrium thermodynamics suggest that entropy retrieval is observer-dependent. Drawing from quantum information theory and thermodynamics, we propose that the entropy of climate state distributions is not static but evolves differently depending on the observer’s information access and processing capacity.
This paper’s contributions include:
  • A formal mathematical framework for Observer-Dependent Entropy Retrieval (ODER)
  • Potential proxies for application in operational forecasting
  • A benchmarking methodology showing how the framework could be evaluated
  • A computational implementation strategy compatible with existing climate modeling infrastructure
Rather than incrementally improving a single assimilation pipeline, ODER shifts the object of modeling from global state estimation to actor-specific retrievability of usable signal within a decision window. This represents a fundamental reframing of climate forecasting that parallels other paradigm shifts in science—moving from information quantity to information accessibility as the limiting factor in effective climate decision-making.
While traditional models account for data latency in assimilation pipelines, they do not explicitly model which actors receive usable information, when, or how delays shape retrieval failure. As [2] show, even well-calibrated forecasts often fail to prompt timely public or institutional action due to perceptual, procedural, or communicative bottlenecks. ODER formalizes these retrieval dynamics, defining forecasts not only by accuracy but by observability across decision windows. ODER’s core insight is that forecast entropy must include not just what is known, but what is retrievable by whom and when.
The paper is organized as follows: Section 2 introduces the mathematical framework; Section 3 details benchmarking methodology; Section 4 defines real-world metrics; Section 5 addresses computational considerations; Section 6 presents an Arctic sea ice implementation; Section 7 compares ODER with latency-only models; Section 8 outlines experimental validation; Section 9 discusses limitations and implications; and Section 10 concludes.

2. Mathematical Framework for Observer-Dependent Climate Entropy

2.1. Defining Entropy of Climate State Distributions

We begin with the Shannon entropy formulation applied to probabilistic representations of climate states:
S climate = i P i log P i
where P i denotes the probability of the i-th climate state (e.g., temperature anomalies, ice mass, greenhouse gas concentration). This standard formulation quantifies uncertainty but neglects observer-specific information processing.

2.2. Observer-Dependent Entropy Retrieval

Different observers—policymakers, scientists, communities—retrieve climate information at varying rates. To capture overlapping roles (e.g., a scientist-advisor), we introduce weighting function w j for observer role j. The observer-dependent entropy function is:
S obs ( τ ) = j w j i P obs , i j ( τ ) log P obs , i j ( τ ) + 0 τ f ( L hier ( t ) , I trans ( t ) ) d t
where:
  • j indexes distinct observer roles or profiles
  • w j reflects the relative influence of role j
  • P obs , i j ( τ ) is the Bayesian posterior probability distribution over climate states for observer j
  • The integral captures hierarchical complexity in data assimilation ( L hier ( t ) , measured in bits lost per institutional layer) and information transfer efficiency ( I trans ( t ) , normalized latency penalty [0-1])
The transition function f ( · ) that links institutional and information transfer properties to entropy retrieval has the explicit form:
f ( L hier , I trans ) = A α L hier ( t ) + β I trans ( t ) δ · exp τ τ 0
where:
  • A , τ 0 are normalization constants (units: bits/time)
  • δ is a nonlinearity exponent (dimensionless; δ = 1 for linear coupling)
  • The exponential decay term models "forgetting" in institutional memory (e.g., outdated policies)
The function f ( · ) encodes how institutional hierarchies ( L hier ) and data lags ( I trans ) jointly impede information flow. For example:
  • High L hier (e.g., data passing through multiple agencies) linearly scales entropy retrieval time
  • Low I trans (e.g., weekly model updates) causes exponential delays due to stale inputs
  • The δ exponent allows superlinear penalties when hierarchies and latency interact (e.g., bureaucratic inertia compounding delays)

2.2.1. Observer Classification Heuristics

Observer roles can be classified using several parameterization approaches to determine appropriate weighting functions w j :
  • Institutional position: Primary data generators (e.g., NASA satellite operators, ECMWF) vs. intermediate processors (e.g., national weather services) vs. end users (e.g., municipal emergency managers)
  • Update frequency: Daily operational forecasters (e.g., NOAA Storm Prediction Center) vs. monthly assessment bodies (e.g., national climate centers) vs. annual report generators (e.g., IPCC reports)
  • Decision authority: Executive decision-makers (e.g., emergency management agencies) vs. advisory scientists (e.g., climate science advisors) vs. public communicators (e.g., meteorological services)
  • Network centrality: Data hubs with high connectivity (e.g., World Meteorological Organization) vs. peripheral consumers with limited information sources (e.g., local planning departments)
Observers may span roles—for instance, an academic modeler contributing to IPCC reports may function as both an intermediate processor and a public communicator. As [3] established in bounded rationality theory and [2] demonstrated in forecast usability studies, these classifications create distinct information processing regimes that significantly affect decision outcomes independent of forecast accuracy.

2.3. Climate Tipping Points as Entropy Discontinuities

Climate tipping points represent nonlinear transitions interpretable as discontinuities in entropy retrieval. For example, when Arctic sea ice extent drops below critical thresholds, forecast uncertainty undergoes rapid, nonlinear changes. We model these transitions with:
d S retrieved d τ = γ ( τ ) ( S max S retrieved ) f τ τ Page
where:
  • S retrieved is cumulative entropy information accessible to the observer
  • S max denotes maximum retrievable entropy at equilibrium
  • γ ( τ ) is a rate factor for observer-specific information assimilation
  • f ( τ / τ Page ) governs entropy retrieval pace over characteristic timescale τ Page
Near tipping points ( γ ( τ ) 1 ), small changes in f ( · ) ’s output dramatically alter d S retrieved / d τ . This explains why ODER’s observer-specific delays can make tipping points "visible" to some actors (e.g., daily-updated models) but not others (e.g., monthly reports).
Retrieval failure event: When a tipping point signal exists but remains undetected within the relevant decision window of one or more observer classes. Such failures reflect breakdown in timely signal retrieval rather than model inaccuracy.

3. Benchmarking ODER Against Traditional Models

3.1. Traditional Climate Model Uncertainty

Standard climate models quantify uncertainty using data assimilation methods:
S climate ( t ) = i P i ( t ) log P i ( t )
This approach assumes homogeneous information retrieval across all observers, neglecting critical real-world variations.

3.2. Benchmarking Methodology for Observer-Dependent Retrieval

To evaluate ODER’s potential forecasting advantages, we propose the following benchmarking protocol:
1.
Quantify standard uncertainty: Compute S climate ( t ) using EnKF and 4D-Var applied to data from sources such as NASA Sea Ice Concentration Climate Data Record v4 and ECMWF ERA5 reanalysis.
2.
Implement observer-dependent retrieval: Apply S obs ( τ ) incorporating realistic delays by measuring L hier ( t ) and I trans ( t ) from institutional update patterns.
3.
Analyze tipping point detection: Calculate performance metrics including:
  • Brier Score improvement:
    B S = 1 N t = 1 N ( f t o t ) 2
    where f t is forecast probability and o t is observed outcome.
  • RMSE reduction in forecast uncertainty
  • Anomaly Correlation Coefficient for detection timing accuracy
  • Statistical significance using bootstrap confidence intervals for variance reduction
Theoretical test cases for Arctic sea ice indicate potential for 0.05 Brier Score improvement and 20% reduction in forecast variance. These conceptual scenarios illustrate how the framework could perform when implemented.
4.
Validation pathway: Future work would compare against historical records from NASA Earth Observatory, CALFIRE, and satellite-derived extreme precipitation events, using both traditional statistics and the AI-driven simulation approach described in Section 8.
Note: The performance metrics presented in this paper represent conceptual scenarios using theoretical values. Their purpose is to demonstrate the framework’s potential capabilities when fully implemented with real-world data.

3.3. AI Simulation and Field Validation

We enhance empirical grounding through:
1.
AI-driven simulation: Train machine learning models on ODER dynamics to simulate extreme events and quantify uncertainty reduction (expected completion: Q3 2025, computing requirement: 5,000 GPU-hours).
2.
Field implementation: Establish monitoring stations at five climate centers to track data latency, update frequency, and policy responses (implementation timeline: 18 months, estimated budget: $240,000).
These validation approaches would provide measurable proxies for the parameters defined in Section 4 and offer empirical tests of the framework’s applicability to real-world forecasting challenges.

4. Defining Measurable Real-World Proxies for Entropy Retrieval Variables

4.1. Hierarchical Complexity L h i e r ( t ) : Measuring Multi-Scale Climate Data Retrieval

Metrics:
  • Spatial resolution variability: Differences among global, regional, and local models
  • Temporal resolution gaps: Data update frequency (hourly vs. daily vs. monthly)
  • Model granularity discrepancies: Variability between simplified and high-fidelity models
Data Sources: CMIP6 model archive, NASA MODIS, NOAA NCEI, IPCC AR6 intercomparisons
Parameterization:
L hier ( t ) = α N data ( t ) + β Δ resolution ( t )
Theoretical parameter examples:
Table 1. Example parameters for hierarchical complexity (theoretical test cases for framework demonstration)
Table 1. Example parameters for hierarchical complexity (theoretical test cases for framework demonstration)
Parameter Illustrative Value Theoretical Range Potential Method
α 0.042 [0.037, 0.048] Maximum likelihood
β 0.183 [0.156, 0.211] Bayesian hierarchical model
These parameters are measurable in institutional logs and field-deployed data latency monitors, as proposed in Section 8.

4.2. Information Transfer Efficiency I t r a n s ( t ) : Measuring Data Latency and Decision-Making Delays

Metrics:
  • Observation latency: Delay between measurement and model assimilation
  • Update frequency: How often new data are integrated
  • Policy delay: Lag between climate warnings and official responses
Data Sources: NASA EOSDIS, ECMWF ERA5, WMO reports
Parameterization:
I trans ( t ) = exp ( η Δ t latency ( t ) ) · [ UpdateFrequency ( t ) ] μ
Theoretical parameter examples:
Table 2. Example parameters for information transfer efficiency (theoretical test cases for framework demonstration)
Table 2. Example parameters for information transfer efficiency (theoretical test cases for framework demonstration)
Parameter Illustrative Value Theoretical Range Potential Method
η 0.118 [0.096, 0.141] Maximum likelihood
μ 0.651 [0.587, 0.724] Bayesian hierarchical model

5. Computational Considerations in AI-Driven Climate Forecasting

5.1. Baseline Comparison with Traditional Methods

  • Ensemble Kalman Filters (EnKF): O ( N 3 ) for N state variables
  • 4D-Var: More computationally intensive but widely adopted

5.2. ODER-Specific Computation

ODER introduces per-observer probability streams P obs , i ( τ ) . Computational requirements scale with:
Table 3. Computational scaling with observer count (based on complexity analysis)
Table 3. Computational scaling with observer count (based on complexity analysis)
Observer Profiles Compute Increase Memory Requirement Equivalent System
10 observers 1.8x baseline 5GB additional Standard workstation
100 observers 7.3x baseline 42GB additional High-end server
1000 observers 12-15x baseline >500GB additional Mid-size HPC cluster
Scaling to thousands of observers (e.g., IoT sensor networks) requires:
  • Distributed computing across multiple nodes
  • Hierarchical aggregation of similar observer classes (reducing memory load by  40% with  5% potential loss in retrieval fidelity for edge cases)
  • Reduced-order modeling for computational efficiency

5.3. Optimization Strategies

1.
Threshold-based updates: Recalculate S obs only when state changes exceed defined thresholds
2.
Clustered aggregation: Group similar sensors and share entropy updates
3.
Sparse Bayesian inference: Apply Gaussian processes or variational approximations
For large-scale deployments (e.g., IoT networks), surrogate models like neural operators [4] could approximate observer-class dynamics at reduced computational cost.

6. Example Implementation: Arctic Sea Ice with Observer-Dependent Retrieval

6.1. Baseline Model

NASA Sea Ice Concentration Climate Data Record provides daily satellite measurements. Using standard Ensemble Kalman Filter, forecasts generate daily distribution over sea ice extent.

6.2. Three Observers

1.
O1 (Sensor Network): Updates daily with ∼1-day latency (e.g., NASA satellite data feeds)
2.
O2 (Agency Modeler): Updates weekly with ∼3-day latency (e.g., NOAA, ECMWF)
3.
O3 (Public Release): Monthly updates with ∼10-day latency (e.g., IPCC synthesis reports, local climate bulletins)

6.3. Observer-Dependent Entropy

Each observer k maintains a distinct entropy distribution:
S obs , k ( τ ) = i P obs , k , i ( τ ) log P obs , k , i ( τ ) + 0 τ f ( L hier , k ( t ) , I trans , k ( t ) ) d t
This allows asynchronous, role-specific entropy updates across observer classes k { 1 , 2 , 3 } .

6.4. Conceptual Illustration of Observer-Dependent Entropy Retrieval

To aid intuition about the ODER framework’s theoretical dynamics, we utilize a conceptual illustration in Section 6.5 (Table 4). This illustration uses synthetic parameters to demonstrate how different observers would theoretically retrieve information and detect tipping points under varying conditions.
For this conceptual demonstration, we employ a simplified retrieval rate parameter:
γ ( τ ) = 1 update _ interval + latency
to represent how observer characteristics affect information assimilation rates. The detection threshold is set to a normalized value of 0.7 for all observers.
This approach conceptually demonstrates why different observers—despite accessing the same underlying climate data—might detect tipping points at different times or miss them entirely, depending on their update frequencies, latencies, and information processing characteristics.

6.5. Conceptual Results

To illustrate how retrieval timing differences might manifest in theory, we present the following conceptual example:
Table 4. Theoretical Retrieval and Detection Comparison (Conceptual Test Case)
Table 4. Theoretical Retrieval and Detection Comparison (Conceptual Test Case)
Observer Update Freq. Latency Retrieval Curve Detect Tipping Lead Time
O1 (Sensor Network) Daily 1 day Steep rise Yes 0 days (ref)
O2 (Agency Modeler) Weekly 3 days Gradual curve Yes +4.3 days
O3 (Public Release) Monthly 10 days Delayed signal No N/A
This example demonstrates how the same physical signal (sharp decline in Arctic sea ice extent) becomes available to different observers at different times. While all actors receive the same data in principle, only O1 is positioned to act immediately, and O3 fails to detect the tipping point within the relevant window.
Interpretation: This scenario confirms that accurate forecasts are insufficient if decision-relevant information is not retrievable within the critical action window. ODER makes this visibility gap explicit and quantifiable.

7. Retrieval vs Latency Comparison

Unlike agent-based models that simulate communication networks, ODER quantifies entropy retrieval gaps directly. Unlike latency-only corrections, it captures hierarchical bottlenecks (e.g., institutional inertia) via L hier ( t ) .
Table 5. Latency-Only vs ODER Framework Capabilities (Theoretical Comparison)
Table 5. Latency-Only vs ODER Framework Capabilities (Theoretical Comparison)
Feature Latency-Only Model ODER Framework
Adjusts for data delays
Models role-specific retrieval bottlenecks ×
Captures hierarchy and information complexity ×
Models observer-specific decision windows ×
Supports overlapping and multi-agent structures ×

8. Experimental Validation Framework

To validate ODER’s performance empirically, we propose a multipronged strategy with concrete timeline and resource requirements:

8.1. Retrospective Data Analysis

  • Reanalyze Arctic sea ice, wildfire onset, and monsoon shift records
  • Compare variance, Brier score, and detection lead time
  • Timeline: 6 months
  • Resources: 2 FTE researchers, 10,000 CPU-hours

8.2. Synthetic Data Generation

  • Create controlled stochastic simulations with known autocorrelation properties
  • Force tipping-point dynamics to test ODER’s retrieval advantage
  • Timeline: 3 months
  • Resources: 1 FTE researcher, 5,000 CPU-hours

8.3. Sensitivity Analyses

  • Vary key parameters ( α , β , η , μ ) in L hier ( t ) and I trans ( t ) functions
  • Test robustness under different data regimes and observer roles
  • Timeline: 4 months
  • Resources: 1 FTE researcher, 3,000 CPU-hours

8.4. Model Intercomparison Studies

  • Benchmark ODER against EnKF, 4D-Var, and hybrid ML/data assimilation frameworks
  • Use standardized metrics across multiple climate centers
  • Timeline: 12 months
  • Resources: 4 FTE researchers, 50,000 CPU-hours, coordination with 3+ modeling centers

8.5. Observer Role Calibration

  • Use surveys, institutional update logs, and access frequency metrics
  • Empirically calibrate observer profiles and decision window constraints
  • Timeline: 8 months
  • Resources: 2 FTE researchers, field partners at 5+ agencies

9. Limitations and Discussion

9.1. Implementation Challenges

9.1.1. Data Governance Constraints

  • Agency data sharing policies may limit implementation of observer-specific streams
  • Institutional reluctance to quantify internal latencies could hamper calibration
  • Solution pathway: Begin with willing early adopters and demonstrate value

9.1.2. Observer Correlation Issues

  • When observers overlap (e.g., scientist-policymaker hybrids), correlation between retrieval streams may bias forecasts
  • Current methods assume independence when estimating joint distributions
  • Solution pathway: Implement hierarchical correction factors based on empirical co-variance estimation

9.1.3. Theoretical Limitations

  • ODER assumes Markovian transitions in entropy evolution
  • Heavy-tailed distributions may challenge standard parameterizations
  • Solution pathway: Test alternative entropy formulations for non-Gaussian processes
Future work could generalize ODER to non-Markovian regimes using fractional calculus or memory kernels (e.g., [5]).
ODER currently treats observers as rational agents, yet behavioral economics shows this rarely holds. Future work could integrate cognitive biases into retrieval functions.

9.1.4. Uncertain Parameterization

The ODER framework introduces key parameters ( α , β , δ , η , μ ) that govern how hierarchical complexity and data latency combine to impede entropy retrieval. In this version, these remain intentionally uncalibrated and serve as conceptual scenarios. However, ODER’s practical utility depends on realistic parameter estimation. A straightforward first step is a retrospective calibration using publicly available institutional update logs (e.g., NOAA warning-to-action timestamps; IPCC report release schedules). When direct data are limited, expert elicitation (e.g., Delphi surveys of operational forecasters) can generate priors for weighting functions w j and transition parameters. Finally, sensitivity analyses—systematically varying each parameter across plausible bounds—can quantify the robustness of lead-time improvements and identify which parameters most strongly influence retrieval performance. Collectively, these approaches demonstrate that ODER’s parameters are empirically tractable, enabling near-term operational implementation without undermining theoretical rigor.

9.2. Theoretical Significance and Future Directions

9.2.1. Theoretical Rigor and Remaining Questions

ODER is internally consistent and rooted in established probabilistic principles, but implementation requires careful attention to:
  • Validation of observer weighting functions w j
  • Nonlinear behavior in the entropy retrieval function f ( L hier , I trans )
  • Realism of assumed independence between observers’ distributions
These issues may become pronounced in high-dimensional systems with complex coupling across scales.

9.2.2. Parameterization of the Transition Function

We defined the general retrieval function as:
f ( L hier , I trans ) = A [ α L hier + β I trans ] δ
Future calibration of f ( · ) could:
(1)
Fit to observational data: Regress f ( · ) against institutional response times (e.g., NOAA’s warning-to-action lags)
(2)
Test functional forms: Compare linear ( δ = 1 ), threshold ( δ ), and sigmoidal variants using BIC/AIC
(3)
Incorporate network theory: Let α , β depend on observer network centrality (e.g., [6])

9.2.3. Policy and Scientific Relevance

By modeling who sees what, when, and with what lag, ODER enables finer-grained calibration of risk communication strategies. Rather than incrementally improving a single assimilation pipeline, ODER shifts the object of modeling from global state estimation to actor-specific retrievability of usable signal within a decision window. Early warning agencies can assess whether their forecasting infrastructure allows timely retrieval—not just accurate prediction. This aligns with the IPCC AR6 call for ’decision-relevant’ (not just accurate) forecasts [7].

9.2.4. Potential Pilot Implementation

A potential pilot deployment could involve an organization like NOAA’s Arctic Program focusing on sea ice collapse prediction:
  • Estimated resources needed: Mid six-figure budget over 1-2 years
  • Potential collaborators: Organizations with climate data expertise such as NASA Earth Science Division, academic centers specializing in cryosphere monitoring
  • Conceptual success criteria: Measurable improvement in tipping point detection time, reduction in false negatives for critical sea ice events
  • Theoretical staffing: Climate modelers, data scientists experienced in assimilation, and institutional coordinators for observer calibration

9.2.5. Expansion Beyond Climate Science

ODER could extend to domains such as:
  • Disaster response coordination: Modeling information flow bottlenecks between emergency management agencies during complex disaster events
  • Public health early warning systems: Tracking disease outbreak signal propagation across global health monitoring networks
  • Adaptive governance under information asymmetry: Improving coordination between international and local regulatory bodies
These extensions require minimal modification of the current architecture.

10. Conclusion

This paper introduces Observer-Dependent Entropy Retrieval (ODER), treating the entropy of climate state distributions not as a global constant but as an observer-relative variable. By integrating specific update schedules, retrieval latencies, and decision windows, ODER redefines how we conceptualize climate forecast actionability.
Through development of a Bayesian–Markovian theoretical structure and proposed benchmarking methodology, we demonstrate ODER’s potential to improve tipping point detection and forecast relevance. The theoretical scenarios presented (such as 4+ days lead time and 20% variance reduction) illustrate how differential entropy retrieval could transform early warning systems when implemented.
ODER proposes a fundamental shift in forecasting—from asking "How accurate is our prediction?" to "Who can retrieve which signals in time to act?" This reframing acknowledges that information flow in complex socio-technical systems is as important as information generation. In an era of compound risks and institutional bottlenecks, observer-specific retrieval is not a luxury—it is a prerequisite for anticipatory climate governance.

Acknowledgments

This work builds upon the collective insights of many researchers working at the intersection of climate science, decision theory, and information systems.

Appendix A. Minimal Pseudocode for ODER Implementation

Core observer-specific update loop for ODER Input: Time steps T, number of observers numObservers, parameters A , α , β , δ , τ 0 .
for t = 0 to T:
   # 1) Retrieve posterior Pi(t) from baseline assimilation (e.g., EnKF)
   for j = 0 to numObservers:
       # 2) Bayesian update: incorporate Pi(t) into observer j’s prior Pobs_j(t-1)
       Pobs_j(t) <- updatePosterior(Pobs_j(t-1), Pi(t))
       # 3) Estimate hierarchical & latency parameters for observer j at time t
       (Lhier_j, Itrans_j) <- measureBottlenecks(j, t)
       # 4) Compute incremental entropy retrieval (Eq. 3)
       dS <- A * (alpha * Lhier_j + beta * Itrans_j)^delta * exp(-t / tau0)
       # 5) Accumulate retrieved entropy
       Sobs_j(t) <- Sobs_j(t-1) + dS
Notes:
  • Lines 2–3 reflect a Bayesian update integrating the global posterior Π ( t ) with observer j’s prior distribution.
  • Lines 4–5 compute observer-specific bottlenecks (Eqs. (7)–(8) in the main text).
  • Line 4 in the pseudocode implements the integral term from Eqs. (2)–(3) discretely, assuming small increments in t.
  • This snippet is highly simplified; in practice, you would adapt it to your chosen data assimilation framework (EnKF, 4D-Var, etc.) and handle observer updates on their respective schedules.

References

  1. Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., & Prabhat. (2019). Deep Learning and Process Understanding for Data-Driven Earth System Science. Nature, 566(7743), 195–204. [CrossRef]
  2. Morss, R. E., Wilhelmi, O. V., Downton, M. W., & Gruntfest, E. (2008). Flood risk, uncertainty, and scientific information for decision making: Lessons from an interdisciplinary project. Bulletin of the American Meteorological Society, 86(11), 1593–1601. [CrossRef]
  3. Simon, H. A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161–176.
  4. Rasp, S., Dueben, P. D., Scher, S., Weyn, J. A., Mouatadid, S., & Thuerey, N. (2023). Weather and climate forecasting with neural networks: using general deep learning techniques in earth science. Philosophical Transactions of the Royal Society A, 381(2243), 20220098.
  5. Lucarini, V., Blender, R., Herbert, C., Ragone, F., Pascale, S., & Wouters, J. (2014). Mathematical and physical ideas for climate science. Reviews of Geophysics, 52(4), 809–859. [CrossRef]
  6. Barabási, A.-L. (2016). Network Science. Cambridge University Press. https://www.cambridge.org/9781107076266.
  7. IPCC. (2021). Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. [CrossRef]
  8. Kalnay, E. (2003). Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press. [CrossRef]
  9. Kleidon, A. (2009). Nonequilibrium Thermodynamics and Maximum Entropy Production in the Earth System. Naturwissenschaften, 96(6), 653–677. [CrossRef]
  10. Page, D. N. (2013). Time Dependence of Hawking Radiation Entropy. Journal of Cosmology and Astroparticle Physics, 2013(09), 028. [CrossRef]
  11. Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3), 379–423. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permit the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2025 MDPI (Basel, Switzerland) unless otherwise stated