Preprint Article (this version is not peer-reviewed)

Mathematics Has Already Chosen Federalism: The Logic of Domain Separation

Submitted: 15 April 2026. Posted: 23 April 2026.


Abstract
Mathematics, as actually practiced, operates as a federated system: practitioners work within autonomous domain-specific axiomatizations (geometry, algebra, analysis) and construct explicit bridges only when cross-domain reasoning is required. This organization is not accidental; it is a structural adaptation that safeguards local decidability and algorithmic efficiency. Yet the dominant foundational narrative still operates on the Compiler Myth—the belief that all mathematics must theoretically compile down into ZFC set theory to achieve rigor. We argue that this monolithic reductionism confuses representational universality with logical priority. Embedding a decidable (tame) domain into an undecidable (wild) one does not clarify foundations; it imposes a crippling epistemic overhead. It buries efficient, domain-specific decision procedures under general proof search and destroys the native structural immunities of the object. We introduce the Decidability Threshold — a litmus test based on Negation, Representability, and Discrete Unboundedness — to explain why mathematicians instinctively isolate tame domains from wild ones. Finally, we distinguish the Mathematician (builder of formal systems) from the Scientist (consumer modeling empirical reality). We argue that federalism is not a failure of unification, but the primary safeguard preventing the scientist from inadvertently importing uncomputable, undecidable paradoxes into physical theories. We show that for empirical applications, syntactic safety is insufficient; valid scientific modeling must be strictly confined to the constructively computable sub-fragments of these domains.

1. Introduction

It is impossible that someone should set up a certain well-defined system of axioms and rules and consistently make the following assertion about it: All of these axioms and rules I perceive to be correct, and moreover I believe that they contain all of mathematics.
Kurt Gödel [1]
Modern mathematics has organically developed a federated structure. Geometers invoke Hilbert’s axioms or Tarski’s postulates [2]; analysts rely on the ordered-field completeness of the reals; algebraists work with group and ring axioms. When cross-domain reasoning is required—analytic geometry, Galois theory, algebraic topology—mathematicians construct explicit, carefully defined bridges that are built once and reused.
This is not a theoretical proposal. It is a description of observed reality.
Nevertheless, the standard foundational narrative often presents ZFC set theory as the universal substrate: that all mathematical objects are, in principle, sets (formally: every mathematical object is encoded as a set), and that domain-specific axioms are merely shorthand for set-theoretic constructions. This creates a pragmatic gap:
  • Theory claims: Everything reduces to ZFC.
  • Practice shows: ZFC is almost never invoked outside of logic and set theory itself.
  • Theory claims: Incompleteness is a universal fog covering all mathematics.
  • Practice shows: Most working domains (geometry, finite algebra, elementary analysis) behave as if they are complete and decidable.
Why does this gap exist? We argue that the federated organization of mathematics is a sophisticated response to a deep logical reality. Mathematicians have instinctively partitioned their field to create "Safe Zones"—domains of local decidability—and they resist monolithic reduction because it destroys these local properties.
The standard foundational narrative fails to internalize Gödel’s fundamental warning: no single formal system can contain all of mathematics. The federated organization of mathematical practice is not a failure to achieve a unified foundation; it is the organic, pragmatic realization of Gödel’s insight. By analyzing the structural boundaries between domains, we demonstrate that federalism is the only architecture that preserves local decidability while allowing rigorous, controlled cross-domain reasoning.

2. Axiom 1: The Requirement of an External Vantage Point

The exact definition of the concept of truth implies the necessity of a strict distinction between the language which is the object of our investigations and the language in which these investigations are carried out.
Alfred Tarski [3]
Axiom 1 (Domain Separation). A foundational framework should allow stable external vantage points for metareasoning, so that meta-level statements need not be modelled within the same object-level universe.
A monolithic ZFC-based framework does not satisfy Axiom 1. Because every object is declared to be a set, metareasoning about ZFC often requires ascending to stronger set theories (large cardinals), creating a hierarchy with no stable external vantage point.
By attempting to model both the object language and the metalanguage within a single, universal set-theoretic ontology, monolithic foundations inherently violate Tarski’s requirement for a strict distinction between the two. When the ambient universe is forced to reflect upon itself, the resulting self-reference dissolves the external vantage point.
Federalism, by contrast, structurally enforces a separation of concerns. In a federated system, we can study Geometry from the vantage point of Algebra, or Arithmetic from the vantage point of Model Theory, without forcing one to be the other. By maintaining domain autonomy, one domain can serve as the rigorous, external metalanguage for another, naturally satisfying Tarski’s condition for defining exact mathematical truth.

3. The Logic of Separation: The Decidability Threshold

In consequence of the philosophical significance of the mathematical integers... one could perhaps expect that it would be possible to find a few evident axioms from which all properties of integers could be derived. This expectation... is false.
Kurt Gödel [4]
Why do mathematicians instinctively separate Geometry from Arithmetic? It is not merely a matter of taste. It is a matter of logical safety. We can identify a structural "Litmus Test" that explains these boundaries.

3.1. The Trinity of Danger

Undecidability and incompleteness arise only when a system possesses three specific ingredients simultaneously. We define these formally as the Decidability Threshold:
1. Classical Negation: The ability to assert falsity and rely on classical proof principles sufficient to express provability and negation in the usual sense used in diagonalization arguments.
2. Representability (Encoding): The capacity to define a discrete infinite predicate N(x) and to represent all primitive recursive functions on that predicate.
3. Discrete Unboundedness: An infinite supply of distinct discrete states.
Technical Assumptions: The argument assumes that the theory is recursively axiomatizable (effectively enumerable axioms) and consistent. Under these conditions, a "Wild" theory (possessing all three ingredients) is essentially undecidable.

3.2. Federalism as Quarantine

Mathematical practice organizes itself by managing these ingredients:
  • Type I (Finite Domains): Boolean Algebra, Finite Groups. They possess Negation and Encoding, but drop Unboundedness. They are trivially decidable and inherently computable.
  • Type II (Tame Continuous/Linear Domains): Euclidean Geometry, Presburger Arithmetic, Real Closed Fields. They possess Negation and Unboundedness, but they drop Representability. Core domains remain syntactically decidable via quantifier elimination. (Note: For the empirical scientist, syntactic decidability is insufficient; physical modeling requires restricting these to their Computable sub-fragments, as detailed in Section 5).
  • Type III (Wild Domains): Peano Arithmetic, ZFC Set Theory. They possess all three ingredients. They are essentially undecidable.
Gödel’s observation regarding the integers is the historical-philosophical counterpart to our technical Type III criteria. As he noted, the expectation that the integers could be tamed by a few evident axioms was demonstrably false. The integers inherently possess the complete Trinity of Danger; they are the canonical Wild domain.
Mathematicians separate domains precisely to preserve the Tame status of Type I and Type II fields. They intuitively know that mixing Geometry with full Arithmetic (without a controlled bridge) triggers this Wild state, inadvertently infecting otherwise decidable spaces with the inescapable incompleteness of the integers.
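To make the Type II claim concrete, here is a minimal sketch (our own illustration, not drawn from the references) of a domain-specific decision procedure: Fourier–Motzkin elimination decides, over the rationals, whether a conjunction of non-strict linear inequalities is satisfiable. This is only a toy fragment of a tame domain; Tarski’s procedure for real closed fields [2] and the decision method for Presburger arithmetic are far more involved, but the flavor is the same: each admissible question is settled by a terminating algorithm rather than by open-ended proof search.

```python
from fractions import Fraction
from typing import Dict, List, Tuple

# A constraint (coeffs, bound) encodes  sum_v coeffs[v] * v  <=  bound.
Constraint = Tuple[Dict[str, Fraction], Fraction]


def eliminate(var: str, constraints: List[Constraint]) -> List[Constraint]:
    """Fourier-Motzkin step: project the constraint set onto the other variables."""
    lowers, uppers, rest = [], [], []
    for coeffs, bound in constraints:
        c = coeffs.get(var, Fraction(0))
        if c > 0:
            uppers.append((coeffs, bound, c))    # yields an upper bound on var
        elif c < 0:
            lowers.append((coeffs, bound, c))    # yields a lower bound on var
        else:
            rest.append((coeffs, bound))
    # Every (lower, upper) pair yields a consequence in which var cancels.
    for lc, lb, lcoef in lowers:
        for uc, ub, ucoef in uppers:
            new_coeffs: Dict[str, Fraction] = {}
            for v in set(lc) | set(uc):
                if v == var:
                    continue
                val = ucoef * lc.get(v, Fraction(0)) - lcoef * uc.get(v, Fraction(0))
                if val:
                    new_coeffs[v] = val
            rest.append((new_coeffs, ucoef * lb - lcoef * ub))
    return rest


def satisfiable(constraints: List[Constraint], variables: List[str]) -> bool:
    """Decide whether the conjunction has a rational solution."""
    for var in variables:
        constraints = eliminate(var, constraints)
    # Only variable-free consequences  0 <= bound  remain.
    return all(bound >= 0 for _, bound in constraints)


# Example: x >= 0, y >= 0, x + y <= 1, x + y >= 2 has no rational solution.
one = Fraction(1)
system = [
    ({"x": -one}, Fraction(0)),              # -x     <= 0   i.e. x >= 0
    ({"y": -one}, Fraction(0)),              # -y     <= 0   i.e. y >= 0
    ({"x": one, "y": one}, one),             #  x + y <= 1
    ({"x": -one, "y": -one}, Fraction(-2)),  # -x - y <= -2  i.e. x + y >= 2
]
print(satisfiable(system, ["x", "y"]))       # False
```

Because projection preserves solvability over the rationals, the procedure is a genuine decision method for this fragment. Nothing analogous exists for Type III: by the Gödel and Church results, no terminating procedure settles arbitrary sentences of Peano Arithmetic.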

4. The Cost of Reduction: The Compiler Myth

It is a fact of great philosophical interest that our knowledge of the system of real numbers and its geometry is much more complete than our knowledge of the system of integers.
Alfred Tarski [2]
The standard foundational doctrine operates on what we might call the Compiler Myth. In this view, ZFC set theory is the “machine code” of mathematics, and domain-specific axioms (like those of Geometry or Algebra) are merely high-level “syntactic sugar.”
The claim that ZFC is the foundational machine code mistakes representational universality for logical priority. ZFC is not the foundation of mathematical practice; it is a universal coding environment. Confusing codability with epistemic priority is the foundational error.
A tame domain derives its rigor precisely from the disciplined poverty of its language, the locality of its admissible constructions, and the availability of effective procedures native to that domain. Reducing such a domain to ZFC does not strengthen these virtues; it dissolves them into a vastly more expressive ambient theory whose additional questions are irrelevant to the domain and often undecidable. This imposes a crippling Epistemic Overhead:
1. Loss of Structural Immunity: In suitably chosen tame geometric formalisms (such as Tarski’s first-order geometry [2]), the admissible first-order questions admit effective elimination procedures. The native language excludes the expressive resources required to generate the familiar set-theoretic independence phenomena. When we embed geometry into ZFC, we expose geometric objects to the full expressive power of set theory. We can now formulate questions about geometric points—involving arbitrary set membership, cardinality, or choice functions—that are undecidable. Reduction to ZFC is not neutral; it changes the logical ecology of the objects. We have moved the object from a Clean Room to a Wild environment.
2. The Proof-Theoretic Overhead: In the native domain, truth is often algorithmic or governed by domain-specific decision methods. In the reduced domain (ZFC), truth is fundamentally deductive. While ZFC can simulate the geometric algorithm, the default mode of set theory is general proof search. By compiling down, we bury the efficient, domain-specific insight under layers of generic set-theoretic machinery.
3. The Combinatorial Explosion of Strict Reduction: We do not need to look far to see what happens when the Compiler Myth is taken literally. The Bourbaki group famously attempted to rigorously formalize mathematics from the ground up using a strict set-theoretic foundation. As calculated by A.R.D. Mathias [5], the fully expanded formal definition of the number 1 in Bourbaki’s strict foundational system requires 4,523,659,424,929 symbols.
A standard defense is that abstraction layers exist precisely so no one has to write fully expanded proofs. But this misses the point: strict reduction is not epistemically faithful to the structures mathematicians actually exploit. A foundation that requires 4.5 trillion symbols to define a trivially decidable primitive concept is not providing ultimate semantic clarity; it is creating an epistemic collapse.
A reduction that preserves theoretical existence while destroying locality, decidability, and structural immunity is not foundational clarification; it is epistemic sabotage. Mathematicians ignore ZFC in their daily work because the high-level architecture—the domain autonomy—is where the logical safety actually resides.
This is precisely why Tarski observed that our knowledge of geometry and the real numbers is vastly more complete than our knowledge of the integers. Tame domains yield complete knowledge because of their restricted expressivity. If we insist on reducing these clear, decidable spaces into a universal set theory—a Wild environment built to accommodate the infinite and the undecidable—we willfully discard the very structural immunity that made them comprehensible in the first place.

5. Implications for the Consumer: The Warning Label

The deductive method cannot, of course, provide any absolute guarantee of the material truth of the theorems established by its means. The truth of a theorem depends entirely on the truth of the axioms from which it has been deduced.
Alfred Tarski [6]
A crucial distinction must be made between the Mathematician (the builder) and the Scientist (the user).
The Mathematician is an Architect. They build formal structures of varying strength and expressiveness. For the Mathematician, a domain is "safe" if it is syntactically decidable (e.g., Classical Type II). The Scientist, however, is a Tenant looking for a stable place to model empirical reality and perform concrete computations on finite hardware.
For the Scientist, syntactic decidability is necessary but insufficient. A classical Type II domain (such as real closed fields over an uncountable continuum) may guarantee that a solution exists without providing any algorithm to compute it. Therefore, the operational safety boundary for empirical modeling is vastly stricter than the logical safety boundary for pure mathematics.
To assist the scientist in navigating this landscape, we provide the following strict decision procedure:
[Figure: the decision procedure for selecting a modeling domain.]
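The flowchart itself is not reproduced here; as a minimal sketch of such a checklist (our own rendering, assuming nothing beyond the taxonomy of Sections 3.2 and 5), the code below classifies a candidate domain by the three threshold ingredients and by whether its existence theorems carry algorithmic witnesses. The example entries and their flags are illustrative, following the discussion in the text rather than adding new classifications.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    classical_negation: bool          # ingredient 1 of the Decidability Threshold
    representability: bool            # ingredient 2: encodes primitive recursive functions
    discrete_unboundedness: bool      # ingredient 3: infinitely many distinct discrete states
    constructive_witnesses: bool      # every existence theorem yields an algorithm (Section 5.1)

def warning_label(d: Domain) -> str:
    """Classify a domain per Sections 3.2 and 5: where may the scientist model?"""
    if d.classical_negation and d.representability and d.discrete_unboundedness:
        return f"{d.name}: Type III (Wild). Do not use as a modeling substrate."
    kind = "Type I (Finite)" if not d.discrete_unboundedness else "Type II (Tame)"
    if d.constructive_witnesses:
        return f"{d.name}: {kind}, computable fragment. Safe for empirical modeling."
    return (f"{d.name}: {kind}, syntactically decidable only. "
            "Restrict to the computable sub-fragment before modeling.")

# Illustrative entries; the flag assignments follow the taxonomy in the text.
for d in (
    Domain("Finite Boolean algebra", True, True, False, True),
    Domain("Real closed fields (classical)", True, False, True, False),
    Domain("Constructive / computable analysis", True, False, True, True),
    Domain("Peano Arithmetic", True, True, True, False),
):
    print(warning_label(d))
```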

5.1. The Mirage of Non-Constructive Existence

A pervasive source of confusion for the scientific consumer is the conflation of Mathematical Existence (valid in Classical Type II or Type III domains) with Physical Computability (strictly limited to Type I and Computable Type II).
Standard analysis provides powerful theorems—such as the De Giorgi-Nash Theorem [7,8]—which guarantee the existence and regularity (smoothness) of solutions to certain partial differential equations (PDEs).
Crucially, these existence proofs are often Non-Constructive. They prove that a solution exists in the Platonic realm of the classical continuum or ZFC sets (typically via compactness arguments, classical limits, or fixed-point theorems), but they do not provide a Turing machine algorithm to generate the solution from finite initial data.
For a physicist, a solution that "exists" logically but cannot be computed is operationally indistinguishable from a solution that does not exist. To guarantee a realizable model, the scientist must demand that existence proofs be localized to the Computable Type II fragment. This operational necessity aligns exactly with the mathematical tradition of Constructive Mathematics, spearheaded by Errett Bishop [9]. Bishop demonstrated that continuous analysis can be rigorously rebuilt such that every mathematical existence theorem fundamentally provides a finite algorithmic witness. When extended into Computable Analysis [10], this framework guarantees that theoretical PDE solutions map directly to terminating algorithms on finite hardware. The warning label is clear: Existence ≠ Computability.
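The point can be made schematically. The sketch below is ours, in the spirit of standard constructions in computable analysis [10], not a result taken from this paper's sources; the function `halts` is a stub, because by Turing's theorem [18] no total algorithm computes it, and that absence is exactly what blocks the limit from being a computable real.

```python
from fractions import Fraction

def halts(i: int) -> bool:
    """Hypothetical oracle: does the i-th program halt on empty input?
    Turing [18]: no total computable function agrees with this on every i,
    so this stub cannot be replaced by working code."""
    raise NotImplementedError("uncomputable oracle")

def partial_sum(n: int) -> Fraction:
    """n-th partial sum of  sum_{i : halts(i)} 2**-(i+1).
    Classically, the full sequence is increasing and bounded by 1, so its limit
    exists by monotone convergence (a Type II/III existence argument). But the
    binary digits of that limit encode the halting set, so the limit is not a
    computable real: it 'exists' without any algorithmic presentation."""
    return sum((Fraction(1, 2 ** (i + 1)) for i in range(n) if halts(i)),
               start=Fraction(0))

# There is no runnable demonstration here; that absence is the warning label.
```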

5.2. The Risk of Using the Wrong Domain

If a physicist accepts the doctrine that "ZFC is the foundation of everything," they are unknowingly importing Type III paradoxes into their physical model.
Example: The Banach-Tarski Paradox. The Banach-Tarski paradox offers a concrete illustration of the risk of using Wild mathematics carelessly. In the usual ZFC framework (which includes the Axiom of Choice), one can prove that a solid 3-dimensional ball can be partitioned into finitely many non-measurable pieces and reassembled to form two balls identical to the original.¹ This construction conflicts fundamentally with physical principles like the conservation of mass.
For applied modeling, the salient issue is whether the idealizations used—pointwise set membership, arbitrary subsets of an uncountable continuum, and the freedom to invoke non-computable selector functions—are physically justified.
In pure mathematics, choosing to retain or reject Choice is a matter of aesthetic and structural preference. For the empirical scientist bound by finite hardware, however, mapping physical properties to arbitrary uncomputable subsets is an invalid operational move. By strictly adhering to Type I or Computable Type II domains (where all objects must have explicit effective algorithmic presentations), the paradox is structurally blocked. Algorithms cannot generate non-measurable sets. By choosing the correct formal domain, the scientist immunizes their model against physical absurdity.

6. Bridges Are Explicit, Not Automatic

How do these separate domains communicate? Through Bridges.
Every major cross-domain advance has been an explicit, deliberate construction, not a reduction:
  • Analytic Geometry: A bridge from Euclidean Space to Field Arithmetic.
  • Galois Theory: A bridge from Fields to Groups.
  • Algebraic Topology: A bridge from Spaces to Algebraic Invariants.

6.1. Conservative vs. Non-Conservative Bridges

Not all formal embeddings are equally dangerous. A conservative extension or faithful interpretation that merely rephrases theorems of a tame domain in another language does not change the decidability status of sentences formulated in the original language. If every geometric sentence provable in ZFC is already provable from the native geometric axioms, then decidability for geometric sentences is preserved.
By contrast, a non-conservative encoding that enriches the language available to talk about originally tame objects—for example, by allowing quantification over arbitrary set-theoretic constructions built from geometric points or by naming a new discrete predicate inside a continuous theory—can enable synthetic encodings of computation and thereby create undecidable questions about the objects. The practical prescription is: prefer bridges that are explicitly conservative or whose expressive increase is tightly controlled and documented.
Implicit reduction is risky precisely because it often performs non-conservative encoding silently, escalating privilege without warning.
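As a toy illustration of the contrast (ours, aligned with the distinction just drawn, set over the real closed field axioms of Section 3.2):

```latex
% Conservative bridge: a purely definitional extension. The new symbol is
% eliminable, so no new theorems about the old vocabulary become provable.
\forall a\,\forall b\;\; \operatorname{mid}(a,b) = \tfrac{a+b}{2}

% Non-conservative bridge: a new predicate axiomatized as a discrete,
% unbounded, successor-closed subset of the line.
\mathrm{N}(0)\;\wedge\;\forall x\,\bigl(\mathrm{N}(x)\rightarrow \mathrm{N}(x+1)\bigr)\;\wedge\;
\forall x\,\forall y\,\bigl(\mathrm{N}(x)\wedge\mathrm{N}(y)\wedge x<y \rightarrow x+1\le y\bigr)
```

Once such a predicate is pinned down to behave like the natural numbers, the ambient field operations let arithmetic be interpreted on it, and the formerly tame theory re-acquires the full Trinity of Danger.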

7. The Federated Organization We Already Possess

We advocate codifying the structure mathematics has already adopted:
Level 1 — Primary Domain Axiomatizations
The working axioms of each field (Type I and II). These are the daily tools.
Level 2 — Explicit Bridges
Structure-preserving translations defined when required.
Level 3 — The Wild Reserve (ZFC)
A designated "Wild" domain (Type III) used for studying infinity, cardinality, and consistency strength. It is a valuable laboratory, not a universal basement.
Modern proof assistants (Lean/mathlib, Coq) have effectively implemented this. They use modular libraries where imports are explicit and distinct. For example, `mathlib` treats real analysis through ordered field axioms and topological structures, rather than forcing a universal encoding into set theory at every step [13]. If monolithic reduction were optimal, these systems would funnel everything through ZFC sets. They do not; they choose federalism for scalability and logical hygiene.

8. Objections and Replies

Objection 1 — "ZFC is useful as a universal lingua franca; abandoning it risks fragmentation."
Reply: ZFC remains an invaluable laboratory for set-theoretic inquiry and for comparing relative consistency strengths. This paper does not propose abandoning ZFC as a research domain; it recommends treating ZFC as a designated Wild reserve rather than the default operational substrate for every applied domain. Explicit, well-documented bridges preserve interoperability while protecting local decidability.
Objection 2 — "Embedding a decidable theory into an undecidable one does not automatically make the original problems undecidable."
Reply: Correct—conservative embeddings preserve decidability for sentences in the original language [13]. The risk arises when the host theory’s extra expressive resources are used to formulate new predicates or quantifications about the original objects (for instance, naming a discrete subset or quantifying over arbitrary set constructions). Such implicit enrichments are the common source of accidental privilege escalation.
Objection 3 — "Constructive or non-classical logics evade these limitations."
Reply: While constructive frameworks change some formal properties (e.g., the role of excluded middle), once sufficient arithmetic is interpretable the essential limitations reappear in adapted form. The Decidability Threshold emphasizes the structural features that enable encoding of computation; changing the proof calculus shifts technicalities but not the core need for caution.
Objection 4 — "Reverse mathematics already provides a fine-grained map of required axioms."
Reply: Exactly—reverse mathematics and proof-theoretic calibration are complementary to the federalist prescription. They help identify minimal bridges: import only the subsystem needed for the target theorems [14]. Reverse mathematics tells the practitioner which subsystem a given theorem requires (for example, many classical analysis results are equivalent to WKL₀ or ATR₀ over a weak base theory), and these calibrations can be used to select minimal bridges, as sketched below. The paper’s recommendation is operational: prefer minimal, explicit, and conservative imports informed by such calibrations.
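For orientation, the standard calibration scale of reverse mathematics (the "Big Five" subsystems of second-order arithmetic; background knowledge, not a contribution of this paper) is:

```latex
\mathsf{RCA}_0 \;\subsetneq\; \mathsf{WKL}_0 \;\subsetneq\; \mathsf{ACA}_0
\;\subsetneq\; \mathsf{ATR}_0 \;\subsetneq\; \Pi^1_1\text{-}\mathsf{CA}_0
```

A theorem calibrated at, say, WKL₀ needs only that much strength imported across the bridge; nothing forces the working domain to inherit the rest of the hierarchy.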
Objection 5 — "Category theory or univalent foundations already provide a non-set-theoretic federation."
Reply: Indeed—homotopy type theory and categorical approaches offer alternative federated architectures [15,16]. The argument here is largely agnostic about the specific metalanguage; the key is rejecting implicit universal reduction in favor of explicit, controlled connections.

9. Conclusions: Codifying Implicit Wisdom

Either mathematics is too big for the human mind, or the human mind is more than a machine.
Kurt Gödel [1]
Mathematics has already chosen federalism. The evidence is in the autonomy of its fields, the explicit nature of its bridges, and the intuitive separation of Tame and Wild domains.
The contribution of this paper is to provide the structural explanation for this behavior and its implications for empirical science:
1. Logical Safety: Domains separate to preserve the Decidability Threshold (avoiding the Trinity of Danger).
2. Epistemic Efficiency: Reduction to ZFC is avoided because it imposes a crippling Epistemic Overhead, destroying the native algorithmic methods of local domains.
3. Algorithmic Realizability: This separation protects the Consumer (the Scientist) from inheriting uncomputable logical paradoxes. It allows empirical science to cleanly identify and operate exclusively within structurally valid, computable frameworks (Type I and Computable Type II).
We should stop teaching students that mathematics is a precarious tower resting on a single infinite foundation. Instead, we should teach them the architecture of the Federation: a robust network of safe, decidable domains connected by strong, explicit bridges.
Gödel famously speculated that either mathematics is too big for the human mind, or the human mind is more than a single machine. The federalist architecture of mathematics suggests a resolution: mathematics is indeed too big for any single formal machine, such as ZFC. The human mind navigates this vastness not by forcing all mathematical truth into one universal syntax, but by acting as the architect of a federation—building localized, tame machines, and traversing the wild spaces between them with explicit, deliberate bridges. Federalism is not defeat; it is an organizing insight that makes both mathematical practice and computational science possible.

Appendix A. Formalizing the Decidability Threshold

Proposition A1 (Formalized Decidability Threshold). Let T be a first-order theory in classical logic that is recursively axiomatizable (its axioms are recursively enumerable) and consistent. Suppose (i) T interprets Robinson arithmetic Q (or, equivalently, contains a definable predicate N(x) on which all primitive recursive functions are representable); and (ii) for each standard natural number n (coded in the metalanguage), T proves that there exist x_1, …, x_n in N with x_i ≠ x_j for i ≠ j (i.e., N is provably infinite). Then the set of sentences provable in T is undecidable: there is no algorithm that, given an arbitrary sentence φ in the language of T, decides whether T ⊢ φ.
Proof sketch. Recursive axiomatizability together with representability yields an effective internal coding of syntax and a formula expressing the proof predicate; a provably infinite discrete N supplies the definable domain necessary for diagonalization; classical logic allows the usual inferences about negation and provability. Hence one can construct a self-referential sentence asserting its own non-provability, and the standard Gödel/Turing diagonal argument shows that decidability of provability in T would yield a decision procedure for known undecidable sets (contradiction). See [17,18,19] for full formal details and variants. □
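Schematically, the reduction in the proof sketch can be rendered as follows (our sketch; the function names are stubs of our own for illustration, not real library calls):

```python
def proves(theory: str, sentence: str) -> bool:
    """Hypothetical decision procedure for 'theory |- sentence'.
    Proposition A1 asserts that no such total algorithm exists for a
    consistent, recursively axiomatized T interpreting Q."""
    raise NotImplementedError

def halting_sentence(program: str, input_data: str) -> str:
    """Representability (hypothesis (i)) guarantees an arithmetic sentence
    expressing 'this program halts on this input'; the string below is only
    a schematic stand-in for that formal sentence."""
    return f"Halts({program!r}, {input_data!r})"

def decide_halting(program: str, input_data: str, T: str = "PA") -> bool:
    """If T proves only true Sigma_1 sentences (a mild soundness assumption;
    the fully general argument for merely consistent T uses Rosser-style
    refinements, see [17,19]), the halting sentence is provable exactly when
    the program halts. A decider for provability would therefore decide the
    halting problem, contradicting Turing [18]."""
    return proves(T, halting_sentence(program, input_data))
```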

Commentary: Mapping the Lemma to Modeling Mistakes

The lemma above is not a mere formal curiosity: it captures a recurring class of modeling errors. In practice, these errors take the following forms:
First, implicitly assuming the availability of arbitrary choice or selector functions (for example, “choose one representative from each equivalence class”) is equivalent to enriching the language of the model with new names or predicates that can encode discrete choices; this is the same move as adding a predicate P naming distinct points.
Second, introducing a mechanism to count, index, or enumerate otherwise continuous objects (for instance, by postulating a distinguished sequence of points or a labeled grid) is exactly the step that produces a definable copy of N and allows arithmetic to be interpreted. Both moves are syntactic: they change the expressive power of the theory even if the unchanged, original formulas about the continuous domain look the same.

References

  1. Gödel, K. Some basic theorems on the foundations of mathematics and their implications. In Collected Works, Volume III: Unpublished Essays and Lectures; Feferman, S., et al., Eds.; Oxford University Press, 1995. Delivered as the 25th Josiah Willard Gibbs Lecture (1951).
  2. Tarski, A. A Decision Method for Elementary Algebra and Geometry; University of California Press, 1951.
  3. Tarski, A. The Semantic Conception of Truth and the Foundations of Semantics. Philosophy and Phenomenological Research 1944, 4, 341–376.
  4. Gödel, K. Remarks before the Princeton bicentennial conference on problems in mathematics. In Collected Works, Volume II: Publications 1938–1974; Feferman, S., et al., Eds.; Oxford University Press, 1990. Originally presented in 1946.
  5. Mathias, A.R.D. A Term of Length 4,523,659,424,929. Synthese 2002, 133, 75–86.
  6. Tarski, A. Introduction to Logic and to the Methodology of Deductive Sciences; Oxford University Press: New York, 1941.
  7. De Giorgi, E. Sulla differenziabilità e l’analiticità delle estremali degli integrali multipli regolari. Memorie della Accademia delle Scienze di Torino, Classe di Scienze Fisiche, Matematiche e Naturali 1957, 3, 25–43.
  8. Nash, J. Continuity of solutions of parabolic and elliptic equations. American Journal of Mathematics 1958, 80, 931–954.
  9. Bishop, E. Foundations of Constructive Analysis; McGraw-Hill: New York, 1967.
  10. Pour-El, M.B.; Richards, J.I. Computability in Analysis and Physics; Perspectives in Mathematical Logic; Springer-Verlag: Berlin, Heidelberg, 1989.
  11. Banach, S.; Tarski, A. Sur la décomposition des ensembles de points en parties respectivement congruentes. Fundamenta Mathematicae 1924, 6, 244–277.
  12. Wagon, S. The Banach–Tarski Paradox; Cambridge University Press: Cambridge, 1985.
  13. Montague, R. Deterministic Theories and Interpretations; 1965.
  14. Feferman, S. What rests on what? The proof-theoretic analysis of mathematics. In Proceedings of the 15th International Congress of Logic, Methodology and Philosophy of Science, 1998.
  15. Voevodsky, V. The Origins and Motivations of Univalent Foundations. The Institute Letter, 2014.
  16. Mac Lane, S. Categories for the Working Mathematician; Springer, 1998.
  17. Gödel, K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik 1931, 38, 173–198.
  18. Turing, A.M. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2 1936, 42, 230–265.
  19. Tarski, A.; Mostowski, A.; Robinson, R.M. Undecidable Theories; North-Holland, 1953.
¹ The Banach–Tarski theorem depends crucially on the Axiom of Choice (or equivalent forms producing non-measurable sets); see [11] and [12].