Preprint
Review

This version is not peer-reviewed.

The Symbiotic Mandate: On the Urgency of a Mutually Uplifting Synergy Between Artificial Intelligence and Sustainability

Submitted: 01 May 2026
Posted: 04 May 2026


Abstract
The current trajectory of Artificial Intelligence (AI) development represents a critical phase transition from a tenable academic pursuit to an untenable industrial behemoth, and ultimately toward an unsustainable environmental burden. In this review, we redefine waste management in sensu lato, encompassing digital redundancy, cognitive underutilization, and the physical e-waste generated by rapid hardware obsolescence. We argue that the current AI paradigm suffers from a ‘Curse of Dimensionality’ not only in its feature space but in its ecological footprint, necessitating a return to Algorithmic Parsimony—rooted in the Minimum Description Length principle [1] and William of Ockham’s razor—as a fundamental pillar of international sustainability standards. By analyzing the interplay between the outcry over blatantly unsustainable data centers [2–5] and emerging green AI frameworks [6,7], this paper provides a roadmap for a mutually uplifting synergy. We further introduce The Symbiotic Policy Covenant—a concrete policy intervention framework comprising five pillars: Algorithmic Parsimony Standards, Expanded Waste Taxonomy, AI Equity Safeguards, Paradigm Transition Investment, and International Regulatory Alignment. We conclude that true sustainability in the age of AI requires a holistic adherence to global standards [8,9] that transcend mere climate concerns, fostering a safer, more equitable, and durable integration of machine intelligence with ecological stewardship.

1. Introduction: The Crisis of the Tenable and the Thermodynamics of Intelligence

For decades, the pursuit of machine intelligence was governed by the principle of parsimony [1,10]; algorithms were designed for efficiency [11], and the “intelligence” of a system was measured by its ability to extract maximal insight from minimal data and compute. This was the era of the Tenable. The current trajectory of Artificial Intelligence (AI) development represents a singular moment in the history of human technology—a transition best characterized as a phase shift from the Tenable to the Untenable, and rapidly toward the Unsustainable.
However, the advent of the “Brute Force” paradigm has decoupled intelligence from efficiency. We have entered the era of the Untenable, where the marginal gains in predictive accuracy ϵ are achieved only through an exponential expansion of parameters p and a corresponding surge in energy consumption E [2,3]. In this regime, the “Curse of Dimensionality” [12] is no longer merely a statistical challenge involving the sparsity of high-dimensional space; it has become an ecological reality [6]. The hyperscale data centers powering modern Large Language Models (LLMs) have become the “New Smokestacks” of the 21st century, demanding vast quantities of water for cooling [4,5] and stressing local power grids beyond their design limits [13].
If this trajectory remains uncorrected, we face an Unsustainable future where the digital waste generated by our pursuit of intelligence begins to degrade the very physical substrate—our planet—that supports the life and consciousness we seek to emulate [14,15].

1.1. Waste Management in Sensu Lato

Central to our thesis is the redefinition of waste management in sensu lato (in the broadest sense). Sustainability in the context of AI must transcend the typical, narrow focus on carbon offsets and climate hardware [16]. We propose a holistic taxonomy of waste that includes:
  • Digital Waste: The entropy of redundant, low-utility computation cycles [6].
  • Cognitive Waste: The misallocation of human and machine potential toward extractive or trivial tasks [17].
  • Ecological Waste: The physical depletion of rare-earth minerals and the toxic lifecycle of specialized AI hardware [18,19].

1.2. The Mutually Uplifting Synergy

Despite these dire trends, the relationship between AI and our planet need not be parasitic. We argue for a Mutually Uplifting Synergy—a reciprocal framework where AI tools are leveraged as the “Planetary Operating System” for sustainability [14], while the urgency of ecological survival forces a return to Algorithmic Parsimony [10,20]. By adhering to international standards such as ISO/IEC 42001 [8] and the emerging mandates of 2026 [9,21], we can pivot from a model of extraction to one of stewardship.
This review aims to map this transition, providing a statistical and philosophical roadmap for an AI that is not only “intelligent” but fundamentally durable and safe for the global commons.

1.3. Contributions to Sustainability: The Symbiotic Covenant in Action

This work advances the science, philosophy, and governance of sustainability along five distinct and mutually reinforcing dimensions.
First, a definitional contribution. We expand the very meaning of sustainability beyond its conventional ecological boundaries by introducing a formal contrapositive criterion applicable to any system—biological, computational, or social. A system is unsustainable if and only if its structural and functional requirements depend upon sources that are insufficient, perishable, or disproportionately scarce relative to its intended operational timescale. This definition, which we term sustainability in sensu lato, provides a unifying logical framework that accommodates the ecological, digital, and cognitive dimensions of AI’s footprint simultaneously—dimensions that existing frameworks treat in isolation [15,19].
Second, a diagnostic contribution. We identify and visualize the Intelligence–Cost Divergence: the growing chasm between logarithmic intelligence gain and exponential resource cost in the current AI paradigm [2,3,5]. By naming and quantifying this divergence—and by connecting it to the historical trajectory from the ENIAC to the transistor [22,23]—we provide the sustainability science community with a precise, historically grounded vocabulary for what is otherwise experienced as a diffuse and overwhelming crisis.
Third, a mathematical contribution. We rehabilitate the Principle of Parsimony—rooted in Rissanen’s Minimum Description Length [1,10] and William of Ockham’s razor—as the foundational criterion for sustainable AI. We introduce the Intelligence-per-Joule (I/J) index and the Sustainability Index S, formally defined as:
S = Information Gain / (Entropy Production + Resource Cost)
These metrics provide sustainability science with actionable, mathematically grounded instruments for evaluating, comparing, and certifying AI systems—tools that go beyond narrative assessment to enable rigorous, quantitative stewardship [6,7].
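As a concrete illustration, the two metrics can be sketched in a few lines of Python. All numerical values below are hypothetical, invented purely to show how the indices rank a parsimonious system above a brute-force one; they are not measurements from any real system.

```python
# Illustrative sketch (not from the cited works): computing the proposed
# Intelligence-per-Joule (I/J) index and Sustainability Index S for two
# hypothetical AI systems. All figures below are invented for illustration.

def intelligence_per_joule(information_gain_bits: float, energy_joules: float) -> float:
    """I/J index: useful information produced per joule of energy consumed."""
    return information_gain_bits / energy_joules

def sustainability_index(information_gain: float,
                         entropy_production: float,
                         resource_cost: float) -> float:
    """S = Information Gain / (Entropy Production + Resource Cost)."""
    return information_gain / (entropy_production + resource_cost)

# Hypothetical comparison: a brute-force model versus a parsimonious one.
brute = sustainability_index(information_gain=100.0,
                             entropy_production=900.0, resource_cost=600.0)
lean = sustainability_index(information_gain=90.0,
                            entropy_production=60.0, resource_cost=40.0)

# The parsimonious system scores higher despite a lower raw information gain.
assert lean > brute
```

The ranking illustrates the central point of the index: S rewards efficiency relative to what is achieved, not raw capability.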
Fourth, a policy contribution. We translate diagnosis into prescription through the Symbiotic Policy Covenant: a three-pillar framework encompassing Algorithmic Parsimony Mandates, Expanded Waste Taxonomy in sensu lato, and AI Equity Safeguards, operationalized through a proposed ISO/IEC 42001-Plus standard addendum [8,9]. This framework is, to the best of our knowledge, the first to integrate mathematical rigor, ecological accounting, and social equity into a single, actionable governance instrument for sustainable AI.
Fifth, and most fundamentally, a visionary contribution. We argue—and demonstrate—that the relationship between Artificial Intelligence and Sustainability is not adversarial but symbiotic: AI is simultaneously the greatest present threat to sustainability and its most powerful potential instrument of salvation [14,16]. The resolution of this paradox is not a choice between intelligence and ecology, but the recognition that genuine intelligence—parsimonious, elegant, and efficient—is itself the most sustainable form of computation [10,20]. In making this argument, this work invites the sustainability community to reclaim AI as its own: not as a problem to be managed, but as a paradigm to be shaped, guided, and ultimately redeemed.

2. Waste Management in Sensu Lato: A Taxonomic Expansion

In the traditional discourse of sustainability, “waste” is often relegated to the physical domain—silicon scrap, carbon emissions, and heat dissipation [19]. However, to achieve a mutually uplifting synergy, we must adopt a perspective of waste management in sensu lato. We define sustainability through a contraposed physiological argument: a system or process is inherently unsustainable if its anatomy (structure) and physiology (functioning) depend on sources that are insufficient, easily perishable, or disproportionately volatile. Formally: a system is unsustainable if and only if its structural and functional requirements depend upon sources that are insufficient, perishable, or disproportionately scarce relative to the timescale of the system’s intended operation. This definition unifies ecological, computational, and social dimensions of sustainability under a single logical criterion.

2.1. The Physiology of Failure: The Runner and the Drunkard

The “metabolic” cost of intelligence can be illustrated through two grounding analogies that highlight the transition from the Tenable to the Unsustainable:
  • The Exhausted Runner: If a runner starts at a pace that exceeds their reserve of energy and strength, they will inevitably collapse or quit the race owing to the premature exhaustion of resources. The current scaling paradigm for LLMs exhibits precisely this exhaustion profile [2]: training a single large model can emit as much carbon dioxide as the lifetime emissions of five automobiles [3].
  • The Algorithmic Drunkard: If an algorithm is overly greedy for inputs that are increasingly rare and requires hardware that consumes resources with no regard for the rigors of computational complexity theory [24], it becomes a liability to itself and to society as a whole. The global data center industry already rivals mid-sized nations in electricity consumption [13,25].

2.2. The ENIAC Moment: A Historical Parallel of Infancy

The current frenzy surrounding Brute Force AI mirrors the first days of computing. The legendary ENIAC, completed in 1945, was a revolution, yet it was “dirty,” bulky, consumed 150 kilowatts of power, and required an entire building to function [22]. Its computational successor—the transistor—emerged precisely because ENIAC’s unsustainability made the status quo untenable.
  • The Era of the Elite: It was impossible for a household to own an ENIAC; only governments and the wealthiest corporations could support its gluttony. The concentration of AI compute today in a handful of hyperscale providers echoes this structural inequity [17,18].
  • The Transition to Maturity: The realization of this unsustainability drove the scientific community toward the transistor and the microchip [23]. Today, the smartphone in a human palm is many orders of magnitude more powerful than the ENIAC [22].
  • The Prophecy of Efficiency: We assert that current AI is in its “ENIAC phase”—a loud, resource-intensive infancy [6]. The transition from “walking on foot” from Rochester to Buffalo, to driving, and ultimately to the helicopter flight of algorithmic elegance, is the only path toward a truly sustainable intelligence. Research on parsimony and learning [26,27] already signals the first stirrings of this paradigm shift.
The ENIAC parallel also carries an urgent equity warning. The ENIAC’s prohibitive cost concentrated a civilization-altering technology in the hands of the few. If the present trajectory of AI development is left uncorrected, we risk reproducing this inequity at unprecedented scale [17]. A handful of well-resourced entities will own the infrastructure of intelligence itself, ushering in what can only be described as a new colonial era—one whose currency is not land, but computation [18].

2.3. Historical Transitions Toward Algorithmic Parsimony

History demonstrates that whenever a system becomes a liability to itself, the friction of its own inefficiency creates the heat necessary to forge its successor [23].
Table 1. Historical Transitions from Resource-Gluttony to Sustainable Efficiency.

| Era / Entity | The “Greedy” Resource | The Successor | The Paradigm Shift |
|---|---|---|---|
| The ENIAC [22] | Vacuum tubes / bulk power | The transistor | From physical bulk to solid-state elegance [23]. |
| The Concorde | Massive fuel (supersonic) | Efficient jets | Prioritizing fuel-per-passenger over raw speed. |
| Whale oil | Perishable bio-resources | Kerosene / electricity | Transitioning to abundant, scalable energy. |
| AltaVista | Computational bloat | Google PageRank [28] | Using inherent web structure as “fuel.” |
| Brute Force AI [2] | O(p) parameters | Parsimonious AI [20] | From memorization to genuine understanding [10]. |

2.4. The Virtue of Algorithmic Parsimony

“Tout ce qui se conçoit bien s’énonce clairement, et les mots pour le dire arrivent aisément.” (“Whatever is well conceived is clearly expressed, and the words to say it come easily.”) — Nicolas Boileau, L’Art poétique (1674)
The wisdom of William of Ockham—entia non sunt multiplicanda praeter necessitatem (entities must not be multiplied beyond necessity)—is not a medieval curiosity; it is the beating heart of statistical machine learning. This intuition was formalized by Rissanen’s Minimum Description Length (MDL) principle [1]: the best model is the one that yields the shortest combined description of the model and the data. MDL unifies Ockham’s Razor with information theory [29] and is the mathematical ancestor of the Akaike Information Criterion [30] and the Bayesian Information Criterion [31]—tools that every practicing statistician and machine learning researcher employs daily.
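To make this lineage concrete, here is a minimal sketch of how AIC and BIC operationalize Ockham’s Razor in everyday model selection. The data-generating quadratic, noise level, and candidate degrees are illustrative assumptions, not drawn from the cited works.

```python
# A minimal sketch of parsimony-driven model selection via AIC/BIC,
# the information criteria descended from MDL. Synthetic data: the true
# model is a quadratic; higher-degree polynomials are needless complexity.
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.2, n)

def aic_bic(degree):
    """Fit a polynomial of the given degree; return (AIC, BIC)."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # number of fitted parameters
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

scores = {d: aic_bic(d) for d in range(1, 8)}
best_bic_degree = min(scores, key=lambda d: scores[d][1])
# BIC's heavier penalty (k log n) steers the choice toward a low-degree,
# parsimonious model rather than the degree-7 fit with the smallest RSS.
```

Every extra parameter must buy a shortening of the data's description that outweighs its own description length: that is Ockham's Razor stated in bits.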
Designing and building systems that are sustainable is a demonstration of intelligence, wisdom, and loving-kindness. Brute-force systems—the current “New Smokestacks” [6]—are the hallmark of a momentary departure from this ideal. The Conference on Parsimony and Learning [26] and recent work on knowledge-aware parsimony [27] already demonstrate that the scientific community is awakening to this imperative.

3. The Intelligence–Cost Divergence: Visualizing the Abyss

The “Curse of Dimensionality” [12] has transitioned from a statistical challenge to an ecological reality [6,7]. Figure 1 visualizes the divergence between intelligence gain and resource cost across three eras of AI development. The shaded region between the two curves represents the growing domain of Waste in Sensu Lato: the gap between what is gained and what is spent, a gap whose area is now measurable in megatons of carbon [3,32] and billions of liters of water [4,5].
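The divergence can also be sketched numerically. Assuming, purely for illustration, a logarithmic form for intelligence gain and a power-law form for resource cost (the functional forms and constants are our assumptions, not empirical estimates):

```python
# Numeric sketch of the Intelligence-Cost Divergence: logarithmic gain
# versus super-linear cost as parameter count p grows. The functional
# forms and constants below are illustrative assumptions only.
import math

def intelligence_gain(p):
    """Diminishing returns: gain grows like the logarithm of p."""
    return math.log10(p)

def resource_cost(p):
    """Compounding cost: grows as a power of p (normalized at p = 1e6)."""
    return (p / 1e6) ** 1.5

# The gap between cost and gain is the domain of Waste in Sensu Lato.
waste_gap = [resource_cost(p) - intelligence_gain(p)
             for p in (1e6, 1e8, 1e10, 1e12)]

# Under these assumed forms the gap widens monotonically across the eras.
assert all(b > a for a, b in zip(waste_gap, waste_gap[1:]))
```

At small scales the gap is negative (gain exceeds cost, the Tenable era); beyond a crossover it grows without bound, which is the qualitative shape the figure depicts.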

4. The Interplay: Regulatory Mandates and the Infrastructure Crisis

The year 2026 marks a watershed moment in the intersection of AI and sustainability [9,21]. We are witnessing a collision between the unbridled growth of generative AI and the emergent “Hard Boundaries” of energy grids and international law.

4.1. The Outcry of Data Centers: The New ‘Smokestacks’ of Intelligence

The physical manifestation of AI’s “Curse of Dimensionality” is most visible in the global explosion of hyperscale data centers [13,25].
  • Grid Instability and Energy Rates: Global data center electricity consumption rose to approximately 460 terawatt-hours in 2022—equivalent to the national consumption of France [13]. In regions such as Northern Virginia and Ireland, data centers now account for more than 20% of local electricity demand, driving residential energy rate increases [34].
  • The Hydrological Footprint: Evaporative cooling for high-density GPU clusters has become an acute environmental contention point. Large AI-focused data centers can consume up to 5 million gallons of water per day [35]. A 2025 study published in Nature Sustainability projected the annual water footprint of U.S. AI server deployment at between 731 and 1,125 million cubic meters between 2024 and 2030 [5].
  • The Carbon Burden: Training a single large NLP model can emit over 626,000 pounds of CO2-equivalent—approximately the lifetime emissions of five automobiles [2]. Training GPT-3 consumed an estimated 1,287 megawatt-hours of electricity and generated 552 metric tons of carbon dioxide equivalent [3].

4.2. Adherence to 2026 International Standards

In response to these untenable trends, the international community has moved from voluntary “Green AI” guidelines [6] to mandatory compliance frameworks.
1. ISO/IEC 42001 [8] (Artificial Intelligence Management System): This standard has become the global benchmark for responsible AI governance. It mandates that organizations perform rigorous “System Impact Assessments” that include explicit environmental cost-benefit analyses before deployment.
2. The EU AI Act (2024/2026 Implementation) [9]: As of 2026, providers of General-Purpose AI (GPAI) models are legally required to provide technical documentation on the energy consumption of their models. High-impact models that exceed specific compute thresholds are classified under “Systemic Risk,” necessitating stringent efficiency audits [36].
3. ITU-T L.1801 [21]: The published guidelines provide a holistic life-cycle assessment (LCA) framework, forcing developers to account for the environmental impact of AI from raw material extraction for chips to “End-of-Life” hardware disposal [19].

4.3. From Compliance to Synergy

The “Outcry” serves as a necessary corrective force [14]. It compels a shift from the Untenable state of “Black Box” gluttony to the Sustainable state of transparent, standardized efficiency [16]. Adherence to these standards is not merely a legal hurdle but the primary mechanism for achieving the Mutually Uplifting Synergy proposed in this work.

5. The Synergy: Toward a Mutually Uplifting AI–Sustainability Framework

The path forward is not the abandonment of Artificial Intelligence, but its radical realignment [14]. Having identified the transition from the tenable to the unsustainable, we now propose the Mutually Uplifting Synergy—a framework where AI serves as the primary engine for sustainability [16], while the constraints of our planet force a return to mathematical elegance and algorithmic parsimony [10,20].

5.1. AI as the ‘Planetary Operating System’

When deployed within the boundaries of international standards, AI tools provide a multi-dimensional “up-lift” to ecological stewardship [14]:
  • Precision Environmental Monitoring: Using manifold-based discriminant analysis and high-dimensional topology [11], AI can now identify subtle shifts in biodiversity and ecosystem health from satellite and sensor data long before they become catastrophic [16].
  • The Methane Sentinel: AI-coupled hyperspectral imaging is achieving unprecedented rates of global methane leak identification, allowing real-time mitigation [14].
  • Optimizing the Circular Economy: Advanced AI-driven robotics are achieving high accuracy in waste sorting in sensu lato, transforming traditional landfills into “urban mines” for rare-earth minerals and high-value polymers [16].
  • Precision Agriculture: Recent systematic reviews document AI-driven gains including 25% yield increases, 28% fertilizer savings, and 35% reductions in nitrogen runoff [37].

5.2. The Virtue of Algorithmic Parsimony

The synergy is not merely about using AI for “green” tasks; it is about making the AI itself “green” [6]. We argue that the most sustainable model is the one that achieves the highest “Intelligence-per-Joule” (I/J). Formally:
Sustainability Index: S = Information Gain / (Entropy Production + Resource Cost)
This index echoes the MDL principle [1]: the optimal model is not the most powerful but the most efficient relative to what it achieves. By moving away from “brute-force” scaling and toward biologically-inspired, sparse, and quantized architectures [20,38], we reduce the “Curse of Dimensionality” [12] in our ecological footprint. Techniques such as pruning, quantization, and knowledge distillation [38] already demonstrate that models can be compressed by up to 90% with minimal loss of accuracy [3]. This return to parsimony is not a regression; it is a maturation of the field [26].
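A minimal sketch of one such technique, magnitude pruning, is given below. The random weight matrix is purely illustrative; the 90% sparsity level mirrors the compression figure cited above.

```python
# A minimal sketch of magnitude pruning, one of the compression
# techniques cited in the text [38]: zero out the ~90% smallest-magnitude
# weights of a layer. Matrix size and sparsity level are illustrative.
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(size=(256, 256))  # a stand-in dense layer

sparsity = 0.90  # prune 90% of weights, keep the largest 10% by magnitude
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept_fraction = np.count_nonzero(pruned) / weights.size
# Roughly 10% of the weights survive; in trained networks these few
# large-magnitude weights carry most of the layer's useful signal,
# which is the empirical basis of the parsimony argument.
```

In practice pruning is followed by fine-tuning to recover accuracy; the sketch shows only the selection step that separates the “golden” signal from the “dross.”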

5.3. Safe and Sustainable: A Dual Mandate

A “Safe Planet” requires AI that is predictable, interpretable, and durable [15]. The synergy ensures that:
1. AI Protects the Grid: Agentic AI systems can balance volatile renewable energy loads (solar and wind) across smart grids in real-time, mitigating the very blackouts that unsustainable data centers once threatened [16].
2. AI Preserves Resources: From “Digital Twins” of entire watersheds to AI-optimized crop yields [37], we are replacing physical consumption with digital optimization.

6. The Alchemy of AI: Transmuting Leaden Brute Force into Golden Parsimony

Drawing inspiration from the timeless wisdom of alchemy and the history of science, we recognize that the solution to the present-day energy and sustainability woes of Artificial Intelligence is already contained within its current processes. Ideas, like mathematical theorems, do not age [39]. The giants of statistics and information theory—Fisher [40], Kolmogorov, Rissanen [1]—offer tools of enduring power that the present era of brute-force AI has temporarily set aside. It is a form of algorithmic ressourcement—a return to foundational sources, not out of conservatism, but to drink again from the original spring with renewed purpose.
We are currently in what may be termed the “Leaden Era” of AI [6], characterized by heavy, dense, and energy-expensive architectures that rely on sheer mass rather than refined form [2]. However, the “Gold” of high-efficiency, sustainable intelligence is already present in the mathematical foundations of the field: in the sparse representations of compressed sensing [20], in the model selection guarantees of MDL [10], and in the biologically-motivated architectures that activate only a fraction of their parameters per inference [3].
By applying the catalysts of the Principle of Parsimony [1] and Mathematical Rigor [40], we perform the ultimate transition: stripping away the “dross” of billions of redundant parameters to reveal the “Golden” signal underneath. This is the hallmark of true intelligence—to find the most elegant path already written in the natural laws of the universe.

7. The Symbiotic Policy Covenant: A Framework for Actionable Intervention

7.1. Preamble: From Diagnosis to Prescription

The foregoing analysis establishes a clear diagnosis: the current trajectory of AI is ecologically, socially, and intellectually unsustainable [3,5,6,17]. Diagnosis, however eloquent, is insufficient. The present section translates that diagnosis into a concrete, actionable policy intervention—which we term The Symbiotic Policy Covenant. Figure 2 presents the architecture of this intervention as a structured framework moving from diagnosis through pillars to actionable levers.

7.2. Pillar I: The Algorithmic Parsimony Mandate

The first and most foundational pillar flows directly from the Principle of Parsimony attributed to William of Ockham and formalized in Rissanen’s Minimum Description Length [1]. Statistical inference has always known that the best model is not the most complex model; it is the simplest model consistent with the data—a theorem enshrined in MDL [10], the Akaike Information Criterion [30], and the Bayesian Information Criterion [31]. The tragedy of the present moment is that AI practice has drifted so far from its own theoretical foundations [6].
The Parsimony Mandate proposes the following concrete policy interventions:
1. Parsimony Impact Statements (PIS): Every organization seeking regulatory approval or public funding for an AI system above a defined compute threshold must file a documented justification for why the proposed parameter count is necessary for the stated objective, and what alternatives at lower complexity were considered and rejected [9].
2. Complexity Ceilings in Public Procurement: Public sector agencies procuring AI solutions must impose complexity ceilings expressed in terms of parameter counts, FLOPs per inference, and energy-per-decision ratios [7]. This mirrors existing energy efficiency standards in procurement for physical goods.
3. An Intelligence-per-Joule (I/J) Index: Funding bodies, journals, and standards organizations should adopt the I/J index as a normalized metric for evaluating AI systems [6]. Publication of this index alongside performance benchmarks should be mandatory for any system claiming to be “state-of-the-art.”
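To illustrate how such procurement ceilings might be checked mechanically, the sketch below validates a candidate system specification against a set of limits. Every field name and ceiling value is invented for illustration and is not part of any existing standard.

```python
# Hypothetical sketch of a Pillar I procurement check: validate a
# candidate AI system against complexity ceilings on parameter count,
# FLOPs per inference, and energy per decision. All names and ceiling
# values below are invented assumptions, not drawn from any standard.
from dataclasses import dataclass

@dataclass
class SystemSpec:
    name: str
    parameters: int            # total trainable parameters
    flops_per_inference: float # floating-point operations per inference
    joules_per_decision: float # energy consumed per decision

CEILINGS = {
    "parameters": 1_000_000_000,     # illustrative 1B-parameter ceiling
    "flops_per_inference": 1e12,
    "joules_per_decision": 50.0,
}

def violations(spec: SystemSpec) -> list:
    """Return the names of the ceilings the candidate system exceeds."""
    out = []
    for field, ceiling in CEILINGS.items():
        if getattr(spec, field) > ceiling:
            out.append(field)
    return out

lean = SystemSpec("lean-500M", 500_000_000, 1e11, 12.0)
brute = SystemSpec("brute-1T", 1_000_000_000_000, 1e15, 900.0)

assert violations(lean) == []   # passes every ceiling
assert len(violations(brute)) == 3  # exceeds all three ceilings
```

The design choice here mirrors energy-efficiency labeling for appliances: the check is a simple threshold audit, so it can be run by a procurement office without access to the model itself.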

7.3. Pillar II: Waste Taxonomy in Sensu Lato as Policy Language

The holistic waste taxonomy introduced in Section 2 must graduate from academic discourse into formal policy language [19]. The three categories each demand distinct regulatory treatment:
  • Digital Waste should be subject to computational audit requirements, analogous to financial audits, requiring organizations to disclose the ratio of useful computation to total computation performed [6,7].
  • Cognitive Waste should be addressed through national AI workforce strategies that ensure human intelligence is directed toward high-value, creative, and ethical AI development rather than toward the maintenance of bloated infrastructure [17].
  • Ecological Waste is partially addressed by existing frameworks [9,21], but must be extended to cover the full hardware life-cycle—from rare-earth mining to disposal—using the contrapositive sustainability criterion established in Section 2 [18,19].

7.4. Pillar III: The AI Equity Safeguard

The ENIAC’s inequity was resolved, in part, by the democratizing force of the transistor and the personal computer [23]. We cannot afford to wait for the same accident of history. Research has shown that algorithmic progress in AI has disproportionately benefited larger model builders [41], reinforcing structural inequity. The Equity Safeguard proposes:
1. Anti-Monopoly Provisions: International AI governance frameworks must include explicit provisions preventing any single entity or small consortium from controlling the critical infrastructure of foundation model training and inference [18].
2. Open Efficiency Standards: The most parsimonious and efficient AI architectures should be required, as a condition of any public subsidy or regulatory approval, to be published as open standards—ensuring benefits are available to the full global research community [17].
3. Distributed AI Infrastructure Investment: National and multilateral investment in AI infrastructure should be directed toward distributed, energy-efficient computing facilities rather than the consolidation of hyperscale data centers in a handful of jurisdictions [5].

7.5. The Intervention: A Proposed ISO/IEC 42001-Plus Addendum

The three pillars converge in a single, actionable policy instrument: a proposed addendum to the ISO/IEC 42001 standard [8], designated ISO/IEC 42001-Plus. Its core requirements are:
1. Mandatory Parsimony Impact Statements for systems above defined compute thresholds [9].
2. Full Waste Accounting across all three categories of the in sensu lato taxonomy [19].
3. Equity Impact Assessments evaluating the distributional consequences of AI deployment [17].
4. Complexity Ceilings as conditions of certification [6].
5. I/J Index Disclosure as a mandatory element of system documentation [7].

7.6. The Four Action Levers

The ISO/IEC 42001-Plus standard is not self-executing. Its adoption and enforcement depend upon four coordinated action levers:
  • Research [26,27]: The scientific community must develop the measurement infrastructure necessary for the Covenant—standardized parsimony benchmarks, validated I/J indices, and open audit methodologies. This is a call to the mathematics and statistics communities to reassert their foundational role in AI [10,11].
  • Regulation [9,36]: National regulatory bodies must translate the principles of the Covenant into enforceable law, beginning with public procurement rules and extending to all AI systems deployed in consequential domains (healthcare, justice, education, infrastructure).
  • Education [24]: The curriculum of every computer science and AI program must restore the Principle of Parsimony and the rigors of computational complexity theory to their rightful centrality. The generation of engineers who will build the next era of AI must be equipped to recognize and resist the temptations of brute force.
  • International Coordination [14,16]: The Covenant must be adopted as a contribution to the United Nations Sustainable Development Goals (SDGs), and its principles must be advocated at the Conference of the Parties (COP), at the OECD, and in the ITU. AI sustainability is planetary in scope; its governance must be correspondingly universal.

8. Conclusions: The Symbiotic Mandate

Artificial Intelligence will only deliver on its lofty promises by striving to be sustainable in every possible sense of the word—and sustainability will only help save our planet if it fully embraces all the most beneficial and empowering aspects of emerging artificial intelligence [14,16]. This is not merely a technical alignment, but a profound synergy and symbiosis.
In this review, we have traced the evolution of AI from a tenable academic pursuit [1,11] to an unsustainable industrial trajectory [2,3,5]. We have proposed a holistic taxonomy of waste in sensu lato, identified the three-era Intelligence–Cost Divergence from Parsimony to Gluttony to the Abyss, and articulated the Mutually Uplifting Synergy as the corrective framework. We have then translated this philosophical and mathematical diagnosis into The Symbiotic Policy Covenant: a three-pillar policy intervention framework culminating in a proposed ISO/IEC 42001-Plus standard addendum [8] and four coordinated action levers.
The transition from the Tenable to the Unsustainable is not an inevitable law of technological evolution. It is a consequence of a brute-force paradigm that has momentarily lost its way [6]. Ideas do not age [39]: the mathematical parsimony of Rissanen [1], the sparse representations of Donoho [20], and the statistical elegance of Fisher [40] are as alive and potent today as when they were first conceived.
By embracing Algorithmic Parsimony and adhering to the rigorous new international standards of this era [8,9], we can pivot from resource exhaustion to planetary stewardship. It is our duty, as scientists and citizens of this Common Home, to ensure that the light of Artificial Intelligence shines without burning the house down.
The synergy is the solution. The symbiosis is the survival.

Author Contributions

The author (E.F.) conceptualized the study, performed the taxonomic analysis of waste in sensu lato, developed the Symbiotic Policy Covenant framework, and drafted the manuscript in its entirety.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author expresses his deepest gratitude to the Editorial Board of MDPI Sustainability for the invitation to share this vision. A special tribute is due to the global community of researchers striving for a more parsimonious, equitable, and ethical Artificial Intelligence—and to all those who believe that true intelligence begins with wisdom, not with scale.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Rissanen, J. Modeling by shortest data description. Automatica 1978, 14, 465–471. [Google Scholar] [CrossRef]
  2. Strubell, E.; Ganesh, A.; McCallum, A. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019; pp. 3645–3650. [Google Scholar] [CrossRef]
  3. Patterson, D.; Gonzalez, J.; Le, Q.; Liang, C.; Munguia, L.M.; Rothchild, D.; So, D.R.; Texier, M.; Dean, J. Carbon emissions and large neural network training. arXiv 2021, arXiv:2104.10350. [Google Scholar] [CrossRef]
  4. Mytton, D. Data centre water consumption. npj Clean Water 2021, 4, 11. [Google Scholar] [CrossRef]
  5. Xiao, T.; You, F. Environmental impact and net-zero pathways for sustainable artificial intelligence servers in the USA. Nat. Sustain. 2025. Published 10 November 2025. [CrossRef]
  6. Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Commun. ACM 2020, 63, 54–63. [Google Scholar] [CrossRef]
  7. Luccioni, A.S.; Viguier, S.; Ligozat, A.L. Estimating the carbon footprint of BLOOM, a 176B parameter language model. J. Mach. Learn. Res. 2023, 24, 253. [Google Scholar]
  8. ISO/IEC. ISO/IEC 42001:2023 — Artificial Intelligence Management System (AIMS); International Organization for Standardization / International Electrotechnical Commission. Technical report. 2023.
  9. European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Technical report. Official Journal of the European Union, 2024. [Google Scholar]
  10. Rissanen, J. Stochastic Complexity in Statistical Inquiry; World Scientific: Singapore, 1989. [Google Scholar]
  11. Fokoue, E. Model Selection for Optimal Prediction in Statistical Machine Learning. Not. Am. Math. Soc. 2020, 67, 155–165. [Google Scholar] [CrossRef]
  12. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, 1957. [Google Scholar]
  13. International Energy Agency. Energy and AI; Technical report; IEA, 2025.
  14. Rolnick, D.; Donti, P.L.; Kaack, L.H.; Kochanski, K.; Lacoste, A.; Sankaran, K.; et al. Tackling climate change with machine learning. ACM Comput. Surv. 2022, 55, 1–96. Originally posted as arXiv:1906.05433. [Google Scholar] [CrossRef]
  15. van Wynsberghe, A. Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
  16. Kaack, L.H.; Donti, P.L.; Strubell, E.; Kamiya, G.; Creutzig, F.; Rolnick, D. Aligning artificial intelligence with climate change mitigation. Nat. Clim. Change 2022, 12, 518–527. [Google Scholar] [CrossRef]
  17. Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021; pp. 610–623. [Google Scholar] [CrossRef]
  18. Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, 2021. [Google Scholar]
  19. Ligozat, A.L.; Lefèvre, J.; Bugeau, A.; Combaz, J. Unraveling the hidden environmental impacts of AI solutions for environment life cycle assessment of AI solutions. Sustainability 2022, 14, 5172. [Google Scholar] [CrossRef]
  20. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  21. International Telecommunication Union. ITU-T L.1801: Methodology for assessing the environmental impact of artificial intelligence; Technical report; ITU, 2026. [Google Scholar]
  22. Haigh, T.; Priestley, M.; Rope, C. ENIAC in Action: Making and Remaking the Modern Computer; MIT Press: Cambridge, MA, 2016. [Google Scholar]
  23. Riordan, M.; Hoddeson, L. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age; W.W. Norton: New York, 1997. [Google Scholar]
  24. Sipser, M. Introduction to the Theory of Computation, 3rd ed.; Cengage Learning: Stamford, CT, 2013. [Google Scholar]
  25. Masanet, E.; Shehabi, A.; Lei, N.; Smith, S.; Koomey, J. Recalibrating global data center energy-use estimates. Science 2020, 367, 984–986. [Google Scholar] [CrossRef] [PubMed]
  26. Conference on Parsimony and Learning. CPAL 2026: Third Conference on Parsimony and Learning; ELLIS Institute Tübingen, in conjunction with Max Planck Institute for Intelligent Systems, 2026.
  27. Wang, Y.; et al. Beyond scaleup: Knowledge-aware parsimony learning from deep networks. arXiv 2024, arXiv:2407.00478. [Google Scholar]
  28. Brin, S.; Page, L. The anatomy of a large-scale hypertextual Web search engine. Comput. Netw. ISDN Syst. 1998, 30, 107–117. [Google Scholar] [CrossRef]
  29. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: Hoboken, NJ, 2006. [Google Scholar]
  30. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–723. [Google Scholar] [CrossRef]
  31. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  32. de Vries-Gao, A. The carbon and water footprints of data centers and what this could mean for artificial intelligence. Cell Press (ScienceDirect), 2025. Published December 2025. [Google Scholar] [CrossRef]
  33. Kaplan, J.; McCandlish, S.; Henighan, T.; Brown, T.B.; Chess, B.; Child, R.; Gray, S.; Radford, A.; Wu, J.; Amodei, D. Scaling laws for neural language models. arXiv 2020, arXiv:2001.08361. [Google Scholar] [CrossRef]
  34. Lincoln Institute of Land Policy. Data Drain: The Land and Water Impacts of the AI Boom. Land Lines Magazine, 2025. [Google Scholar]
  35. Environmental and Energy Study Institute. Data Centers and Water Consumption. EESI Article, 2025. [Google Scholar]
  36. Ebert, K.; Alder, N.; Herbrich, R.; Hacker, P. AI, climate, and regulation: From data centers to the AI act. arXiv 2024, arXiv:2410.06681. [Google Scholar] [CrossRef]
  37. Okechukwu Paul-Chima, U. Implementing artificial intelligence and machine learning algorithms for optimized crop management: a systematic review on data-driven approach to enhancing resource use and agricultural sustainability. Cogent Food Agric. 2025. [Google Scholar] [CrossRef]
  38. Dettmers, T.; Lewis, M.; Belkada, Y.; Zettlemoyer, L. LLM.int8(): 8-bit matrix multiplication for transformers at scale. Adv. Neural Inf. Process. Syst. 2022, 35. [Google Scholar]
  39. Hardy, G.H. A Mathematician’s Apology; Cambridge University Press: Cambridge, 1940. [Google Scholar]
  40. Fisher, R.A. Statistical Methods for Research Workers; Oliver and Boyd: Edinburgh, 1925. [Google Scholar]
  41. Thompson, N.; Ge, Y.; Ferguson, A. On the origin of algorithmic progress in AI. arXiv 2025, arXiv:2511.21622. [Google Scholar] [CrossRef]
Figure 1. The Intelligence–Cost Divergence: The shaded region between the logarithmic Intelligence Gain curve (blue, following diminishing returns as in [33]) and the exponential Resource Cost curve (red, as documented in [2,3]) represents the growing domain of Waste in Sensu Lato. The three eras demarcate the trajectory from sustainable AI development to its present unsustainable industrialization.
Figure 2. The Symbiotic Policy Covenant: A structured policy intervention framework for sustainable AI, proceeding from diagnosis through three foundational pillars to a proposed ISO/IEC 42001-Plus standard addendum and four actionable levers.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.