
Algorithmic Sustainability: Ethics, Governance, and Global Policies for AI-Driven Environmental Decision-Making

Submitted: 08 February 2026
Posted: 10 February 2026


Abstract
In recent years, artificial intelligence has become embedded in environmental decision-making, shaping how climate risk is zoned, how land use is planned and managed, and how regulatory oversight and energy-related decisions are carried out. Despite this expansion, discussions surrounding the use of AI in decisions related to sustainability often focus on performance measures, with limited attention given to broader institutional and environmental implications. Such accounts frequently sidestep questions of governance legitimacy while underestimating the environmental burdens associated with computational processes and the infrastructure that supports them. This paper develops algorithmic sustainability as a governance framework oriented toward public policy in contexts where artificial intelligence informs environmental decision-making. The concept is defined through the simultaneous alignment of three conditions. These include ecological effectiveness assessed across the full lifecycle of AI systems, institutional accountability anchored in oversight that can be enforced in practice, and ethical legitimacy grounded in freedom, justice, and the possibility to contest decisions. Rather than treating these dimensions as separable, the framework assumes that sustainability claims weaken when any one condition is absent. The research methodology adopts a framework-development approach supported by a qualitative comparative review. The review integrates scholarship on climate impact pathways with ethical and political analyses of algorithmic authority, while also drawing on governance instruments found in global normative frameworks, regional regulatory models, and organizational practice. Through this synthesis, the paper produces two outcomes. One is a four-domain ethical risk register that consolidates epistemic and technical concerns, risks tied to justice and political economy, issues of accountability and legitimacy, and impacts associated with the environmental footprint of AI systems over time. The second outcome is a governance toolkit that translates algorithmic sustainability into practice through proportional risk tiering based on decision criticality, requirements for documentation and auditability, a tiered Environmental AI Impact Assessment, standardized disclosure of environmental footprints, procurement-based leverage, and enforceable mechanisms that allow contestation and remedy. The analysis shows that environmental AI governance remains institutionally fragile when sustainability evaluation is disconnected from transparency obligations, challenge pathways, and distributive accountability as they operate in practice.

1. Introduction

Across public administration, environmental decision-making is increasingly shaped by algorithmic systems that operate inside regulatory, planning, and compliance processes. Within areas such as climate risk zoning, land-use regulation, emissions monitoring, energy management, and environmental compliance, algorithmic outputs influence how risks are classified, how priorities are set, and how interventions are justified. The expansion of these systems reflects institutional pressure to act under conditions marked by uncertainty, urgency, and environmental risk that spans multiple spatial and temporal scales. It does not reflect a neutral shift toward technical efficiency alone [1].
As artificial intelligence becomes embedded in routine environmental administration, the scope of governance concern extends beyond model accuracy or computational performance. Attention increasingly turns to how algorithmic systems acquire authority within institutions, how they shape discretionary judgment, and how they persist over time as part of administrative routines [2,3]. These dynamics position AI not merely as a technical instrument, but as an emerging condition through which environmental governance is exercised.

1.1. The Problem: Sustainability Claims Without Governance Legitimacy or Lifecycle Visibility

Despite the growing role of AI in environmental decision-making, the capacity to govern its sustainability remains limited. Much of the existing literature approaches environmental AI through narratives centered on performance, where predictive accuracy, optimization outcomes, or localized reductions in emissions receive primary attention [4]. In these accounts, AI systems are often treated as detached from the institutional settings in which decisions are enacted. At the same time, the material infrastructures that sustain computation tend to receive limited scrutiny [4,5]. As a result, claims about sustainability are frequently advanced without sustained consideration of environmental effects that unfold across the system lifecycle, of arrangements that assign responsibility for decisions, or of the ethical legitimacy of outcomes shaped by algorithmic mediation [5,6]. Two structural concerns recur throughout this literature.
One concern relates to material impact displacement. Energy use and emissions arise primarily during model training and inference. Water demand and hardware impacts are linked to data storage and data-center operation. These impacts are often geographically and institutionally separated from the locations where environmental benefits are reported [4,5]. A second concern relates to authority and contestability. When probabilistic outputs are translated into administrative categories, such as risk designations, compliance thresholds, eligibility determinations, or enforcement triggers, the ability of affected actors to understand assumptions, to challenge classifications, or to seek remedy may be reduced [6,7]. In environmental governance, these issues carry particular weight. Decisions taken in this domain often shape rights, livelihoods, and settlement patterns over long-time horizons [7].

1.2. Limitations of Prevailing Sustainability-Oriented AI Framings in Addressing Governance Tensions

Prevailing concepts of green artificial intelligence and sustainability-oriented artificial intelligence do not sufficiently account for these tensions. While such approaches may improve computational efficiency or demonstrate benefits at the level of specific applications, they rarely integrate institutional accountability and ethical legitimacy within a single evaluative frame [4,6]. Under these conditions, an AI system may appear environmentally successful when assessed through narrow indicators, while remaining procedurally weak or environmentally burdensome when examined across its full lifecycle [4,5]. Sustainability is therefore often treated as a technical attribute that can be optimized. It is not treated as a governance condition that must be continuously sustained.

1.3. Algorithmic Sustainability: Definition and Evaluative Logic

This research paper addresses that gap by developing algorithmic sustainability as a governance-oriented concept for AI-driven environmental decision-making. Algorithmic sustainability is defined here through the alignment of three interdependent conditions, each of which must hold simultaneously if sustainability claims are to remain credible.
  • The first condition concerns ecological effectiveness assessed across the lifecycle of AI systems, including impacts associated with computing processes, infrastructure requirements, and the effects that emerge at the level of systems [4,5].
  • The second condition concerns institutional accountability, which must be supported and addressed through governance mechanisms that operate in practice and can be enforced in proportion to the impact of the decision at stake [8,9].
  • The third condition concerns ethical legitimacy, understood as being grounded in freedom, justice, and the possibility to contest decisions [6,7].
Ethical legitimacy is treated in this study as a functional condition of governance rather than as an abstract moral ideal. It requires transparency that provides reasons for decisions in proportion to their criticality. It also requires responsibility for outcomes to be clearly assigned and enforceable. Where dispute or harm arises, meaningful pathways for contestation and remedy must exist [6,8,10]. Framed in this way, the evaluative question shifts. The question is no longer whether AI improves a specific environmental metric. It is whether AI-enabled environmental decision-making remains environmentally effective, institutionally accountable, and ethically legitimate over time [4,6].

1.4. Research Objectives

The paper pursues three objectives:
  • First, it integrates scholarship on climate impact pathways with literatures on ethics and governance in order to construct a coherent framework for evaluating AI systems used in environmental decision-making [4,5,6].
  • Second, it maps the governance conditions that shape environmental AI deployment across global normative instruments, regional regulatory models, and organizational practice [8,10,11,12].
  • Third, it translates ethical and governance principles into instruments that can be implemented in practice, with the aim of supporting oversight, accountability, lifecycle visibility, and contestability in environmental decision contexts [5,8,9].

1.5. Contribution

The contribution of the paper unfolds across three dimensions. To begin with, it clarifies why sustainability in environmental AI cannot be assessed through performance metrics or application-level emissions indicators alone, given the presence of lifecycle impacts and system-level effects [4,5]. It then develops a four-domain ethical risk register that consolidates epistemic and technical risks, risks tied to justice and political economy, risks related to accountability and legitimacy, and risks associated with the environmental footprint of AI systems over time, as synthesized across the literature [4,6,7]. Finally, the paper proposes a governance toolkit oriented toward policy use. This toolkit operationalizes algorithmic sustainability through proportional risk tiering based on decision criticality, requirements for documentation and auditability, a tiered Environmental AI Impact Assessment, obligations for footprint disclosure, governance leverage exercised through procurement, and mechanisms that allow contestation and remedy to be enforced. These elements are aligned with emerging governance architectures at global and regional levels [5,8,10,11].

2. Literature Review: From AI-for-Climate Claims to Governance Conditions

Research on artificial intelligence in relation to environmental sustainability has expanded rapidly over the past decade. Contributions now span climate science, computer science, ethics, and public policy. This growing body of work demonstrates that AI has become increasingly relevant to environmental action. At the same time, it remains fragmented along disciplinary lines. As a consequence, sustainability is often treated as a property of system performance, rather than as a condition shaped by institutional authority, ethical legitimacy, and material constraints. This section brings together three strands of scholarship that, when examined in combination, expose the limits of prevailing approaches and point toward the need for algorithmic sustainability as an integrative framework.

2.1. AI-Related Climate Impact Pathways and System-Level Effects

A foundational strand of the literature examines how artificial intelligence influences environmental outcomes through multiple impact pathways. [4] distinguish between impacts linked to computation, impacts that arise through specific applications, and effects that emerge at the level of systems. This distinction matters because it resists evaluating efficiency gains in isolation. Instead, it shows how AI may reduce environmental pressure in some contexts while intensifying it elsewhere. Impacts linked to computation include energy consumption and associated emissions. They also involve water demand and material extraction, which arise through model training, inference, data storage, and the operation of data centers. Early research focused primarily on the carbon intensity of training large models. More recent work demonstrates that cumulative impacts are often dominated by scaled inference and continuous deployment. This tendency becomes especially pronounced when AI systems are embedded in public administration or infrastructure management [4]. [5] reinforces this view by arguing that environmental assessment must extend across the full lifecycle of AI systems, including hardware supply chains and end-of-life disposal.
At the level of application, AI is frequently presented as a tool that improves efficiency within environmental governance. Common examples include energy optimization, enhanced monitoring, and faster regulatory response. Yet gains at this level do not necessarily produce net environmental benefit. Reduced operating costs or increased system stability may prolong environmentally harmful activities or reinforce infrastructures that remain carbon intensive [4]. Under these conditions, claims that AI-driven efficiency is inherently sustainable become difficult to sustain.
System-level effects capture broader dynamics that unfold beyond individual applications. These include rebound effects, institutional lock-in, and market concentration. Efficiency gains enabled by AI may be offset by rising demand, or by the entrenchment of existing governance arrangements and technical infrastructures. Such effects develop over extended time horizons and resist straightforward quantification. This helps explain why they are often excluded from sustainability assessments. From a governance perspective, they underscore the limits of evaluating AI sustainability through short-term performance indicators alone.
Recent scholarship situates these climate impact pathways within wider transformations of environmental governance. [2] describe the emergence of eco-algorithmic governance, where environmental administration increasingly relies on data analytics and automated classification. [3] similarly argues that governing AI cannot be separated from governing climate, since algorithmic systems shape what counts as environmental evidence and how it enters policy processes. Together, these contributions show that environmental impacts associated with AI cannot be separated from institutional and epistemic dynamics.

2.2. Ethical Legitimacy and the Authority of Algorithmic Decision-Making

Furthermore, the ethical and political implications of AI-mediated environmental decision-making extend beyond technical performance, raising questions of accountability, legitimacy, and democratic oversight [6]. The central issue is not whether AI can support environmental objectives. It is whether deployment reshapes freedom, justice, and democratic authority in ways that are difficult to justify.
One line of inquiry treats the material footprint of AI as an ethical concern rather than a technical side effect. When energy use, emissions, and resource extraction are displaced into opaque infrastructures, environmental harm becomes less visible to those who rely on AI systems. Responsibility then becomes harder to attribute. Under these conditions, the moral credibility of sustainability claims weakens [5,6].
Another line of work focuses on autonomy and behavioral influence. AI-enabled nudging, optimization, and automated decision support may reduce emissions or improve compliance. At the same time, they can bypass deliberation and consent. [13] highlights this tension in discussions of climate nudging, where urgency is used to justify forms of influence that would otherwise be contested. Environmental effectiveness achieved through diminished autonomy may therefore undermine ethical legitimacy rather than reinforce it.
Concerns related to justice further intensify these debates. Environmental risks and mitigation burdens are distributed unevenly across regions, communities, and generations. [7] argues that AI systems can reinforce existing inequalities when they mediate access to protection, compensation, or relocation without adequate safeguards. [3] adds that forms of data colonialism and epistemic dominance may emerge when models developed in high-capacity settings are imposed on regions with fewer resources. In such cases, priorities and knowledge hierarchies within environmental governance are reshaped. Across this body of work, a shared insight emerges. Ethical legitimacy is not an optional supplement to environmental AI. It is a functional requirement for governance systems that claim authority over climate-related decisions affecting rights, livelihoods, and long-term social arrangements.

2.3. Governance Instruments and Unresolved Gaps

The literature also examines governance mechanisms intended to manage risks associated with AI deployment. [9] review a range of instruments, including risk classification, documentation practices, transparency requirements, audit processes, and oversight arrangements. Although such tools are becoming more visible across policy domains, their application to environmental decision-making remains uneven and often lacks operational clarity.
At the global level, governance of AI is articulated primarily through non-binding normative instruments. UNESCO’s Recommendation on the Ethics of Artificial Intelligence emphasizes human rights, accountability, transparency, and societal and environmental well-being [10]. OECD principles similarly promote robustness and accountability, framing sustainability as an objective to be pursued through policy design and procurement [11]. Within the United Nations system, ethical guidance for public-sector AI use stresses traceability and designated human responsibility [12]. These instruments shape expectations of legitimacy but offer limited means of enforcement. Binding regulation introduces enforceability, although it remains partial. The European Union’s Artificial Intelligence Act adopts a governance architecture based on risk, with obligations assigned according to the impact of decisions on rights and societal interests [8]. While not designed specifically for environmental policy, its structure is directly relevant for AI systems that influence zoning, permitting, enforcement, or eligibility determinations. Lifecycle environmental impacts, however, are not yet incorporated in a systematic way within risk classification. A gap therefore persists between AI governance and environmental governance priorities.
Institutions focused on environmental policy approach this gap from a different direction. [5] emphasizes the need for lifecycle assessment of AI systems. [1] stresses that technological tools cannot substitute for governance capacity, policy coherence, or institutional trust. The ITU draws attention to uneven regulatory maturity and infrastructure constraints across regions, which shape how AI impacts materialize in practice [14]. Despite these contributions, existing frameworks rarely integrate climate impact pathways, ethical legitimacy, and governance instruments within a single evaluative structure.

2.4. Synthesis and Implications for Framework Development

Considered together, these three strands reveal a common limitation. Research on climate impact pathways documents environmental burdens, but often brackets governance. Ethical analyses expose problems of legitimacy, yet rarely translate these concerns into operational instruments. Governance frameworks articulate principles and compliance mechanisms, while treating environmental impacts as secondary considerations. The result is a fragmented landscape. Within it, AI-driven environmental decision systems may appear sustainable under one evaluative lens while failing under another.
This fragmentation motivates the development of algorithmic sustainability as an integrative governance framework. Rather than introducing new ethical principles, the framework aligns existing insights within a single structure for evaluation. Climate impact pathways establish the material and systemic dimensions of sustainability. Ethical legitimacy defines functional constraints related to freedom, justice, and contestability. Governance instruments provide the means through which these constraints can be enforced.

3. Methodology and Approach

This study adopts a framework-development methodology supported by a qualitative comparative review. The paper does not seek to measure empirical outcomes or to make statistical generalizations, nor does it aim to be a comprehensive bibliometric review. Instead, the methodology aims at conceptual integration in support of governance design. The research problem addressed, how sustainability should be evaluated in the context of AI-driven environmental decision-making, cannot be resolved within a single analytical tradition. It requires the integration of perspectives drawn from environmental impact assessment, ethical legitimacy, and institutional governance, which are commonly examined in isolation.
Through this lens, sustainability is treated not as the outcome of technical optimization, but as a governance condition that emerges through interaction. Algorithmic systems, institutional authority, and material infrastructure jointly shape this condition. The methodology therefore focuses on identifying recurring modes of governance failure and translating them into a structured evaluative framework that can support policy analysis and institutional application.

3.1. Review Scope and Corpus Construction

The qualitative comparative review draws on peer-reviewed academic literature alongside authoritative institutional and regulatory publications that address AI use in environmental and climate-related decision contexts. The review was designed to ensure balanced coverage across three analytical domains that are central to the framework. One domain concerns environmental and climate impacts associated with AI systems, including lifecycle and system-level effects. A second domain addresses ethical and political analyses of algorithmic decision authority. A third domain focuses on governance instruments and regulatory approaches that are relevant to the deployment of AI in public-sector decision-making.
The final corpus consists of 7 peer-reviewed journal articles and 7 institutional or regulatory reports, published between 2020 and 2025, yielding a total of 14 sources. This period corresponds to the emergence of environmental AI governance as a distinct policy concern. Earlier sources were included only where they continue to structure contemporary debate or provide foundations for governance frameworks referenced in later work. Corpus construction prioritized analytical relevance and governance significance. Disciplinary coverage and citation volume were not treated as selection criteria. Institutional sources were drawn from organizations that hold recognized authority in AI governance and environmental policy, including UNESCO, UNEP, the European Union, the OECD, the IPCC, and the ITU.

3.2. Search Strategy and Screening Protocol

Academic literature was identified through structured searches conducted in Scopus, Web of Science, and Google Scholar. Institutional sources were identified through targeted searches of official organizational repositories. Search strategies were designed to capture intersections between artificial intelligence, environmental decision-making, and governance. Representative search strings combined references to AI with terms related to governance, climate, sustainability, lifecycle impacts, and public policy.
Screening proceeded in three stages. Titles were first reviewed to exclude sources unrelated to AI or environmental governance. Abstracts were then assessed to identify substantive engagement with governance, ethics, institutional decision-making, or lifecycle impacts. Full texts were examined where preliminary screening suggested analytical relevance. Sources were included where they addressed AI systems used in environmental, climate, land-use, infrastructure, or resource-related decision contexts. Inclusion also required explicit engagement with governance, ethical, institutional, or lifecycle considerations, rather than a sole focus on technical performance. Academic relevance or institutional authority within sustainability or policy research was treated as a necessary condition. Sources were excluded where they focused exclusively on algorithmic accuracy, abstract ethical debates without environmental context, or private consumer applications that lacked implications for public decision authority.

3.3. Analytical Anchors and Boundary Conditions

Two analytical anchors structured the synthesis and established minimum conditions for sustainability evaluation:
  • The first anchor is provided by frameworks that examine AI-related climate impact pathways. These frameworks identify environmental effects that arise at the level of computation, application, and broader systems. They establish the material and systemic dimensions of sustainability and inform evaluation of lifecycle impacts.
  • The second anchor is drawn from ethical and political analyses of algorithmic governance. This literature articulates legitimacy constraints related to justice, autonomy, accountability, and the delegation of decision authority. It informs assessment of procedural fairness, contestability, and responsibility allocation in decisions mediated through AI systems.
Together, these anchors operate as boundary-setting references. Claims about the sustainability of AI-driven environmental decision-making are treated as incomplete where either material impacts or legitimacy conditions are absent. The framework applies to AI systems embedded within public or institutional environmental decision contexts. These include zoning, permitting, compliance monitoring, resource allocation, climate-risk assessment, and regulatory enforcement. The framework does not address private consumer applications, military systems, or advisory tools that lack decision authority. It also does not provide jurisdiction-specific legal interpretation.

3.4. Synthesis and Framework Construction Logic

Synthesis proceeded through deductive coding and consolidation guided by the two analytical anchors. Risks identified across the reviewed literature were grouped according to shared governance failure modes. Examples include situations in which opacity undermines accountability, or where lifecycle impacts remain institutionally invisible. Risks were consolidated where they reflected the same form of governance breakdown. They were retained as distinct where different governance responses were required. Each resulting risk domain was confirmed through evidence appearing across at least two analytical strands. These strands included environmental impact research, ethical analysis, and governance literature. This process produced the four domains presented in the ethical risk register. Governance instruments were then examined for inclusion in the policy toolkit. An instrument was retained only where three conditions were satisfied. It needed to be implementable or enforceable through legal, contractual, or institutional mechanisms. It also needed to enable clear assignment of accountability. Finally, it needed to mitigate at least one identified risk domain. Instruments were mapped to risks according to their primary mitigation function rather than their stated intent.

3.5. Limitations

The study does not evaluate individual AI systems, quantify environmental impacts, or conduct legal compliance analysis. It does not substitute for lifecycle assessment, case-based evaluation, or jurisdiction-specific regulatory review. The contribution is conceptual and structural. It provides a transparent governance framework intended to support institutional evaluation and policy design. Empirical application and regulatory testing represent appropriate directions for future research.

4. Governance Landscape: Global, Regional, and Organizational Conditions

Governance of AI used in environmental decision-making does not arise from a single authority or policy tool. It takes shape through the interaction of ethical norms, legally binding regulation, and everyday institutional practice. These layers do not operate in sequence, nor do they align smoothly. Overlap is common. Divergence is frequent. In some cases, direct conflict emerges. For this reason, understanding algorithmic sustainability requires close attention to how authority, responsibility, and environmental accountability are distributed across institutions. It also requires attention to how these responsibilities may be displaced.

4.1. Global Normative Frameworks and Ethical Expectations

At the global level, governance of AI is articulated primarily through non-binding normative instruments that establish ethical expectations for use in the public sector. UNESCO’s Recommendation on the Ethics of Artificial Intelligence places human rights, accountability, transparency, and societal and environmental well-being at the center of AI governance [10]. Its relevance for environmental decision-making lies in the recognition that AI systems affecting collective goods must remain understandable, subject to human oversight, and open to challenge by those affected.
OECD principles express a related position. They emphasize robustness and accountability, while framing sustainability as an institutional objective rather than as a feature of technical performance alone [11]. Within the United Nations system, ethical guidance for public-sector AI use reinforces expectations of traceability and clearly assigned human responsibility. These expectations extend to deployments linked to climate monitoring, adaptation planning, and development assistance [12]. Such instruments play an important role in shaping shared understandings of legitimacy. Their influence, however, remains primarily normative. They describe what responsible use should involve. They do not establish binding mechanisms for oversight, redress, or environmental accountability. Ethical baselines are therefore set, while questions of compliance and sanction remain unresolved.

4.2. Regional Regulation and the Logic of Enforceability

Legally binding regulation introduces enforceability into the governance landscape, although it often does so through a logic centered on rights and safety rather than on environmental impact. The European Union’s Artificial Intelligence Act represents the most developed example of this approach. Its risk-based structure assigns obligations according to the potential impact of AI systems on fundamental rights and societal interests. These obligations include requirements related to documentation, transparency, human oversight, and post-market monitoring [8]. The Act is not designed specifically for environmental policy. Even so, its structure is directly relevant to AI systems used in zoning decisions, environmental permitting, enforcement actions, or eligibility determinations. These uses vary significantly in their consequences. The Act’s differentiated obligations therefore offer a model for proportional governance that can travel across policy domains.
At the same time, this regulatory approach reveals a gap. Lifecycle environmental impacts associated with AI systems, such as energy use, water demand, and hardware supply chains, are not systematically integrated into risk classification. As a result, an AI system may satisfy regulatory requirements for AI governance while remaining environmentally burdensome or opaque in relation to its material footprint. This separation reflects a broader misalignment between priorities in AI governance and those in environmental governance.

4.3. Environmental Institutions and Lifecycle-Oriented Perspectives

Institutions focused on environmental and climate policy approach AI governance from a different starting point. Their emphasis falls on material accountability rather than on decision authority alone. UNEP explicitly treats AI as a source of environmental impact, calling for assessment across the full lifecycle of AI systems. This includes computation, infrastructure, and hardware supply chains [5]. In doing so, UNEP challenges narratives that portray AI as immaterial or purely digital. Footprint disclosure is framed as a governance responsibility, not as a voluntary sustainability measure.
The IPCC reinforces a related insight. While digital and algorithmic tools can support climate mitigation and adaptation, they cannot replace governance capacity, policy coherence, or institutional trust [1]. Environmental outcomes depend on political coordination and social legitimacy. Analytical sophistication alone is not sufficient. The ITU adds a further dimension by highlighting uneven regulatory maturity and infrastructure capacity across regions. These differences shape how both environmental and governance impacts of AI emerge in practice [14]. Together, these perspectives foreground environmental accountability. What they often lack are binding mechanisms that connect assessment to decision authority, regulatory compliance, or remedies when harm occurs.

4.4. Interaction, Tension, and Governance Fragmentation

Viewed together, these governance layers form a fragmented landscape rather than a coherent system. Normative frameworks define ethical expectations without enforcement. Regulatory regimes enforce compliance while often setting aside lifecycle environmental impacts. Environmental institutions emphasize material accountability without providing binding governance tools. Organizational actors must operate within these tensions while deploying AI systems under constraints of resources, capacity, and political pressure.
Several recurring tensions follow. Compliance with risk-based regulation may overlook environmental footprints. Ethical commitments may exceed institutional capacity to enforce them. Procurement practices may compensate for regulatory gaps, while increasing reliance on proprietary vendors. In such settings, sustainability claims may circulate without clear accountability. AI systems may persist even when ethical or environmental concerns remain unresolved.
These tensions are structural. They create the conditions under which risks related to opacity, distributive injustice, accountability gaps, and lifecycle impacts arise and interact. Algorithmic sustainability cannot be secured by strengthening a single governance layer in isolation. Alignment is required.

4.5. Implications for Risk Identification

The fragmentation described above provides the basis for identifying interconnected risk domains associated with AI-driven environmental decision-making. Risks do not arise solely from technical design choices. They also emerge from misalignments between ethical norms, regulatory structures, and institutional practice. The following section consolidates these risks into an ethical risk register. The register is structured to capture how epistemic uncertainty, distributive effects, accountability failures, and lifecycle impacts interact in practice.

5. Ethical Risk Register for AI-Driven Environmental Decision-Making

The ethical risk register presented in this section is not intended as a complete inventory of all risks associated with artificial intelligence. It is designed instead as a governance-oriented synthesis. Its purpose is to capture the minimum set of interconnected risk domains that must be addressed if claims about the sustainability of AI-driven environmental decision-making are to be considered substantively complete. Risk domains were derived through deductive coding guided by the two analytical anchors introduced in Section 3. One anchor draws on frameworks that examine climate impact pathways associated with AI, which identify material and system-level environmental effects [4,5]. The second anchor is provided by ethical and political analyses of algorithmic governance, which articulate legitimacy constraints related to justice, autonomy, accountability, and authority [6,7].
Across the reviewed literature, risks were consolidated where they reflected a shared mode of governance failure. An example is opacity, which can undermine accountability and limit contestability at the same time. Risks were retained as distinct domains where different governance responses or policy instruments were required. Each domain was confirmed through evidence appearing across at least two analytical strands, including climate impact research, ethical analysis, or governance scholarship. This process produced four domains that together capture the main pathways through which sustainability failures emerge in environmental AI governance. The register is structured to support diagnostic evaluation. It is not designed to predict outcomes. Its function is to identify where governance arrangements are likely to fail when AI systems become embedded in environmental decision processes.

5.1. Domain I: Epistemic and Technical Risks

The first domain concerns risks linked to how environmental knowledge is produced and stabilized through algorithmic systems, and how that knowledge is translated into decision authority. Environmental AI relies on probabilistic models trained on data that are incomplete, limited in time, and sensitive to contextual change. Once these models enter administrative routines, uncertainty is often compressed into fixed categories. Common examples include risk zones, compliance indicators, or eligibility thresholds [2,4].
A central risk emerges when epistemic uncertainty is converted into administrative certainty without sufficient safeguards. Model outputs may acquire authority that exceeds their evidentiary strength, particularly where institutions prioritize efficiency or consistency over deliberation. Over time, environmental conditions change, sensors deteriorate, and social–ecological relations shift. Without explicit requirements for recalibration or time-limited validity, models trained on historical data may continue to inform binding decisions beyond their appropriate scope [5].
The use of proxies introduces a related concern. Where direct environmental data are unavailable, AI systems frequently rely on socio-economic, spatial, or infrastructural indicators as substitutes. These proxies can embed structural bias and produce distorted representations of vulnerability or risk. When proxy selection and weighting remain opaque, affected communities have limited capacity to question how lived environmental realities are translated into administrative categories [3,7].

5.2. Domain II: Justice and Political Economy Risks

This domain addresses how AI-mediated environmental governance redistributes benefits, burdens, and decision authority. Many environmental AI systems prioritize aggregate outcomes, such as emissions reduction or overall risk mitigation, without explicit attention to how associated costs are distributed. Practices related to relocation prioritization, insurance classification, targeted enforcement, or resource allocation may therefore disproportionately affect communities with limited adaptive capacity. This can occur even when systems are presented as technically neutral or evidence-based [7]. A related risk concerns data colonialism and epistemic dominance. Environmental AI often depends on data extracted from regions with limited institutional capacity, while model development, ownership, and governance authority remain concentrated elsewhere. This asymmetry can shape which forms of environmental knowledge are treated as legitimate and whose priorities are embedded in decision systems [3]. Over time, reliance on external platforms and expertise may constrain local autonomy and weaken institutional learning within environmental governance.
Political economy risks also arise through institutional capture. Proprietary AI systems may define metrics, benchmarks, and narratives of success. Where public institutions lack access to model internals or viable alternatives, policy agendas may be shaped indirectly by vendor priorities rather than through democratic deliberation or environmental objectives [9]. These dynamics complicate accountability and reduce public control over environmental decision infrastructure.

5.3. Domain III: Accountability and Legitimacy Risks

The third domain concerns procedural conditions that underpin legitimate environmental governance. Opacity remains a persistent challenge, particularly where complex models are deployed in high-impact decision contexts. When decision rationales cannot be articulated in a form that administrators, courts, or affected communities can understand, procedural safeguards weaken [10,11]. Contestability presents a closely related concern. Many AI-enabled environmental systems lack clear pathways through which individuals or communities can challenge classifications, request explanations proportionate to decision impact, or seek remedy. Even where human oversight is formally required, it may operate as confirmation rather than substantive review. This tendency is reinforced under conditions of administrative pressure or urgency [8]. Accountability becomes further complicated by fragmented responsibility. Environmental AI systems commonly involve multiple actors, including public agencies, software vendors, data providers, and consultants. This dispersion can obstruct redress and weaken institutional learning. Systemic problems may persist without effective correction, allowing formal compliance to coexist with practical irresponsibility [9].

5.4. Domain IV: AI Lifecycle Footprint Risks

The fourth domain captures environmental impacts generated by AI systems themselves. These impacts extend beyond initial model development and are often understated in sustainability narratives. Energy use and associated emissions frequently increase over time through scaled inference, continuous monitoring, and persistent optimization. This is particularly evident when systems operate continuously within environmental governance infrastructures [4].
Water demand introduces additional pressures. Data-center cooling requirements can intensify stress in regions already experiencing scarcity, producing localized trade-offs that are rarely disclosed or incorporated into decision processes [5]. Hardware supply chains represent a further concern. Resource extraction, manufacturing, and disposal of computing equipment generate embodied emissions and ecological pressure that are often displaced geographically. These dynamics complicate claims that AI-enabled environmental governance is immaterial or low-impact [5,6].

5.5. Interactions Among Risk Domains

These four domains do not operate in isolation. Epistemic uncertainty can amplify distributive harm when misclassification affects vulnerable populations. Opacity can conceal lifecycle impacts, limiting accountability for environmental burdens generated by AI systems themselves. Political economy dynamics may entrench systems that remain ethically or environmentally problematic despite formal compliance. Therefore, the ethical risk register functions as a diagnostic governance instrument. It supports evaluation of AI-driven environmental decision systems by showing how technical design choices, institutional arrangements, and material infrastructures interact to produce sustainability failures. Addressing these risks requires governance instruments capable of operating across domains rather than isolated technical fixes.

6. Policy Instruments: Translating Principles into Enforceable Governance

The ethical risk register developed in Section 5 shows that sustainability failures in AI-driven environmental decision-making emerge where technical uncertainty, distributive effects, procedural opacity, and material impacts interact. Addressing these failures requires governance instruments that operate in practice, that can be enforced institutionally, and that scale in proportion to the criticality of decisions. This section presents a governance toolkit designed to translate ethical and environmental principles into mechanisms that can function within real decision contexts.

6.1. Instrument Eligibility and Selection Logic

Governance instruments included in the toolkit were selected through a structured eligibility assessment. Inclusion was not based on normative appeal alone. An instrument was retained only where it could be implemented or enforced through legal mandate, contractual obligation, or established institutional procedure. Voluntary commitments were not treated as sufficient. A second condition concerned accountability. Instruments needed to assign responsibility for outcomes, oversight, or remedy in a manner that could be operationalized. A third condition concerned risk mitigation. Each instrument had to address at least one domain identified in the ethical risk register and demonstrate capacity to reduce governance failure rather than merely articulate aspiration. Instruments were mapped to risk domains according to their primary mitigation function, while recognizing that some instruments operate across domains. The resulting toolkit is not exhaustive. It represents a minimum governance set required to support algorithmic sustainability in environmental decision contexts.
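To make this selection logic concrete, the following sketch encodes the three eligibility conditions as a conjunctive filter over a hypothetical instrument record. The record fields, labels, and example instruments are illustrative assumptions introduced for exposition; they are not drawn from any reviewed framework.
```python
from dataclasses import dataclass

@dataclass
class CandidateInstrument:
    """Hypothetical record for a governance instrument under eligibility review."""
    name: str
    enforcement_channels: set    # subset of {"legal", "contractual", "institutional"}
    assigns_accountability: bool
    mitigated_domains: set       # subset of the four risk-register domains

def is_eligible(instrument: CandidateInstrument) -> bool:
    """Apply the three conjunctive eligibility conditions of Section 6.1."""
    # Condition 1: implementable or enforceable through legal, contractual,
    # or institutional mechanisms; voluntary commitments alone do not qualify.
    enforceable = bool(instrument.enforcement_channels
                       & {"legal", "contractual", "institutional"})
    # Condition 2: responsibility for outcomes, oversight, or remedy is assignable.
    accountable = instrument.assigns_accountability
    # Condition 3: the instrument mitigates at least one identified risk domain.
    mitigates = len(instrument.mitigated_domains) >= 1
    return enforceable and accountable and mitigates

# Illustration: legally mandated disclosure passes; a voluntary pledge fails.
disclosure = CandidateInstrument(
    "footprint_disclosure", {"legal"}, True, {"lifecycle_footprint"})
pledge = CandidateInstrument("voluntary_pledge", set(), False, {"lifecycle_footprint"})
assert is_eligible(disclosure) and not is_eligible(pledge)
```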

6.2. Proportional Risk Differentiation by Decision Criticality

Differentiation of risk constitutes a foundational governance instrument. Environmental AI systems differ not only in their technical design, but in the degree of authority they exercise over rights, obligations, and access to resources. Governance arrangements that do not differentiate by decision impact risk imposing excessive requirements on low-impact tools while leaving high-impact systems under-governed. A policy-relevant typology distinguishes between systems that provide information without direct administrative effect, systems that shape prioritization or recommendations while requiring human validation, and systems that trigger or constrain rights, duties, enforcement actions, or eligibility conditions. Risk classification by decision impact links acceptable levels of uncertainty, documentation burden, oversight intensity, and contestability requirements to decision criticality [8]. Within this logic, risk classification mitigates epistemic and legitimacy risks. Automation is constrained where justice concerns are pronounced, and scrutiny increases where authority is exercised.
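The proportionality logic of this typology can be expressed as an explicit mapping from criticality tier to mandatory obligations. The following minimal sketch uses hypothetical tier and obligation labels; it illustrates the structure of proportional tiering rather than any statutory classification.
```python
from enum import Enum

class DecisionTier(Enum):
    """Illustrative criticality tiers for environmental AI systems (Section 6.2)."""
    INFORMATIONAL = 1        # informs without direct administrative effect
    RECOMMENDING = 2         # shapes prioritization; human validation required
    DECISION_TRIGGERING = 3  # triggers or constrains rights, duties, enforcement

# Obligations scale with decision criticality; labels are hypothetical.
OBLIGATIONS = {
    DecisionTier.INFORMATIONAL: {
        "basic_documentation", "footprint_disclosure"},
    DecisionTier.RECOMMENDING: {
        "basic_documentation", "footprint_disclosure",
        "audit_trail", "human_validation", "screening_eaia"},
    DecisionTier.DECISION_TRIGGERING: {
        "basic_documentation", "footprint_disclosure",
        "audit_trail", "human_oversight", "full_eaia",
        "contestation_pathway", "recalibration_schedule"},
}

def required_obligations(tier: DecisionTier) -> set:
    """Return the governance obligations mandated at the given tier."""
    return OBLIGATIONS[tier]

# The informational baseline is contained in every higher tier.
assert required_obligations(DecisionTier.INFORMATIONAL) <= required_obligations(
    DecisionTier.DECISION_TRIGGERING)
```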

6.3. Documentation, Traceability, and Auditability

Documentation functions as a governance instrument rather than as a procedural formality. In the context of environmental AI, minimum documentation clarifies decision purpose and scope, identifies data sources and proxy use, specifies assumptions and uncertainty characteristics, defines recalibration limits, and explains how algorithmic outputs are integrated into administrative processes. Auditability extends documentation into practice by enabling traceability of how outputs influence decisions. Together, these instruments reduce epistemic risk by clarifying model boundaries. They also reduce justice risk by exposing proxy choices. Accountability is strengthened by enabling review and contestation where decisions are disputed [10,11].
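One reading of this requirement is a documentation record whose fields must all be populated before an output may inform a decision. The sketch below adopts that reading; the field names paraphrase the items listed above and do not constitute a standardized schema.
```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class MinimumDocumentation:
    """Sketch of the minimum documentation items named in Section 6.3."""
    purpose_and_scope: str                              # decision purpose and scope
    data_sources: list = field(default_factory=list)    # provenance of inputs
    proxy_variables: list = field(default_factory=list) # proxies used in place of direct data
    assumptions_uncertainty: str = ""                   # assumptions, uncertainty characteristics
    valid_until: Optional[date] = None                  # recalibration limit / bounded validity
    integration_path: str = ""                          # how outputs enter administrative processes

    def audit_ready(self) -> bool:
        """Auditability presumes every item is filled in before deployment.

        An empty proxy list is itself informative (no proxies in use),
        so it is deliberately not required here.
        """
        return bool(self.purpose_and_scope and self.data_sources
                    and self.assumptions_uncertainty and self.valid_until
                    and self.integration_path)
```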

6.4. Environmental AI Impact Assessment (EAIA)

To address the interconnected risks identified in Section 5, this study proposes Environmental AI Impact Assessment as a tiered governance instrument adapted to AI-mediated environmental decision-making. EAIA differs from generic AI impact assessments in several respects. It evaluates not only the consequences of AI-mediated decisions, including ecological outcomes, distributive effects, and procedural legitimacy, but also the environmental impacts generated by the AI system itself across its lifecycle. It requires assessment under changing environmental baselines, recognizing that climate conditions and ecological thresholds evolve. It explicitly examines how environmental benefits and burdens are distributed across communities, regions, and generations. It also treats explanation pathways and remedy mechanisms as assessment outputs rather than as secondary considerations.
EAIA requirements vary with decision criticality. Systems that provide information are subject to lighter assessment. Systems that trigger decisions require comprehensive evaluation. In this way, EAIA mitigates lifecycle footprint risks, justice concerns, and accountability failures by integrating environmental assessment into governance authority rather than relegating it to external reporting [5].
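Under the same hypothetical tier labels, the tiered scope of an EAIA can be sketched as a lookup from decision criticality to required assessment components. Component names paraphrase this subsection and are expository assumptions, not a codified protocol.
```python
# Illustrative EAIA scope by decision criticality (Section 6.4).
# Tier and component labels are assumptions introduced for exposition.
EAIA_SCOPE = {
    "informational": [
        "lifecycle_footprint_screening",
    ],
    "recommending": [
        "lifecycle_footprint_screening",
        "distributive_effects_review",
        "changing_baseline_check",
    ],
    "decision_triggering": [
        "full_lifecycle_footprint_assessment",
        "distributive_effects_review",
        "changing_baseline_check",
        "procedural_legitimacy_review",
        "explanation_pathways",   # treated as assessment outputs, not add-ons
        "remedy_mechanisms",
    ],
}

def eaia_components(tier: str) -> list:
    """Return the EAIA components required at the given criticality tier."""
    return EAIA_SCOPE[tier]
```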

6.5. Footprint Disclosure and Reporting Obligations

Standardized disclosure of lifecycle impacts is a necessary condition for governing the environmental burdens generated by AI systems themselves. Disclosure obligations make visible the energy and emissions associated with training and inference. They also account for deployment scale, operational duration, characteristics of electricity grids at deployment locations, water use where relevant, and assumptions related to hardware replacement and end-of-life management. Such disclosure mitigates lifecycle footprint risks by rendering material impacts visible. It also strengthens accountability and contestability by enabling informed scrutiny of sustainability claims and constraining overstatement of benefits [4,5].
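Because disclosure makes the underlying quantities explicit, operational emissions become a checkable product of disclosed energy use and grid carbon intensity. The brief sketch below uses invented figures purely to show the arithmetic; it estimates no real system.
```python
def operational_emissions_kg(energy_kwh: float, grid_kg_co2e_per_kwh: float) -> float:
    """Operational CO2e = disclosed energy use x carbon intensity of the local grid."""
    return energy_kwh * grid_kg_co2e_per_kwh

# Illustration only: a deployment disclosing 120,000 kWh/year of inference energy
# on a grid averaging 0.4 kg CO2e/kWh reports 48,000 kg (48 t) CO2e per year.
print(operational_emissions_kg(120_000, 0.4))  # 48000.0
```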

6.6. Mapping Risk Domains to Governance Instruments

The governance toolkit addresses the ethical risk register through complementary instruments rather than through one-to-one correspondence. Table 1 summarizes this mapping, showing how each risk domain is mitigated through combinations of instruments rather than isolated controls, and translates proportional governance into practice by specifying the instruments that become mandatory at each level of decision criticality for AI systems used in environmental decision-making.

6.7. Implications for Algorithmic Sustainability

The governance toolkit demonstrates that algorithmic sustainability cannot be achieved through isolated interventions. Instruments must operate in combination. They must be applied in proportion to decision impact and with sensitivity to institutional context. When deployed together, these instruments align environmental assessment, ethical legitimacy, and institutional accountability within a coherent governance structure.

7. Discussion: What Algorithmic Sustainability Demands

The analysis presented in the preceding sections indicates that AI-driven environmental decision-making cannot be assessed through technical performance or sector-specific outcomes alone. Sustainability takes form as a condition of governance. It is shaped by how algorithmic systems are situated within institutional authority, material infrastructure, and ethical constraint. The concept of algorithmic sustainability developed in this paper clarifies what responsible governance of environmental AI requires. It also clarifies what such governance must resist.

7.1. Sustainability as Governance Rather Than Optimization

A central implication of the framework concerns the limits of optimization. Improvements in efficiency, predictive accuracy, or monitoring resolution do not, on their own, constitute sustainability. AI systems may reduce energy demand, improve regulatory response times, or expand environmental monitoring while also producing rebound effects, reinforcing carbon-intensive infrastructures, or redistributing burdens in uneven ways. Research on climate impact pathways shows that gains achieved at one level, such as application efficiency, can be offset by system-level dynamics that undermine longer-term environmental objectives [4].
Algorithmic sustainability therefore requires a shift in evaluation. Instead of asking whether an AI system improves a particular environmental indicator, governance must examine whether its deployment remains ecologically effective across the AI lifecycle, institutionally accountable through enforceable arrangements, and ethically legitimate in practice. Sustainability claims become unstable when any of these conditions fail, regardless of technical sophistication.

7.2. Ethical Legitimacy as a Functional Requirement

A second implication concerns ethical legitimacy. In environmental governance, legitimacy is sometimes treated as a normative ideal that can be postponed under conditions of urgency. The analysis suggests the opposite. When algorithmic systems cannot be explained in relation to their effects, challenged through meaningful procedures, or linked to identifiable decision-makers, institutional trust erodes. This erosion has practical consequences. Resistance to zoning decisions, relocation measures, or enforcement actions can delay implementation, intensify conflict, and weaken climate adaptation and mitigation efforts. From this perspective, ethical legitimacy does not operate as an external constraint on environmental action. It functions as a condition for durable governance. Algorithmic sustainability therefore requires arrangements that preserve contestability, due process, and accountability even when environmental pressures are acute. Treating legitimacy as optional risks turning technical urgency into institutional fragility.

7.3. Refusing Technocratic Exceptionalism

Algorithmic sustainability also rejects forms of technocratic exceptionalism. Such approaches assume that climate urgency justifies expanding algorithmic authority without proportional governance safeguards. While AI can support environmental decision-making by processing complex data and generating predictive insight, treating algorithmic outputs as neutral or inevitable risks normalizing forms of control that bypass democratic deliberation and pluralism.
The literature reviewed in this study highlights the risks of emergency framings that convert technical necessity into political inevitability [6,7]. Algorithmic sustainability resists this logic. It insists that decision authority remain open to challenge and that sustainability objectives do not displace foundational principles of governance. This position is especially relevant where environmental decisions reshape land rights, mobility patterns, or access to essential resources.

7.4. Geopolitical and Institutional Implications

The discussion also points to broader political economy implications. When data infrastructure, platform ownership, and governance capacity are concentrated among a limited group of actors, algorithmic sustainability becomes entangled with questions of dependency, sovereignty, and institutional autonomy. The risks of data colonialism and vendor capture identified in the ethical risk register reflect deeper asymmetries in global sustainability governance [3]. Without arrangements that support institutional learning, interoperability, and local governance capacity, AI-enabled environmental policy may reproduce existing power imbalances rather than mitigate them. Algorithmic sustainability therefore requires attention not only to system performance, but to control and authority. Questions about who defines success, who governs deployment, and who bears residual risk become central to sustainability evaluation.

7.5. Illustrative Application: AI-Assisted Climate-Risk Zoning

Consider an AI-assisted climate-risk zoning system used to classify flood risk and to inform building permit approvals and insurance eligibility in a coastal region. The system integrates historical flood data, satellite imagery, and climate projections to generate risk categories that trigger regulatory constraints. From an optimization perspective, the system appears effective. It improves predictive resolution and accelerates administrative decision-making. Applying the algorithmic sustainability framework, however, reveals several governance vulnerabilities. Epistemic risk arises when uncertainty in climate projections is translated into fixed zoning classifications without recalibration. Justice risks appear when risk categories disproportionately restrict development in lower-income areas without compensation or avenues for appeal. Accountability risks emerge where residents cannot challenge classifications or identify decision-makers responsible for outcomes. Lifecycle footprint risks remain invisible if the system’s computational and infrastructural impacts are excluded from assessment.
Applying the governance toolkit alters this evaluation. Risk-tiering classifies the system as decision-triggering, which activates heightened oversight. Documentation and auditability clarify assumptions and proxy use. Environmental AI Impact Assessment evaluates both zoning consequences and system-level environmental burdens. Disclosure requirements make computational impacts visible. Contestability mechanisms provide routes for challenge and remedy. Under these conditions, the system’s sustainability depends on governance design rather than predictive accuracy alone.
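A minimal sketch of how one toolkit instrument, the justification register, might be rendered machine-readable is given below. Every field name and value is an illustrative assumption rather than a proposed standard; the point is that accountability, documentation, disclosure, and contestation can be bound to individual decisions rather than left as system-level declarations.

```python
# A minimal sketch of what a machine-readable justification-register entry
# might contain for a Tier 3 (decision-triggering) system. All field names
# and values are illustrative assumptions, not a standardized schema.
from dataclasses import dataclass, field

@dataclass
class JustificationRecord:
    decision_id: str
    system_tier: int                # from the proportional risk tiering (see Table 1)
    accountable_office: str         # identifiable decision-maker for the outcome
    model_assumptions: list[str]    # documented assumptions and proxy variables
    footprint_disclosure_ref: str   # pointer to the lifecycle footprint statement
    eaia_ref: str                   # pointer to the Environmental AI Impact Assessment
    contestation_route: str         # procedure through which the decision can be challenged
    remedies_available: list[str] = field(default_factory=list)

record = JustificationRecord(
    decision_id="zoning-2026-0142",
    system_tier=3,
    accountable_office="Coastal Planning Authority",
    model_assumptions=["proxy: parcel elevation used for drainage capacity"],
    footprint_disclosure_ref="disclosures/2026-q1.pdf",
    eaia_ref="eaia/coastal-zoning-v2.pdf",
    contestation_route="administrative appeal within 60 days",
)
```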

7.6. Integrating Ethics, Governance, and Environmental Assessment

The final implication concerns integration. Algorithmic sustainability rejects the separation of ethics, governance, and environmental assessment into parallel domains. These dimensions operate as interdependent components of a single evaluative problem. Environmental effectiveness without legitimacy is unstable. Legitimacy without material accountability lacks substance. Accountability without lifecycle awareness remains incomplete. This integration does not require new ethical principles. It requires institutional alignment. Existing norms, regulatory mechanisms, and assessment practices must be coordinated within governance arrangements capable of operating across decision contexts and time horizons. The framework developed in this paper offers one pathway toward such alignment, while recognizing that its effectiveness depends on political commitment, administrative capacity, and sustained oversight.
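The conjunctive structure of this evaluative problem can be stated compactly. The sketch below, with placeholder boolean inputs standing in for the substantive assessments discussed throughout the paper, encodes the claim that a sustainability judgment fails when any one condition is absent.

```python
# Minimal sketch of the framework's conjunctive logic: a sustainability claim
# holds only when all three conditions hold simultaneously. The boolean inputs
# are placeholders for the substantive assessments described in the text.
def algorithmically_sustainable(ecologically_effective: bool,
                                institutionally_accountable: bool,
                                ethically_legitimate: bool) -> bool:
    # No single condition can compensate for the absence of another.
    return ecologically_effective and institutionally_accountable and ethically_legitimate

# Effective and accountable, but not contestable: the claim fails as a whole.
assert algorithmically_sustainable(True, True, False) is False
```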

8. Conclusions

This paper addressed a widening gap between the expanding use of artificial intelligence in environmental decision-making and the governance capacity required to evaluate sustainability in a manner that remains both comprehensive and legitimate. AI systems now inform climate-risk assessment, land-use regulation, environmental monitoring, and enforcement. At the same time, dominant sustainability narratives continue to rely on narrow claims related to performance or efficiency. These narratives leave limited space to examine the institutional, ethical, and material conditions through which algorithmic systems acquire authority and persist in practice.
In response, the paper developed algorithmic sustainability as a governance-oriented framework rather than as a concept tied to technical optimization. Algorithmic sustainability was defined through the alignment of three interdependent conditions that must hold simultaneously. These conditions concern ecological effectiveness assessed across the AI lifecycle, institutional accountability supported by oversight that can be enforced, and ethical legitimacy grounded in freedom, justice, and the possibility to contest decisions. Treating sustainability as a governance condition shifts evaluation away from isolated indicators. Attention moves instead toward the durability, credibility, and responsibility of AI-enabled environmental decision systems over time.
The paper advances four contributions. First, it integrates research on climate impact pathways with ethical and governance scholarship to clarify why environmental AI cannot be evaluated through application-level performance alone [4,5]. Second, it develops a four-domain ethical risk register that consolidates epistemic and technical risks, justice and political economy risks, accountability and legitimacy risks, and risks linked to the AI lifecycle footprint. This register highlights how these domains interact in practice. Third, the paper proposes a governance toolkit that translates ethical principles into operational instruments. These instruments include proportional risk-tiering by decision criticality, documentation and auditability requirements, footprint disclosure obligations, procurement-based governance leverage, and mechanisms that enable contestation and remedy. Fourth, it introduces Environmental AI Impact Assessment as a differentiated assessment instrument that brings together the consequences of AI-mediated decisions and the environmental impacts generated by AI systems themselves.
The central conclusion concerns institutions rather than tools. The long-term viability of AI-enabled environmental governance depends less on analytical sophistication than on governance design. Systems that cannot be explained in relation to their effects, challenged through meaningful procedures, or held accountable for their material impacts are unlikely to sustain public trust or produce stable environmental outcomes, regardless of predictive accuracy. Algorithmic sustainability therefore reframes AI not as a shortcut to environmental action, but as a form of regulated infrastructure whose legitimacy requires ongoing maintenance.

8.1. Limitations

This study has several limitations. It does not evaluate individual AI systems, quantify environmental impacts, or conduct jurisdiction-specific legal compliance analysis. The framework is conceptual and structural. It is intended to support governance design rather than to replace empirical assessment, lifecycle analysis, or legal review. The qualitative comparative review emphasizes conceptual coverage over exhaustiveness. For this reason, the ethical risk register and governance toolkit should be interpreted as minimum governance requirements rather than as comprehensive prescriptions.

8.2. Future Research Directions

Future research can extend this work in several directions. Empirical studies could apply the algorithmic sustainability framework to specific environmental decision systems in order to examine how governance instruments operate in practice. Comparative research could investigate how different regulatory regimes integrate, or fail to integrate, lifecycle environmental impacts into AI governance. Additional work could operationalize Environmental AI Impact Assessment through sector-specific guidance, develop methods for evaluating contestability and accountability, or examine the political economy implications of AI procurement and platform dependency in environmental governance.
In closing, algorithmic sustainability offers a structured approach for aligning environmental effectiveness, ethical legitimacy, and institutional accountability in AI-driven environmental decision-making. Its value lies not in prescribing technological solutions, but in supporting governance arrangements capable of sustaining environmental action under conditions of complexity, uncertainty, and institutional constraint.

Author Contributions

Conceptualization, G.T. and K.A.; methodology, G.T.; software, G.T.; validation, G.T. and K.A.; formal analysis, K.A.; investigation, K.A.; resources, G.T.; data curation, G.T.; writing—original draft preparation, G.T.; writing—review and editing, K.A.; visualization, G.T.; supervision, G.T.; project administration, K.A.; funding acquisition, G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are included within the article and its supplementary material.

Acknowledgments

The authors acknowledge IMAGINE Studios for providing technical infrastructure and research support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
EAIA: Environmental Artificial Intelligence Impact Assessment
EU: European Union
HVAC: Heating, Ventilation, and Air Conditioning
IPCC: Intergovernmental Panel on Climate Change
ITU: International Telecommunication Union
OECD: Organization for Economic Co-operation and Development
UN: United Nations
UNEP: United Nations Environment Programme
UNESCO: United Nations Educational, Scientific and Cultural Organization

References

1. Calvin, K.; Dasgupta, D.; Krinner, G.; Mukherji, A.; Thorne, P.W.; Trisos, C.; Romero, J.; et al. Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Core Writing Team, Lee, H., Romero, J., Eds.; IPCC: Geneva, Switzerland, 2023.
2. Tironi, M.; Rivera Lisboa, D.I. Artificial Intelligence in the New Forms of Environmental Governance in the Chilean State: Towards an Eco-Algorithmic Governance. Technology in Society 2023, 74, 102264.
3. Nost, E. Governing AI, Governing Climate Change? Geography and Environment 2024, 11, e00138.
4. Kaack, L.H.; Donti, P.L.; Strubell, E.; Kamiya, G.; Creutzig, F.; Rolnick, D. Aligning Artificial Intelligence with Climate Change Mitigation. Nat. Clim. Chang. 2022, 12, 518–527.
5. United Nations Environment Programme. Artificial Intelligence (AI) End-to-End: The Environmental Impact of the Full AI Lifecycle Needs to Be Comprehensively Assessed - Issue Note; UNEP, 2024; ISBN 978-92-807-4182-7.
6. Coeckelbergh, M. AI for Climate: Freedom, Justice, and Other Ethical and Political Challenges. AI Ethics 2021, 1, 67–72.
7. Nordgren, A. Artificial Intelligence and Climate Change: Ethical Issues. JICES 2023, 21, 1–15.
8. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act); Official Journal of the European Union, 2024.
9. Batool, A.; Zowghi, D.; Bano, M. AI Governance: A Systematic Literature Review. AI Ethics 2025, 5, 3265–3279.
10. United Nations Educational, Scientific and Cultural Organization (UNESCO). Recommendation on the Ethics of Artificial Intelligence; UNESCO, 2021.
11. Organisation for Economic Co-operation and Development (OECD). OECD Legal Instruments. Available online: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed on 5 February 2026).
12. United Nations CEB. Principles for the Ethical Use of Artificial Intelligence in the United Nations System. Available online: https://unsceb.org/principles-ethical-use-artificial-intelligence-united-nations-system (accessed on 5 February 2026).
13. Bartmann, M. The Ethics of AI-Powered Climate Nudging—How Much AI Should We Use to Save the Planet? Sustainability 2022, 14, 5153.
14. International Telecommunication Union (ITU). The Annual AI Governance Report 2025: Steering the Future of AI. Available online: https://www.itu.int/epublications/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai (accessed on 5 February 2026).
Table 1. Proportional Governance Requirements by AI Decision Criticality in Environmental Decision Contexts.

Tier 1: Informational AI
Typical environmental decision contexts: environmental monitoring dashboards, scenario visualization, internal analytics.
Governance risk exposure: low direct impact; indirect influence on awareness.
Mandatory governance instruments: documentation of model purpose; basic transparency statements; internal review protocols.

Tier 2: Decision-Support AI
Typical environmental decision contexts: land-use planning support, risk prioritization, permit evaluation assistance.
Governance risk exposure: moderate institutional and ethical risk; human judgment still primary.
Mandatory governance instruments: auditability requirements; data provenance documentation; lifecycle environmental footprint disclosure; human-in-the-loop verification.

Tier 3: Decision-Triggering AI
Typical environmental decision contexts: automated zoning classification, compliance enforcement triggers, eligibility or restriction determinations.
Governance risk exposure: high ecological, ethical, and legitimacy risk; limited discretion.
Mandatory governance instruments: Environmental AI Impact Assessment (EAIA); formal accountability assignment; contestation and remedy mechanisms; justification registers.

Tier 4: Delegated or Automated Authority
Typical environmental decision contexts: automated permitting decisions, enforcement actions without discretionary override.
Governance risk exposure: systemic governance and democratic legitimacy risk.
Mandatory governance instruments: statutory authorization; continuous auditing; external oversight; public transparency obligations; suspension and rollback mechanisms.
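For illustration, the tiering in Table 1 can be encoded as a simple lookup that audit or procurement tooling might use to flag missing instruments. The tier labels follow the table; the instrument identifiers, function name, and checking logic are assumptions introduced here for the sketch, not a standardized schema.

```python
# Sketch encoding Table 1 as a lookup that audit or procurement tooling could
# use to check whether a deployment's declared governance instruments meet
# the minimum for its tier. Instrument identifiers are shorthand assumptions.
MANDATORY_INSTRUMENTS = {
    1: {"purpose documentation", "transparency statement", "internal review"},
    2: {"auditability", "data provenance", "footprint disclosure",
        "human-in-the-loop verification"},
    3: {"EAIA", "accountability assignment", "contestation and remedy",
        "justification register"},
    4: {"statutory authorization", "continuous auditing", "external oversight",
        "public transparency", "suspension and rollback"},
}

def missing_instruments(tier: int, declared: set[str]) -> set[str]:
    """Return the mandatory instruments a deployment has not yet declared."""
    return MANDATORY_INSTRUMENTS[tier] - declared

# A Tier 3 system declaring only an EAIA still lacks three mandatory instruments.
print(missing_instruments(3, {"EAIA"}))
```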
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.