
Governing Environmental Decisions in the Age of AI: Algorithmic Sustainability as a Policy Review

Submitted: 27 February 2026 | Posted: 04 March 2026


Abstract
In recent years, artificial intelligence has been systematically integrated into public environmental decision-making. It increasingly influences risk classification, the distribution of resources, and the exercise of regulatory authority. While policy attention often focuses on predictive performance and ethical principles, less scrutiny has been directed toward the institutional conditions under which algorithmic outputs acquire decision relevance. This policy review addresses that gap by framing environmental artificial intelligence as decision-making infrastructure rather than as neutral analytical software. It introduces the concept of algorithmic sustainability, defined not as a technical property of algorithms but as a governance condition that aligns lifecycle environmental impacts, enforceable accountability, and procedural legitimacy. Drawing on international policy frameworks and regulatory developments, the review shows how current governance instruments insufficiently integrate lifecycle environmental footprints into decision justification. To operationalize algorithmic sustainability, this paper proposes environmental algorithmic impact assessment as a gatekeeping and renewal mechanism for artificial intelligence used in environmental governance. The review concludes that aligning algorithmic deployment with sustainability and the rule of law depends on institutional design choices made before and during system use rather than on technical optimization alone.

1. The Governance Gap in AI-Driven Environmental Decision-Making

Environmental agencies increasingly deploy artificial intelligence (AI) in climate and disaster risk mapping, land use planning, permitting triage, remote sensing, and compliance targeting. These systems are used by public authorities to support or trigger administrative decisions that shape regulatory priorities, allocate public resources, and constrain or enable permissible environmental activity. Their uptake reflects institutional pressure to process complex data under conditions of uncertainty and time constraints. In practice, however, research on climate governance shows that policy assessments often prioritize predictive performance while leaving two foundational concerns insufficiently examined. One relates to the environmental footprint generated by AI systems across energy use, emissions, water demand, and material supply chains. The other concerns the authority exercised when algorithmic outputs inform or constrain public decisions, including questions of transparency, accountability, and access to contestation and remedy [1,2].
Building on this evidence, this policy review applies an algorithmic sustainability perspective developed in recent research [3]. The term is introduced by the authors as a conceptual contribution intended to clarify governance conditions specific to artificial intelligence in environmental decision-making; because the concept is newly proposed, the foundational definition and normative framing articulated in that earlier work are referenced throughout [3]. In this context, algorithmic sustainability refers to the governance conditions under which artificial intelligence is used in public environmental decision-making. It does not describe a technical property of algorithms or a claim about computational efficiency. Rather, it denotes an institutional framework that integrates lifecycle environmental impacts with enforceable accountability mechanisms and strengthens procedural legitimacy, including the provision of intelligible reasons for decisions and access to contestation and remedy. Unlike general responsible AI frameworks, algorithmic sustainability explicitly embeds lifecycle environmental externalities within enforceable public-sector decision controls. Environmental AI is therefore treated as regulated decision-making infrastructure rather than as a neutral technical aid. Sustainability is evaluated through three interrelated conditions. Ecological effectiveness is assessed with reference to lifecycle impacts, including computation and supporting infrastructure. Institutional accountability is examined through governance arrangements that operate in enforceable ways rather than through voluntary commitments. Ethical legitimacy is considered in relation to transparency, procedural safeguards, and the availability of viable pathways for contestation and remedy when decisions affect rights or access to resources [3].
Evidence on resource demand reinforces the urgency of this framing. Estimates from the International Energy Agency indicate that data centers consumed approximately 415 terawatt-hours of electricity in 2024, representing about 1.5 percent of global electricity use. Baseline projections suggest that demand associated with data center infrastructure will more than double by 2030, approaching roughly 900–950 terawatt-hours, with artificial intelligence identified as a major driver through model training, inference at scale, and infrastructure lock-in. In parallel, the United Nations Environment Programme emphasizes that climate and environmental interventions must be assessed across full system lifecycles and infrastructure dependencies rather than inferred from isolated performance gains, with assessment extending from model development through deployment and infrastructure operation [4,5].
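As a rough arithmetic check on these projections, the implied average annual growth rate follows directly from the figures above. The short sketch below uses 945 terawatt-hours as the midpoint of the cited 2030 range and is illustrative only.

```python
# Back-of-envelope check on the IEA trajectory cited above (illustrative only).
base_twh, base_year = 415, 2024   # estimated data-center electricity demand, 2024
proj_twh, proj_year = 945, 2030   # midpoint of the cited ~900-950 TWh baseline range

years = proj_year - base_year
cagr = (proj_twh / base_twh) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # ~14.7% per year
```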

1.1. Methodological Approach to the Policy Review

This article adopts a structured policy review methodology. The review focuses on international governance instruments, regulatory frameworks, multilateral reports, and technical standards published between 2020 and 2025 that address artificial intelligence in public decision-making and environmental governance. Sources were selected based on three criteria: (1) relevance to public-sector AI deployment, (2) explicit treatment of governance, risk management, or accountability mechanisms, and (3) potential implications for lifecycle environmental impacts. The analysis applies an institutional lens rather than a technical performance perspective. Specifically, instruments were examined according to two evaluative dimensions: (i) how decision authority is structured, delegated, or constrained when AI systems inform public environmental decisions; and (ii) whether lifecycle environmental impacts, including energy use, emissions, water demand, and infrastructure dependencies, are integrated into decision justification or oversight requirements.
This review does not provide an empirical case study or quantitative assessment. Instead, it synthesizes regulatory developments and policy frameworks to identify structural governance gaps and to propose an operational mechanism, Environmental Algorithmic Impact Assessment (EAIA), intended to address those gaps. The objective is conceptual clarification and institutional design refinement rather than performance benchmarking.

2. Evidence Base for Governing Environmental AI

Climate policy research shows that artificial intelligence influences environmental outcomes through pathways that extend beyond application-level performance. These include energy use and emissions associated with model training, resource demand generated by inference at scale, and system-level effects such as rebound dynamics and infrastructure lock-in that shape longer-term environmental trajectories [1]. When embedded within public decision processes, these pathways position AI not only as an analytical tool but also as a contributor to cumulative environmental pressure.
For environmental governance, this distinction has direct institutional implications. In many administrative settings, AI systems used by public agencies often translate probabilistic outputs into durable administrative categories, including flood risk zones, inspection priorities, permitting thresholds, or enforcement triggers. As a result, a system may improve localized indicators, such as monitoring coverage or predictive accuracy, while increasing net environmental burdens elsewhere through electricity demand, water use, or material consumption associated with persistent deployment. Without lifecycle-based assessment and governance controls that extend beyond pilot use, such effects remain largely invisible to decision-makers [1]. Environmental effectiveness alone does not confer legitimacy. Ethical and political analyses show that uses of artificial intelligence justified by climate urgency can undermine freedom, justice, and democratic authority when opaque classifications or constrained autonomy are accepted as necessary trade-offs [6]. In public environmental decision-making, legitimacy depends on the conditions under which authority is exercised, including access to intelligible reasons for decisions, meaningful opportunities for contestation, and remedies where harm occurs.
These legitimacy requirements are reflected in international governance instruments. UNESCO’s Recommendation on the Ethics of Artificial Intelligence situates AI governance within human rights obligations and emphasizes procedural safeguards in public decision contexts. The OECD AI Principles articulate proportionality and risk-based governance where systems affect societal well-being, including environmental protection as a governance objective rather than a secondary benefit [7,8].
Binding regulatory approaches have begun to operationalize these expectations. In the European Union, the Artificial Intelligence Act establishes a risk-based framework under which systems used by public authorities for environmental management may be classified as high risk, depending on intended purpose and use context, triggering obligations related to risk management, transparency, and human oversight [9]. In parallel, the Council of Europe Framework Convention on Artificial Intelligence, opened for signature in September 2024 as a legally binding treaty text, establishes commitments to ensure that AI deployment across its lifecycle remains consistent with human rights, democracy, and the rule of law. Despite this convergence, a governance gap persists. Most instruments specify rights and procedural duties more clearly than they require standardized, auditable disclosure of lifecycle environmental footprints for AI systems used in public environmental decisions. Energy use, emissions, water demand, and material dependencies remain weakly integrated into decision justification, even as data center infrastructure and associated environmental pressure continue to expand [5].

3. From Evidence to Governance Action: Closing the Institutional Gap

Notably, the evidence reviewed above indicates that the governance gap surrounding environmental artificial intelligence is institutional rather than technical. Systems used in environmental governance do not merely generate information. They structure how public authority is exercised. Model outputs are translated into classifications, prioritization schemes, or thresholds that trigger administrative action, shaping land use constraints, inspection schedules, eligibility determinations, or enforcement responses. Environmental and social consequences follow not because models fail but because algorithmic outputs acquire decision relevance within regulatory processes [1,5].
This translation from computation to authority exposes a mismatch between use and oversight. Governance arrangements often emphasize model performance, data quality, or abstract ethical principles, while giving insufficient attention to the institutional pathways through which AI-informed outputs become binding decisions. Accountability mechanisms therefore lag behind the locus of authority, and lifecycle environmental impacts remain weakly integrated into decision justification [2,8].
Across international policy instruments, a common insight is emerging. Governance must attach to decision authority rather than to technical function alone. Ethics guidance emphasizes procedural legitimacy where AI informs public decisions. Risk-based frameworks focus on proportional oversight calibrated to societal impact. Binding regulation links obligations to uses that affect rights, access, or public resources. Together, these approaches converge on the need to govern how authority is exercised, not simply how systems perform [8,9].
Translating this insight into practice requires governance mechanisms that operate across the full decision lifecycle. Controls are needed before deployment, embedded in procurement conditions, approval processes, and impact assessment. They are also required after deployment, enabling audit, contestation, and remedy once systems influence real decisions [9]. Without this dual structure, environmental AI governance remains aspirational. Treating environmental AI as regulated decision-making infrastructure reframes the policy question from whether systems perform well to how authority is allocated, constrained, and reviewed when algorithmic outputs shape environmental outcomes [5].

4. Environmental Algorithmic Impact Assessment as a Governance Instrument

Addressing the governance gap identified above requires an instrument that operates at the point where algorithmic systems acquire decision relevance. Environmental algorithmic impact assessment (EAIA) is proposed as a governance mechanism rather than a technical evaluation, linking environmental sustainability, institutional accountability, and procedural legitimacy before and during the use of artificial intelligence in public environmental decision-making.
EAIA applies to environmental AI systems that inform decisions or exercise higher degrees of authority, including systems that support prioritization, trigger administrative action, or operate under delegated authority. This scope reflects the influence that advisory systems can exert once embedded in regulatory workflows, particularly where outputs are repeatedly relied upon or translated into standardized administrative categories. Positioning EAIA as a gatekeeping requirement aligns assessment with public authority rather than with voluntary ethical review, consistent with public sector AI governance guidance [7,8].
To function as an operational tool, EAIA must specify a minimum set of content requirements. An assessment should document at least the following (a machine-readable sketch follows the list):
  • The system’s role in decision-making, ranging from informational use to delegated authority
  • The environmental stakes involved and foreseeable distributional effects
  • Data provenance and limitations affecting classification or prioritization
  • Procedural safeguards, including intelligible reasons for decisions, access to appeal, and arrangements for human review
  • Lifecycle environmental impacts, covering energy use during training, anticipated inference at scale, emissions, water demand where relevant, and infrastructure dependencies
  • Oversight arrangements, including escalation pathways and conditions for modification, suspension, or withdrawal
These elements respond to calls for lifecycle-based assessment and enforceable accountability in environmental applications of artificial intelligence [1,5].
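To make these documentation requirements concrete, the sketch below shows one possible machine-readable structure for an EAIA record. The field and class names are illustrative assumptions, not a standardized schema.

```python
# Illustrative EAIA record; field and class names are assumptions,
# not a standardized schema.
from dataclasses import dataclass
from enum import Enum

class DecisionRole(Enum):
    INFORMATIONAL = 1         # outputs advise human decision-makers
    DECISION_SUPPORT = 2      # outputs shape prioritization or triage
    TRIGGERS_ACTION = 3       # outputs initiate administrative action
    DELEGATED_AUTHORITY = 4   # outputs carry delegated public authority

@dataclass
class LifecycleFootprint:
    training_energy_kwh: float
    annual_inference_energy_kwh: float
    emissions_tco2e: float
    water_demand_m3: float | None = None   # disclosed where relevant
    infrastructure_notes: str = ""         # e.g., data-center dependencies

@dataclass
class EAIARecord:
    system_name: str
    decision_role: DecisionRole
    environmental_stakes: str              # incl. foreseeable distributional effects
    data_provenance: list[str]             # sources and known limitations
    procedural_safeguards: list[str]       # reasons, appeal, human review
    lifecycle: LifecycleFootprint
    oversight_arrangements: list[str]      # escalation, modification, withdrawal
    accountable_official: str
```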
EAIA should operate as a condition for continued use rather than as a one-time approval. Renewal should be required on a schedule proportionate to decision criticality and following material changes to models, data sources, or deployment scale. This approach reflects the evolving nature of environmental impacts and governance risks as systems are updated or scaled [7]. Operational feasibility can be strengthened through proportionality mechanisms already used in public administration. The Government of Canada’s Algorithmic Impact Assessment provides a reference model by linking assessed impact levels to corresponding mitigation, documentation, and oversight requirements under the Directive on Automated Decision-Making [16]. Adapting this logic to environmental decision contexts allows EAIA to scale administrative effort while preserving accountability.
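A minimal sketch of such a proportionality rule is shown below; the criticality tiers and renewal intervals are invented for illustration and are not drawn from the Canadian directive or any other instrument.

```python
# Hypothetical proportionality rule: renewal interval scales with decision
# criticality, and any material change forces immediate reassessment.
from datetime import date, timedelta

RENEWAL_INTERVALS = {                # illustrative values, not from any directive
    "informational": timedelta(days=3 * 365),
    "decision_support": timedelta(days=2 * 365),
    "triggers_action": timedelta(days=365),
    "delegated_authority": timedelta(days=180),
}

def eaia_renewal_due(tier: str, last_assessed: date,
                     material_change: bool, today: date) -> bool:
    """Return True if the EAIA must be renewed before continued use."""
    if material_change:              # model, data source, or scale changed
        return True
    return today - last_assessed >= RENEWAL_INTERVALS[tier]

# Example: a delegated-authority system assessed seven months ago is already due.
print(eaia_renewal_due("delegated_authority", date(2025, 8, 1),
                       material_change=False, today=date(2026, 3, 1)))  # True
```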

4.1. Operational Illustration: AI-Based Flood-Risk Inspection Prioritization

To clarify operational implications, consider a public agency deploying an AI system to prioritize flood-risk inspections in a coastal region. The model processes satellite data, hydrological records, infrastructure age, and demographic exposure indicators to classify zones according to anticipated vulnerability. Inspection resources are then allocated based on these classifications. Under an algorithmic sustainability framework, EAIA would be triggered because the system influences enforcement sequencing and resource allocation. The assessment would first document the system’s decision role, identifying whether outputs are advisory or automatically translated into inspection schedules. Second, it would evaluate environmental stakes, including the distributional consequences of inspection prioritization across communities. Lifecycle impacts would be disclosed, including anticipated energy use for model training, inference frequency, data center dependencies, and associated emissions assumptions. Procedural safeguards would be specified, such as the ability of affected municipalities to request review of classification results and the availability of human override authority. Oversight arrangements would define renewal intervals and conditions for reassessment following model updates or scaling. This illustration demonstrates that EAIA does not evaluate predictive accuracy alone. It structures governance around authority, environmental externalities, and contestability before algorithmic classifications shape administrative action.
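Read operationally, the gatekeeping logic of this illustration can be expressed as a simple control flow in which classifications become a binding schedule only when a valid EAIA is on file and a human official has signed off. The sketch below is hypothetical and does not describe any deployed system.

```python
# Hypothetical gatekeeping flow: model scores become an inspection schedule
# only if a valid EAIA is on file, and a human official signs off before
# classifications bind administrative action.

def issue_inspection_schedule(zone_scores: dict[str, float],
                              eaia_valid: bool,
                              human_signoff: bool) -> list[str]:
    if not eaia_valid:
        raise PermissionError("No valid EAIA on file: deployment not authorized.")
    if not human_signoff:
        raise PermissionError("Human review required before schedule is binding.")
    # Rank zones by modeled vulnerability; ranking is advisory until approved.
    return sorted(zone_scores, key=zone_scores.get, reverse=True)

schedule = issue_inspection_schedule(
    {"coastal-A": 0.91, "delta-B": 0.74, "upland-C": 0.32},
    eaia_valid=True, human_signoff=True)
print(schedule)  # ['coastal-A', 'delta-B', 'upland-C']
```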
EAIA is not intended to replace sector-specific environmental assessment, regulatory approval, or judicial review; its role is complementary. By embedding EAIA within procurement processes, authorization procedures, and operational policy, public authorities can ensure that algorithmic systems meet the baseline conditions of sustainability, accountability, and legitimacy before they shape environmental decisions, consistent with emerging international expectations for AI governance in the public interest [7,8].

5. From Principles to Practice: Governing Environmental AI

Effective governance of environmental artificial intelligence depends on implementation capacity. Without standardized methods, interoperable metrics, and mechanisms for review, commitments related to sustainability, accountability, and legitimacy remain difficult to enforce. Implementation must therefore prioritize comparability across systems, auditability over time, and adaptability as models, environmental baselines, and deployment scales change.
Standards provide the foundation for translating governance expectations into operational requirements. For lifecycle environmental impacts, ISO 14040 and ISO 14044 define requirements for lifecycle assessment across energy use, emissions, water demand, and material dependencies from production through operation and end-of-life. Anchoring disclosure obligations in these standards enables comparison across systems using consistent assumptions rather than vendor-specific reporting practices [5,10,11]. Where greenhouse gas accounting is required, alignment with the Greenhouse Gas Protocol Product Standard allows lifecycle emissions associated with environmental AI systems to be reported in a manner compatible with public sector climate reporting. This alignment enables AI-related emissions to be assessed alongside other operational and embodied sources within decarbonization strategies [12].
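To illustrate how such lifecycle disclosures compose, the sketch below tallies a product-level footprint in the spirit of the GHG Protocol Product Standard; all activity data and the grid emission factor are invented for illustration.

```python
# Illustrative product-lifecycle emissions tally in the spirit of the GHG
# Protocol Product Standard. All activity data and factors are invented.
GRID_INTENSITY_KG_PER_KWH = 0.35     # hypothetical grid carbon intensity

def lifecycle_emissions_tco2e(training_kwh: float,
                              annual_inference_kwh: float,
                              service_years: float,
                              embodied_hardware_tco2e: float) -> float:
    operational_kwh = training_kwh + annual_inference_kwh * service_years
    operational_t = operational_kwh * GRID_INTENSITY_KG_PER_KWH / 1000.0
    return operational_t + embodied_hardware_tco2e

# Example: one training run, five years of inference, amortized hardware share.
total = lifecycle_emissions_tco2e(
    training_kwh=50_000, annual_inference_kwh=120_000,
    service_years=5, embodied_hardware_tco2e=12.0)
print(f"{total:.1f} tCO2e over the service life")  # ~239.5 tCO2e
```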
Operational oversight of computational infrastructure requires metrics that capture resource intensity during use. ISO and IEC specifications define indicators for data center efficiency, including power usage effectiveness and water usage effectiveness. Reporting against these indicators allows agencies to monitor electricity and water demand as inference workloads scale and infrastructure enters long service cycles [4,13].
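Both indicators are simple ratios: power usage effectiveness divides total facility energy by IT equipment energy, and water usage effectiveness is commonly expressed as site water use per unit of IT equipment energy. A minimal computation, with invented example figures:

```python
# PUE per ISO/IEC 30134-2: total facility energy over IT equipment energy.
# WUE is commonly liters per kWh of IT energy; example figures are invented.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
    return site_water_liters / it_equipment_kwh

print(f"PUE: {pue(1_450_000, 1_000_000):.2f}")        # 1.45
print(f"WUE: {wue(1_800_000, 1_000_000):.2f} L/kWh")  # 1.80 L/kWh
```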
Governance maturity can be strengthened through lifecycle-oriented risk management. The Artificial Intelligence Risk Management Framework developed by the U.S. National Institute of Standards and Technology provides a reference for identifying, assessing, and managing AI-related risks across design, deployment, and operation. Its emphasis on documentation, monitoring, and accountability complements binding regulatory requirements in public sector contexts [7,14].

Implementation also requires mechanisms that make accountability enforceable. Documentation and logging practices should allow reconstruction of how algorithmic outputs have informed specific decisions, including model provenance, data inputs, and points of human intervention. For systems with high decision criticality, independent audits and incident reporting provide additional safeguards, enabling verification beyond formal compliance [9].
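A minimal sketch of this logging discipline, assuming a simple append-only record per AI-informed decision; the field names are illustrative, and the hash chaining is one possible tamper-evidence measure rather than a regulatory requirement.

```python
# Minimal append-only decision log supporting reconstruction of how an
# algorithmic output informed a specific decision. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log: list[dict], *, model_id: str, model_version: str,
                 input_digest: str, output_summary: str,
                 human_intervention: str | None, decision_ref: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,                 # model provenance
        "model_version": model_version,
        "input_digest": input_digest,         # hash of data inputs, not raw data
        "output_summary": output_summary,
        "human_intervention": human_intervention,
        "decision_ref": decision_ref,         # link to the administrative decision
        # Chain each entry to the previous one for tamper evidence.
        "prev_hash": hashlib.sha256(
            json.dumps(log[-1], sort_keys=True).encode()).hexdigest()
            if log else None,
    }
    log.append(entry)
    return entry
```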
To support continuous improvement without excessive administrative burdens, monitoring should rely on a focused set of indicators. These may include the proportion of AI systems inventoried and publicly described, completion and renewal rates of environmental algorithmic impact assessments, the share of systems with verified lifecycle footprint disclosure, reporting of infrastructure efficiency metrics, the number and outcomes of appeals related to AI-informed decisions, and audit findings resolved within defined timeframes. Together, these indicators allow governance performance to be tracked over time while remaining proportionate to institutional capacity [7]; a computation sketch closes this section.

Finally, implementation capacity varies across jurisdictions, so international coordination plays an enabling role. Multilateral initiatives emphasize standards exchange, technical assistance, and institutional learning to reduce asymmetries in regulatory and technical capability. Effective governance depends not only on formal rules but also on the sustained capacity to apply and adapt them as environmental and digital conditions evolve [15].
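The inventory-based indicators listed above reduce to simple ratios over a system register, as the hypothetical sketch below shows; field names are assumptions.

```python
# Hypothetical governance indicators computed over a system inventory.
def governance_indicators(systems: list[dict]) -> dict[str, float]:
    n = len(systems)
    return {
        "registered_share": sum(s["publicly_described"] for s in systems) / n,
        "eaia_current_share": sum(s["eaia_current"] for s in systems) / n,
        "footprint_verified_share": sum(s["footprint_verified"] for s in systems) / n,
    }

inventory = [
    {"publicly_described": True, "eaia_current": True, "footprint_verified": False},
    {"publicly_described": True, "eaia_current": False, "footprint_verified": False},
]
print(governance_indicators(inventory))
# {'registered_share': 1.0, 'eaia_current_share': 0.5, 'footprint_verified_share': 0.0}
```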

6. Policy Recommendations and Action Points

Translating algorithmic sustainability into practice requires a limited set of institutionally grounded actions. The recommendations below address public environmental agencies, central procurement authorities, and sectoral regulators responsible for approving, deploying, or overseeing artificial intelligence in environmental decision-making.

6.1. Establish an Integrated Algorithmic Sustainability Regime

Public authorities adopt an integrated governance regime for artificial intelligence used in environmental decision-making, treating such systems as decision-making infrastructure governed through procurement, operational policy, and, where applicable, binding regulation. Integration reduces fragmentation in which environmental performance, accountability, and procedural legitimacy are addressed separately [7,8]. Responsible institutions: Environmental ministries, working in coordination with central digital government bodies and public procurement authorities.

6.2. Introduce Decision Criticality Tiers and a Public System Register

Environmental AI systems are classified by decision criticality, distinguishing informational tools from systems that support decisions, trigger administrative action, or exercise delegated authority. A public register records system purpose, decision role, principal data sources, provider, model and version history, and the accountable official. Governance obligations scale with decision authority, consistent with risk-based approaches [7,9]. Responsible institutions: Environmental agencies, with oversight from public sector digital governance bodies.
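One possible shape for a register entry capturing the fields named above is sketched below; the schema is an illustrative assumption rather than a prescribed format.

```python
# Illustrative public-register entry; field names are assumptions.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    system_purpose: str
    decision_role: str                 # informational / support / trigger / delegated
    principal_data_sources: list[str]
    provider: str
    model_version_history: list[str]
    accountable_official: str
```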

6.3. Mandate Environmental Algorithmic Impact Assessment as Gatekeeping and Renewal

EAIA is required for systems that inform decisions or operate at higher levels of authority. EAIA functions as a pre-deployment condition and as a requirement for continued use. Renewal occurs on a schedule proportionate to decision criticality and after material changes to models, data, or deployment scale. Proportionality is operationalized by linking assessed impact levels to required mitigation [7,16]. Responsible institutions: Regulatory authorities and public procurement bodies, supported by environmental assessment agencies.

6.4. Standardize and Verify Lifecycle Footprint Disclosure

Public authorities require standardized, auditable disclosure of lifecycle environmental impacts for environmental AI systems, including energy use during training, anticipated inference at scale, carbon intensity assumptions, water impacts where relevant, and infrastructure dependencies. Disclosure aligns with internationally recognized lifecycle assessment standards to enable comparison across systems [5,10,11]. Responsible institutions: Environmental regulators, working in collaboration with standards organizations and independent audit bodies.

6.5. Embed Traceability, Audit, and Independent Assurance

Governance arrangements require sufficient documentation and logging to reconstruct how algorithmic outputs have influenced specific decisions and to track model provenance. For systems with high decision criticality, independent audits and incident reporting provide additional safeguards consistent with regulatory expectations for public sector AI use [9,14]. Responsible institutions: Oversight bodies, supreme audit institutions, and sectoral regulators.

6.6. Guarantee Contestability and Remedy by Design

Where artificial intelligence materially influences environmental decisions, affected parties receive notice of AI involvement and access to intelligible reasons for outcomes. Appeal mechanisms provide meaningful human review by officials with the authority to revise decisions, alongside remedies that allow correction, suspension, or withdrawal where appropriate. These safeguards support democratic legitimacy and the rule of law [8,9]. Responsible institutions: Environmental agencies, administrative review bodies, and courts.

6.7. Apply Proportionality to Address Capacity and Burden Concerns

Governance requirements scale with decision criticality and environmental stakes. Tiered assessment, differentiated documentation, and targeted audit thresholds allow oversight to focus on higher-risk systems while maintaining baseline safeguards across all deployments [7]. Responsible institutions: Central government policy units responsible for regulatory design.

6.8. Support Capacity Building and International Coordination

Governments invest in training, standards exchange, and institutional learning related to environmental AI governance. International coordination reduces asymmetries in regulatory and technical capacity, particularly for agencies operating with limited resources [15,17]. Responsible institutions: National governments in coordination with international organizations.

7. Conclusion

This policy review has argued that artificial intelligence used in environmental governance should be understood and governed as decision-making infrastructure rather than as neutral software. When algorithmic systems inform, trigger, or constrain public decisions, they exercise institutional authority with material environmental and social consequences. Governance therefore needs to address not only predictive performance but also lifecycle environmental impacts, accountability arrangements, and procedural legitimacy.
The concept of algorithmic sustainability reframes environmental AI governance around these conditions. It emphasizes that sustainability is not an attribute of algorithms themselves but a property of the governance systems within which they operate. Without mechanisms that account for energy use, emissions, water demand, and material dependencies across the full lifecycle, environmental benefits attributed to AI risk being overstated. Without enforceable accountability, transparency, and contestability, algorithmic decision-making can weaken due process and public trust, even when deployed in the name of environmental protection.

Recent international frameworks signal a growing recognition of these risks, yet gaps remain between principles and practice. These gaps are primarily institutional. They arise from how algorithmic outputs are integrated into administrative processes, how authority is delegated, and how oversight mechanisms are designed. Addressing them requires governance tools that operate before deployment and throughout system use rather than relying on ex post correction alone. Environmental algorithmic impact assessment offers one such tool by linking decision authority, environmental stakes, and accountability within an operational framework.
Looking ahead, the central question is not whether artificial intelligence will continue to shape environmental decision-making but how that influence will be governed. Treating environmental AI as regulated infrastructure clarifies responsibility, enables comparison across interventions, and supports legitimate public decision-making under conditions of uncertainty. As environmental pressures intensify and digital systems become more deeply embedded in governance, the ability to align algorithmic deployment with sustainability and the rule of law will increasingly define the credibility of environmental policy.

Funding

This research received no external funding.

Use of Generative Artificial Intelligence

During the preparation of this manuscript, the authors used generative AI tools, including ChatGPT, QuillBot, and Grammarly, exclusively for language refinement, grammar correction, structural editing, and limited translation support. In addition, professional editing or translation services that may employ AI-assisted tools were used solely for linguistic improvement. These tools were not used to conduct statistical analysis, perform simulations, develop methodological decisions, or create references. All scientific content, interpretations, and conclusions were developed, critically reviewed, and validated by the authors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors acknowledge IMAGINE Studios for providing technical infrastructure.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
EAIA Environmental Algorithmic Impact Assessment
EU European Union
GHG Greenhouse Gas
GSPR Global Sustainability Policy Review
IEC International Electrotechnical Commission
IEA International Energy Agency
ISO International Organization for Standardization
IT Information Technology
ITU International Telecommunication Union
LCA Life Cycle Assessment
NIST National Institute of Standards and Technology
OECD Organisation for Economic Co-operation and Development
PUE Power Usage Effectiveness
UN United Nations
UNEP United Nations Environment Programme
UNESCO United Nations Educational, Scientific and Cultural Organization

References

  1. Kaack, L.H.; Donti, P.L.; Strubell, E.; Kamiya, G.; Creutzig, F.; Rolnick, D. Aligning Artificial Intelligence with Climate Change Mitigation. Nat. Clim. Chang. 2022, 12, 518–527.
  2. Tironi, M.; Rivera Lisboa, D.I. Artificial Intelligence in the New Forms of Environmental Governance in the Chilean State: Towards an Eco-Algorithmic Governance. Technol. Soc. 2023, 74, 102264.
  3. Ali, K.; Tintawi, G. Algorithmic Sustainability: Ethics, Governance, and Global Policies for AI-Driven Environmental Decision-Making. Preprints 2026, 202602XXXX.
  4. International Energy Agency. Energy and AI (Executive Summary); International Energy Agency: Paris, France, 2025.
  5. United Nations Environment Programme. Adaptation Gap Report 2023: Underfinanced. Underprepared. Inadequate Investment and Planning on Climate Adaptation Leaves the World Exposed; UNEP: Nairobi, Kenya, 2023; ISBN 978-92-807-4092-9.
  6. Coeckelbergh, M. AI for Climate: Freedom, Justice, and Other Ethical and Political Challenges. AI Ethics 2021, 1, 67–72.
  7. Organisation for Economic Co-operation and Development (OECD). OECD AI Principles. Available online: https://www.oecd.org/en/topics/ai-principles.html (accessed on 5 February 2026).
  8. United Nations Educational, Scientific and Cultural Organization (UNESCO). Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2021.
  9. European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Off. J. Eur. Union 2024, L 1689.
  10. International Organization for Standardization (ISO). ISO 14040:2006 Environmental Management—Life Cycle Assessment—Principles and Framework; ISO: Geneva, Switzerland, 2006.
  11. International Organization for Standardization (ISO). ISO 14044:2006 Environmental Management—Life Cycle Assessment—Requirements and Guidelines; ISO: Geneva, Switzerland, 2006.
  12. World Resources Institute; World Business Council for Sustainable Development. Greenhouse Gas Protocol: Product Life Cycle Accounting and Reporting Standard; WRI: Washington, DC, USA; WBCSD: Geneva, Switzerland, 2011; ISBN 978-1-56973-773-6.
  13. International Organization for Standardization; International Electrotechnical Commission. ISO/IEC 30134-2:2016 Information Technology—Data Centres—Key Performance Indicators—Part 2: Power Usage Effectiveness (PUE); ISO: Geneva, Switzerland, 2016.
  14. National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0); NIST AI 100-1; U.S. Department of Commerce: Gaithersburg, MD, USA, 2023.
  15. International Telecommunication Union (ITU). The Annual AI Governance Report 2025: Steering the Future of AI; ITU: Geneva, Switzerland, 2025. Available online: https://www.itu.int/epublications/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai (accessed on 5 February 2026).
  16. Treasury Board of Canada Secretariat. Algorithmic Impact Assessment (AIA) Tool. Available online: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (accessed on 12 February 2026).
  17. United Nations. Governing AI for Humanity: Interim Report of the High-Level Advisory Body on Artificial Intelligence; United Nations: New York, NY, USA, 2023; ISBN 978-92-1-106787-3.