Preprint
Article

This version is not peer-reviewed.

A Governance Framework for Urban AI in Climate-Resilient Housing and Infrastructure Prioritization

Submitted: 09 March 2026
Posted: 10 March 2026


Abstract
Smart city governance increasingly relies on AI-enabled planning systems, digital twins, vulnerability scoring tools, and capital investment prioritization platforms to allocate climate-resilient housing and infrastructure investments. Yet existing smart-urbanism and adaptation frameworks under-specify how such systems should encode (i) well-being, (ii) equity, and (iii) climate uncertainty in the decision logic that translates urban data into ranked projects and funded portfolios. This paper develops a governance-centered framework, Caring Urban AI, through a replicable conceptual synthesis that integrates research on (a) climate risk decision-making under deep uncertainty, (b) built-environment pathways relevant to psychosocial well-being, and (c) algorithmic accountability and fairness for public-sector decision infrastructures. The framework specifies a five-layer architecture linking (1) urban form and infrastructure, (2) climate exposure and environmental resources, (3) psychosocial mediators of well-being, (4) algorithmic design choices (data, objective functions, equity constraints, uncertainty handling, documentation), and (5) institutional governance (procurement, auditing, participation, redress), with explicit feedback loops. The primary outputs are: (i) the five-layer Caring Urban AI architecture operationalized as auditable decision infrastructure; (ii) eight mechanism-based propositions that render the framework empirically testable via audits and quasi-experimental policy evaluations; and (iii) an operational specification guide illustrating objective-function forms, equity constraints, robustness logic, and documentation artifacts for prioritization workflows.
The analysis concludes that aligning Urban AI with SDG 11 requires treating well-being-supportive living conditions as a decision objective, constraining optimization with equity conditions, and institutionalizing auditability and contestability to prevent distributive and psychosocial harm in climate-resilient investment planning.

1. Introduction

Smart city governance is increasingly enacted through computational architecture rather than discrete technology projects. In practice, municipal “smartness” is now instantiated as an operational stack: data ingestion from sensors and administrative systems; urban data platforms; GIS-based analytics and dashboards; scenario modeling and forecasting; and, most consequentially, AI-enabled prioritization systems that rank, select, and schedule investments in climate-resilient housing and infrastructure.
This shift matters because some of the most consequential urban decisions are fundamentally allocation decisions: which housing stock receives resilience retrofits first, which neighborhoods obtain heat mitigation or flood protection, and which infrastructure assets are prioritized for hardening, redundancy, or relocation. Such decisions are central to SDG 11, particularly targets related to adequate housing, disaster risk reduction, inclusive planning, and equitable access to safe public and green space [1,2,3], and they sit at the core of urban adaptation practice under political contestation and fiscal scarcity [4].
In this article, “Urban AI” denotes algorithmically mediated decision infrastructures used in urban governance to translate heterogeneous urban data into ranked projects and funded portfolios for planning and capital allocation. The term is used in a governance-operational sense (not as a claim that all tools are machine learning). It covers prioritization workflows that combine one or more of the following: (i) indicator construction and vulnerability scoring (rule-based or statistical), (ii) predictive risk scoring where applicable, (iii) portfolio selection or optimization subject to budget and policy constraints, and (iv) scenario-based stress testing (often implemented in planning models or digital-twin environments). The framework targets high-stakes allocation settings where auditability, equity constraints, and contestability are governance requirements, and it does not endorse fully automated decision-making or surveillance-intensive deployments as legitimate substitutes for democratic oversight.
Within this governance landscape, AI and GIS increasingly function as decision infrastructures, not merely as tools for “better information.” Vulnerability scoring tools translate heterogeneous indicators into ranked lists; digital twins and scenario environments support trade-off narratives and portfolio stress testing; capital investment prioritization systems embed objective functions and constraints that produce investment portfolios that can become de facto plans; and dashboards shape what is visible and therefore actionable [5,6,7]. While these systems often promise neutrality and evidence-based action, scholarship on smart urbanism has shown how computational governance can obscure normative choices, shift authority toward vendors and technical elites, and intensify surveillance, depoliticization, and inequity [8,9,10,11,12,13]. The practical challenge, therefore, is not simply critique: it is to develop a conceptually precise, governance-ready approach that can shape how AI-enabled prioritization systems are designed, procured, audited, and contested, especially where climate risks intersect basic urban needs.
This paper proposes such an approach through the concept of the caring city. “Caring” is used here as a governance concept rather than a moral slogan: it denotes an institutional obligation to protect stability, safety, dignity, and the everyday environmental conditions that sustain urban well-being under climate stress. Care ethics scholarship emphasizes that care is institutional and distributive, organized through infrastructures and rules that allocate attention and protection [14,15]. Translating this insight into smart city governance implies a sharp reframing: AI-enabled planning systems should be evaluated not primarily by throughput, prediction accuracy, or cost efficiency, but by whether they reliably protect well-being-supportive living conditions and do so equitably under climate uncertainty.
The relevance to urban mental well-being is not incidental. Urban planning shapes psychosocial outcomes through housing quality and stability, crowding, noise, perceived safety, access to restorative environments, and service reliability [16,17,18]. Climate risks compound these pathways: acute events and chronic hazards can destabilize housing, intensify stress burdens, disrupt social networks, and contribute to anxiety and distress [19,20,21]. Yet smart city frameworks often treat well-being as a diffuse aspiration rather than a formal decision objective, while climate-resilient planning frameworks rarely specify the algorithmic governance mechanisms (objective functions, equity constraints, robustness logics, documentation, auditability, and contestability) through which prioritization is operationalized at scale.
Accordingly, this paper develops a mechanism-level framework for governing AI-enabled prioritization systems used in climate-resilient housing and infrastructure planning. It contributes in four ways. First, it synthesizes climate risk and adaptation, built-environment mental health mechanisms, and algorithmic accountability into a single conceptual architecture for AI-enabled urban decision-support. Second, it formalizes well-being-supportive living conditions as explicit terms in decision objectives and formalizes equity as auditable constraints rather than post hoc evaluation criteria. Third, it specifies robustness-oriented prioritization under deep uncertainty, shifting attention from single-scenario optimality to portfolio performance across plausible climate futures. Fourth, it advances eight refutable propositions and an operational guide that together support empirical evaluation through audits, quasi-experiments, and policy analysis. The remainder of the article develops conceptual foundations, presents the multi-layer framework and feedback loops, articulates the propositions, and derives governance and planning implications for accountable digital urban governance.

2. Smart Urbanism and the Conceptual Gap

2.1. Smart Urbanism as a Governance Project

Smart city discourse often frames digitalization as a route to efficiency, innovation, and better service delivery. Foundational definitions emphasize technology-enabled performance across governance, environment, mobility, and living [22,23]. However, critical scholarship has shown that smart city projects are also governance projects: they configure authority, visibility, and accountability through technical systems and vendor partnerships [8,12]. Smart urbanism can operate as a “storytelling” apparatus that positions private actors as indispensable intermediaries in public decision-making [12], while platform ecosystems embed market logics into urban services and data infrastructures [24]. These dynamics matter most where smart systems cross from monitoring to prioritization, where they effectively decide who is protected first.
Smart city governance debates therefore hinge on a recurring tension: computational systems can expand analytic capacity, yet they can also narrow the space of political contestation by presenting value-laden decisions as technical outputs [6,9]. City dashboards and benchmarking projects make certain aspects of the city legible while pushing others outside the frame [6]. Vulnerability indices can standardize attention to risk but also impose classifications that may misrepresent lived vulnerability, particularly for marginalized groups [25,26]. Digital twins can support participatory visualization and scenario learning [27], yet they can also consolidate technical authority, especially when models are proprietary or difficult to contest.

2.2. The Prioritization Turn: From “Smart Services” to AI-Enabled Capital Allocation

A decisive development in smart urbanism is the emergence of AI-enabled capital allocation tools: systems that rank neighborhoods for interventions, optimize retrofit portfolios, schedule infrastructure upgrades, and allocate emergency or adaptation funds. These tools often combine GIS layers, multi-criteria decision analysis, predictive modeling, and portfolio optimization [28,29,30]. They are frequently embedded in urban data platforms and real-time dashboards, creating a pipeline from data capture to decision recommendation [7].
Climate-resilient housing and infrastructure prioritization tools are a particularly high-stakes instance of this trend. They routinely incorporate hazard layers (heat, flood), exposure and asset layers (housing stock, critical infrastructure), and socioeconomic indicators (vulnerability proxies). They may then generate ranked lists or optimized portfolios for capital budgeting and implementation sequencing. While such systems can improve consistency and transparency in principle, their legitimacy depends on whether they correctly and fairly represent vulnerability, incorporate uncertainty, and protect well-being-supportive living conditions rather than merely optimizing cost-effectiveness.
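To make the mechanics of such tools concrete, the following sketch illustrates the kind of weighted multi-criteria aggregation described above: normalizing incommensurable hazard, exposure, and vulnerability layers and combining them into a ranked list. The districts, indicator values, weights, and the min-max normalization scheme are illustrative assumptions for exposition, not a reference implementation of any deployed system.

```python
# Illustrative sketch (hypothetical data and weights): a minimal
# multi-criteria prioritization step combining normalized hazard,
# exposure, and social-vulnerability layers into a ranked list.

def normalize(values):
    """Min-max normalize raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_scores(layers, weights):
    """Weighted linear aggregation of normalized indicator layers.

    layers:  indicator name -> list of raw values, one per district
    weights: indicator name -> weight (assumed here to sum to 1)
    """
    normalized = {k: normalize(v) for k, v in layers.items()}
    n = len(next(iter(layers.values())))
    return [sum(weights[k] * normalized[k][i] for k in layers)
            for i in range(n)]

# Hypothetical districts with heat, flood, and social-vulnerability
# indicators in raw, incommensurable units.
layers = {
    "heat_hazard":    [34.1, 31.0, 36.5, 29.8],  # mean summer temp (°C)
    "flood_exposure": [0.40, 0.10, 0.25, 0.60],  # share of stock in floodplain
    "social_vuln":    [0.7, 0.2, 0.9, 0.5],      # composite proxy index
}
weights = {"heat_hazard": 0.4, "flood_exposure": 0.3, "social_vuln": 0.3}

scores = composite_scores(layers, weights)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print(ranking)  # → [2, 0, 3, 1]: district indices, highest priority first
```

Note how every governance-relevant choice in the paper's argument already appears here: the indicator set, the normalization rule, and the weights all silently encode whose risk counts and by how much.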

2.3. Why Existing Smart City Models Under-Theorize Psychosocial Outcomes

The smart city literature frequently invokes “quality of life,” but often without a mechanism-level account of how digital decisions shape psychosocial outcomes. This is a conceptual weakness because climate-resilient housing and infrastructure investments directly affect mediators of mental well-being: housing stability, perceived safety, thermal comfort, noise, service reliability, and access to restorative environments [16,18,31,32]. When prioritization systems ignore these mediators, they may inadvertently select portfolios that reduce aggregate hazard exposure while intensifying stress burdens, for example, by prioritizing asset value protection over housing stability, or by deploying interventions that trigger displacement without adequate safeguards [33,34].
This omission is not simply a missing variable; it is a governance failure. Without explicit representation of psychosocial mediators, the decision system lacks a formal reason to protect them. In computational terms, what is not in the objective function or constraints is structurally vulnerable to trade-off erosion. As algorithmic governance scholarship emphasizes, systems optimize what they are asked to optimize; fairness and harm reduction do not “emerge” from data quality alone [35,36,37].

2.4. Why Climate-Resilient Planning Frameworks Omit Algorithmic Governance Mechanisms

Climate adaptation and resilience scholarship provides robust conceptual tools for hazard–exposure–vulnerability, interdependencies, and adaptive capacity [4,38,39]. Yet much of this work treats decision-making as a human-institutional process rather than a socio-technical pipeline in which algorithms shape prioritization and therefore shape distribution. Uncertainty is acknowledged but is often operationalized through scenario narratives rather than encoded into the decision logic of prioritization tools procured and used by cities [40,41]. Consequently, the governance of AI-enabled prioritization (documentation, auditing, contestability, and redress) is frequently left as an implementation detail rather than an essential component of adaptation capacity.

2.5. The Conceptual Gap

The conceptual gap can be stated precisely: smart urbanism provides powerful digital governance architectures but lacks a formal account of psychosocial well-being and equity as decision primitives; climate-resilient planning provides a mature risk vocabulary but under-specifies how algorithmic design and governance structure distribute outcomes under uncertainty. The result is a practical and theoretical mismatch: cities deploy AI-enabled prioritization systems for climate-resilient housing and infrastructure without a coherent conceptual framework linking (i) environment mechanisms relevant to mental well-being, (ii) algorithmic decision logic, and (iii) accountable governance arrangements.

2.6. Comparison Table: What Conventional Smart Urbanism Omits and What Caring Cities Adds

Table 1. Conventional smart urbanism vs. Caring Urban AI for climate-resilient housing and infrastructure prioritization.
Governance Dimension | Conventional Smart-Urbanism Tendency | Caring Urban AI Intervention
Planning objective | Efficiency, cost-effectiveness, performance metrics | Well-being-supportive living conditions formalized as objective terms
Equity treatment | Aspirational language, distribution evaluated post hoc | Equity encoded as auditable constraints (floors, caps, equal-opportunity conditions)
Climate uncertainty | Single-scenario scoring, point-estimate risk maps | Robustness logic across plausible futures; stress testing and regret minimization
Data politics | Data treated as neutral input | Classification and representation treated as distributive choices requiring stewardship
Accountability | Vendor opacity, limited explainability | Documentation, auditing, contestability, and redress treated as core requirements
Participation | Consultation around outputs | Co-governance of problem framing, indicators, and constraints; ongoing oversight
Mental well-being | Diffuse “quality of life” claims | Psychosocial mediators specified and protected through objectives/constraints

3. Materials & Methods

3.1. Study Design and Framework-Construction Method

This article is a conceptual and theoretical–methodological contribution. We develop the Caring Urban AI framework using conceptual framework analysis, a qualitative theorization procedure designed to construct an interlinked network (“plane”) of concepts from multidisciplinary literature. Following conceptual framework analysis, the synthesis proceeds through an iterative set of phases: (1) mapping relevant data sources, (2) extensive reading and categorization, (3) identifying and naming concepts, (4) deconstructing and categorizing concepts (attributes, assumptions, roles), (5) integrating related concepts, (6) synthesis/resynthesis into a coherent framework, (7) validation through coherence and plausibility checks, and (8) iterative refinement (“rethinking”) of the framework.

3.2. Evidence Domains and Search Strategy

The conceptual synthesis draws on three evidence domains aligned with the governance object of the paper:
(A) climate risk decision-making under deep uncertainty and robustness-oriented adaptation planning;
(B) built-environment pathways relevant to psychosocial well-being; and
(C) algorithmic accountability and fairness for public-sector decision infrastructures.
Searches were conducted in Scopus, Web of Science, and Google Scholar, complemented by backward/forward citation chaining from canonical sources in each domain. Search strings used combinations of terms such as:
Domain A: “climate adaptation,” “deep uncertainty,” “robust decision making,” “scenario stress test,” “dynamic adaptive policy pathways,” “infrastructure prioritization.”
Domain B: “built environment,” “housing stability,” “green space,” “mental health,” “psychosocial mediators,” “urban well-being.”
Domain C: “algorithmic governance,” “public sector AI,” “algorithmic accountability,” “auditing,” “impact assessment,” “fairness constraints,” “contestability,” “redress.”

3.3. Inclusion/Exclusion Logic

Inclusion prioritized sources that (i) explicitly define mechanisms or governance practices relevant to allocation decisions (e.g., objective functions, constraints, documentation and audit artifacts, uncertainty-handling logics), (ii) connect to urban planning and public-sector decision workflows, and (iii) are peer-reviewed articles, scholarly books, or major institutional frameworks. Exclusion criteria omitted (i) purely technical machine-learning papers without governance or allocation relevance, (ii) sources with no traceable definitions/mechanisms, and (iii) publications whose claims could not be triangulated against the broader domain literature.

3.4. Concept Extraction, Integration, and Derivation of the Five-Layer Architecture

Across the included sources, repeated reading was used to extract candidate concepts and to deconstruct each concept into (i) its core meaning, (ii) operational locus (urban conditions, data/indicators, decision rule, institutional governance), and (iii) governance implication (what must be specified, monitored, audited, or contestable). Concepts were then integrated by grouping concept families that address the same governance problem (e.g., “documentation,” “auditing,” and “contestability” as accountability capacities).
The five-layer architecture was derived by organizing the integrated concepts into an explicit decision-infrastructure pipeline corresponding to how climate-resilient housing and infrastructure prioritization is operationalized: (1) urban form and infrastructure, (2) climate exposure and environmental resources, (3) psychosocial mediators of well-being, (4) algorithmic design choices that translate inputs into ranked projects/portfolios, and (5) institutional governance mechanisms (procurement, auditing, participation, redress) that condition legitimacy, accountability, and correction capacity.

3.5. Operationalization Logic for Propositions and Specification Guide

The eight propositions were formulated as mechanism-based claims that connect a specific design/governance mechanism to directional expectations about prioritization outputs and distributive outcomes. Because the manuscript does not present a case study, the “results” of this work are presented as framework outputs: defined constructs, the five-layer architecture, mechanism-based propositions, and a conceptual specification guide that makes objective terms, equity constraints, robustness logic, and documentation artifacts explicit and auditable.

4. Results

This section reports the results of the conceptual framework analysis: (i) a set of defined constructs linking climate risk under deep uncertainty, psychosocial mediators of well-being, and algorithmic accountability; (ii) the Caring Urban AI five-layer architecture with explicit feedback loops; and (iii) mechanism-based propositions and a conceptual specification guide that operationalize objectives, equity constraints, robustness logic, and documentation artifacts as auditable elements of an urban decision infrastructure.

4.1. Climate Risk, Resilience, and Deep Uncertainty

Climate risk is commonly defined as the potential for adverse consequences arising from the interaction of hazard, exposure, and vulnerability, shaped by adaptive capacity and societal development pathways [4]. Urban risk is compounded by infrastructure interdependence and cascading failures [39], and by social vulnerability patterns that are historically produced and unevenly distributed [25,42,43]. For prioritization systems, a core implication is that risk is not simply a property of geography; it is co-produced by built form, institutional capacity, and social inequality.
A second implication is uncertainty. Climate and socio-economic futures are characterized by deep uncertainty: probabilities may be unknown or contested, models may be structurally incomplete, and value judgments about acceptable risk and equity are not reducible to technical parameters [41]. Planning under deep uncertainty therefore requires approaches that are robust across plausible futures rather than optimal for a single forecast [40,44]. In the context of long-lived housing and infrastructure assets, this robust orientation is especially salient because maladaptive lock-in can persist for decades.

4.2. Urban Well-Being and Mental Health as Planning-Relevant Outcomes

Urban well-being is shaped by social determinants and built environment conditions that structure stress exposure, safety, agency, and restorative opportunities [45,46]. The built environment affects mental health through pathways such as crowding, noise, housing quality, neighborhood disorder, and access to supportive and restorative spaces [16,17,18]. Green and blue spaces are linked to mental well-being via stress reduction, attention restoration, social interaction, and thermal regulation pathways [31,32,47].
Climate change adds psychosocial burdens through acute events and chronic stressors, including displacement risk, loss of predictability, and climate-related distress and anxiety [19,20,21]. A governance implication follows: adaptation strategies and the tools used to prioritize them cannot be evaluated solely by physical exposure reduction; they must also be evaluated by whether they protect the mediating conditions that sustain mental well-being.

4.3. Psychosocial Mediators

A psychosocial mediator is an intermediate condition linking material urban environments to mental well-being outcomes. The present framework focuses on mediators that are theoretically grounded, relevant to climate-resilient housing and infrastructure, and plausibly representable in planning systems without claiming to “measure mental health” directly. Core mediators include:
  • Housing stability and security, including predictability of shelter and tenure-related stress burdens [17].
  • Perceived safety and environmental threat, shaped by both physical conditions and institutional trust [16].
  • Control and agency, including residents' perceived ability to manage risks and influence decisions [45].
  • Restorative access, including proximity and accessibility of green/blue spaces and heat refuge opportunities [18,47].
  • Social cohesion and support, relevant for buffering climate shocks and chronic stress [48,49].
The conceptual move is to treat these mediators as legitimate planning objects that should inform capital allocation decisions and the constraints governing them.
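One way to read this conceptual move in decision-infrastructure terms is that each mediator becomes an explicit, auditable field of a project appraisal, so the pipeline has a formal reason to track it. The sketch below illustrates this with hypothetical proxy names and a hypothetical effect scale; it does not claim to measure mental health, only to make the mediators structurally present.

```python
# Illustrative sketch (hypothetical proxies and scale): psychosocial
# mediators as explicit, auditable fields in a project appraisal record.

MEDIATORS = (
    "housing_stability",   # e.g. tenure-security / displacement-risk proxy
    "perceived_safety",    # e.g. survey- or incident-based proxy
    "control_agency",      # e.g. participation / self-efficacy proxy
    "restorative_access",  # e.g. walk-time to green space or cool refuge
    "social_cohesion",     # e.g. neighborhood-network proxy
)

def validate_appraisal(appraisal):
    """Reject appraisals that omit any mediator: an unscored mediator
    cannot enter the objective or constraints and is silently traded away."""
    missing = [m for m in MEDIATORS if m not in appraisal]
    if missing:
        raise ValueError(f"appraisal omits mediators: {missing}")
    return appraisal

# Hypothetical retrofit project: expected mediator effects on a -1..+1 scale.
project = validate_appraisal({
    "housing_stability": +0.6,
    "perceived_safety": +0.2,
    "control_agency": 0.0,
    "restorative_access": +0.1,
    "social_cohesion": -0.1,  # temporary construction disruption
})
print(sum(project.values()))  # crude aggregate mediator effect
```

The validation step is the governance point: it converts "mediators should inform allocation" from an aspiration into a rule that an audit can check.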

4.4. AI-Enabled Planning Systems as Socio-Technical Decision Infrastructures

Planning support systems (PSS) and spatial decision support systems have long sought to structure complex choices through scenario modeling, multi-criteria evaluation, and visualization [29,30,50]. Yet decades of PSS scholarship warn against the assumption that technical capacity automatically translates into planning value; implementation gaps reflect institutional, political, and communicative constraints [5,51]. Contemporary AI-enabled planning systems inherit these challenges while adding new ones: opacity of machine learning models, drift over time, and greater stakes as systems move from exploration to recommendation and ranking.
Within smart city governance, AI-enabled planning systems increasingly take the form of integrated pipelines that ingest administrative and sensor data into urban data platforms, generate indicators and vulnerability scores, and produce ranked projects or optimized portfolios for capital planning. Digital twins extend this pipeline by providing simulation and scenario environments, often anchored in 3D city models and GIS layers [27,52,53]. These systems are not neutral assistants; they structure visibility, urgency, and justification.

4.5. Objective Functions, Equity Constraints, and Robustness Logic

Because climate-resilient housing and infrastructure prioritization often involves ranking or portfolio selection, it can be conceptualized as an optimization problem even when agencies do not use formal optimization language. Three constructs are therefore essential.
An objective function specifies what the system is designed to improve, e.g., minimizing climate risk, minimizing service disruption, maximizing coverage of interventions, or minimizing cost. If psychosocial mediators are excluded, the system has no formal reason to protect them; if they are included, trade-offs become explicit and governance can interrogate them.
An equity constraint restricts the feasible set of solutions to prevent unacceptable disparities or to guarantee minimum protection. Equity constraints can take multiple forms: service floors, disparity caps, or equal-opportunity conditions when predictive models are used [54,55]. Environmental justice scholarship underscores why constraints matter: distributive injustice is not corrected by average improvements when marginalized groups remain structurally exposed [54,55].
Equity constraints are plural and can be mutually incompatible. Prominent statistical fairness conditions, such as calibration/predictive parity, equalized error rates (error-rate balance), and equal opportunity, cannot generally all be satisfied simultaneously except under restrictive special cases, particularly when outcome prevalence differs across groups. Accordingly, in public-sector prioritization systems, choosing “equity constraints” should be treated as an explicit governance decision: agencies should specify which fairness notion is prioritized when conflicts arise, justify that choice in relation to statutory duties and planning goals, and document the trade-offs accepted (including who benefits, who bears residual error, and why).
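The incompatibility can be illustrated numerically. The following sketch uses stylized confusion-matrix counts (not empirical data) for two hypothetical groups with different outcome prevalence: when predictive parity (equal positive predictive value) and equal false-negative rates are both enforced, the false-positive rates are forced apart, so one group bears a heavier burden of erroneous flagging.

```python
# Illustrative sketch (stylized counts, not real data): with different
# outcome prevalence, matching predictive parity (PPV) and false-negative
# rate across two groups forces their false-positive rates apart.

def rates(tp, fn, fp, tn):
    """Return (ppv, fnr, fpr) from confusion-matrix counts."""
    return tp / (tp + fp), fn / (tp + fn), fp / (fp + tn)

# Group A: prevalence 50/100.  Group B: prevalence 20/100.
ppv_a, fnr_a, fpr_a = rates(tp=40, fn=10, fp=10, tn=40)
ppv_b, fnr_b, fpr_b = rates(tp=16, fn=4,  fp=4,  tn=76)

assert ppv_a == ppv_b == 0.8   # predictive parity holds
assert fnr_a == fnr_b == 0.2   # equal false-negative rates hold
print(fpr_a, fpr_b)            # 0.2 vs 0.05: error burden diverges
```

In an allocation setting the diverging rate means one group is more often flagged (or protected) in error, which is exactly the kind of trade-off the governance decision described above must name and justify.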
Robustness logic is an uncertainty-handling approach that seeks strategies that perform acceptably across plausible futures, rather than maximizing expected performance under a single scenario [40,41,44]. Robustness is especially relevant for long-lived assets where maladaptive lock-in is costly.

4.6. Algorithmic Accountability, Documentation, Auditing, and Contestability

Algorithmic governance scholarship emphasizes that transparency alone is insufficient; accountability requires institutionalized practices of documentation, auditing, and redress [40,41,44]. Documentation frameworks such as model cards and datasheets clarify intended use, data provenance, limitations, and failure modes [60,61]. Risk management guidance increasingly stresses ongoing monitoring rather than one-time evaluation [62,63]. For public-sector systems, contestability matters: affected communities should have meaningful routes to challenge decisions, obtain explanations, and seek remedy [64]. In the context of housing and infrastructure prioritization, where decisions influence stability and safety, contestability is not peripheral; it is a legitimacy condition.
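A minimal sketch of a model-card-style documentation artifact for a prioritization tool follows. The field names are hypothetical (loosely inspired by the model-card and datasheet idea, not any standard's schema); the point is that documentation completeness can itself be an auditable check that procurement or oversight rules enforce.

```python
# Illustrative sketch (hypothetical fields): a minimal documentation
# record for a prioritization tool, with a completeness check that an
# auditor or procurement rule could enforce.

REQUIRED_FIELDS = (
    "intended_use", "out_of_scope_uses", "data_provenance",
    "objective_terms", "equity_constraints", "uncertainty_handling",
    "known_limitations", "monitoring_plan", "contestation_channel",
)

def audit_card(card):
    """Return required documentation fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not card.get(f)]

card = {
    "intended_use": "rank district-level resilience retrofit candidates",
    "out_of_scope_uses": "individual-level eligibility or enforcement",
    "data_provenance": "municipal asset registry; census vulnerability proxies",
    "objective_terms": "risk reduction; housing stability; restorative access",
    "equity_constraints": "service floor per priority zone; disparity cap",
    "uncertainty_handling": "stress tests across climate scenarios",
    "known_limitations": "tenure data incomplete for informal housing",
    "monitoring_plan": "annual drift and disparity audit",
    "contestation_channel": "",  # unresolved: no redress route specified yet
}
print(audit_card(card))  # → ['contestation_channel']
```

Here the empty contestation field is surfaced mechanically, reflecting the claim above that contestability must be verified, not assumed.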

5. The Caring Urban AI Framework

5.1. Core Premise

The Caring Urban AI framework reframes AI-enabled planning systems as accountable public decision infrastructures for climate-resilient housing and infrastructure prioritization. The premise is governance-centered: caring is achieved when (i) well-being-supportive living conditions are explicit decision objectives, (ii) equity is encoded as binding constraints, and (iii) uncertainty is handled through robustness-oriented prioritization rather than brittle optimality. These requirements are not moral adornments; they are design and governance commitments that can be specified in procurement, audited in operation, and contested in public deliberation.

5.2. Five-Layer Architecture

The framework is structured as a five-layer architecture to clarify mechanisms and loci of intervention.
Layer 1: Urban form and infrastructure
This layer represents housing stock characteristics, building performance, drainage and flood control systems, cooling and heat refuge infrastructure, mobility and access networks, and the interdependencies that generate cascading service failures [39,65]. For prioritization systems, Layer 1 includes both the assets to be intervened upon and the service consequences of interventions.
Layer 2: Climate exposure and environmental resources
Layer 2 captures hazards and exposure (heat, flooding, storms) and environmental resources that affect both climate risk and well-being, such as tree canopy, blue-green infrastructure, and accessible cooling opportunities [4,47]. This layer also includes spatially uneven exposure patterns shaped by historical development and inequity [25,42].
Layer 3: Psychosocial mediators
Layer 3 includes the psychosocial mediators through which housing and infrastructure conditions influence mental well-being: stability/security, perceived safety, agency, restorative access, and social cohesion [16,18,48]. Importantly, this layer clarifies that “resilience” is not only an engineering property; it is experienced through everyday conditions that structure stress and control, especially for vulnerable residents.
Layer 4: Algorithmic design layer
For governance purposes, “algorithmic design” in Layer 4 refers to any auditable rule-set or model (ML-based, statistical, or rule-based) that transforms inputs into prioritization outputs through explicit objective terms, constraints, uncertainty-handling rules, and documentation artifacts. Layer 4 specifies how data and models translate into prioritization outputs. It includes:
  • Data selection and classification, including the politics of categories and missingness [26].
  • Indicator construction and vulnerability scoring, including proxy choices and aggregation rules [25,66].
  • Objective function specification, including explicit inclusion of psychosocial mediators and service reliability.
  • Equity constraints, including floors, caps, and equal-opportunity conditions [54,55].
  • Robustness-oriented uncertainty handling, including scenario ranges, stress tests, and regret-based selection [40,41].
  • Explainability and documentation, enabling scrutiny and monitoring [60,61,67].
Layer 4 is where “smartness” most directly becomes governance: these design choices determine whose needs are visible, which trade-offs are permitted, and how errors can persist.
Layer 5: Governance and institutional layer
Layer 5 is the institutional envelope that shapes the legitimacy of Layers 1–4. It includes procurement rules, public accountability structures, auditing capacity, participatory oversight, and redress mechanisms [57,62,64]. It also includes data stewardship arrangements that govern permissible uses and protect against surveillance and extractive data practices that erode trust [68,69,70].

5.3. Feedback Loops and Dynamic Behavior

The framework is explicitly dynamic; it anticipates feedback loops that can either stabilize caring outcomes or amplify harm.
  • Investment-to-risk loop: capital allocation reshapes exposure and vulnerability over time; poorly chosen investments can create new lock-ins, while robust portfolios reduce future regret [40,44].
  • Investment-to-mediator loop: housing and infrastructure interventions reshape psychosocial mediators, e.g., stability and perceived safety, thus affecting well-being trajectories [17].
  • Algorithm-to-investment loop: prioritization outputs become implementation sequences, reshaping Layer 1 and redistributing protection and services.
  • Governance-to-algorithm loop: procurement, auditing, and oversight determine whether objectives/constraints are explicit, whether documentation exists, and whether contestability is meaningful [62,64].
  • Outcome-to-legitimacy loop: experienced harms and benefits shape trust and participation; legitimacy affects data quality and governance capacity, influencing future system performance [6,57].

5.4. Figure Description (Conceptual)

Figure 1. Caring Urban AI framework for climate-resilient housing and infrastructure prioritization. The figure depicts five layers stacked vertically. Layers 1–3 represent the urban material system, climate exposure/resources, and psychosocial mediators. Layer 4 (algorithmic design) sits adjacent, drawing inputs from Layers 1–3 and producing prioritization outputs that feed back into Layer 1 via investment decisions. Layer 5 (governance and institutions) encircles Layer 4 and connects to all other layers, indicating that oversight, auditing, participation, and redress shape what the system can do and whether harms are correctable.

5.5. Caring Urban AI within Digital Urban Governance Architectures

Smart city governance is increasingly enacted through digital architectures that integrate data, models, and decision workflows. The Caring Urban AI framework can be directly situated within four common components of contemporary digital urban governance.
(i) Urban data platforms and dashboards. Urban data platforms aggregate administrative, sensor, and third-party data into standardized schemas, enabling indicators, benchmarking, and dashboard visualization [6,7]. In conventional architectures, dashboards often privilege what is quantifiable and operationally convenient, contributing to a “visualized facts” epistemology that can conceal normative choices [6]. In a caring architecture, the platform must do more than aggregate data: it must support traceability (provenance and lineage), publish data dictionaries and indicator construction rules, and enable public scrutiny of what is counted and what is absent. Caring governance therefore treats the platform as an accountability substrate rather than a data lake.
(ii) Vulnerability scoring tools and risk indices. Many cities rely on composite indices to prioritize interventions. Such tools can bring systematic attention to risk and inequity, but they also embed classification politics and aggregation choices that can misrepresent lived vulnerability, especially where data coverage is uneven [25,26]. A caring architecture requires that vulnerability scoring tools be paired with equity constraints and participatory validation: indices should be contestable, their proxy assumptions explicit, and their error patterns audited for representational harms [35,37].
(iii) Digital twins and scenario environments. Urban digital twins extend GIS and planning support system (PSS) traditions by integrating 3D city models, simulation capabilities, and interactive visualization that can support scenario deliberation [27,52]. Digital twins are often framed as instruments of prediction and optimization, yet their governance risk lies in epistemic authority: when models become persuasive narratives, they can crowd out contestation. Caring Urban AI positions the digital twin not as an oracle but as a robustness engine: it should be used to stress-test portfolios across plausible climate futures, explore distributional consequences, and document uncertainty rather than presenting a single “best” answer [40,41].
(iv) Capital investment prioritization systems (CIPS). The most consequential component is the system that converts risk indicators and project options into ranked lists or portfolios. These systems often involve multi-criteria evaluation, optimization, and scheduling. In conventional smart governance, the objective function is frequently implicit (cost efficiency, aggregate risk reduction) and equity is externalized to political negotiation after outputs are produced. Caring Urban AI requires that CIPS explicitly encode (a) psychosocial mediators as decision objectives where relevant, (b) equity as constraints that define unacceptable trade-offs, and (c) robustness criteria that privilege strategies with acceptable performance under uncertainty. Documentation must specify intended use, limitations, and governance responsibilities, enabling auditing and public accountability [59,60,61].
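As a minimal illustration of requirements (a) and (b) above, the following sketch (entirely hypothetical projects, weights, and budget) ranks projects with a mediator-inclusive score and treats a per-neighborhood service floor as a feasibility condition applied during selection, not a report produced after the ranking is fixed.

```python
# Hypothetical CIPS step: mediator-inclusive scoring plus a service floor
# (at least one funded project per neighborhood) enforced before the
# remaining budget is allocated greedily by score.

projects = [
    # (id, neighborhood, cost, risk_reduction, psychosocial_benefit)
    ("A", "north", 4.0, 0.9, 0.2),
    ("B", "south", 3.0, 0.4, 0.7),
    ("C", "south", 2.0, 0.5, 0.6),
    ("D", "north", 3.0, 0.6, 0.5),
]
BUDGET = 9.0

def score(p):
    # Objective explicitly includes a psychosocial mediator term.
    _, _, _, risk, psych = p
    return 0.6 * risk + 0.4 * psych

def select(projects, budget):
    chosen, spent = [], 0.0
    # Phase 1: satisfy the service floor (>= 1 project per neighborhood).
    # Assumes the floor projects themselves fit within the budget.
    for hood in {p[1] for p in projects}:
        best = max((p for p in projects if p[1] == hood), key=score)
        chosen.append(best)
        spent += best[2]
    # Phase 2: fill the remaining budget greedily by score.
    for p in sorted(projects, key=score, reverse=True):
        if p not in chosen and spent + p[2] <= budget:
            chosen.append(p)
            spent += p[2]
    return sorted(x[0] for x in chosen), spent
```

With these illustrative numbers, `select(projects, BUDGET)` returns `(['A', 'C', 'D'], 9.0)`: the floor guarantees the south neighborhood a funded project ("C") even though a purely score-maximizing selection within budget would favor the north projects.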
Positioning Caring Urban AI within this architecture clarifies that the framework is not an abstract ethical overlay. It is an intervention into the digital machinery of smart governance, aimed at making AI-enabled capital allocation systems accountable to SDG 11 commitments and to well-being-supportive urban living conditions.

6. Propositional Development

The propositions below are stated in a parallel structure. Each includes (i) mechanism, (ii) directional expectation, and (iii) a concise empirical test route suitable for smart city governance research (audits, quasi-experiments, policy evaluations).
Proposition 1 (Objective alignment with psychosocial mediators)
P1. Mechanism. When prioritization systems formally include psychosocial mediators (e.g., housing stability, perceived safety, restorative access) as terms in the decision objective, these mediators become protected against trade-off erosion.
  • Directional expectation. Systems with mediator-inclusive objectives will generate investment portfolios that more consistently improve well-being-supportive living conditions than systems that optimize only hazard reduction and cost.
  • Empirical test route. Comparative policy evaluation of portfolio outputs before/after objective revisions, combined with mixed-method assessment of mediator-relevant indicators and resident-reported experiences.
  • Anchoring sources. Evans [16]; Evans et al. [17]; Hartig et al. [18]; Marmot [45].
Proposition 2 (Representation bias and invisible vulnerability)
P2. Mechanism. Uneven data coverage and proxy construction under-represent marginalized communities, leading vulnerability models to systematically mis-estimate need where lived vulnerability is poorly captured.
  • Directional expectation. Under-represented neighborhoods will be systematically under-prioritized for climate-resilient housing and infrastructure investments even when aggregate model performance appears acceptable.
  • Empirical test route. Data and model audit linking missingness and proxy bias to prioritization outcomes, supplemented by participatory validation or targeted ground-truthing of high-risk areas.
  • Anchoring sources. Barocas & Selbst [35]; Bowker & Star [26]; Cutter et al. [25]; Mehrabi et al. [37].
Proposition 3 (Efficiency-first optimization and distributive divergence)
P3. Mechanism. Optimization that maximizes aggregate risk reduction without binding equity constraints tends to concentrate investment where marginal returns are highest, which can mirror existing advantages (better data, lower costs, higher asset values).
  • Directional expectation. Unconstrained systems will widen spatial disparities in climate protection and well-being-supportive living conditions compared with constrained systems that enforce service floors or disparity caps.
  • Empirical test route. Counterfactual simulation comparing unconstrained vs. equity-constrained portfolios on the same project universe, evaluating disparity metrics and distributional coverage.
  • Anchoring sources. Hardt et al. [54]; Dwork et al. [55]; Schlosberg [43]; Walker [56].
Proposition 4 (Robustness logic reduces maladaptive lock-in)
P4. Mechanism. Robustness-oriented decision rules select strategies that perform acceptably across plausible futures, reducing regret and lock-in to brittle solutions.
  • Directional expectation. Portfolios chosen under robustness criteria will show fewer failure modes and lower regret under alternative climate scenarios than portfolios optimized for a single forecast.
  • Empirical test route. Scenario stress-testing of portfolios within a digital twin or planning model, comparing regret and service failure outcomes across plausible hazard trajectories.
  • Anchoring sources. Lempert et al. [41]; Hallegatte [40]; Haasnoot et al. [44]; IPCC [4].
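A minimax-regret comparison of the kind P4 envisions can be sketched in a few lines. The portfolios, scenarios, and payoffs below are illustrative placeholders, not calibrated values; the point is only the decision rule.

```python
# Hedged sketch of regret-based selection (in the spirit of robust decision
# making): hypothetical net benefits of three portfolios under three climate
# scenarios; higher is better.

performance = {
    "P1": {"mild": 10, "moderate": 6, "severe": 1},
    "P2": {"mild": 8,  "moderate": 7, "severe": 5},
    "P3": {"mild": 9,  "moderate": 4, "severe": 3},
}
scenarios = ["mild", "moderate", "severe"]

# Regret of a portfolio in a scenario = best achievable there minus its own.
best = {s: max(perf[s] for perf in performance.values()) for s in scenarios}
max_regret = {p: max(best[s] - perf[s] for s in scenarios)
              for p, perf in performance.items()}

# Minimax regret picks the portfolio whose worst-case regret is smallest,
# rather than the one optimized for any single forecast.
robust_choice = min(max_regret, key=max_regret.get)
print(robust_choice, max_regret)
```

In this toy example a single-forecast optimizer tuned to the "mild" scenario would pick P1, which fails badly under the "severe" scenario; the minimax-regret rule selects P2, whose worst-case regret (2) is lowest across all three futures.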
Proposition 5 (Nature-based and thermal comfort pathways with anti-displacement conditions)
P5. Mechanism. Integrating restorative access and thermal comfort into prioritization objectives supports dual pathways: climate risk reduction (heat mitigation) and psychosocial restoration; however, interventions can trigger exclusion or displacement without safeguards.
  • Directional expectation. When restorative access objectives are paired with anti-displacement governance and equity constraints, portfolios will deliver larger well-being co-benefits and fewer adverse distributional effects than portfolios that treat green/blue infrastructure solely as amenity or aesthetic improvement.
  • Empirical test route. Policy evaluation comparing intervention areas with and without explicit anti-displacement safeguards, assessing changes in access, perceived comfort/safety, and displacement pressure indicators.
  • Anchoring sources. Gascon et al. [31]; Twohig-Bennett & Jones [32]; Anguelovski et al. [33,34]; WHO [47].
Proposition 6 (Participatory oversight improves indicator validity and legitimacy)
P6. Mechanism. Participatory oversight introduces lived experience into problem framing and indicator selection, exposing mis-specified proxies and strengthening contestability.
  • Directional expectation. Systems with institutionalized participatory oversight will exhibit higher legitimacy (trust and acceptance) and improved mediator relevance of indicators, reducing psychosocial harms associated with technocratic allocation.
  • Empirical test route. Comparative process evaluation across governance models, measuring changes in objectives/constraints after participation and assessing trust, contestation uptake, and perceived agency.
  • Anchoring sources. Arnstein [71]; Healey [72]; Cardullo & Kitchin [73]; Diakopoulos [57].
Proposition 7 (Documentation and auditing reduce persistent distributive error)
P7. Mechanism. Documentation (datasheets/model cards) and auditing create institutional capacity to detect bias drift, performance drift, and distributive harms and to implement corrective action.
  • Directional expectation. Prioritization systems with strong documentation and routine audits will show lower persistence of systematic distributive errors than systems governed only by transparency statements or one-time evaluations.
  • Empirical test route. Cross-sectional governance audit scoring documentation and monitoring practices, linked to observed correction rates and identified bias incidents over time.
  • Anchoring sources. Mitchell et al. [60]; Gebru et al. [61]; Raji et al. [59]; NIST [62].
Proposition 8 (Data stewardship increases participation and reduces surveillance-related trust erosion)
P8. Mechanism. Data stewardship arrangements that limit repurposing, clarify fiduciary responsibility, and protect residents from surveillance reduce trust erosion and enable more reliable participation and data quality.
  • Directional expectation. Cities with explicit stewardship and contestability arrangements will exhibit higher-quality data inputs for prioritization (reduced missingness in marginalized areas) and higher public cooperation than cities where data practices are opaque or vendor-dominated.
  • Empirical test route. Comparative institutional analysis of stewardship models linked to participation rates, data completeness metrics, and survey-based trust indicators in affected communities.
  • Anchoring sources. Austin & Lie [70]; Houser & Bagby [74]; Crawford [68]; Zuboff [69].

7. Governance and Planning Implications

Caring Urban AI implies that smart city governance must treat AI-enabled capital allocation as a public accountability domain rather than an IT procurement matter. This section specifies governance implications tailored to climate-resilient housing and infrastructure prioritization systems.

7.1. Procurement as a Governance Lever for Objective and Constraint Transparency

Procurement decisions often determine whether a prioritization system will be auditable or inscrutable. Caring governance requires that procurement standards mandate:
  • Objective function transparency: the system must publish and maintain a clear statement of objectives, including how psychosocial mediators and service reliability are represented where relevant.
  • Equity constraints as enforceable requirements: equity is not a reporting metric but a feasibility condition (floors, caps, equal-opportunity constraints where predictive models are used).
  • Robustness-oriented uncertainty handling: the system must demonstrate performance under scenario ranges and document how decisions change across futures.
  • Documentation deliverables: model cards/datasheets, data dictionaries, and decision logs as contractual requirements [60,61].
  • Audit interfaces: the system must support independent auditing, including reproducible evaluation pipelines and access to relevant logs under appropriate safeguards [59].
These requirements directly address critiques of vendor-driven smart city governance in which proprietary systems shape public decisions while limiting scrutiny [9,10].

7.2. Algorithmic Impact Assessment as Planning Due Diligence

Algorithmic Impact Assessment (AIA) can be positioned as a planning instrument analogous in intent, though not identical in form, to environmental impact assessment. For AI-enabled housing and infrastructure prioritization, AIA should minimally specify:
  • Decision context and scope (what decisions the system influences).
  • Affected populations and equity-relevant groups.
  • Data provenance, representational adequacy, and proxy risks.
  • Objective functions and equity constraints, including rationales and trade-offs.
  • Uncertainty handling (scenarios, stress tests, robustness criteria).
  • Monitoring and re-evaluation cycles (drift detection).
  • Contestability and redress procedures [62,64].
Institutionalizing AIAs makes caring operational: it creates a governance routine that forces explicitness about objectives, constraints, and risks.

7.3. Documentation, Explainability, and the Limits of Transparency

Smart city systems often promise “transparency” while providing little that supports accountability. Caring governance distinguishes transparency from accountability. Documentation must enable three functions:
  • Interpretability for planners: the system must allow planners to understand why a portfolio was selected and how trade-offs were handled.
  • Auditability for oversight bodies: the system must support reproducible evaluation of bias, performance, and constraint compliance.
  • Contestability for affected residents: the system must provide explanations suitable for challenge and remedy where decisions impose harm.
Explainability methods can support these aims, but they do not substitute for governance structures that define responsibilities and remedies [57,58,67]. Documentation frameworks such as model cards and datasheets are therefore treated as governance artifacts rather than academic add-ons [60,61].

7.4. Auditing and Monitoring as Continuous Governance

AI-enabled prioritization systems are not static. Data, urban conditions, and institutional priorities change; models can drift; vulnerability patterns can shift under new climate extremes. Caring governance requires continuous monitoring aligned with risk management principles [62]. Auditing should include:
  • Representation audits (coverage and missingness by neighborhood and group).
  • Constraint compliance audits (floors and caps enforced).
  • Performance audits (predictive validity where prediction is used, and error distribution).
  • Outcome audits (distribution of investments and service improvements).
  • Process audits (whether participation, contestation, and corrections occur).
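The first audit type can start from a very small computation. The sketch below uses made-up records and a made-up tolerance: it compares record missingness rates across neighborhoods and flags those that exceed the citywide rate by more than the tolerance, which is one concrete form a representation audit can take.

```python
# Hypothetical representation audit: missingness rates by neighborhood,
# flagged when they exceed the citywide rate by a chosen tolerance.

records = [
    # (neighborhood, field_present)
    ("east", True), ("east", True), ("east", False),
    ("west", True), ("west", False), ("west", False), ("west", False),
]

def missingness_by_group(records):
    totals, missing = {}, {}
    for hood, present in records:
        totals[hood] = totals.get(hood, 0) + 1
        missing[hood] = missing.get(hood, 0) + (not present)
    return {h: missing[h] / totals[h] for h in totals}

rates = missingness_by_group(records)
citywide = sum(1 for _, present in records if not present) / len(records)
TOLERANCE = 0.1
flagged = sorted(h for h, r in rates.items() if r > citywide + TOLERANCE)
print(rates, flagged)
```

With these illustrative records, "west" is flagged (75% missing against a citywide rate of about 57%), which is exactly the kind of spatially uneven coverage that Proposition 2 predicts will translate into under-prioritization if left unaudited.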
This emphasis aligns smart city governance with a lifecycle view of AI risk rather than a deployment-and-forget mentality.

7.5. Participatory Oversight and Democratic Contestability

Participation in smart city projects often focuses on user experience or consultation around outputs. Caring governance requires participatory oversight at the level of decision primitives: what counts as need, which mediators are protected, which constraints are non-negotiable, and how uncertainty is handled. Arnstein’s ladder remains relevant because participation is frequently symbolic; caring governance requires institutionalized roles for affected residents in oversight rather than episodic engagement [71,72]. Smart citizenship scholarship underscores that inclusion is not guaranteed by “open data” or digital platforms; it depends on power, resources, and institutional design [73].

7.6. Box 1. Operationalizing Caring Urban AI (Conceptual Guide)

This box provides conceptual forms suitable for specification in design briefs, procurement requirements, and evaluation protocols. It does not report empirical results.
Objective function forms (conceptual, non-numeric):
  • Minimize combined climate harm and psychosocial stress burden subject to feasibility and budget.
  • Minimize worst-case unacceptable harm across plausible climate scenarios (robustness).
  • Maximize well-being-supportive stability (housing security, thermal comfort, service reliability, restorative access) while meeting minimum risk reduction thresholds.
Equity constraints (examples):
  • Service floors: every neighborhood must meet minimum thresholds of heat refuge access, flood protection coverage, or housing retrofit eligibility.
  • Disparity caps: limit the gap in protection or restorative access between high-deprivation and low-deprivation areas.
  • Equal-opportunity constraints: when predictive scores are used to identify “high need,” require comparable false negative rates across protected groups to reduce systematic under-prioritization [54].
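The equal-opportunity condition in the last bullet can be checked directly from confusion-matrix counts. The sketch below uses made-up labels, predictions, and tolerance (1 = genuinely high need / flagged as high need); it tests whether false negative rates differ across two groups by more than the tolerance.

```python
# Hedged sketch of the equal-opportunity check from Hardt et al.: compare
# false negative rates of a "high need" classifier across groups.

def false_negative_rate(y_true, y_pred):
    # Among genuinely high-need cases, what share does the model miss?
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    misses = sum(1 for t, p in positives if p == 0)
    return misses / len(positives)

groups = {
    # group: (true labels, predicted labels) -- illustrative only
    "group_a": ([1, 1, 1, 0, 1], [1, 1, 0, 0, 1]),
    "group_b": ([1, 1, 0, 1, 1], [0, 1, 0, 0, 1]),
}
fnr = {g: false_negative_rate(*data) for g, data in groups.items()}
TOLERANCE = 0.1
violates = abs(fnr["group_a"] - fnr["group_b"]) > TOLERANCE
print(fnr, violates)
```

Here group_b's false negative rate (0.5) is double group_a's (0.25), so the constraint is violated: genuinely high-need cases in group_b are missed more often, which is the systematic under-prioritization the constraint is meant to prevent.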
Uncertainty handling (examples):
  • Scenario ranges: evaluate portfolios across ensembles of plausible hazard pathways rather than a single projection [4].
  • Robustness criteria: select portfolios that avoid catastrophic underperformance and minimize regret [41].
  • Stress tests: identify neighborhoods that become high-risk under higher-end scenarios and ensure adaptation pathways remain adjustable.
Psychosocial mediator indicators (planning-relevant examples):
  • Perceived safety and environmental threat (survey-based or participatory assessments).
  • Housing stability risk proxies (tenure precarity, displacement pressure indicators).
  • Access to restorative environments and heat refuges (walk-time accessibility; canopy/shade access) [47].
  • Reliability of essential services during hazard events (interdependency-sensitive metrics) [39].

7.7. Avoiding “Caring Washing” in Smart City Governance

A predictable risk is that caring language becomes branding while the underlying system remains efficiency-first and opaque. Caring Urban AI therefore treats caring as auditable commitments: explicit objectives, enforceable constraints, robust logic, documentation, audits, and contestability. Without these, caring cannot be evaluated and becomes performative.

8. Boundary Conditions and Research Agenda

8.1. Primary Domain of Applicability

The framework is designed for urban governance contexts where the decision problem is to prioritize climate-resilient housing and infrastructure investments under constrained budgets and uncertain futures. It applies most directly when:
  • A city uses GIS and data platforms to develop risk and vulnerability layers.
  • Prioritization involves ranking projects or selecting portfolios.
  • Institutional capacity exists (or can be built) for documentation, auditing, and participatory oversight.
The framework is intentionally not a general theory of smart cities. It targets a specific and consequential smart governance function: AI-enabled allocation of climate resilience investments.

8.2. Where the Framework Should Not Be Applied

Caring Urban AI should not be used to legitimate:
  • Fully automated prioritization without meaningful human accountability and democratic oversight.
  • Surveillance-intensive data practices that elevate psychosocial threat and reduce trust, especially in marginalized communities [68,69].
  • Adaptation strategies that treat displacement as a default solution, absent strong protections and consent mechanisms [34].

8.3. Global North / Global South, Data-Poor Contexts, and Informal Settlements

Data availability and institutional capacity vary widely. In data-poor contexts, algorithmic systems can worsen invisibility: informal settlements and marginalized communities may be absent from administrative records, leading prioritization tools to misrepresent need [25]. Caring governance in such contexts requires:
  • Participatory mapping and community-generated data with safeguards against extraction and harm [75].
  • Stronger data stewardship and contestability protections, given heightened risks of eviction or policing.
  • Robustness-oriented approaches that explicitly recognize uncertainty in both climate projections and baseline data [40,41].
The framework’s governance layer therefore becomes more, not less, central in Global South and informal settlement contexts.

8.4. Research Agenda for Smart City Governance Scholarship

The framework yields a research agenda oriented toward falsification and governance learning:
  • Algorithm audits of prioritization systems: representation bias, proxy harms, and constraint compliance [35,59].
  • Scenario stress-testing in digital twins: robustness vs. single-scenario optimization effects on portfolio regret [40,44].
  • Comparative institutional studies: how AIAs, procurement standards, and stewardship models shape system behavior and public trust [62,64].
  • Mediator-oriented evaluation: how investments change well-being-supportive living conditions, not only hazard exposure [16,18].
  • Equity constraint design research: how constraints are negotiated, institutionalized, and contested across different political contexts [43,56].

9. Discussion and Conclusion

Smart city governance is increasingly implemented through AI-enabled planning systems that translate data into prioritized investments. This prioritization turn is a defining feature of contemporary smart urbanism because it relocates normative urban questions (who is protected first, who receives infrastructure investments, whose risks count) into computational systems that can obscure trade-offs behind technical outputs. Existing smart city frameworks provide limited conceptual tools for governing these systems when decisions materially shape climate resilience and the living conditions that structure mental well-being.
This Conceptual Analysis advances a governance-centered alternative: Caring Urban AI for climate-resilient housing and infrastructure prioritization. The framework responds to conceptual fragmentation by integrating climate risk and deep uncertainty, psychosocial mediators of urban well-being, and algorithmic governance mechanisms (objective functions, equity constraints, robustness logic, documentation, auditing, and contestability). Its five-layer architecture clarifies where harms and protections are produced, and its feedback loops explain why system behavior cannot be understood as a static mapping from data to decisions. The eight propositions translate the framework into refutable claims and specify evaluation routes consistent with smart city governance as an empirical and institutional practice.
For smart city scholarship, the central implication is that “smartness” is not a technical attribute; it is a governance arrangement. Digital twins, vulnerability scoring, dashboards, and capital prioritization systems should be treated as public institutions with distributive consequences. For practitioners, the implication is operational: caring must be specified in procurement, encoded in constraints and objectives, validated through participatory oversight, and maintained through continuous audit and redress mechanisms. Without these commitments, smart city systems risk optimizing a narrow conception of performance while intensifying inequity, psychosocial stress, and distrust, outcomes that are incompatible with SDG 11.
Reframing smart cities as caring cities does not reject digital governance; it disciplines it. Caring Urban AI offers a pathway for cities to use AI and GIS not as instruments of depoliticized optimization, but as accountable infrastructures that prioritize climate-resilient housing and infrastructure in ways that protect stability, dignity, and equitable well-being under uncertainty.

Author Contributions

Conceptualization, R.A.; methodology, R.A.; framework development, R.A.; writing—original draft preparation, R.A.; writing—review and editing, R.A. and K.G.; supervision, K.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. United Nations General Assembly. Transforming Our World: The 2030 Agenda for Sustainable Development; Resolution A/RES/70/1; United Nations: New York, NY, USA, 2015. [Google Scholar]
  2. United Nations General Assembly. New Urban Agenda; Resolution A/RES/71/256; United Nations: New York, NY, USA, 2016. [Google Scholar]
  3. United Nations Office for Disaster Risk Reduction (UNDRR). Sendai Framework for Disaster Risk Reduction 2015–2030; UNDRR: Geneva, Switzerland, 2015. [Google Scholar]
  4. Intergovernmental Panel on Climate Change (IPCC). Climate Change 2022: Impacts, Adaptation and Vulnerability; Working Group II Contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
  5. Couclelis, H. “Where Has the Future Gone?” Rethinking the Role of Integrated Land-Use Models in Spatial Planning. Environ. Plan. A 2005, 37, 1353–1371. [Google Scholar] [CrossRef]
  6. Kitchin, R.; Lauriault, T.P.; McArdle, G. Knowing and Governing Cities through Urban Indicators, City Benchmarking and Real-Time Dashboards. Reg. Stud. Reg. Sci. 2015, 2, 6–28. [Google Scholar] [CrossRef]
  7. Kitchin, R.; Maalsen, S.; McArdle, G. The Praxis and Politics of Building Urban Dashboards. Geoforum 2016, 77, 93–101. [Google Scholar] [CrossRef]
  8. Hollands, R.G. Will the Real Smart City Please Stand Up? Intelligent, Progressive or Entrepreneurial? City 2008, 12, 303–320. [Google Scholar] [CrossRef]
  9. Kitchin, R. The Ethics of Smart Cities and Urban Science. Phil. Trans. R. Soc. A 2016, 374, 20160115. [Google Scholar] [CrossRef] [PubMed]
  10. Sadowski, J.; Pasquale, F. The Spectrum of Control: A Social Theory of the Smart City. First Monday 2015, 20. [Google Scholar] [CrossRef]
  11. Shelton, T.; Zook, M.; Wiig, A. The “Actually Existing Smart City”. Camb. J. Reg. Econ. Soc. 2015, 8, 13–25. [Google Scholar] [CrossRef]
  12. Söderström, O.; Paasche, T.; Klauser, F. Smart Cities as Corporate Storytelling. City 2014, 18, 307–320. [Google Scholar] [CrossRef]
  13. Vanolo, A. Smartmentality: The Smart City as Disciplinary Strategy. Urban Stud. 2014, 51, 883–898. [Google Scholar] [CrossRef]
  14. Tronto, J.C. Moral Boundaries: A Political Argument for an Ethic of Care; Routledge: New York, NY, USA, 1993. [Google Scholar]
  15. Tronto, J.C. Caring Democracy: Markets, Equality, and Justice; New York University Press: New York, NY, USA, 2013. [Google Scholar]
  16. Evans, G.W. The Built Environment and Mental Health. J. Urban Health 2003, 80, 536–555. [Google Scholar] [CrossRef]
  17. Evans, G.W.; Wells, N.M.; Moch, A. Housing and Mental Health: A Review of the Evidence and a Methodological and Conceptual Critique. J. Soc. Issues 2003, 59, 475–500. [Google Scholar] [CrossRef]
  18. Hartig, T.; Mitchell, R.; de Vries, S.; Frumkin, H. Nature and Health. Annu. Rev. Public Health 2014, 35, 207–228. [Google Scholar] [CrossRef]
  19. Berry, H.L.; Bowen, K.; Kjellstrom, T. Climate Change and Mental Health: A Causal Pathways Framework. Int. J. Public Health 2010, 55, 123–132. [Google Scholar] [CrossRef]
  20. Clayton, S.; Manning, C.M.; Krygsman, K.; Speiser, M. Mental Health and Our Changing Climate: Impacts, Implications, and Guidance; American Psychological Association and ecoAmerica: Washington, DC, USA, 2017. [Google Scholar]
  21. Cunsolo, A.; Ellis, N.R. Ecological Grief as a Mental Health Response to Climate Change-Related Loss. Nat. Clim. Chang. 2018, 8, 275–281. [Google Scholar] [CrossRef]
  22. Albino, V.; Berardi, U.; Dangelico, R.M. Smart cities: Definitions, dimensions, performance, and initiatives. J. Urban Technol. 2015, 22, 3–21. [Google Scholar] [CrossRef]
  23. Batty, M.; Axhausen, K.W.; Giannotti, F.; Pozdnoukhov, A.; Bazzani, A.; Wachowicz, M.; Ouzounis, G.; Portugali, Y. Smart cities of the future. Eur. Phys. J. Spec. Top. 2012, 214, 481–518. [Google Scholar] [CrossRef]
  24. Barns, S. Platform Urbanism: Negotiating Platform Ecosystems in Connected Cities; Palgrave Macmillan: Singapore, 2020. [Google Scholar] [CrossRef]
  25. Cutter, S.L.; Boruff, B.J.; Shirley, W.L. Social vulnerability to environmental hazards. Soc. Sci. Q. 2003, 84, 242–261. [Google Scholar] [CrossRef]
  26. Bowker, G.C.; Star, S.L. Sorting Things Out: Classification and Its Consequences; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  27. Dembski, F.; Wössner, U.; Letzgus, M.; Ruddat, M.; Yamu, C. Urban digital twins for smart cities and citizens: The case study of Herrenberg, Germany. Sustainability 2020, 12, 2307. [Google Scholar] [CrossRef]
  28. Malczewski, J. GIS and Multicriteria Decision Analysis; John Wiley & Sons: New York, NY, USA, 1999. [Google Scholar]
  29. Malczewski, J. GIS-based multicriteria decision analysis: A survey of the literature. Int. J. Geogr. Inf. Sci. 2006, 20, 703–726. [Google Scholar] [CrossRef]
  30. Geertman, S.; Stillwell, J. (Eds.) Planning Support Systems: Best Practice and New Methods; Springer: Dordrecht, The Netherlands, 2009. [Google Scholar] [CrossRef]
  31. Gascon, M.; Triguero-Mas, M.; Martínez, D.; Dadvand, P.; Forns, J.; Plasència, A.; Nieuwenhuijsen, M.J. Mental health benefits of long-term exposure to residential green and blue spaces: A systematic review. Int. J. Environ. Res. Public Health 2015, 12, 4354–4379. [Google Scholar] [CrossRef]
  32. Twohig-Bennett, C.; Jones, A. The health benefits of the great outdoors: A systematic review and meta-analysis of greenspace exposure and health outcomes. Environ. Res. 2018, 166, 628–637. [Google Scholar] [CrossRef] [PubMed]
  33. Anguelovski, I.; Connolly, J.J.T.; Masip, L.; Pearsall, H. Assessing green gentrification in historically disenfranchised neighborhoods: A longitudinal and spatial analysis of Barcelona. Urban Geogr. 2018, 39, 458–491. [Google Scholar] [CrossRef]
  34. Anguelovski, I.; Connolly, J.J.T.; Pearsall, H.; Shokry, G.; Checker, M.; Maantay, J.; Gould, K.; Lewis, T.; Maroko, A.; Roberts, J.T.; et al. Opinion: Why green “climate gentrification” threatens poor and vulnerable populations. Proc. Natl. Acad. Sci. USA 2019, 116, 26139–26143. [Google Scholar] [CrossRef]
  35. Barocas, S.; Selbst, A.D. Big data’s disparate impact. Calif. Law Rev. 2016, 104, 671–732. [Google Scholar] [CrossRef]
  36. Selbst, A.D.; Boyd, D.; Friedler, S.A.; Venkatasubramanian, S.; Vertesi, J. Fairness and abstraction in sociotechnical systems. In Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT* ’19), Atlanta, GA, USA, 29–31 January 2019; ACM: New York, NY, USA, 2019; pp. 59–68. [Google Scholar] [CrossRef]
  37. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2021, 54, 115. [Google Scholar] [CrossRef]
  38. Meerow, S.; Newell, J.P.; Stults, M. Defining urban resilience: A review. Landsc. Urban Plan. 2016, 147, 38–49. [Google Scholar] [CrossRef]
  39. Rinaldi, S.M.; Peerenboom, J.P.; Kelly, T.K. Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Syst. Mag. 2001, 21, 11–25. [Google Scholar] [CrossRef]
  40. Hallegatte, S. Strategies to adapt to an uncertain climate change. Glob. Environ. Chang. 2009, 19, 240–247. [Google Scholar] [CrossRef]
  41. Lempert, R.J.; Popper, S.W.; Bankes, S.C. Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis; RAND Corporation: Santa Monica, CA, USA, 2003; MR-1626-RPC. [Google Scholar] [CrossRef]
  42. Bullard, R.D. Dumping in Dixie: Race, Class, and Environmental Quality; Westview Press: Boulder, CO, USA, 1990. [Google Scholar]
  43. Schlosberg, D. Defining Environmental Justice: Theories, Movements, and Nature; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  44. Haasnoot, M.; Kwakkel, J.H.; Walker, W.E.; ter Maat, J. Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world. Glob. Environ. Chang. 2013, 23, 485–498. [Google Scholar] [CrossRef]
  45. Marmot, M. Fair Society, Healthy Lives: The Marmot Review; The Marmot Review: London, UK, 2010. [Google Scholar]
  46. Dahlgren, G.; Whitehead, M. Policies and Strategies to Promote Social Equity in Health; Institute for Futures Studies: Stockholm, Sweden, 1991. [Google Scholar]
  47. World Health Organization Regional Office for Europe. Urban Green Spaces and Health: A Review of Evidence; WHO Regional Office for Europe: Copenhagen, Denmark, 2016. [Google Scholar]
  48. Galea, S.; Freudenberg, N.; Vlahov, D. Cities and population health. Soc. Sci. Med. 2005, 60, 1017–1033. [Google Scholar] [CrossRef] [PubMed]
  49. Gruebner, O.; Rapp, M.A.; Adli, M.; Kluge, U.; Galea, S.; Heinz, A. Cities and mental health. Dtsch. Arztebl. Int. 2017, 114, 121–127. [Google Scholar] [CrossRef] [PubMed]
  50. Klosterman, R.E. The What If? collaborative planning support system. Environ. Plan. B Plan. Des. 1999, 26, 393–408. [Google Scholar] [CrossRef]
  51. te Brömmelstroet, M. Performance of planning support systems: What is it and how do we report on it? Comput. Environ. Urban Syst. 2013, 41, 299–308. [Google Scholar] [CrossRef]
  52. Biljecki, F.; Stoter, J.; Ledoux, H.; Zlatanova, S.; Cöltekin, A. Applications of 3D city models: State of the art review. ISPRS Int. J. Geo-Inf. 2015, 4, 2842–2889. [Google Scholar] [CrossRef]
  53. Fuller, A.; Fan, Z.; Day, C.; Barlow, C. Digital twin: Enabling technologies, challenges and open research. IEEE Access 2020, 8, 108952–108971. [Google Scholar] [CrossRef]
  54. Hardt, M.; Price, E.; Srebro, N. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29 (NIPS 2016); Curran Associates, Inc.: Red Hook, NY, USA, 2016; pp. 3315–3323. [Google Scholar]
  55. Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS ’12), Cambridge, MA, USA, 8–10 January 2012; ACM: New York, NY, USA, 2012; pp. 214–226. [Google Scholar] [CrossRef]
  56. Walker, G. Environmental Justice: Concepts, Evidence and Politics; Routledge: London, UK, 2012. [Google Scholar]
  57. Diakopoulos, N. Algorithmic accountability: Journalistic investigation of computational power structures. Digit. J. 2015, 3, 398–415. [Google Scholar] [CrossRef]
  58. Pasquale, F. The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press: Cambridge, MA, USA, 2015. [Google Scholar]
  59. Raji, I.D.; Smart, A.; White, R.N.; Mitchell, M.; Gebru, T.; Hutchinson, B.; Smith-Loud, J.; Theron, D.; Barnes, P. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20), Barcelona, Spain, 27–30 January 2020; ACM: New York, NY, USA, 2020; pp. 33–44. [Google Scholar] [CrossRef]
  60. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; Spitzer, E.; Raji, I.D.; Gebru, T. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19), Atlanta, GA, USA, 29–31 January 2019; ACM: New York, NY, USA, 2019; pp. 220–229. [Google Scholar] [CrossRef]
  61. Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J.W.; Wallach, H.; Daumé, H., III; Crawford, K. Datasheets for datasets. Commun. ACM 2021, 64, 86–92. [Google Scholar] [CrossRef]
  62. National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0); NIST AI 100-1; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. [Google Scholar]
  63. Organisation for Economic Co-operation and Development (OECD). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449); OECD: Paris, France, 2019. [Google Scholar]
  64. Reisman, D.; Schultz, J.; Crawford, K.; Whittaker, M. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability; AI Now Institute: New York, NY, USA, 2018. [Google Scholar]
  65. Graham, S.; Marvin, S. Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition; Routledge: London, UK, 2001. [Google Scholar]
  66. Flanagan, B.E.; Gregory, E.W.; Hallisey, E.J.; Heitgerd, J.L.; Lewis, B. A social vulnerability index for disaster management. J. Homel. Secur. Emerg. Manag. 2011, 8, 1–22. [Google Scholar] [CrossRef]
  67. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
  68. Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021. [Google Scholar]
  69. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, NY, USA, 2019. [Google Scholar]
  70. Austin, L.M.; Lie, D. Data trusts and the governance of smart environments: Lessons from the failure of Sidewalk Labs’ Urban Data Trust. Surveill. Soc. 2021, 19, 255–261. [Google Scholar] [CrossRef]
  71. Arnstein, S.R. A ladder of citizen participation. J. Am. Inst. Plann. 1969, 35, 216–224. [Google Scholar] [CrossRef]
  72. Healey, P. Collaborative Planning: Shaping Places in Fragmented Societies; Macmillan Press: London, UK, 1997. [Google Scholar] [CrossRef]
  73. Cardullo, P.; Kitchin, R. Being a ‘citizen’ in the smart city: Up and down the scaffold of smart citizen participation in Dublin, Ireland. GeoJournal 2019, 84, 1–13. [Google Scholar] [CrossRef]
  74. Houser, K.A.; Bagby, J.W. The data trust solution to data sharing problems. Vand. J. Entertain. Technol. Law 2023, 25, 113–180. [Google Scholar] [CrossRef]
  75. Goodchild, M.F. Citizens as sensors: The world of volunteered geography. GeoJournal 2007, 69, 211–221. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.