Submitted: 13 April 2026
Posted: 14 April 2026
Abstract
Keywords:
1. Introduction
1.1. The Implementation Gap: From Model Skill to Operational Outcomes
1.2. Governance, Legitimacy, and Paper Compliance in Climate-Sensitive Systems
1.3. Donor-Driven Deployment Pressures and Governance Deficits
1.4. Paper Contribution
1.5. Methodological Approach
2. Conceptual Foundations: Contextual Validity, Legitimacy, and Incident-Oriented Evaluation
2.1. Construct Clarification: Contextual Validity, Legitimacy, and Contestability
- Contextual validity assesses whether an AI–IoT system’s data, model assumptions, and design are appropriate for the specific local conditions, including environmental factors, infrastructure, and institutional settings.
- Legitimacy evaluates whether the system's deployment and outcomes are considered acceptable and justified by stakeholders, including affected communities and oversight bodies.
- Contestability serves as a mechanism that maintains legitimacy amid uncertainty, allowing stakeholders to challenge and review system outputs through clear pathways for escalation and grievances.
2.2. AI–IoT Water Decision Systems as Socio-Technical Infrastructure
2.3. Contextual Validity Under Climate Stress
2.4. Non-Stationarity and the Limits of Historical Validation
2.5. Legitimacy and Contestability in Water Decision Systems
2.6. Paper Compliance Risk in AI Governance for Climate Adaptation
2.7. Incident-Oriented Outcomes as the Missing Evaluation Layer
2.8. Summary of Conceptual Foundations
3. The Legitimacy Stress-Test: Design and Scoring Framework
3.1. Design Principles and Intended Use
- Ex ante application (pre-deployment / scaling):
- Ex post application (post-deployment / operational monitoring):
3.2. Unit of Analysis
- Sensing layer: IoT sensors, telemetry, remote sensing inputs, data conduits.
- Analytics layer: machine learning models, hybrid models, digital twins, uncertainty estimation.
- Decision layer: alert thresholds, allocation criteria, escalation procedures, override privileges.
- Institutional layer: governance, ownership, mandates, budgets, personnel, responsibility.
- Social layer: contestability, grievance procedures, equitable implications, trust dynamics.
3.3. Dimensions of Contextual Validity and Legitimacy
- D1. Climate robustness and non-stationarity readiness
- D2. Data representativeness and measurement integrity
- D3. Sensor reliability and maintenance sustainability
- D4. Institutional capacity and operational readiness
- D5. Decision rights, escalation pathways, and override governance
- D6. Equity and exclusion risk controls
- D7. Contestability, grievance, and redress mechanisms
- D8. Transparency, auditability, and post-incident learning
3.4. Scoring Logic and Evidence Requirements
- 0 = Absent/high-risk (governance missing or purely symbolic)
- 1 = Partial/fragile (exists but incomplete, informal, or under-resourced)
- 2 = Adequate/functional (operationally integrated, evidence exists)
- 3 = Robust/resilient (tested under stress, monitored, and continuously improved)
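To make the scoring logic concrete, the sketch below shows one possible encoding of the rubric as a data structure, including the rule (from the anchors above) that scores of 2 or higher should be backed by documented evidence. This is a minimal illustration under assumed names; the class, field names, and validation logic are not part of the framework specification.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative encoding of the 0-3 scoring anchors (Section 3.4).
SCORE_ANCHORS = {
    0: "Absent/high-risk",
    1: "Partial/fragile",
    2: "Adequate/functional",
    3: "Robust/resilient",
}

# The eight stress-test dimensions (Section 3.3).
DIMENSIONS = {
    "D1": "Climate robustness and non-stationarity readiness",
    "D2": "Data representativeness and measurement integrity",
    "D3": "Sensor reliability and maintenance sustainability",
    "D4": "Institutional capacity and operational readiness",
    "D5": "Decision rights, escalation pathways, and override governance",
    "D6": "Equity and exclusion risk controls",
    "D7": "Contestability, grievance, and redress mechanisms",
    "D8": "Transparency, auditability, and post-incident learning",
}

@dataclass
class DimensionScore:
    dimension: str                                     # e.g. "D5"
    score: int                                         # 0-3, per SCORE_ANCHORS
    evidence: list[str] = field(default_factory=list)  # documents, logs, interviews

    def __post_init__(self) -> None:
        if self.dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {self.dimension}")
        if self.score not in SCORE_ANCHORS:
            raise ValueError("score must be an integer from 0 to 3")
        # Per the anchors, a score of 2+ implies operational evidence exists.
        if self.score >= 2 and not self.evidence:
            raise ValueError("scores of 2 or above require documented evidence")
```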
3.5. Interpreting Stress-Test Outputs
- Legitimacy risk profile: a dimension-by-dimension scorecard identifying weak points.
- Incident pathway prediction: mapping low-scoring dimensions to likely operational failures.
- Mitigation priorities: a ranked set of governance and implementation improvements required before scaling.
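Continuing the illustrative encoding above, the risk profile and mitigation priorities can be derived from a completed scorecard as sketched below; the flagging threshold (scores of 0 or 1) and the ranking rule are assumptions for demonstration.

```python
def legitimacy_risk_profile(scores: list[DimensionScore]) -> dict[str, int]:
    """Dimension-by-dimension scorecard, ordered from weakest to strongest."""
    return {s.dimension: s.score for s in sorted(scores, key=lambda s: s.score)}

def mitigation_priorities(scores: list[DimensionScore], threshold: int = 1) -> list[str]:
    """Rank dimensions scoring at or below the threshold as pre-scaling priorities."""
    weak = sorted((s for s in scores if s.score <= threshold), key=lambda s: s.score)
    return [f"{s.dimension}: {DIMENSIONS[s.dimension]} (score {s.score})" for s in weak]
```

Incident pathway prediction, the second output, is sketched in Section 4.6 once the dimension-to-incident mapping has been introduced.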
3.6. Evidence Requirements and Inter-Rater Scoring Protocol
3.7. Why the Tool Adds Value Beyond Accuracy
4. Incident-Oriented Evaluation: Linking Contextual Legitimacy Deficits to Operational Failure Pathways
4.1. Defining Incidents in AI–IoT Water Decision Systems
- Technical incidents: sensor inoperability, data pipeline malfunction, model degradation, system unavailability, cyber disruption.
- Socio-institutional incidents: protracted escalation, alarm fatigue, marginalisation of vulnerable populations, failure of accountability, disputed allocation results, and deterioration of confidence.
4.4. A Practical Incident Taxonomy
- Missed-event incidents: absence of an alert, a delayed alert, or an alert falling below the actionable threshold.
- False-alarm fatigue incidents: excessive alerts erode attention and compliance.
- Delayed escalation incidents: alerts are triggered, but action is deferred due to ambiguous authority or lack of cooperation.
- Exclusion incidents: marginalised groups either do not receive alerts or are unable to respond due to accessibility barriers.
- Accountability breakdown incidents: ambiguous responsibility, absent audit records, or deficient post-event learning.
- Operational drift incidents: the system is used beyond its validated parameters (e.g., altered basin conditions, new climatic regimes, revised operational aims) without corresponding governance adjustments.
4.5. Why Incident-Oriented Outcomes Are the Missing Link
- data are unreliable or biased,
- sensor networks degrade,
- institutions lack readiness,
- decision rights are unclear,
- affected groups do not trust or cannot contest outputs, or
- accountability mechanisms do not exist.
4.6. Mapping Legitimacy Stress-Test Dimensions to Incident Pathways
- Poor maintenance sustainability prolongs sensor downtime, causing missed events and operational drift.
- Weak decision rights and escalation governance cause more delayed escalation incidents despite accurate alerts.
- Weak grievance mechanisms heighten exclusion and mistrust, leading to greater non-compliance.
- Weak auditability undermines accountability, harms legitimacy, and blocks learning.
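One way to operationalise this mapping, continuing the Python sketches from Section 3, is a lookup from weak dimensions to their likely incident pathways. The entries mirror Table 3; the encoding and threshold are illustrative assumptions.

```python
# Dimension -> likely incident pathways, mirroring Table 3 (illustrative encoding).
INCIDENT_PATHWAYS: dict[str, list[str]] = {
    "D1": ["missed events", "operational drift"],
    "D2": ["missed events", "exclusion incidents"],
    "D3": ["missed events", "false alarms", "system downtime"],
    "D4": ["delayed escalation", "inconsistent response"],
    "D5": ["delayed escalation", "missed events", "accountability breakdown"],
    "D6": ["exclusion incidents", "trust erosion", "non-compliance"],
    "D7": ["persistent errors", "exclusion incidents", "legitimacy crisis"],
    "D8": ["accountability breakdown", "recurring incidents", "trust erosion"],
}

def predicted_incidents(scores: list[DimensionScore], threshold: int = 1) -> set[str]:
    """Union of incident pathways implied by dimensions scored at or below threshold."""
    return {
        incident
        for s in scores if s.score <= threshold
        for incident in INCIDENT_PATHWAYS[s.dimension]
    }
```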
4.7. What Evaluators Should Always Collect
i. Operational performance
- response latency (alert → action)
- escalation time (alert → responsible authority engaged)
- system availability/uptime

ii. Signal quality in use
- false alarm rate in operational context
- missed event rate (including late alerts)
- alert fatigue indicators (e.g., ignored alerts over time)

iii. Inclusion and legitimacy
- alert reach by vulnerability group
- complaint rate and complaint concentration
- contestation resolution time

iv. Accountability and learning
- audit trail completeness
- post-incident review completion
- corrective action closure rate
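To illustrate how the operational-performance and signal-quality indicators above might be computed from field records, consider the minimal sketch below. The event-log schema (issued/acted timestamps, ground-truth flags) is a hypothetical assumption; a real deployment would draw on alert logs, response records, and post-event verification.

```python
from __future__ import annotations
from datetime import datetime, timedelta

# Hypothetical alert records; the schema is an assumption for illustration only.
alerts = [
    {"issued": datetime(2026, 3, 1, 6, 0), "acted": datetime(2026, 3, 1, 7, 30),
     "event_occurred": True},
    {"issued": datetime(2026, 3, 5, 14, 0), "acted": None, "event_occurred": False},
]
unalerted_events = 1  # true flood events with no alert, from post-event verification

def mean_response_latency(alerts: list[dict]) -> timedelta | None:
    """Operational performance: mean alert-to-action latency over acted-on alerts."""
    latencies = [a["acted"] - a["issued"] for a in alerts if a["acted"] is not None]
    return sum(latencies, timedelta()) / len(latencies) if latencies else None

def false_alarm_rate(alerts: list[dict]) -> float:
    """Signal quality in use: share of issued alerts with no corresponding event."""
    return sum(1 for a in alerts if not a["event_occurred"]) / len(alerts)

def missed_event_rate(alerts: list[dict], unalerted: int) -> float:
    """Missed events as a share of all true events (alerted plus unalerted)."""
    true_events = sum(1 for a in alerts if a["event_occurred"]) + unalerted
    return unalerted / true_events if true_events else 0.0
```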
4.8. Transitioning from Legitimacy Deficits to Predictable Incident Risk
5. Illustrative Case: Applying the Legitimacy Stress-Test to an AI–IoT Flood Early Warning System
5.1. Case Rationale: Why Flood Early Warning is an Appropriate Illustrative Setting
5.2. Illustrative System Description
- Sensing layer: river stage sensors and rainfall gauges are connected through cellular telemetry, with additional data provided by satellite precipitation products.
- Analytics layer: a hybrid forecasting system that integrates machine-learning rainfall–runoff predictions with rule-based thresholds to trigger alerts.
- Decision layer: automated alert generation via SMS and dashboard notifications, including recommended escalation if predicted levels surpass predefined thresholds.
- Institutional layer: a primary water agency overseeing system operations, coordinating with disaster management authorities and local governments.
- Social layer: community-level recipients, such as high-risk populations residing in floodplains.
5.3. Applying the Stress-Test: Evidence and Scoring Approach
5.4. Results: Legitimacy Risk Profile and Predicted Incident Pathways
5.5. Interpretation: Why “Strong Models” Still Produce Weak Outcomes
5.6. Translating the Scorecard into Mitigation Priorities
- Decision-rights charter and escalation protocol (D5):
- Grievance and redress mechanism (D7):
- Maintenance and reliability plan with lifecycle budgeting (D3):
- D5 (Decision rights = 0): An alert is issued, but no agency has the authority to escalate to evacuation orders or mobilise emergency resources. While a forecast is available, action is delayed due to negotiations over who is responsible.
- D4 (Operational readiness = 1): Local authorities lack training and proper SOP integration, causing confusion in interpretation and response.
- D6 (Equity controls = 1): SMS alerts often fail to reach many households, and there is no offline distribution available.
- D8 (Auditability = 1): Logs are incomplete after the incident, and there is no corrective action register, leading to recurring failures in future events.
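Continuing the Python sketches from Sections 3 and 4, the Table 4 scorecard can be instantiated directly to reproduce these predicted pathways; the abbreviated evidence strings below are hypothetical stand-ins for the evidence base described in Section 5.3.

```python
# Illustrative application to the flood early-warning case (scores from Table 4).
case_scores = [
    DimensionScore("D1", 1, ["historical-only validation report"]),
    DimensionScore("D2", 2, ["basin coverage audit", "upstream gap analysis"]),
    DimensionScore("D3", 1, ["downtime logs"]),
    DimensionScore("D4", 1, ["SOP review notes"]),
    DimensionScore("D5", 0),   # no decision-rights charter exists
    DimensionScore("D6", 1, ["SMS delivery reports"]),
    DimensionScore("D7", 0),   # no grievance mechanism exists
    DimensionScore("D8", 1, ["partial event logs"]),
]

print(mitigation_priorities(case_scores))
# D5 and D7 (score 0) rank first, matching the mitigation priorities above.
print(sorted(predicted_incidents(case_scores)))
# Includes delayed escalation, exclusion incidents, and accountability breakdown.
```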
5.7. Summary of What the Walkthrough Demonstrates
6. Implications for Procurement, Governance, and Climate Adaptation Evaluation
6.1. Reframing “Performance” in AI–IoT Water Systems
- sensing and data pipelines remain reliable over time,
- institutions can interpret and act on outputs,
- decision rights are clearly assigned,
- alerts reach vulnerable populations, and
- post-incident learning is institutionalised.
6.2. A CRVI-Style “Legitimacy Annexe” for Procurement and Scaling
- the dimension-by-dimension scorecard (Table 2 structure),
- evidence supporting each score,
- identified incident pathways, and
- mitigation commitments required prior to scaling.
6.3. Decision Rights as a Governance Primitive in Climate-Tech
- escalation thresholds and responsible authorities,
- override rules and conditions,
- accountability mapping across agencies, and
- minimum response time expectations (service-level commitments).
6.4. Incident Registers and Post-Event Learning as Operational Governance
- missed events,
- false alarms and alert fatigue,
- delayed escalation cases,
- exclusion failures, and
- accountability breakdown events.
- what the model predicted,
- what alerts were issued,
- who acted (and when),
- where escalation failed, and
- what governance or capacity changes are required.
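A minimal register entry capturing these fields might look like the sketch below; the class and field names are illustrative assumptions rather than a prescribed schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentRecord:
    """One entry in an operational incident register (illustrative schema)."""
    incident_type: str                     # e.g. "missed event", "delayed escalation"
    model_prediction: str                  # what the model predicted
    alerts_issued: list[str]               # which alerts went out, via which channels
    actions: dict[str, datetime]           # who acted, and when
    escalation_failure: str | None = None  # where escalation failed, if anywhere
    corrective_actions: list[str] = field(default_factory=list)  # required changes
```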
6.5. Contestability and Redress as Resilience Infrastructure
- clear reporting channels (digital and offline),
- defined time-to-resolution targets,
- correction procedures for erroneous alerts or exclusions, and
- feedback integration into the model and governance updates.
6.6. Equity Controls: From Inclusion Rhetoric to Measurable Reach
- alert reach by vulnerability group,
- channel redundancy (SMS, radio, community focal points),
- language accessibility, and
- action feasibility (whether recipients can act on warnings).
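For instance, a reach metric disaggregated by vulnerability group can be computed directly from registration and delivery records, as in the hypothetical sketch below; the group labels and counts are invented for illustration.

```python
# Hypothetical registration and confirmed-delivery counts by vulnerability group.
registered = {"floodplain_households": 1200, "elderly_isolated": 300, "non_native_speakers": 450}
reached = {"floodplain_households": 1080, "elderly_isolated": 150, "non_native_speakers": 180}

def alert_reach(registered: dict[str, int], reached: dict[str, int]) -> dict[str, float]:
    """Share of each vulnerability group confirmed to have received an alert."""
    return {g: reached.get(g, 0) / n for g, n in registered.items() if n}

print(alert_reach(registered, reached))
# {'floodplain_households': 0.9, 'elderly_isolated': 0.5, 'non_native_speakers': 0.4}
```

Uniformly high aggregate reach can mask exactly the group-level gaps this metric exposes, which is why disaggregation, not an overall delivery rate, is the relevant unit of monitoring.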
6.7. Donor and Public–Private Partnership Governance: Reducing “Pilotitis”
- funded maintenance and lifecycle budgets (D3),
- internal operational capacity (D4),
- decision rights and escalation protocols (D5), and
- auditability and learning cycles (D8).
6.8. Regulatory and Evaluation Implications
- climate-tech certification processes,
- disaster preparedness audits,
- public digital infrastructure governance requirements, or
- procurement standards for AI-enabled water systems.
6.9. Strengthening the Bridge Between Modelling and Implementation
7. Conclusion
References
- Alprol, A. E.; Mansour, A. T.; Ibrahim, E. M.; Ashour, M. Artificial Intelligence Technologies Revolutionising Wastewater Treatment: Current Trends and Future Prospective. Water 2024, 16, 314.
- Amanatidis, P.; Lyratzis, E.; Angelopoulos, V.; Kouloumpris, E.; Skaperdas, E.; Bassiliades, N.; Vlahavas, I.; Maris, F.; Emmanouloudis, D.; Karampatzakis, D. Intelligent Water Management Through Edge-Enabled IoT, AI, and Big Data Technologies. IoT 2026, 7, 5.
- Camacho-Leon, S. Emerging trends in IoT for aquatic systems: a systematic literature review. Frontiers in Water 2025, 7.
- Cheong, B. C. Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics 2024, 6.
- Dada, M. A.; Majemite, M. T.; Obaigbena, A.; Daraojimba, O. H.; Oliha, J. S.; Nwokediegwu, Z. Q. S. Review of smart water management: IoT and AI in water and wastewater treatment. World Journal of Advanced Research and Reviews 2024, 21, 1373–1382.
- Dahal, D.; Bhattarai, N.; Silwal, A.; Shrestha, S.; Shrestha, B.; Poudel, B.; Kalra, A. A Review on Climate Change Impacts on Freshwater Systems and Ecosystem Resilience. Water 2025, 17, 3052.
- Dharmarathne, G.; Abekoon, A. M. S. R.; Bogahawaththa, M.; Alawatugoda, J.; Meddage, D. P. P. A review of machine learning and internet-of-things on the water quality assessment: Methods, applications and future trends. Results in Engineering 2025, 26, 105182.
- Domínguez Hernández, A.; Perini, A. M.; Hadjiloizou, S.; Borda, A.; Mahomed, S.; Leslie, D. Towards a sociotechnical ecology of artificial intelligence: power, accountability, and governance in a global context. AI and Ethics 2025, 6.
- Frimpong, V. The Challenge of Context-Free Validity: Introducing the Contextual Research Validity Index Framework for Situated Legitimacy under Socioeconomic Challenges. SocioEconomic Challenges 2026, 10, 42–49.
- Frimpong, V.; Mamuti, A. Adaptive Methodologies and Bricolage Research Design in Africa's Digital Asymmetry. International Journal of Applied Research in Business and Management 2026, 7.
- Frimpong, V.; Tawk, C.; Mamuti, A. Human–AI Handovers: A Dynamic Authority Reversal (DAR) Framework for Trust Calibration and Transitional Accountability. In Proceedings of the IX. International Applied Social Sciences (C-IASOS-2025) Congress, Rome, Italy, 13–15 October 2025.
- Guezouli, L.; Guezouli, L.; Baha, M.; Bentahrour, A. IoT and AI for Real-time Water Monitoring and Leak Detection. Journal of Renewable Energies 2024, 27.
- Haraway, D. Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies 1988, 14, 575–599.
- Hashemi-Beni, L.; Puthenparampil, M.; Jamali, A. A low-cost IoT-based deep learning method of water gauge measurement for flood monitoring. Geomatics, Natural Hazards and Risk 2024, 15.
- IPCC. Weather and Climate Extreme Events in a Changing Climate. In Climate Change 2021 – The Physical Science Basis: Working Group I Contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge University Press: Cambridge, United Kingdom and New York, NY, USA, 2021; pp. 1513–1766.
- Jambadu, L.; Monstadt, J.; Pilo, F. The politics of tied aid: Technology transfer and the maintenance and repair of water infrastructure. World Development 2024, 175, 106476.
- Kulkarni, S. D. Artificial Intelligence and Machine Learning Applications in Precision Agriculture: Enhancing Crop Yield Prediction, Disease Detection, and Resource Optimisation. International Journal of Applied Mathematics 2025, 38(11s), 1874–1887.
- Lyons, H.; Velloso, E.; Miller, T. Conceptualising Contestability. Proceedings of the ACM on Human-Computer Interaction 2021, 5(CSCW1), 1–25.
- Maharjan, S.; Li, W.; Bolten, J. D.; El-Askary, H. The future intensification of hydrological extremes and whiplashes in the contiguous United States increases community vulnerability. Communications Earth & Environment 2025, 6.
- Meyer, J.; Rowan, B. Institutionalised Organisations: Formal Structure as Myth and Ceremony. American Journal of Sociology 1977, 83, 340–363.
- Mukhamejanova, A.; Kazhimkanuly, D.; Utepov, Y.; Aniskin, A. Modern technologies for monitoring waterlogged areas and their impact on infrastructure. Bulletin of L N Gumilyov Eurasian National University Technical Science and Technology Series 2025, 150, 226–248.
- OECD. Infrastructure for a Climate-Resilient Future; OECD Publishing: Paris, 2024.
- Papagiannidis, E.; Mikalef, P.; Conboy, K. Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems 2025, 34.
- Pasupuleti, M. K. AI for Climate Resilience: Predictive Analytics for Global Risk Reduction and Sustainable Development. International Journal of Academic and Industrial Research Innovations (IJAIRI) 2025, 54–66.
- Pawson, R.; Tilley, N. An introduction to scientific realist evaluation. In Evaluation for the 21st Century: A Handbook; Chelimsky, E., Shadish, W. R., Eds.; Sage Publications, Inc., 1997; pp. 405–418.
- Pfeffer, J.; Salancik, G. The External Control of Organisations: A Resource Dependence Perspective; Harper & Row: New York, 1978.
- Samadi, S. The convergence of AI, IoT, and big data for advancing flood analytics research. Frontiers in Water 2022, 4.
- Schwartz, R.; Chowdhury, R.; Kundu, A.; Frase, H.; Fadaee, M.; David, T.; Waters, G.; Taik, A.; Briggs, M.; Hall, P.; Jain, S.; Yee, K.; Thomas, S.; Bhandari, S.; Duncan, P.; Thompson, A.; Carlyle, M.; Lu, Q.; Holmes, M.; Skeadas, T. Reality Check: A New Evaluation Ecosystem Is Necessary to Understand AI's Real World Effects. arXiv 2025. Available online: https://arxiv.org/abs/2505.18893.
- Shrimpton, E. A.; Balta-Ozkan, N. A Systematic Review of Socio-Technical Systems in the Water–Energy–Food Nexus: Building a Framework for Infrastructure Justice. Sustainability 2024, 16, 5962.
- Suchman, M. C. Managing Legitimacy: Strategic and Institutional Approaches. The Academy of Management Review 1995, 20, 571–610.
- Tahroudi, M. N. Comprehensive global assessment of precipitation trend and pattern variability considering their distribution dynamics. Scientific Reports 2025, 15.
- Tiggeloven, T.; Pfeiffer, S.; Matanó, A.; van den Homberg, M.; Thalheimer, L.; Reichstein, M.; Torresan, S. The Role of Artificial Intelligence for Early Warning Systems: Status, Applicability, Guardrails and Ways Forward. iScience 2025, 113689.
- Twinomuhangi, M. B.; Bamutaze, Y.; Kabenge, I.; Wanyama, J.; Kizza, M.; Gabiri, G.; Egli, P. E. Analysis of stationary and non-stationary hydrological extremes under a changing environment: A systematic review. HydroResearch 2025, 8, 332–350.
- Walther, C. C. How AI Exclusion Impacts Humankind. Knowledge at Wharton, 28 January 2025. Available online: https://knowledge.wharton.upenn.edu/article/how-ai-exclusion-impacts-humankind/.
- Wang, Q.; Abdelrahman, W. High-Precision AI-Enabled Flood Prediction Integrating Local Sensor Data and 3rd Party Weather Forecast. Sensors 2023, 23, 3065.
Table 1. Core constructs, working definitions, and illustrative operational indicators.

| Construct | Working definition (this paper) | Why it matters in AI–IoT water systems | Illustrative operational indicators |
|---|---|---|---|
| AI–IoT water decision system | An integrated water/climate management system combining sensing (IoT/remote sensing), analytics (AI/ML), and decision processes (Pasupuleti, 2025; Gamit et al., 2024). | These systems serve as governance infrastructure, not merely forecasting tools. | Sensor uptime, model update frequency, alert-to-action workflow clarity. |
| Contextual validity | The degree to which system outputs remain relevant, reliable, and practical within local institutional, infrastructural, and climate contexts (Frimpong, 2026). | Determines whether model skill translates into operational usefulness. | Data completeness, maintenance, climate non-stationarity stress-testing. |
| Legitimacy | The perceived acceptability of the system's decisions and processes, as shaped by fairness, accountability, transparency, and contestability (Suchman, 1995). | Conditions adoption, compliance, and trust in warnings/allocations. | Stakeholder acceptance, grievance use, responsibility clarity. |
| Contestability | The capacity of affected parties to challenge outputs, request explanations, seek corrections, and access redress (Lyons et al., 2021). | Prevents error persistence and minimises the risk of exclusion. | Appeal channel existence, time-to-resolution, correction rate. |
| Paper compliance risk | The risk that governance is strong in documentation but weak in practice (Papagiannidis, 2025). | Produces systems that "look governed" but fail in incidents. | Governance artefacts without SOP integration; unclear escalation rights; missing post-event learning. |
| Incident-oriented outcomes | Real-world failures and near misses during system use (Papagiannidis, 2025). | Captures failure modes that model metrics miss. | Missed events, false alarms, delayed escalation, exclusion, accountability issues. |
| Decision rights clarity | The extent to which authority for action, override, and escalation is explicitly assigned (Pasupuleti, 2025; Gamit et al., 2024). | Determines whether forecasts lead to timely action. | Alert-issuance authority, override frequency, escalation latency. |
| Auditability | Traceability of system outputs, decisions, and actions for accountability and learning (Papagiannidis, 2025). | Enables post-incident analysis and assignment of responsibility. | Logs, decision records, completed post-incident reviews. |
| Equity/exclusion risk | The probability that system design or implementation disadvantages certain groups (Haraway, 1988). | Water and climate decisions often distribute risk and scarcity unevenly. | Alert reach by group, access gaps, inclusion metrics. |
Table 2. Scoring rubric for the eight legitimacy stress-test dimensions (0–3).

| Dimension | 0 = Absent / High-risk | 1 = Partial / Fragile | 2 = Adequate / Functional | 3 = Robust / Resilient |
|---|---|---|---|---|
| D1. Climate robustness and non-stationarity readiness | Assumes stationarity; no stress-testing for climate shifts; model considered stable. | Acknowledges non-stationarity but relies on ad hoc adjustments; limited stress tests. | Includes stress-testing with multiple climate/extreme scenarios and documented update triggers. | Continuous drift monitoring, regime-change protocol, and model recalibration integrated into operations. |
| D2. Data representativeness and measurement integrity | Training and operational data are biased, incomplete, or unknown, with no quality controls. | Basic data cleaning with partial coverage and unclear bias/exclusion risks. | Outlined data QA/QC, assessed representativeness, and documented limitations. | Systematic monitoring, bias controls, adaptive sampling, and validation. |
| D3. Sensor reliability and maintenance sustainability | Sensors are deployed without maintenance plans, resulting in frequent downtime and inadequate lifecycle budgeting. | A maintenance plan exists but is unfunded; spare parts procurement is weak; calibration is irregular. | Funded maintenance plan; uptime monitored; calibration schedule implemented. | Resilient operations include redundancy, quick repairs, tamper detection, and secured lifecycle funding. |
| D4. Institutional capacity and operational readiness | No trained operators, unclear ownership, no SOP integration, and reliance on external consultants. | Some training and SOPs exist, but capacity remains fragile and reliant on key individuals. | Clear ownership, staffing, SOPs, internal competence, and defined roles. | Institutionalised capacity: training pipeline, turnover resilience, cross-agency coordination, performance oversight. |
| D5. Decision rights, escalation, and override governance | Alerts exist but lack an authority pathway. Unclear decision rights and overrides are informal. | Some escalation rules; overrides possible but undocumented; responsibility ambiguous | Decision-rights charter, escalation thresholds, and override rules are documented and operationally integrated. | Tested in drills; override logs are reviewed; escalation performance is monitored and continuously improved. |
| D6. Equity and exclusion risk controls | No analysis of exclusion, the system presumes equal access, and vulnerable groups remain invisible. | Equity mentioned but not implemented; limited outreach; no monitoring. | Vulnerable groups are identified; alert reach and coverage are tracked; mitigation plans are in place. | Equity built in: multilingual channels, offline redundancy, targeted dissemination, and ongoing inclusion monitoring. |
| D7. Contestability, grievance and redress mechanisms | No channel for challenge or correction; complaints ignored. | Informal feedback is possible but inconsistent, with slow or unclear resolution. | Formal grievance process with a set resolution time and documented correction procedures. | High-functioning contestability: rapid correction, feedback integrated into model/governance updates, transparency to complainants. |
| D8. Transparency, auditability and post-incident learning | No logs, decision traceability, or post-event learning. | Partial logs; learning depends on individuals; reviews are irregular. | Audit trail exists, post-incident reviews done, corrective actions tracked. | Mature learning system: routine reviews, accountability mapping, continuous improvement. |
Table 3. Mapping stress-test dimensions to failure mechanisms, incident outcomes, and ex post indicators.

| Stress-test dimension | Failure mechanism (how legitimacy breaks) | Likely incident outcomes | Minimal measurable indicators (ex post) |
|---|---|---|---|
| D1. Climate robustness and non-stationarity readiness | Model becomes invalid during regime shifts; drift is not monitored; thresholds become uncalibrated. | Missed events, operational drift, repeated forecast failures during extremes. | Drift detection, recalibration lag, forecast errors during extremes, threshold update intervals. |
| D2. Data representativeness and measurement integrity | Bias or missingness causes systematic error, leading to model underperformance in certain zones/groups. | Missed events, exclusion incidents, uneven risk detection | Errors by sub-region, missingness rate, bias audit results, and coverage map gaps. |
| D3. Sensor reliability and maintenance sustainability | Sensor issues like downtime, tampering, or calibration failures degrade data quality and timeliness. | Missed events, false alarms, system downtime | Sensor uptime %, calibration adherence, repair time, redundancy, and data latency. |
| D4. Institutional capacity and operational readiness | System is not integrated into SOPs, relying on individuals; staff cannot interpret or act. | Delayed escalation, accountability issues, and inconsistent responses. | Response latency, staff training, SOP compliance, and handover continuity. |
| D5. Decision rights, escalation, and override governance | Alerts fail to trigger action; authority is vague; overrides happen informally; blame is diffused. | Delayed escalation, missed events, and accountability breakdown. | Time-to-escalation, override logs and reason, drill performance, decision chain audit. |
| D6. Equity and exclusion risk controls | Vulnerable groups not reached; digital divide; warnings not actionable; unequal allocation impacts. | Exclusion incidents, trust erosion, and non-compliance | Alert reach by group; channel coverage (SMS, radio, community); surveys on action feasibility; complaint concentration by group. |
| D7. Contestability, grievance and redress mechanisms | Errors cannot be challenged; complaints unresolved; no correction loop | Exclusion incidents, persistent errors, and a legitimacy crisis. | Existence of grievance channel, resolution time, correction rate, and repeat complaint rate. |
| D8. Transparency, auditability and post-incident learning | Decisions lack traceability and accountability, with recurring failures and superficial governance. | Accountability issues, recurring incidents, and erosion of trust. | Log completeness, post-incident review rate, corrective action closure, and recurrence of audit findings. |
Table 4. Illustrative stress-test scorecard for the flood early warning case (Section 5).

| Dimension | Illustrative score (0–3) | Key evidence assumptions | Primary incident risk |
|---|---|---|---|
| D1. Climate robustness and non-stationarity readiness | 1 | Model validated on historical data; limited extreme stress-testing. | Missed events under extreme regimes. |
| D2. Data representativeness and measurement integrity | 2 | Adequate basin coverage; some gaps in upstream rural zones. | Uneven detection; sub-region blind spots. |
| D3. Sensor reliability and maintenance sustainability | 1 | Downtime occurs, calibration is inconsistent, and spare parts procurement is slow. | Missed events; false alarms due to sensor noise |
| D4. Institutional capacity and operational readiness | 1 | System operated by a small team; SOPs exist but are weakly embedded. | Delayed escalation; inconsistent response |
| D5. Decision rights, escalation, and override governance | 0 | Alert issued, but no clear authority to declare evacuation or mobilise resources. | Delayed escalation; accountability breakdown |
| D6. Equity and exclusion risk controls | 1 | SMS alerts reach owners, but offline channels are weak, and language barriers exist. | Exclusion incidents; unequal warning reach |
| D7. Contestability, grievance and redress mechanisms | 0 | No formal mechanism to challenge false alarms or missed alerts. | Persistent errors; trust erosion |
| D8. Transparency, auditability and post-incident learning | 1 | Partial logs, inconsistent post-event review, and untracked corrective actions. | Repeated incidents; blame diffusion |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).