Submitted: 16 July 2025
Posted: 16 July 2025
Abstract
Keywords:
1. Introduction
1.1. Background
1.2. The Problem of Accountability
1.3. Importance of Responsibility Attribution
1.4. Research Objectives and Questions
- Who are the principal actors involved in the AI-cloud ecosystem?
- What legal and ethical frameworks currently exist for responsibility attribution?
- How can responsibility be distributed in a way that is fair, transparent, and enforceable?
1.5. Structure of the Paper
- Section 2 presents a review of relevant literature on ethical, legal, and technical dimensions of responsibility in AI and cloud systems.
- Section 3 outlines the research methodology and analytical framework.
- Section 4 details the results of case studies and responsibility mapping.
- Section 5 discusses the implications of the findings.
- Section 6 concludes the paper with recommendations for future research and policy development.
2. Literature Review
2.1. Autonomous AI and Decision-Making
2.2. Ethical Perspectives
- Consequentialism, which evaluates outcomes of AI actions;
- Deontology, which considers adherence to rules and duties in system design;
- Virtue ethics, which focuses on the character and intentions of developers and deploying organizations.
2.3. Legal Frameworks
2.4. Cloud Infrastructure and Responsibility Challenges
2.5. Governance and AI Regulation
2.6. Summary of the Research Gap
3. Methodology
3.1. Research Design
3.2. Case Selection Criteria
- A cloud-based medical diagnostic AI that produced false diagnoses.
- A financial AI deployed via a cloud platform that failed to detect fraudulent transactions.
3.3. Data Sources
- Academic Literature: Peer-reviewed journal articles, legal commentaries, and ethical analyses related to AI, cloud computing, and accountability.
- Regulatory Documents: EU AI Act proposals, OECD and IEEE guidelines, and national policy whitepapers.
- Technical Reports: Documentation and service agreements from major cloud providers (e.g., AWS, Microsoft Azure, Google Cloud).
- Case Studies: Publicly documented incidents involving AI failures in cloud-hosted environments.
3.4. Analytical Framework
- AI Design & Development
- Cloud Deployment & Configuration
- System Operation & Monitoring
- Post-decision Impact Management
- Level of control
- Knowledge of potential risks
- Ability to mitigate harm
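To make the framework concrete, the lifecycle stages and the three attribution criteria above can be sketched as a simple scoring helper. This is an illustrative sketch only: the class names, the 0–2 scale, and the thresholds for High/Medium/Low are assumptions for demonstration, not part of the paper's formal method.

```python
from dataclasses import dataclass

# Lifecycle stages from the analytical framework (Section 3.4).
STAGES = [
    "AI Design & Development",
    "Cloud Deployment & Configuration",
    "System Operation & Monitoring",
    "Post-decision Impact Management",
]

@dataclass
class CriteriaScore:
    """Scores one actor at one lifecycle stage on the three attribution criteria.
    The 0-2 integer scale is an illustrative assumption."""
    control: int          # level of control (0 = none, 2 = full)
    risk_knowledge: int   # knowledge of potential risks
    mitigation: int       # ability to mitigate harm

    def attribution_level(self) -> str:
        """Collapse the three criteria into a coarse responsibility level."""
        total = self.control + self.risk_knowledge + self.mitigation
        if total >= 5:
            return "High"
        if total >= 3:
            return "Medium"
        return "Low"

# Example: a developer during design and development scores high on all criteria.
developer_design = CriteriaScore(control=2, risk_knowledge=2, mitigation=2)
print(developer_design.attribution_level())  # High
```

A real assessment would replace the integer scale with evidence-based judgments per actor and stage; the point here is only that the three criteria combine into a per-stage attribution level.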
3.5. Ethical Considerations
3.6. Limitations
- The study is limited by the availability of public data on private AI deployments.
- Responsibility models may not generalize across all jurisdictions due to variations in legal and regulatory environments.
- The analysis focuses on high-level frameworks rather than technical code-level auditing.
4. Results
4.1. Case Study 1: AI-Powered Diagnostic Tool on a Cloud Platform
- The model was trained on biased data.
- The cloud provider did not ensure transparency in model updates.
- The medical staff over-relied on AI without proper validation.
- AI Developer: Designed and trained the model
- Cloud Provider: Hosted and updated the system
- Healthcare Institution: Deployed and relied on the tool
- Regulator: Had no specific guidelines for cloud-based AI tools in diagnostics
4.2. Case Study 2: Cloud-Hosted Financial AI for Fraud Detection
- The financial institution had minimal insight into the AI’s decision-making logic.
- The cloud provider failed to maintain audit trails for transactions processed during peak loads.
- The developer did not update the fraud detection algorithms to adapt to evolving fraud tactics.
- Developer: Responsible for model maintenance
- Cloud Provider: Provided serverless infrastructure with limited traceability
- Financial Institution: Integrated the AI into its core transaction workflow
- Auditors/Regulators: Discovered gaps post-incident
4.3. Responsibility Attribution Matrix (RAM)
| STAGE | DEVELOPER | CLOUD PROVIDER | USER ORGANIZATION | REGULATOR |
| --- | --- | --- | --- | --- |
| AI DESIGN & DEVELOPMENT | High | Low | Low | Medium |
| CLOUD DEPLOYMENT | Medium | High | Medium | Low |
| SYSTEM OPERATION & MONITORING | Low | High | High | Medium |
| IMPACT MANAGEMENT | Medium | Medium | High | High |
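The matrix lends itself to a simple machine-readable encoding, which makes questions like "who bears primary responsibility at this stage?" directly queryable. The dictionary layout and the `primary_actors` helper below are illustrative choices, not a prescribed data format; stage labels follow the lifecycle stages of Section 3.4.

```python
# Responsibility Attribution Matrix (RAM) encoded as a lookup table.
RAM = {
    "AI Design & Development": {"Developer": "High",   "Cloud Provider": "Low",
                                "User Organization": "Low",    "Regulator": "Medium"},
    "Cloud Deployment":        {"Developer": "Medium", "Cloud Provider": "High",
                                "User Organization": "Medium", "Regulator": "Low"},
    "System Operation":        {"Developer": "Low",    "Cloud Provider": "High",
                                "User Organization": "High",   "Regulator": "Medium"},
    "Impact Management":       {"Developer": "Medium", "Cloud Provider": "Medium",
                                "User Organization": "High",   "Regulator": "High"},
}

def primary_actors(stage: str) -> list[str]:
    """Return the actor(s) with the highest responsibility level at a stage."""
    row = RAM[stage]
    order = {"Low": 0, "Medium": 1, "High": 2}
    top = max(order[level] for level in row.values())
    return [actor for actor, level in row.items() if order[level] == top]

print(primary_actors("System Operation"))  # ['Cloud Provider', 'User Organization']
```

Note that two stages return multiple primary actors, which reflects the paper's central finding that responsibility is distributed rather than concentrated in a single party.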
4.4. Key Findings
- Responsibility is Distributed: No single actor holds full accountability across the system lifecycle.
- Opacity is a Barrier: Cloud abstraction limits visibility into AI operations, especially for end-users.
- Lack of Regulation: There is a policy vacuum regarding responsibility assignment in multi-party AI systems.
- Auditability Gaps: Both cloud and AI systems lack built-in features for comprehensive logging and traceability.
5. Discussion
5.1. Interpretation of Findings
5.2. Ethical Implications
5.3. Legal Consequences
5.4. Technical and Operational Barriers
5.5. Proposed Responsibility Attribution Framework
- Traceability – Each decision point in the AI lifecycle must be auditable through logs, version histories, and access trails.
- Role Clarity – Stakeholders must define and disclose their roles and obligations through transparent Service Level Agreements (SLAs).
- Explainability – AI agents should include explainable components to justify decisions in human-understandable terms.
- Shared Accountability Contracts – Legal documents should be co-signed by developers, cloud providers, and users outlining conditions for fault, redress, and data protection.
- Regulatory Oversight – External audit bodies must be empowered to enforce compliance and investigate failures.
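The traceability principle above can be illustrated with a minimal tamper-evident audit log: each entry commits to its predecessor via a hash chain, so any retroactive modification is detectable during verification. This is a sketch under stated assumptions, not a production design; a real deployment would add cryptographic signing, durable storage, and access control, and the class and field names here are invented for illustration.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry's hash covers the previous entry's hash."""

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, detail: str) -> None:
        """Append an entry chained to the current log head."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = {"actor": actor, "action": action, "detail": detail,
                   "ts": time.time(), "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        payload["hash"] = digest
        self.entries.append(payload)

    def verify(self) -> bool:
        """Recompute the chain; return False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("Cloud Provider", "model_update", "deployed model v2.3")
log.record("User Organization", "inference", "diagnosis request")
print(log.verify())  # True
log.entries[0]["detail"] = "tampered"
print(log.verify())  # False
```

A shared log of this kind gives every stakeholder in the attribution framework (developer, cloud provider, user organization, regulator) a common, verifiable record of who acted at each decision point.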
5.6. Comparison with Current Regulatory Approaches
- EU AI Act: Focuses on high-risk AI systems but does not sufficiently address cloud deployment models.
- NIST AI Risk Management Framework: Emphasizes governance and lifecycle risks but remains voluntary.
- OECD AI Principles: Advocates for transparency and accountability but lacks enforcement mechanisms.
6. Conclusions
- The development of enforceable regulatory guidelines tailored to cloud-hosted AI.
- The integration of technical tools (e.g., audit logs, explainable AI) to support traceable decision-making.
- Formalized contracts between all stakeholders that outline shared roles and liabilities.
- International cooperation to harmonize responsibility standards across jurisdictions.
References
- Reuben Binns, “Algorithmic Accountability and Public Reason,” Philosophy & Technology 31, no. 4 (2018): 543–556. [CrossRef]
- Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence,” in The Cambridge Handbook of Artificial Intelligence, ed. Keith Frankish and William M. Ramsey (Cambridge: Cambridge University Press, 2014), 316–334.
- Ryan Calo, “Robotics and the Lessons of Cyberlaw,” California Law Review 103, no. 3 (2015): 513–563.
- Cloud Security Alliance, “Shared Responsibility Model for Cloud Security,” Cloud Security Alliance, 2020. https://cloudsecurityalliance.org/research/shared-responsibility-model.
- European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (AI Act), 2021, https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence.
- Luciano Floridi and Josh Cowls, “A Unified Framework of Five Principles for AI in Society,” Harvard Data Science Review, 2019. [CrossRef]
- Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi, “The Ethics of Algorithms: Mapping the Debate,” Big Data & Society 3, no. 2 (2016). [CrossRef]
- Ugo Pagallo, The Laws of Robots: Regulating Autonomous Artificial Agents (Cham: Springer, 2013).
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed. (Pearson, 2021).
- R. Islam, M. A. H. Rivin, S. Sultana, M. A. B. Asif, M. Mohammad, and M. Rahaman, “Machine Learning for Power System Stability and Control,” Results in Engineering (2025): 105355. [CrossRef]
- K. R. Ahmed, R. Islam, M. A. Alam, M. A. H. Rivin, M. Alam, and M. S. Rahman, “A Management Information Systems Framework for Sustainable Cloud-Based Smart E-Healthcare Research Information Systems in Bangladesh,” in 2024 Asian Conference on Intelligent Technologies (ACOIT) (IEEE, 2024), 1–5.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).