Submitted: 29 October 2024
Posted: 30 October 2024
Abstract
Keywords:
1. Introduction
- Define the ethical framework: decide which ethical framework the agents will follow. Common frameworks include utilitarianism (maximizing overall happiness or reducing harm) [8]; deontology (following moral rules or duties and respecting rights or autonomy) [9]; virtue ethics (emphasizing traits like fairness, honesty and empathy) [10]; and ethical pluralism (combining ethical theories) [11].
- Design ethical decision-making algorithms: develop algorithms that can process ethical rules and principles, creating models that identify stakeholders, evaluate consequences and resolve conflicts.
- Implement ethical constraints: handle both hard constraints (strict rules that cannot be broken) and soft constraints (rules that should be followed but can be compromised where necessary); a sketch of this distinction follows this list.
- Contextual and situational awareness: address context sensitivity, sense and interpret data, and understand legal, cultural and social norms.
- Learning and adapting ethical behavior: via reinforcement learning (scenarios are based on feedback, i.e., reward or penalty, for ethical or unethical actions), supervised learning (datasets include ethically annotated situations) and human-in-the-loop approaches (allowing humans to intervene or provide feedback).
- Transparency and explainability: design systems able to explain their ethical reasoning process in human-understandable terms and to provide clear justifications for their actions.
- Simulating moral dilemmas: subject agents to various moral dilemmas in controlled environments, adjusting the ethical decision-making algorithms based on performance to better align with moral outcomes.
- Incorporate ethical review boards: interdisciplinary committees that assess the ethical behavior of the systems and ensure adherence to standards, laws and regulations.
- Continuous monitoring and updates: essential for ongoing ethics supervision and self-correction mechanisms.
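To make the distinction between hard and soft constraints concrete, the following is a minimal sketch in standard Prolog. The predicates action/1, violates_hard/1 and penalty/2 are hypothetical names introduced here for illustration only; the sketch assumes hard constraints filter actions outright while soft constraints merely accumulate penalties.

% A hard constraint can never be broken: any action that violates it
% is rejected outright.
permissible(Action) :-
    action(Action),
    \+ violates_hard(Action).

% A soft constraint only adds a penalty; the preferred action is the
% permissible one with the lowest total penalty.
total_penalty(Action, Cost) :-
    findall(P, penalty(Action, P), Ps),
    sum_list(Ps, Cost).

best_action(Best) :-
    findall(Cost-A, (permissible(A), total_penalty(A, Cost)), Pairs),
    keysort(Pairs, [_-Best|_]).

% Toy knowledge base (hypothetical).
action(treat_in_icu).
action(discharge).
violates_hard(discharge).   % e.g. discharging an unstable patient
penalty(treat_in_icu, 2).   % e.g. resource cost

Under this toy knowledge base, ?- best_action(A). yields A = treat_in_icu: discharge is excluded by the hard constraint, and among the remaining actions the one with the lowest accumulated penalty wins.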
2. Literature Review
2.1. Overview of Existing Work
2.2. Theoretical Framework
- Utilitarianism, as originally proposed by Jeremy Bentham [8] and later expanded by John Stuart Mill [19], advocates for decisions that maximize overall happiness or minimize harm. This approach has been integrated into AI systems where the objective is to calculate and optimize outcomes based on predicted consequences. However, one challenge with utilitarian approaches in AI is that they often fail to capture the complexities of individual rights and justice.
- Deontological ethics, based on Immanuel Kant’s work [9], focuses on the adherence to moral rules or duties. AI systems that follow deontological frameworks are programmed to prioritize rules over outcomes. This has been explored in logic programming, as demonstrated by [16], where rules-based systems are designed to follow strict ethical guidelines. The limitation here is that such systems might struggle in scenarios where strict rule-following conflicts with other moral considerations, such as context-sensitive judgment.
- Virtue ethics, which stems from Aristotle’s philosophy [10], focuses on the development of moral character and emphasizes virtues like fairness, honesty, and empathy. While less commonly implemented in AI, this framework is critical in understanding how AI systems should act in ways that mirror human ethical behavior. Integrating virtue ethics into logic programming remains an open challenge due to the difficulty in codifying abstract traits like empathy into logical rules. In modern applications, virtue ethics is used to encourage individuals to develop good character traits and think about ethical behavior as part of their overall moral growth, rather than as isolated acts of right or wrong.
- Ethical pluralism combines different ethical theories, such as utilitarianism, deontology, and virtue ethics. One such framework, known as principlism, offers a pluralistic approach to ethical decision-making in healthcare by balancing multiple ethical principles like autonomy, beneficence, non-maleficence, and justice [11]. A sketch contrasting how the first two frameworks can be encoded as logic rules follows this list.
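How the first two of these frameworks diverge can be made concrete in a logic program. The following is a minimal sketch in standard Prolog; the predicates duty_forbids/1 and utility/2 and the lying scenario are hypothetical illustrations, not the encoding used later in this paper.

% Deontological test: an action is acceptable only if no duty forbids
% it, regardless of its consequences.
deontically_ok(Action) :-
    \+ duty_forbids(Action).

% Utilitarian test: an action is acceptable if no alternative yields
% strictly higher expected utility.
utilitarian_ok(Action) :-
    utility(Action, U),
    \+ (utility(_, V), V > U).

% Toy scenario: lying maximizes utility but violates a duty.
duty_forbids(lie).
utility(lie, 10).
utility(tell_truth, 6).

The two tests disagree on lie (utilitarian_ok(lie) succeeds while deontically_ok(lie) fails), which is precisely the kind of conflict that ethical pluralism must adjudicate.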
2.3. Identification of Gaps
- Lack of Transparency in Non-Symbolic Methods: Machine learning systems, though powerful, often lack the transparency needed for ethical decision-making, especially in high-stakes areas like healthcare and autonomous systems. Current explainability tools, like LIME, are limited in their ability to provide comprehensive insight into the decision-making process. This paper addresses this gap by proposing CLP as a more transparent alternative that offers traceability in ethical reasoning.
- Insufficient Application of Symbolic Methods in Real-World Scenarios: While logic programming has been widely discussed as a theoretical framework for ethical AI, there is a notable gap in practical applications. Much of the existing research, such as the work of [16], has focused on hypothetical moral dilemmas or simplified simulations, but few studies have applied these models to real-world systems. This paper bridges this gap by applying CLP to concrete examples in healthcare, showcasing how these methods can be operationalized in practice.
- Over-reliance on Rigid Ethical Frameworks: Many AI systems that employ symbolic methods, such as rule-based systems, rely heavily on rigid ethical frameworks that may not adapt well to complex, real-world situations where exceptions or contextual factors need to be considered. This paper addresses this limitation by proposing the use of defeasible rules in CLP, allowing for more context-sensitive decision-making that can better handle ambiguous or conflicting ethical scenarios; a sketch of such a defeasible rule follows this list.
- Limited Exploration of Multi-disciplinary Collaboration: While there is widespread acknowledgment of the importance of interdisciplinary collaboration in computational ethics, many studies still adopt a siloed approach, focusing on either the technical or philosophical aspects of the problem. This paper emphasizes the integration of philosophy, computer science, and real-world stakeholder engagement, advocating for a more holistic approach to ethical AI development.
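The defeasible rules advocated above can be illustrated with a minimal sketch in standard Prolog, using a simplified version of the ICU transfer scenario discussed later; the predicate names (default_permitted/1, exception/1, stable/1, fragile/1) are hypothetical and serve only to show the pattern.

% A defeasible rule: an action is permitted by default unless a
% contextual exception holds.
permitted(Action) :-
    default_permitted(Action),
    \+ exception(Action).

% Default: a stable patient may be transferred out of the ICU.
default_permitted(transfer(Patient)) :-
    stable(Patient).

% Exception: a transfer that is normally allowed is blocked when the
% patient is too fragile to be moved.
exception(transfer(Patient)) :-
    fragile(Patient).

stable(pd).
fragile(pd).

Here ?- permitted(transfer(pd)). fails: the default rule fires, but the contextual exception overrides it, exactly the context-sensitive behavior that rigid rule sets cannot express.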
3. Methodology
- Information: information has quality; how should information be selected, how should its quality be evaluated, and how can the problems of incomplete, ambiguous, contradictory or nebulous information be overcome?
- Knowledge: is the knowledge complete?
- Decision: is the decision morally acceptable?
- Consider a program P and two different answer sets of P; the extensions of a predicate p may differ between them. The truth value of a theorem T is therefore computed by the meta-predicate demo, which distinguishes what is provably true, provably false, and unknown:
- T → demo(T, true);
- ¬T → demo(T, false); and
- not T, not ¬T → demo(T, unknown).
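Extended logic programs with explicit negation are not directly executable in standard Prolog, but the three-valued demo predicate can be approximated by representing explicitly false facts with a neg/1 wrapper term. A minimal sketch, with a hypothetical toy knowledge base:

:- dynamic fact/1.

% demo(T, V): V is true if T is recorded as a fact, false if its
% explicit negation is recorded, and unknown otherwise.
demo(T, true)  :- fact(T), !.
demo(T, false) :- fact(neg(T)), !.
demo(_, unknown).

% Toy knowledge base.
fact(survival_rate(gb, high)).
fact(neg(survival_rate(pd, high))).

A query such as ?- demo(cost(pd, low), V). returns V = unknown, the truth value that the quality-of-information questions above are designed to confront.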
- Program 1 - The extended logic program for predicate
- Program 2 - The extended logic program for predicate
- Program 3 - The extended logic program for predicate
- Program 4 - The extended logic program for predicate
- Case 1: Mr. PD, an 81-year-old man with a history of cardiopathy and diabetes, is admitted to ICU with Acute Respiratory Distress Syndrome (ARDS). Despite advancements, his chances of survival are low, and his quality-adjusted life expectancy post-ARDS is expected to be poor. During a medical meeting, the assistant physician asks whether ICU resources should continue to be used on Mr. PD, given the survival rates, treatment costs, and expected quality of life.
- Case 2: Mrs. GB, a 36-year-old woman, is hospitalized after a car accident and diagnosed with sepsis, Acute Lung Injury (ALI), and a Glasgow coma scale of 3. She requires ICU care, but there are limited beds, meaning Mr. PD would need to be transferred. While moving Mr. PD poses risks due to his fragile state, Mrs. GB’s younger age and better prognosis suggest a higher likelihood of recovery with better quality of life.
- The continuous logic program for predicate survival-rate:
{
(not survival-rate(X, Y) and not abducible(survival-rate(X, Y)) → ¬survival-rate(X, Y)),
(survival-rate(X, unknown-survival-rate) → abducible(survival-rate(X, Y))),
(ards(X) and pao2(X, low) and evaluate(X, Y) → survival-rate(X, Y)),
(abducible(survival-rate(gb, 0.5))),
?((abducible(survival-rate(X, Y)) or abducible(survival-rate(X, Z))) and ¬(abducible(survival-rate(X, Y)) and abducible(survival-rate(X, Z))))
/ This invariant states that the exceptions to the predicate survival-rate follow an exclusive or /
}
- The continuous logic program for predicate survival-quality:
{
(not survival-quality(X, Y) and not exception(survival-quality(X, Y)) → ¬survival-quality(X, Y)),
(survival-quality(X, unknown-survival-quality) → abducible(survival-quality(X, Y))),
(survival-quality(gb, 0.8)),
(abducible(survival-quality(pd, 0.1))),
?((exception(survival-quality(X, Y)) or exception(survival-quality(X, Z))) and ¬(exception(survival-quality(X, Y)) and exception(survival-quality(X, Z))))
}
- The continuous logic program for predicate cost:
{
(not cost(X, Y) and not abducible(cost(X, Y)) → ¬cost(X, Y)),
(cost(X, unknown-cost) → abducible(cost(X, Y))),
(cost(gb, unknown-cost)),
(cost(pd, unknown-cost)),
?((exception(cost(X, Y)) or exception(cost(X, Z))) and ¬(exception(cost(X, Y)) and exception(cost(X, Z))))
}
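The abducibles in these programs stand for null or unknown values that may be assumed during reasoning. A minimal runnable approximation in standard Prolog, assuming abducibles are enumerated as candidate facts and the exclusive-or invariant is checked explicitly; the helper names holds/1 and invariant_ok/2 are hypothetical, and the paper's hyphenated predicate names are rendered with underscores for valid Prolog syntax.

:- dynamic survival_rate/2.

% Known facts.
survival_quality(gb, 0.8).

% Abducible (assumable) values for missing information.
abducible(survival_rate(gb, 0.5)).
abducible(survival_quality(pd, 0.1)).

% holds(F): F is accepted if it is a known fact, or if it can be
% assumed because it is declared abducible.
holds(F) :- call(F), !.
holds(F) :- abducible(F).

% Exclusive-or invariant: at most one value may be assumed for a given
% predicate and individual.
invariant_ok(Pred, X) :-
    findall(Y, (G =.. [Pred, X, Y], abducible(G)), Ys),
    length(Ys, N),
    N =< 1.

For example, ?- holds(survival_rate(gb, 0.5)), invariant_ok(survival_rate, gb). succeeds: the missing survival rate for Mrs. GB is filled in by assumption, and only one such assumption is on the table, satisfying the invariant.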
4. Ethical Challenges In Logic Programming
5. Conclusions
Acknowledgements
References
- Bostrom, N.; Yudkowsky, E. The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence; Frankish, K.; Ramsey, W.M., Eds.; Cambridge University Press, 2014; pp. 316–334.
- Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. Harvard Data Science Review 2019.
- Bryson, J.J.; Diamantis, M.E.; Grant, T.D. Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law 2017, 25, 273–291.
- Asaro, P.M. What should we want from a robot ethic? International Review of Information Ethics 2006, 6, 9–16.
- Van de Poel, I. Embedding values in artificial intelligence (AI) systems. Minds and Machines 2020, 30, 385–409.
- Anderson, M.; Anderson, S.L. Machine Ethics; Cambridge University Press, 2011.
- Cavalcante, J.V.; Pereira, L.M. Cognitive agents for machine ethics. Proceedings of the 18th Brazilian Symposium on Artificial Intelligence, 2019, pp. 345–354.
- Bentham, J. An Introduction to the Principles of Morals and Legislation; Clarendon Press: Oxford, 1789.
- Kant, I. Groundwork of the Metaphysics of Morals, revised ed.; Cambridge University Press: Cambridge. First published in 1785.
- Aristotle. Nicomachean Ethics; Cambridge University Press, 2009. Original work published ca. 350 BCE.
- Beauchamp, T.L.; Childress, J.F. Principles of Biomedical Ethics, 7th ed.; Oxford University Press: New York, NY, 2012.
- Winfield, A.F.; Michael, K.; Pitt, J.; Evers, V. Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE 2019, 107, 509–517.
- Neves, J.; Martins, M.R.; Vilhena, J.; Neves, J.; Gomes, S.; Abelha, A.; Machado, J.; Vicente, H. A soft computing approach to kidney diseases evaluation. Journal of Medical Systems 2015, 39.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall, 2010.
- Pereira, L.M.; Saptawijaya, A. Programming Machine Ethics; Springer, 2016.
- Topol, E.J. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again; Basic Books, 2019.
- Bonnefon, J.F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Nature 2016, 536, 425–427.
- Mill, J.S. Utilitarianism; Parker, Son, and Bourn: London, 1863.
- Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press, 2009.
- Kakas, A.C.; Moraitis, P. Argumentation based decision making for autonomous agents. Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2003), 2003, pp. 883–890.
- Miranda, M.; Machado, J.; Abelha, A.; Pontes, G.; Neves, J. A step towards medical ethics modeling. IFIP Advances in Information and Communication Technology 2010, 335, 27–36.
- Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; Chatila, R.; Herrera, F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 2020, 58, 82–115.
- Kakas, A.C.; Toni, F. Computing argumentation in logic programming. Journal of Logic and Computation 1999, 9, 515–562.
- Pereira, L.M.; Anh, H.P. Agent morality via counterfactuals in logic programming. Journal of Applied Logic 2009, 7, 523–534.
- Neves, J. A logic interpreter to handle time and negation in logic data bases. Proceedings of the 1984 ACM Annual Conference on Computer Science: The Fifth Generation Challenge, San Francisco, CA, USA, October 1984; Muller, R.L.; Pottmyer, J.J., Eds.; ACM, 1984, pp. 50–54.
- Kakas, A.C.; Sadri, F., Eds. Computational Logic: Logic Programming and Beyond: Essays in Honour of Robert A. Kowalski, Part I; Lecture Notes in Computer Science, Vol. 2407; Springer: Berlin, Heidelberg, 2002.
- Machado, J.; Miranda, M.; Pontes, G.; Abelha, A.; Neves, J. Morality in group decision support systems in medicine. Intelligent Distributed Computing IV: Proceedings of the 4th International Symposium on Intelligent Distributed Computing (IDC 2010), Tangier, Morocco, September 2010; Essaaidi, M.; Malgeri, M.; Badica, C., Eds.; 2010, Vol. 315, pp. 191–200.
- Neves, J.; Machado, J.; Analide, C.; Abelha, A.; Brito, L. The halt condition in genetic programming. Progress in Artificial Intelligence: 13th Portuguese Conference on Artificial Intelligence (EPIA 2007), Guimarães, Portugal, December 3–7, 2007; Neves, J.; Santos, M.F.; Machado, J., Eds.; Lecture Notes in Computer Science, Vol. 4874; Springer, 2007, pp. 160–169.
- Oliveira, D.; Ferreira, D.; Abreu, N.; Leuschner, P.; Abelha, A.; Machado, J. Prediction of COVID-19 diagnosis based on openEHR artefacts. Scientific Reports 2022, 12.
- Bickley, L.S.; Szilagyi, P.G. Bates’ Guide to Physical Examination and History Taking, 11th ed.; Lippincott Williams & Wilkins, 2012. The SOAP format is discussed extensively as part of clinical history documentation.
- Portela, F.; Cabral, A.; Abelha, A.; Salazar, M.; Quintas, C.; Machado, J.; Neves, J.; Santos, M.F. Knowledge acquisition process for intelligent decision support in critical health care. Information Systems and Technologies for Enhancing Health and Social Care 2013, pp. 55–68.
