Submitted: 11 October 2024
Posted: 15 October 2024
Abstract
Keywords:
I. Introduction
II. Related Work
A. Neurosymbolic AI
1) Challenges and Opportunities
B. Responsible AI
1) Explainability
2) Bias or Fairness
3) Robustness or Safety
4) Interpretability or Transparency
5) Privacy
6) Challenges and Opportunities
III. Method
A. Search Strategies
B. Inclusion and Exclusion Criteria
C. Information Extraction
IV. Findings
V. Conclusion
A. Inferences
B. Limitations
C. Future Work
References
- Garcez, A.d., M. Gori, L.C. Lamb, L. Serafini, M. Spranger, and S.N. Tran. 2019. Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. arXiv preprint arXiv:1905.06088.
- LeCun, Y., Y. Bengio, and G. Hinton. 2015. Deep learning. Nature 521: 436–444.
- Wan, Z., C.K. Liu, H. Yang, C. Li, H. You, Y. Fu, C. Wan, T. Krishna, Y. Lin, and A. Raychowdhury. 2024. Towards cognitive AI systems: A survey and prospective on neuro-symbolic AI. arXiv preprint arXiv:2401.01040.
- Krizhevsky, A., I. Sutskever, and G.E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25.
- Hinton, G.E., S. Osindero, and Y.W. Teh. 2006. A fast learning algorithm for deep belief nets. Neural Computation 18: 1527–1554.
- Hitzler, P., and M.K. Sarker. 2022. Neuro-Symbolic Artificial Intelligence: The State of the Art. IOS Press.
- Campagner, A., and F. Cabitza. 2020. Back to the feature: A neural-symbolic perspective on explainable AI. In Machine Learning and Knowledge Extraction: 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Dublin, Ireland, August 25–28, 2020, Proceedings. Springer, pp. 39–55.
- Inala, J.P. 2022. Neurosymbolic Learning for Robust and Reliable Intelligent Systems. PhD thesis, Massachusetts Institute of Technology.
- Wagner, B.J., and A. d’Avila Garcez. 2024. A neurosymbolic approach to AI alignment. Neurosymbolic Artificial Intelligence, 1–12.
- Mitchell, M., S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I.D. Raji, and T. Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229.
- Speith, T. 2022. A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 2239–2250.
- Hort, M., Z. Chen, J.M. Zhang, M. Harman, and F. Sarro. 2024. Bias mitigation for machine learning classifiers: A comprehensive survey. ACM Journal on Responsible Computing 1: 1–52.
- Upreti, R., P.G. Lind, A. Elmokashfi, and A. Yazidi. 2024. Trustworthy machine learning in the context of security and privacy. International Journal of Information Security 23: 2287–2314.
- Kim, S., J.Y. Cho, and B.G. Lee. 2024. An exploratory study on the trustworthiness analysis of generative AI. Journal of Internet Computing and Services 25: 79–90.
- Hamilton, K., A. Nayak, B. Božić, and L. Longo. 2022. Is neuro-symbolic AI meeting its promises in natural language processing? A structured review. Semantic Web, 1–42.
- Delong, L.N., R.F. Mir, M. Whyte, Z. Ji, and J.D. Fleuriot. 2023. Neurosymbolic AI for reasoning on graph structures: A survey. arXiv preprint arXiv:2302.07200.
- Cheng, L., K.R. Varshney, and H. Liu. 2021. Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research 71: 1137–1181.
- Besold, T.R., A.d. Garcez, K. Stenning, L. van der Torre, and M. van Lambalgen. 2017. Reasoning in non-probabilistic uncertainty: Logic programming and neural-symbolic computing as examples. Minds and Machines 27: 37–77.
- Dong, H., J. Mao, T. Lin, C. Wang, L. Li, and D. Zhou. 2019. Neural logic machines. arXiv preprint arXiv:1904.11694.
- d’Avila Garcez, A., and L.C. Lamb. 2020. Neurosymbolic AI: The 3rd wave. arXiv preprint arXiv:2012.05876.
- Li, T., and V. Srikumar. 2019. Augmenting neural networks with first-order logic. arXiv preprint arXiv:1906.06298.
- Cunnington, D., M. Law, J. Lobo, and A. Russo. 2022. Neuro-symbolic learning of answer set programs from raw data. arXiv preprint arXiv:2205.12735.
- Eskov, V.M., M.A. Filatov, G. Gazya, and N. Stratan. 2021. Artificial intellect with artificial neural networks. Russian Journal of Cybernetics 2: 44–52.
- Shakarian, P., and G.I. Simari. 2022. Extensions to generalized annotated logic and an equivalent neural architecture. In 2022 Fourth International Conference on Transdisciplinary AI (TransAI). IEEE, pp. 63–70.
- Jobin, A., M. Ienca, and E. Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–399.
- Arrieta, A.B., N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, et al. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58: 82–115.
- Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Computing Surveys 54: 1–35.
- Floridi, L., J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, et al. 2018. AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28: 689–707.
- Mittelstadt, B.D., P. Allo, M. Taddeo, S. Wachter, and L. Floridi. 2016. The ethics of algorithms: Mapping the debate. Big Data & Society 3: 2053951716679679.
- Radclyffe, C., M. Ribeiro, and R.H. Wortham. 2023. The assessment list for trustworthy artificial intelligence: A review and recommendations. Frontiers in Artificial Intelligence 6: 1020592.
- Lu, Q., L. Zhu, X. Xu, J. Whittle, D. Zowghi, and A. Jacquet. 2024. Responsible AI pattern catalogue: A collection of best practices for AI governance and engineering. ACM Computing Surveys 56: 1–35.
- Angwin, J., J. Larson, S. Mattu, and L. Kirchner. 2022. Machine bias. In Ethics of Data and Analytics. Auerbach Publications, pp. 254–264.
- Dastin, J. 2022. Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics. Auerbach Publications, pp. 296–299.
- Wörsdörfer, M. 2023. The EU’s Artificial Intelligence Act: An ordoliberal assessment. AI and Ethics, 1–16.
- Khan, A.A., S. Badshah, P. Liang, M. Waseem, B. Khan, A. Ahmad, M. Fahmideh, M. Niazi, and M.A. Akbar. 2022. Ethics of AI: A systematic literature review of principles and challenges. In Proceedings of the 26th International Conference on Evaluation and Assessment in Software Engineering, pp. 383–392.
- Alzubaidi, L., A. Al-Sabaawi, J. Bai, A. Dukhan, A.H. Alkenani, A. Al-Asadi, H.A. Alwzwazy, M. Manoufali, M.A. Fadhel, A. Albahri, et al. 2023. Towards risk-free trustworthy artificial intelligence: Significance and requirements. International Journal of Intelligent Systems 2023: 4459198.
- Prem, E. 2023. From ethical AI frameworks to tools: A review of approaches. AI and Ethics 3: 699–716.
- Liu, H., Y. Wang, W. Fan, X. Liu, Y. Li, S. Jain, Y. Liu, A. Jain, and J. Tang. 2022. Trustworthy AI: A computational perspective. ACM Transactions on Intelligent Systems and Technology 14: 1–59.
- Hoffman, R.R., S.T. Mueller, G. Klein, and J. Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
- Blasch, E., T. Pham, C.Y. Chong, W. Koch, H. Leung, D. Braines, and T. Abdelzaher. 2021. Machine learning/artificial intelligence for sensor data fusion–opportunities and challenges. IEEE Aerospace and Electronic Systems Magazine 36: 80–93.
- Hendrycks, D., and K. Gimpel. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136.
- Goodfellow, I.J., J. Shlens, and C. Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- Raji, I.D., and R. Dobbe. 2023. Concrete problems in AI safety, revisited. arXiv preprint arXiv:2401.10899.
- Doshi-Velez, F., and B. Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
- Gilpin, L.H., D. Bau, B.Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, pp. 80–89.
- Sweeney, L. 2002. k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10: 557–570.
- Shokri, R., and V. Shmatikov. 2015. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321.
- Whittlestone, J., and J. Clark. 2021. Why and how governments should monitor AI development. arXiv preprint arXiv:2108.12427.
- Fjeld, J., N. Achten, H. Hilligoss, A. Nagy, and M. Srikumar. 2020. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication.
- Lundberg, S., and S.I. Lee. 2016. An unexpected unity among methods for interpreting model predictions. arXiv preprint arXiv:1611.07478.
- Papernot, N., P. McDaniel, S. Jha, M. Fredrikson, Z.B. Celik, and A. Swami. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, pp. 372–387.
- Abadi, M., A. Chu, I. Goodfellow, H.B. McMahan, I. Mironov, K. Talwar, and L. Zhang. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318.
- Wei, K., J. Li, M. Ding, C. Ma, H. Su, B. Zhang, and H.V. Poor. 2021. User-level privacy-preserving federated learning: Analysis and performance optimization. IEEE Transactions on Mobile Computing 21: 3388–3401.
- Wagner, B., and A. d’Avila Garcez. 2021. Neural-symbolic integration for fairness in AI. CEUR Workshop Proceedings 2846.
- Amado, L., R.F. Pereira, and F. Meneguzzi. 2023. Robust neuro-symbolic goal and plan recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, pp. 11937–11944.
- Pisano, G., G. Ciatto, R. Calegari, A. Omicini, et al. 2020. Neuro-symbolic computation for XAI: Towards a unified model. In CEUR Workshop Proceedings, Vol. 2706, pp. 101–117.
- Oltramari, A., J. Francis, C. Henson, K. Ma, and R. Wickramarachchi. 2020. Neuro-symbolic architectures for context understanding. In Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges. IOS Press, pp. 143–160.
- Venugopal, D., V. Rus, and A. Shakya. 2021. Neuro-symbolic models: A scalable, explainable framework for strategy discovery from big edu-data. In Proceedings of the 2nd Learner Data Institute Workshop, held in conjunction with the 14th International Educational Data Mining Conference.
- Himmelhuber, A., S. Grimm, S. Zillner, M. Joblin, M. Ringsquandl, and T. Runkler. 2021. Combining sub-symbolic and symbolic methods for explainability. In Rules and Reasoning: 5th International Joint Conference, RuleML+RR 2021, Leuven, Belgium, September 13–15, 2021, Proceedings. Springer, pp. 172–187.
- Bellucci, M. 2023. Symbolic approaches for explainable artificial intelligence. PhD thesis, Normandie Université.
- Dwivedi, R., D. Dave, H. Naik, S. Singhal, R. Omer, P. Patel, B. Qian, Z. Wen, T. Shah, G. Morgan, et al. 2023. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys 55: 1–33.
- Mileo, A. 2024. Towards a neuro-symbolic cycle for human-centered explainability. Neurosymbolic Artificial Intelligence, 1–13.
- Thota, S.R., and S. Arora. 2024. Neurosymbolic AI for explainable recommendations in frontend UI design: Bridging the gap between data-driven and rule-based approaches.
- Xie, X., K. Kersting, and D. Neider. 2022. Neuro-symbolic verification of deep neural networks. arXiv preprint arXiv:2203.00938.
- Padalkar, P., N. Ślusarz, E. Komendantskaya, and G. Gupta. 2024. A neurosymbolic framework for bias correction in CNNs. arXiv preprint arXiv:2405.15886.
- Hooshyar, D., and Y. Yang. 2021. Neural-symbolic computing: A step toward interpretable AI in education. Bulletin of the Technical Committee on Learning Technology 21: 2–6.
- Bennetot, A., G. Franchi, J. Del Ser, R. Chatila, and N. Diaz-Rodriguez. 2022. Greybox XAI: A neural-symbolic learning framework to produce interpretable predictions for image classification. Knowledge-Based Systems 258: 109947.
- Smirnova, A., J. Yang, D. Yang, and P. Cudre-Mauroux. 2022. Nessy: A neuro-symbolic system for label noise reduction. IEEE Transactions on Knowledge and Data Engineering 35: 8300–8311.
- Piplai, A., A. Kotal, S. Mohseni, M. Gaur, S. Mittal, and A. Joshi. 2023. Knowledge-enhanced neurosymbolic artificial intelligence for cybersecurity and privacy. IEEE Internet Computing 27: 43–48.
- Zeng, Z. 2024. Neurosymbolic Learning and Reasoning for Trustworthy AI. PhD thesis, UCLA.
- Agiollo, A., and A. Omicini. 2023. Measuring trustworthiness in neuro-symbolic integration. In 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS). IEEE, pp. 1–10.
- Kosasih, E., E. Papadakis, G. Baryannis, and A. Brintrup. 2023. Explainable artificial intelligence in supply chain management: A systematic review of neurosymbolic approaches. International Journal of Production Research.
- Gaur, M., and A. Sheth. 2024. Building trustworthy NeuroSymbolic AI systems: Consistency, reliability, explainability, and safety. AI Magazine 45: 139–155.
- Selbst, A.D., D. Boyd, S.A. Friedler, S. Venkatasubramanian, and J. Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68.
- Molnar, C. 2020. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Leanpub.
- Müller, V.C. 2020. Ethics of artificial intelligence and robotics. In The Stanford Encyclopedia of Philosophy. Stanford University.
- Di Maio, P. 2020. Neurosymbolic knowledge representation for explainable and trustworthy AI.


| Search String | Count |
|---|---|
| explainability AND (neurosymbolic AI OR symbolic AI OR NSAI) | 3,040 |
| explainability AND ((machine learning OR ML) OR AI) | 157,000 |
| (bias OR fairness) AND (neurosymbolic AI OR symbolic AI OR NSAI) | 3,260 |
| (bias OR fairness) AND ((machine learning OR ML) OR AI) | 3,280,000 |
| (robustness OR safety) AND (neurosymbolic AI OR symbolic AI OR NSAI) | 3,310 |
| (robustness OR safety) AND ((machine learning OR ML) OR AI) | 4,490,000 |
| (interpretability OR transparency) AND (neurosymbolic AI OR symbolic AI OR NSAI) | 1,420 |
| (interpretability OR transparency) AND ((machine learning OR ML) OR AI) | 72,200 |
| privacy AND (neurosymbolic AI OR symbolic AI OR NSAI) | 2,540 |
| privacy AND ((machine learning OR ML) OR AI) | 5,140,000 |
| Search String | Count |
|---|---|
| explainability AND (neurosymbolic AI OR symbolic AI OR NSAI) | 11 |
| explainability AND ((machine learning OR ML) OR AI) | 2001 |
| (bias OR fairness) AND (neurosymbolic AI OR symbolic AI OR NSAI) | 3 |
| (bias OR fairness) AND ((machine learning OR ML) OR AI) | 2001 |
| (robustness OR safety) AND (neurosymbolic AI OR symbolic AI OR NSAI) | 7 |
| (robustness OR safety) AND ((machine learning OR ML) OR AI) | 192 |
| (interpretability OR transparency) AND (neurosymbolic AI OR symbolic AI OR NSAI) | 2 |
| (interpretability OR transparency) AND ((machine learning OR ML) OR AI) | 182 |
| privacy AND (neurosymbolic AI OR symbolic AI OR NSAI) | 1 |
| privacy AND ((machine learning OR ML) OR AI) | 2001 |
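The tables above record the Boolean search strings used in the review together with their result counts. For readers who want to reproduce or extend the queries, the sketch below shows one way to assemble the same strings programmatically; the names `PRINCIPLES`, `NSAI_SCOPE`, `GENERAL_SCOPE`, and `boolean_query` are assumptions made for this illustration, not part of the published search protocol.

```python
# Illustrative reconstruction of the Boolean search strings in the tables
# above. The dictionary and function names are invented for this sketch;
# the search terms themselves are copied from the tables.

PRINCIPLES = {
    "explainability": ["explainability"],
    "bias/fairness": ["bias", "fairness"],
    "robustness/safety": ["robustness", "safety"],
    "interpretability/transparency": ["interpretability", "transparency"],
    "privacy": ["privacy"],
}

# Scope terms; the inner parentheses mirror the nesting printed above.
NSAI_SCOPE = ["neurosymbolic AI", "symbolic AI", "NSAI"]
GENERAL_SCOPE = ["(machine learning OR ML)", "AI"]

def boolean_query(terms: list[str], scope: list[str]) -> str:
    """Join principle terms and scope terms into one Boolean search string."""
    left = " OR ".join(terms)
    if len(terms) > 1:
        left = f"({left})"
    return f"{left} AND ({' OR '.join(scope)})"

if __name__ == "__main__":
    for terms in PRINCIPLES.values():
        print(boolean_query(terms, NSAI_SCOPE))     # NSAI-specific string
        print(boolean_query(terms, GENERAL_SCOPE))  # general ML/AI string
```

Run as-is, this prints each principle's two strings exactly as they appear in the tables, e.g. `(bias OR fairness) AND (neurosymbolic AI OR symbolic AI OR NSAI)`.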
| Principle | Reference | Year | Technique |
|---|---|---|---|
| Explainability | Pisano et al. (2020) | 2020 | prototype integrating symbolic logic into sub-symbolic systems |
| | Oltramari et al. (2020) | 2020 | hybrid system combining data-driven perception with logical reasoning |
| | Campagner and Cabitza (2020) | 2020 | proof of concept using Logic Tensor Networks and rule-based systems |
| | Venugopal et al. (2021) | 2021 | framework providing uncertainty estimates for its predictions |
| | Himmelhuber et al. (2021) | 2021 | a fidelity metric using graph neural networks and symbolic logic |
| | d’Avila Garcez and Lamb (2020) | 2023 | fidelity and soundness measures based on distributed and local symbols |
| | Bellucci (2023) | 2023 | ontology-based image classifier using a structured knowledge base |
| | Dwivedi et al. (2023) | 2023 | data visualization, feature importance analysis, partial dependence plots (PDPs), permutation feature importance, SHAP values, counterfactual and contrastive explanations, post-hoc interpretation of model predictions with LIME, and software libraries such as Skater or AIX360 |
| | Mileo (2024) | 2024 | framework to integrate human feedback, causal reasoning, and knowledge injection |
| | Wagner and d’Avila Garcez (2024) | 2024 | framework for a logic-based querying system |
| | Thota and Arora (2024) | 2024 | human-centric interactive interface with knowledge graph integration |
| Bias | Wagner and d’Avila Garcez (2021) | 2021 | framework using the SHAP measure with demographic parity and disparate impact metrics |
| | Xie et al. (2022) | 2022 | Neuro-Symbolic Assertion Language to formalize fairness properties, enforced with specification networks |
| | Padalkar et al. (2024) | 2024 | NeSyBiCor framework using Answer Set Programming with a semantic similarity measure |
| Interpretability | Hooshyar and Yang (2021) | 2021 | framework focused on knowledge representation, symbolic constraints, and knowledge extraction |
| | Bennetot et al. (2022) | 2022 | Greybox XAI framework with a deep neural network (DNN) building an Explainable Latent Space |
| Robustness | Smirnova et al. (2022) | 2022 | Nessy system using expectation regularization and data sampling |
| | Inala (2022) | 2022 | state machines and neurosymbolic transformers for formal verification |
| | Amado et al. (2023) | 2023 | Predictive Plan Recognition (PPR) framework that removes noise and gaps |
| Privacy | Piplai et al. (2023) | 2023 | framework combining differential privacy, secure multi-party computation, and synthetic data generation |
| Trustworthiness | Zeng (2024) | 2024 | framework integrating differentiable learning with graph neural network rewiring |
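The Bias row above pairs SHAP-based attribution with two standard group fairness metrics. The following minimal sketch shows how demographic parity difference and disparate impact can be computed from binary predictions and a protected attribute; it is a generic illustration with invented toy data, not the Wagner and d’Avila Garcez (2021) implementation.

```python
# Generic computation of the two fairness metrics named in the Bias row:
# demographic parity difference and disparate impact ratio. The arrays
# below are toy data invented for demonstration.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """P(yhat=1 | group=0) - P(yhat=1 | group=1); 0 means parity."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return rate_a - rate_b

def disparate_impact_ratio(y_pred, group):
    """P(yhat=1 | group=1) / P(yhat=1 | group=0); the 80% rule flags values below 0.8."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return rate_b / rate_a

if __name__ == "__main__":
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # binary model decisions
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute (0/1)
    print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
    print(disparate_impact_ratio(y_pred, group))         # 0.25 / 0.75 ~ 0.33
```

In a SHAP-based pipeline, the same group-rate comparison would be applied after attributing each positive decision to its input features, so that violations can be traced back to the features driving them.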
| Principle | Reference | Year | Technique |
|---|---|---|---|
| Trustworthiness | Agiollo and Omicini (2023) | 2023 | NeSy system combining various RAI principles |
| | Kosasih et al. (2023) | 2023 | hybrid architecture using neural network data-driven learning and symbolic rules |
| | Gaur and Sheth (2024) | 2024 | CREST framework combining procedural and graph-based knowledge with neural network capabilities |
| Robustness | Amado et al. (2023) | 2023 | Predictive Plan Recognition (PPR) framework that removes noise and gaps |
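Several rows in both tables share the same high-level pattern: a neural model proposes, and a symbolic layer constrains. The sketch below illustrates that pattern in miniature; the scores, rule, and names are all invented for illustration and correspond to no specific surveyed system.

```python
# A generic miniature of the neural + symbolic pattern recurring in the
# tables above: a (stand-in) neural scorer proposes labels, and a symbolic
# rule layer vetoes those that violate background knowledge.

# Stand-in for neural network class scores (e.g., softmax outputs).
SCORES = {"cat": 0.55, "dog": 0.40, "fish": 0.05}

def violates_constraint(label: str, context: dict) -> bool:
    """Hypothetical symbolic rule: no land animals in underwater scenes."""
    return bool(context.get("underwater")) and label in {"cat", "dog"}

def neurosymbolic_predict(scores: dict, context: dict):
    """Return the highest-scoring label consistent with the rules."""
    for label in sorted(scores, key=scores.get, reverse=True):
        if not violates_constraint(label, context):
            return label
    return None  # no label is consistent with the knowledge base

if __name__ == "__main__":
    print(neurosymbolic_predict(SCORES, {"underwater": True}))  # -> fish
```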
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).