Submitted: 24 January 2025
Posted: 27 January 2025
Abstract
Keywords:
1. Introduction: Trustworthy AI for Whom?
- (i) Recent advances in digital watermarking present a scalable way to distinguish AI-generated content from human-authored material. SynthID-Text, the watermarking algorithm presented by Dathathri et al. [70], marks AI-generated text so that it remains identifiable without compromising its quality. This framework offers a pathway for managing AI outputs at massive scale, potentially curbing the spread of misinformation (a minimal detection sketch follows this list). However, questions of accessibility and scalability remain, particularly in jurisdictions where trust infrastructures are underdeveloped. SynthID-Text’s deployment exemplifies how watermarking can help maintain trust in AI content, yet its application primarily serves contexts where technological infrastructure supports high computational demands, leaving out communities with limited resources.
- (ii) The concept of “personhood credentials” (PHCs) provides another lens for exploring trust. According to Adler et al. [71], PHCs allow users to authenticate as real individuals rather than AI agents, introducing a novel method for countering AI-powered deception. This system, based on zero-knowledge proofs, ensures privacy by verifying individuals’ authenticity without exposing personal details. While promising, PHCs may inadvertently centralize trust among issuing authorities, which could undermine local, decentralized trust systems. Additionally, the adoption of PHCs presents ethical challenges, particularly in regions where digital access is limited, raising further questions about inclusivity in digital spaces purportedly designed to be “trustworthy.”
- (iii) In the context of decentralized governance, Poblet et al. [61] highlight the role of blockchain-based oracles as tools for digital democracy, providing external information to support decision-making within blockchain networks. Oracles serve as intermediaries between real-world events and digital contracts, enabling secure, decentralized information transfer in applications such as voting and community governance (a simple oracle-aggregation sketch also follows this list). Their use in digital democracy platforms has demonstrated potential for enhancing transparency and collective decision-making. Yet this approach is not without challenges; integrating oracles requires robust governance mechanisms to address biases and inaccuracies, especially when scaling across diverse socio-political landscapes. Oracles thus provide valuable insights into building trustworthy systems, but their implementation remains context-dependent, raising critical questions about the universality of digital trust.
- (iv) Lastly, the discourse on digital sovereignty, as discussed by Fratini et al. [72], is integral to understanding the layers of trust in decentralized Web3 ecosystems. Their research outlines various digital sovereignty models, illustrating how governance frameworks vary from state-based to rights-based approaches [73,74,75,76,77,78,79]. The rights-based model emphasizes protecting user autonomy and data privacy, resonating with democratic ideals but facing practical challenges in globalized digital economies. In contrast, state-based models prioritize national security and centralized control, often clashing with decentralized ethos. These sovereignty models underscore the need for adaptable governance structures that consider the diversity of trust needs across regions, reflecting the complexities of fostering “trustworthy” AI in decentralized contexts.
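To make the detection step in (i) concrete, the sketch below shows, in Python, the generic statistical test that green-list watermarking schemes rely on: a keyed hash pseudo-randomly partitions the vocabulary, a watermarking generator over-samples “green” tokens, and a detector simply recounts them. The hash construction, the 0.5 green fraction, and the 0.70 decision threshold are illustrative assumptions for this sketch, not the published SynthID-Text algorithm.

```python
import hashlib

def in_green_list(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    """Pseudo-randomly assign `token` to a 'green list' keyed on the previous token.
    A watermarking generator would bias sampling toward green tokens; a detector
    only needs the same keyed hash to recount them."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return (digest[0] / 255.0) < green_fraction

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens falling in the green list: close to `green_fraction`
    for unmarked text, noticeably higher for text generated with the bias."""
    hits = sum(in_green_list(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

tokens = "decentralized web3 ecosystems can help verify ai generated content".split()
print(f"green-token rate: {green_rate(tokens):.2f}")  # compare against a threshold, e.g. 0.70
```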
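Similarly, for the oracles discussed in (iii), the following is a minimal sketch of robust report aggregation: a digital-democracy contract might take the median of several independent feeds and refuse to settle when they diverge. The reporter names, the 5% spread limit, and the escalate-to-governance behaviour are illustrative assumptions, not a specific oracle protocol.

```python
from statistics import median

def aggregate_oracle_reports(reports: dict[str, float], max_spread: float = 0.05) -> float | None:
    """Take the median of independent oracle reports and refuse to settle when
    sources disagree too much (a crude robustness check, not a full protocol)."""
    values = sorted(reports.values())
    mid = median(values)
    if mid == 0 or (values[-1] - values[0]) / abs(mid) > max_spread:
        return None  # escalate to governance instead of settling automatically
    return mid

# e.g., turnout share reported by three independent feeds
reports = {"oracle-a": 0.9912, "oracle-b": 0.9871, "oracle-c": 0.9934}
print(aggregate_oracle_reports(reports))
```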
2. Method: The State of the Art of European Trustworthy AI Policy, Analysed Through the AI Act and the Draghi Report
2.1. AI Act at the Crossroads of Innovation and Responsibility
2.1.1. Risk Classification [43,95]: A Unified Framework with Tailored Enforcement
2.1.2. Human Oversight [101,102,103,104,105,106,107]: Enhancing Governance in Critical Sectors
2.1.3. Innovation Sandboxes: Bridging Compliance and Creativity
2.1.4. Sector-Specific Priorities: Aligning AI with Regional Significance
2.1.5. A Unified Vision with Localized Flexibility
2.1.6. Toward a Balanced Future?
2.2. Draghi Report
2.2.1. Trustworthiness Beyond Technological Robustness
2.2.2. Economic Competitiveness vs. Ethical Equity
2.2.3. Trustworthiness in High-Stakes Sectors
2.2.4. Toward a Participatory and Inclusive Vision
2.3. Trustworthy AI for Whom: Approaching from Decentralized Web3 Ecosystem Perspective
2.3.1. The Challenges of Detection Techniques for Trust Through Decentralized Web3 Ecosystems
2.3.2. GenAI and Disinformation/Misinformation [11]: A Perfect Storm?
2.3.3. Ethical AI and Accountability in Decentralized Systems
2.3.4. The Role of Blockchain in AI Content Authentication
2.3.5. Transdisciplinary Approaches to AI Governance
2.3.6. Addressing the Elephant in the Room
2.4. Justification for the Relevance and Rigor of the Methodology
2.4.1. Bridging Policy and Practice for Technological Communities
2.4.2. The AI Act as a Framework for Risk Classification and Ethical Safeguards
2.4.3. The Draghi Report as a Vision for Strategic Resilience
2.4.4. Policy Relevance in Decentralized Web3 Ecosystems
2.4.5. Advancing Detection Techniques of Trust
2.4.6. A Transdisciplinary Perspective for a Complex Problem
3. Results: Seven Detection Techniques of Trust Through Decentralized Web3 Ecosystems
| Techniques | Description |
|---|---|
| T1. Federated Learning for Decentralized AI Detection | Collaborative AI model training across decentralized platforms, preserving privacy without sharing raw data. |
| T2. Blockchain-Based Provenance Tracking | Blockchain technology records content creation and dissemination, enabling transparent tracking of content authenticity. |
| T3. Zero-Knowledge Proofs for Content Authentication | Cryptographic method to verify content authenticity without revealing underlying private data. |
| T4. Decentralized Autonomous Organizations (DAOs) for Crowdsourced Verification | Crowdsourced content verification through DAOs, allowing communities to collectively vote and verify content authenticity. |
| T5. AI-Powered Digital Watermarking | Embedding unique identifiers into AI-generated content to trace and authenticate its origin. |
| T6. Explainable AI (XAI) for Content Detection | Provides transparency in AI model decision-making [164], explaining why content was flagged as AI-generated. |
| T7. Privacy-Preserving Machine Learning (PPML) for Secure Content Verification | Enables secure detection and verification of content while preserving user privacy, leveraging homomorphic encryption and other techniques. |
3.1. Federated Learning for Decentralized AI Detection (T1)
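As a rough illustration of T1 (under stated assumptions; this is not the training stack of any particular platform), the snippet below shows FedAvg-style aggregation: participating platforms train a small detector locally and share only weight vectors and sample counts, never raw content.

```python
def federated_average(client_updates: list[tuple[list[float], int]]) -> list[float]:
    """FedAvg-style aggregation: combine locally trained model weights,
    weighted by each client's sample count, without moving raw data."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(weights[i] * n for weights, n in client_updates) / total
        for i in range(dim)
    ]

# Three platforms train a tiny detector locally and share only weights + sample counts.
updates = [([0.12, -0.40, 0.88], 1200), ([0.10, -0.35, 0.91], 800), ([0.15, -0.42, 0.85], 500)]
print(federated_average(updates))
```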
3.2. Blockchain-Based Provenance Tracking (T2)
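A minimal sketch of the idea behind T2, assuming a simple hash-chained event log rather than any specific blockchain: each provenance event commits to the content digest, the action, and the previous event, so later tampering is detectable by re-walking the chain.

```python
import hashlib, json, time

def record_event(chain: list[dict], content: bytes, action: str) -> dict:
    """Append a provenance event whose hash commits to the content digest,
    the action, and the previous event, making silent edits detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    event = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "action": action,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    event["hash"] = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
    chain.append(event)
    return event

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; any tampering with earlier events breaks the chain."""
    prev = "0" * 64
    for event in chain:
        body = {k: v for k, v in event.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if event["prev_hash"] != prev or recomputed != event["hash"]:
            return False
        prev = event["hash"]
    return True

chain: list[dict] = []
record_event(chain, b"original article text", "created")
record_event(chain, b"original article text", "published")
print(verify_chain(chain))  # True; altering any earlier event makes this False
```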
3.3. Zero-Knowledge Proofs (ZKPs) for Content Authentication (T3)
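To ground T3, here is a toy, non-production Schnorr-style proof of knowledge (non-interactive via Fiat-Shamir): a publisher can convince a verifier that it knows the secret behind a public key, for instance one bound to a content-signing credential, without revealing the secret itself. The tiny group parameters are deliberately insecure and purely illustrative; real deployments use standardized elliptic-curve groups.

```python
import hashlib, secrets

# Toy parameters for illustration only: g = 2 has order q = 11 modulo p = 23.
P, Q, G = 23, 11, 2

def prove(secret: int) -> tuple[int, int, int]:
    """Prove knowledge of `secret` for public key y = g^secret mod p
    without revealing the secret (Fiat-Shamir, non-interactive)."""
    y = pow(G, secret, P)
    r = secrets.randbelow(Q)
    t = pow(G, r, P)
    c = int(hashlib.sha256(f"{G}{y}{t}".encode()).hexdigest(), 16) % Q
    s = (r + c * secret) % Q
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int(hashlib.sha256(f"{G}{y}{t}".encode()).hexdigest(), 16) % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, t, s = prove(secret=7)   # the secret could, e.g., bind a signing key to a publisher identity
print(verify(y, t, s))      # True, yet the verifier never learns the secret
```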
3.4. DAOs for Crowdsourced Verification (T4)
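A sketch of the voting logic T4 implies, with assumed quorum (40%), supermajority (60%), and voting-power figures; real DAOs encode such rules in smart contracts rather than off-chain Python.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    weight: float      # e.g., staked or reputation-based voting power
    authentic: bool    # does the voter judge the content authentic?

def tally(votes: list[Vote], quorum: float = 0.4, threshold: float = 0.6,
          total_power: float = 100.0) -> str:
    """Toy DAO-style verification: a proposal passes only if enough voting power
    participates (quorum) and a supermajority judges the content authentic."""
    cast = sum(v.weight for v in votes)
    if cast / total_power < quorum:
        return "no quorum"
    yes = sum(v.weight for v in votes if v.authentic)
    return "verified authentic" if yes / cast >= threshold else "flagged"

votes = [Vote("alice", 25, True), Vote("bob", 10, False), Vote("carol", 20, True)]
print(tally(votes))  # 55% of power cast, 45/55 ≈ 82% in favour -> "verified authentic"
```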
3.5. AI-Powered Digital Watermarking (T5)
3.6. Explainable AI (XAI) for Content Detection (T6)
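For T6, a hedged sketch of one simple post-hoc explanation technique, occlusion-based token importance: the detector is re-scored with each token removed, and the largest score drops indicate which tokens drove the “AI-generated” flag. The stand-in detector and its “suspicious” word list are illustrative assumptions, not a real classifier.

```python
def occlusion_importance(score, tokens: list[str]) -> list[tuple[str, float]]:
    """How much does the detector's 'AI-generated' score drop when each token
    is removed? Bigger drops mean the token mattered more to the flag."""
    base = score(tokens)
    drops = [(tok, base - score(tokens[:i] + tokens[i + 1:])) for i, tok in enumerate(tokens)]
    return sorted(drops, key=lambda kv: kv[1], reverse=True)

# Stand-in detector: scores text by the share of tokens from a suspicious phrase list.
SUSPICIOUS = {"delve", "tapestry", "landscape"}
toy_score = lambda toks: sum(t in SUSPICIOUS for t in toks) / max(len(toks), 1)

tokens = "we delve into the rich tapestry of the policy landscape".split()
for token, drop in occlusion_importance(toy_score, tokens)[:3]:
    print(token, round(drop, 3))
```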
3.7. Privacy-Preserving Machine Learning (PPML) for Secure Content Verification (T7)
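For T7, the sketch below illustrates one PPML building block, pairwise additive masking for secure aggregation: the masks cancel in the sum, so an aggregator learns only the total, never individual contributions. For brevity a single function generates all masks; in a real protocol each client derives its masks from pairwise shared seeds, and homomorphic encryption or secret sharing would replace this toy construction.

```python
import secrets

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def mask_updates(client_values: list[int]) -> list[int]:
    """Pairwise additive masking: each pair of clients shares a random mask that
    cancels out in the sum, so the server learns only the aggregate."""
    n = len(client_values)
    masked = list(client_values)
    for i in range(n):
        for j in range(i + 1, n):
            mask = secrets.randbelow(MODULUS)
            masked[i] = (masked[i] + mask) % MODULUS
            masked[j] = (masked[j] - mask) % MODULUS
    return masked

values = [12, 7, 30]                        # e.g., per-client counts of flagged items
masked = mask_updates(values)
print(masked)                               # individually meaningless
print(sum(masked) % MODULUS, sum(values))   # aggregates match: 49 and 49
```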
4. Discussion and Conclusion
4.1. Discussions and Conclusions
4.2. Limitations
- (i) Technical and Operational Challenges: Many of the techniques discussed, such as federated learning and PPML, require advanced computational infrastructure (e.g., quantum computing) and significant technical expertise. Their deployment in resource-constrained environments may be limited, perpetuating global inequalities in digital access and trust frameworks.
- (ii) Ethical and Governance Gaps: While tools like DAOs and blockchain foster transparency and decentralization, they raise ethical concerns regarding power concentration among technologically savvy elites [128]. As recently noted by Calzada [128] and echoed in Floridi’s [176] analysis of AI hype, decentralization does not inherently equate to democratization; instead, it risks replicating hierarchical structures in digital contexts.
- (iii) Regulatory Alignment and Enforcement: The AI Act and the Draghi Report provide robust policy frameworks, but their enforcement mechanisms remain uneven across EU member states. This regulatory fragmentation may hinder the uniform implementation of the detection techniques proposed.
- (iv) Public Awareness and Engagement: A significant barrier to adoption lies in the public’s limited understanding of decentralized technologies. As Medrado and Verdegem highlight [168], more inclusive educational initiatives are needed to bridge the knowledge gap and promote trust in AI governance systems.
- (v) Emergent Risks of AI: GenAI evolves rapidly, outpacing regulatory and technological safeguards. This dynamism introduces uncertainties about the long-term effectiveness of the proposed detection techniques.
4.3. Future Research Avenues
- (i) Context-Specific Adaptations: Further research is needed to tailor decentralized Web3 tools to diverse regional and cultural contexts. This involves integrating local governance norms and socio-political dynamics into the design and implementation of detection frameworks.
- (ii) Inclusive Governance Models: Building on the principles of participatory governance discussed by Mejias and Couldry [169], future studies should examine how multistakeholder frameworks can be institutionalized within decentralized ecosystems. Citizen assemblies, living labs, and co-design workshops offer promising methods for inclusive decision-making.
- (iii) User-Centric Design: Enhancing UX for detection tools such as digital watermarking and blockchain provenance tracking is crucial. Future research should focus on creating user-friendly interfaces that simplify complex functionalities, fostering greater public engagement and trust.
- (iv) Ethical and Legal Frameworks: Addressing the ethical and legal challenges posed by decentralized systems requires interdisciplinary collaboration. Scholars in law, ethics, and social sciences should work alongside technologists to develop governance models that balance innovation with accountability.
- (v) AI Literacy Initiatives: Expanding on Sieber et al. [174], there is a need for targeted educational programs to improve public understanding of AI technologies. These initiatives could focus on empowering marginalized communities, ensuring equitable access to the benefits of AI.
- (vi) Monitoring and Evaluation Mechanisms: Future studies should investigate robust metrics for assessing the efficacy of detection techniques in real-world scenarios. This includes longitudinal studies to monitor their impact on trust, transparency, and accountability in decentralized systems.
- (vii) Emergent Technologies and Risks: Finally, research should anticipate the future trajectories of AI and Web3 ecosystems, exploring how emerging technologies such as quantum computing or advanced neural networks may impact trust frameworks.
- (viii) Learning from Urban AI: A potentially prominent field is emerging around the concept of Urban AI, which warrants further exploration. The question "Trustworthy AI for whom?" echoes the earlier query "Smart City for whom?", suggesting parallels between the challenges of integrating AI into urban environments and the broader quest for trustworthy AI [177,178,179,180,181,182]. Investigating the evolution of Urban AI as a distinct domain could provide valuable insights into the socio-technical dynamics of trust, governance, and inclusivity within AI-driven urban systems [183,184,185].
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Alwaisi, S. , Salah Al-Radhi, M. & Németh, G., (2023) Automated child voice generation: Methodology and implementation. 2023 International Conference on Speech Technology and Human-Computer Dialogue (SpeD). [CrossRef]
- Alwaisi, S. & Németh, G., (2024) Advancements in expressive speech synthesis: A review. Infocommunications Journal. [CrossRef]
- European Commission. The Future of European Competitiveness: A Competitiveness Strategy for Europe. European Commission, September 2024. Available online: https://ec.europa.eu (accessed on 18 November 2024).
- European Parliament and Council. Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations and Directives. Official Journal of the European Union. 2024, L 1689, 1–144. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 18 November 2024).
- Yang, F. , Goldenfein, J., & Nickels, K. (2024). GenAI Concepts. Melbourne: ARC Centre of Excellence for Automated Decision-Making and Society RMIT University, and OVIC. [CrossRef]
- Insight & Foresight (2024). How Generative AI Will Transform Strategic Foresight.
- Amoore, L.; Campolo, A.; Jacobsen, B.; Rella, L. A world model: On the political logics of generative AI. Political Geography 2024, 113, 103134. [Google Scholar] [CrossRef]
- Chafetz, H., Saxena, S. & Verhulst, S.G. (2024) A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI. [Google Scholar]
- Delacroix, S. (2024) Sustainable data rivers? Rebalancing the data ecosystem that underlies generative AI. Critical AI. [CrossRef]
- Gabriel, I.; et al. (2024) The ethics of advanced AI assistants. arXiv preprint. https://arxiv.org/abs/2404.16244.
- Shin, D. , Koerber, A., & Lim, J.S. (2024) Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI. New Media & Society. [CrossRef]
- Tsai, L.L., Pentland, A., Braley, A., Chen, N., Enríquez, J.R. & Reuel, A. (2024) An MIT Exploration of Generative AI: From Novel Chemicals to Opera. MIT Governance Lab. [CrossRef]
- Weidinger, L. , et al. (2023) Sociotechnical Safety Evaluation of Generative AI Systems. arXiv preprint, 2310. [Google Scholar]
- Allen, D. & Weyl, E.G. (2024) The Real Dangers of Generative AI. Journal of Democracy. 35(1): 147-162.
- Kitchin, R. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences; Sage: London, UK, 2014. [Google Scholar]
- Cugurullo, F.; Caprotti, F.; Cook, M.; Karvonen, A.; McGuirk, P.; Marvin, S. (Eds.) Artificial Intelligence and the City: Urbanistic Perspectives on AI; Routledge: Abingdon, UK, 2024. [Google Scholar] [CrossRef]
- Farina, M. , Yu, X. & Lavazza, A. (2023). Ethical considerations and policy interventions concerning the impact of generative AI tools in the economy and in society. AI and Ethics. [CrossRef]
- Calzada, I. (2021), Smart City Citizenship, Cambridge, Massachusetts: Elsevier Science Publishing Co Inc. [ISBN (Paperback): 978-0-12-815300-0]. [CrossRef]
- Aguerre, C. , Campbell-Verduyn, M. & Scholte, J.A. (2024) Global Digital Data Governance: Polycentric Perspectives. Abingdon, UK: Routledge.
- Angelidou, M.; Sofianos, S. The Future of AI in Optimizing Urban Planning: An In-Depth Overview of Emerging Fields of Application. International Conference on Changing Cities VI: Spatial, Design, Landscape, Heritage & Socio-economic Dimensions, Rhodes Island, Greece, 24–28 June.
- Polanyi, K. (1944) The Great Transformation: The Political and Economic Origins of Our Time. New York: Farrar & Rinehart.
- Solaiman, I. , et al. (2019) Release Strategies and the Social Impacts of Language Models. arXiv preprint. https://arxiv.org/abs/1908.09203.
- Calzada, I. (2024), Artificial Intelligence for Social Innovation: Beyond the Noise of Algorithms and Datafication. Sustainability, 8638. [Google Scholar] [CrossRef]
- Fang, R. , et al. (2024). LLM Agents can Autonomously Hack Websites. arXiv preprint. https://arxiv.org/abs/2402.06664.
- Farina, M. , Lavazza, A., Sartori, G. & Pedrycz, W. (2024). Machine learning in human creativity: status and perspectives. AI & Society. [CrossRef]
- Abdi, I.I. Digital Capital and the Territorialization of Virtual Communities: An Analysis of Web3 Governance and Network Sovereignty. 2024.
- Calzada, I. (2024) (Libertarian) Decentralized Web3 Map: In Search of a Post-Westphalian Territory. SSRN. [CrossRef]
- Calzada, I. (2024) Decentralized Web3 Reshaping Internet Governance: Towards the Emergence of New Forms of Nation-Statehood? Future Internet. [CrossRef]
- Calzada, I. (2024) From data-opolies to decentralization? The AI disruption amid the Web3 Promiseland at stake in datafied democracies, in Visvizi, A., Corvello, V. and Troisi, O. (eds.) Research and Innovation Forum. Cham, Switzerland: Springer.
- Calzada, I. (2024) Democratic erosion of data-opolies: Decentralized Web3 technological paradigm shift amidst AI disruption. Big Data and Cognitive Computing. [CrossRef]
- Calzada, I. (2023) Disruptive technologies for e-diasporas: Blockchain, DAOs, data cooperatives, metaverse, and ChatGPT. Futures, 154(C), 103258. [CrossRef]
- Calzada, I. (2020) Democratising smart cities? Penta-helix multistakeholder social innovation framework. Smart Cities, 1145. [Google Scholar]
- Allen, D., Frankel, E., Lim, W., Siddarth, D., Simons, J. & Weyl, E.G. (2023) Ethics of Decentralized Social Technologies: Lessons from Web3, the Fediverse, and Beyond. Harvard University Edmond & Lily Safra Center for Ethics. Available from: https://myaidrivecom/view/file-A5rvW7aJ8emgJMG8wKH3WDTz (accessed on 1 September).
- De Filippi, P.; Cossar, S.; Mannan, M.; Nabben, K.; Merk, T.; Kamalova, J.; Report on Blockchain Governance Dynamics. Project Liberty Institute and BlockchainGov, May 2024. Available online: https://www.projectliberty.io/institute (accessed on 20 November 2024).
- Daraghmi, E.; Hamoudi, A.; Abu Helou, M. Decentralizing Democracy: Secure and Transparent E-Voting Systems with Blockchain Technology in the Context of Palestine. Future Internet 2024, 16, 388. [Google Scholar] [CrossRef]
- Liu, X.; Xu, R.; Chen, Y. A. Decentralized Digital Watermarking Framework for Secure and Auditable Video Data in Smart Vehicular Networks. Future Internet 2024, 16, 390. [Google Scholar] [CrossRef]
- Moroni, S. Revisiting subsidiarity, 2024. [CrossRef]
- Van Kerckhoven, S. & Chohan, U.W. (2024) Decentralized Autonomous Organizations: Innovation and Vulnerability in the Digital Economy. Oxon, UK: Routledge.
- Singh, A. , Lu, C., Gupta, G., Chopra, A., Blanc, J., Klinghoffer, T., Tiwary, K., & Raskar, R. (2024). A perspective on decentralizing AI. MIT Media Lab.
- Mathew, A.J. The myth of the decentralised internet. Internet Policy Review, (3).
- Zook, M. (2023) Platforms, blockchains and the challenges of decentralization. Cambridge Journal of Regions, Economy and Society.
- Kneese, T. & Oduro, S. (2024) AI Governance Needs Sociotechnical Expertise: Why the Humanities and Social Sciences are Critical to Government Efforts. Data & Society Policy Brief. 1-10.
- OECD. Assessing Potential Future Artificial Intelligence Risks, Benefits and Policy Imperatives. OECD Artificial Intelligence Papers. No. 27, November 2024. Available online: https://oecd.ai/site/ai-futures (accessed on 20 November 2024).
- Nabben, K.; De Filippi, P. Accountability protocols? On-chain dynamics in blockchain governance. Internet Policy Review. [CrossRef]
- Nanni, R. , Bizzaro, P. G., & Napolitano, M. (2024). The false promise of individual digital sovereignty in Europe: Comparing artificial intelligence and data regulations in China and the European Union. Policy & Internet. [CrossRef]
- Schroeder, R. (2024). Content moderation and the digital transformations of gatekeeping. Policy & Internet. [CrossRef]
- Gray, J.E. , Hutchinson, J., Stilinovic, M. and Tjahja, N. (2024), The pursuit of ‘good’ Internet policy. Policy Internet. [CrossRef]
- Pohle, J.; Santaniello, M. From multistakeholderism to digital sovereignty: Toward a new discursive order in internet governance. Policy & Internet. [CrossRef]
- Viano, C. , Avanzo, S., Cerutti, M., Cordero, A., Schifanella, C. & Boella, G. (2022) Blockchain tools for socio-economic interactions in local communities. Policy and Society. [CrossRef]
- Karatzogianni, A. , Tiidenberg, K., & Parsanoglou, D. (2022). The impact of technological transformations on the digital generation: Digital citizenship policy analysis (Estonia, Greece, and the UK). DigiGen Policy Brief, 22. 20 April. [CrossRef]
- Gerlich, Michael. 2024. Societal Perceptions and Acceptance of Virtual Humans: Trust and Ethics across Different Contexts. Social Sciences 13: 516. [CrossRef]
- Waldner, D. & Lust, E. (2018) Unwelcome change: Coming to terms with democratic backsliding. Annual Review of Political Science. 21(1): 93-113.
- Roose, K. (2024) Available from: https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html (Accessed on 1 Sept 2024).
- Kolt, N. (2024) ‘Governing AI Agents.’ Available at SSRN. [CrossRef]
- Calzada, I. (2024c) Data (un)sustainability: Navigating utopian resistance while tracing emancipatory datafication strategies in Certomá, C., Martelozzo, F. and Iapaolo, F. (eds.) Digital (Un)Sustainabilities: Promises, Contradictions, and Pitfalls of the Digitalization-Sustainability Nexus. Routledge: Oxon, UK. [CrossRef]
- Benson, J. (2024) Intelligent Democracy: Answering The New Democratic Scepticism. Oxford, UK: Oxford University Press.
- Coeckelbergh, M. (2024) Artificial intelligence, the common good, and the democratic deficit in AI governance. AI Ethics. [CrossRef]
- García-Marzá, D. & Calvo, P. (2024) Algorithmic Democracy: A Critical Perspective Based on Deliberative Democracy. Cham, Switzerland: Springer Nature.
- KT4Democracy. Available at: https://kt4democracy.eu/ (accessed on 1 January).
- Levi, S. (2024) Digitalización Democrática: Soberanía Digital para las Personas. Barcelona, Spain: Rayo Verde.
- Poblet, M.; Allen, D.W.E.; Konashevych, O.; Lane, A.M.; Diaz Valdivia, C.A. From Athens to the Blockchain: Oracles for Digital Democracy. Front. Blockchain 2020, 3, 575662. [Google Scholar] [CrossRef]
- De Filippi, P. , Reijers, W. & Morshed, M. (2024) Blockchain Governance. Boston, USA: MIT Press.
- Visvizi, A.; Malik, R.; Guazzo, G.M.; Çekani, V. The Industry 5.0 (I50) Paradigm, Blockchain-Based Applications and the Smart City. Eur. J. Innov. Manag. [CrossRef]
- Roio, D. , Selvaggini, R., Bellini, G. & Dintino, A. (2024) SD-BLS: Privacy preserving selective disclosure of verifiable credentials with unlinkable threshold revocation. 2024 IEEE International Conference on Blockchain (Blockchain). [CrossRef]
- Viano, C. , Avanzo, S., Boella, G., Schifanella, C. & Giorgino, V. (2023) Civic blockchain: Making blockchains accessible for social collaborative economies. Journal of Responsible Technology. [CrossRef]
- Ahmed, S. , et al. (2024) Field-building and the epistemic culture of AI safety. First Monday.
- Tan, J.; et al. (2024) Open Problems in DAOs. arXiv preprint. https://arxiv.org/abs/2310.1920. [Google Scholar]
- Petreski, D. & Cheong, M. (2024) Data Cooperatives: A Conceptual Review. ICIS 2024 Proceedings. 15.
- Stein, J., Fung, M.L., Weyenbergh, G.V. & Soccorso, A. (2023) Data cooperatives: A framework for collective data governance and digital justice. People-Centered Internet. Available from: https://myaidrivecom/view/file-ihq4z4zhVBYaytB0mS1k6uxy (accessed on 1 September).
- Dathathri, S.; et al. Scalable watermarking for identifying large model outputs. Nature 2024, 634, 818–823. [Google Scholar] [CrossRef] [PubMed]
- Adler, S.; et al. (2024) Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online. arXiv. Available from: https://arxiv.org/abs/2408.07892 (accessed on 1 September).
- Fratini, S.; Hine, E.; Novelli, C.; Roberts, H.; Floridi, L. Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models (21 April 2024). Available at SSRN. [CrossRef]
- Hui, Y. Machine and Sovereignty: For a Planetary Thinking. University of Minnesota Press: Minneapolis and London, 2024.
- New America. From Digital Sovereignty to Digital Agency. New America Foundation, 2023. Available online: https://www.newamerica.org/planetary-politics/briefs/from-digital-sovereignty-to-digital-agency/ (accessed on 20 November 2024).
- Glasze, G.; et al. Contested Spatialities of Digital Sovereignty. Geopolitics. [CrossRef]
- The Conversation (2024) Elon Musk’s feud with Brazilian judge is much more than a personal spat – it’s about national sovereignty, freedom of speech, and the rule of law. Available from: https://theconversation.com/elon-musks-feud-with-brazilian-judge-is-much-more-than-a-personal-spat-its-about-national-sovereignty-freedom-of-speech-and-the-rule-of-law-238264 (accessed on 20 September).
- The Conversation (2024) Albanese promises to legislate minimum age for kids’ access to social media. Available from: https://theconversation.com/albanese-promises-to-legislate-minimum-age-for-kids-access-to-social-media-238586 (accessed on 20 September).
- Calzada, I. Data Co-operatives through Data Sovereignty. Smart Cities 2021, 4, 1158–1172. [Google Scholar] [CrossRef]
- Belanche, D. , Belk, R.W., Casaló, L.V. & Flavián, C. (2024) The dark side of artificial intelligence in services. Service Industries Journal.
- European Parliament. (2023). EU AI Act: First Regulation on Artificial Intelligence. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 23 November 2024).
- Yakowitz Bambauer, J.R. & Zarsky, T. (2024) Fair-Enough AI (8 August 2024). Forthcoming in the Yale Journal of Law & Technology. Available at SSRN. [CrossRef]
- Dennis, C.; et al. (2024). What Should Be Internationalised in AI Governance? Oxford Martin AI Governance Initiative.
- Ghioni, R.; Taddeo, M.; Floridi, L.; Open Source Intelligence and AI: A Systematic Review of the GELSI Literature. SSRN. Available online: https://ssrn.com/abstract=4272245 (accessed on 18 November 2024).
- Bullock, S.; Ajmeri, N.; Batty, M.; Black, M.; Cartlidge, J.; Challen, R.; Chen, C.; Chen, J.; Condell, J.; Danon, L.; Dennett, A.; et al. Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy. 2024. Available online: https://ai4ci.ac.uk (accessed on 20 November 2024).
- Alon, I.; Haidar, H.; Haidar, A.; Guimón, J. The future of artificial intelligence: Insights from recent Delphi studies. Futures 2024, 103514. [Google Scholar] [CrossRef]
- Ben Dhaou, S. , Isagah, T., Distor, C., & Ruas, I.C. (2024). Global Assessment of Responsible Artificial Intelligence in Cities: Research and recommendations to leverage AI for people-centred smart cities. Nairobi, Kenya. United Nations Human Settlements Programme (UN-Habitat).
- Narayanan, A. (2023) Understanding Social Media Recommendation Algorithms'. Knight First Amendment Institute, 1-49.
- Settle, J.E. (2018) Frenemies: How Social Media Polarizes America. Cambridge University Press.
- European Commission, Joint Research Centre, Lähteenoja, V., Himanen, J., Turpeinen, M., and Signorelli, S. The landscape of consent management tools - a data altruism perspective. 2024. [CrossRef]
- Fink, A. (2024). Data cooperative. Internet Policy Review. [CrossRef]
- Nabben, K. (2024). AI as a Constituted System: Accountability Lessons from an LLM Experiment.
- Von Thun, M. , Hanley, D.A. (2024) Stopping Big Tech from Becoming Big AI. Open Markets Institute and Mozilla.
- Rajamohan, R. (2024) Networked Cooperative Ecosystems. https://paragraph.
- Ananthaswamy, A. (2024) Why Machines Learn: The Elegant Math Behind Modern AI. London, UK: Penguin.
- Bengio, Y. (2023) AI and catastrophic risk. Journal of Democracy. 34(4): 111-121.
- European Parliament. Social approach to the transition to smart cities. Luxembourg: European Parliament, 2023.
- Magro, A. , (2024) Emerging digital technologies in the public sector: The case of virtual worlds. Luxembourg: Publications Office of the European Union.
- Estévez Almenzar, M., Fernández Llorca, D., Gómez, E. & Martínez Plumed, F. (2022) Glossary of human-centric artificial intelligence. Luxembourg: Publications Office of the European Union. [CrossRef]
- Calzada, I. & Almirall, E. (2020) Data Ecosystems for Protecting European Citizens’ Digital Rights, Transforming Government: People, Process and Policy (TGPPP) 14(2): 133-147. [CrossRef]
- Calzada, I.; Pérez-Batlle, M.; Batlle-Montserrat, J. People-Centered Smart Cities: An Exploratory Action Research on the Cities’ Coalition for Digital Rights. Journal of Urban Affairs 2021, 43, 1–26. [Google Scholar] [CrossRef]
- Mitchell, M. , Palmarini, A.B. & Moskvichev, A. (2023) Comparing Humans, GPT-4, and GPT-4V on abstraction and reasoning tasks. arXiv preprint.
- Gasser, U. & Mayer-Schönberger, V. (2024) Guardrails: Guiding Human Decisions in the Age of AI. Princeton, USA: Princeton University Press.
- United Nations High-level Advisory Body on Artificial Intelligence (2024) Governing AI for Humanity: Final Report. United Nations, New York.
- Vallor, S. (2024) The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. NYC, USA: OUP.
- Buolamwini, J. (2023) Unmasking AI: My Mission to Protect What is Human in a World of Machines. London, UK: Random House.
- McCourt, F.H. Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age. Crown Publishing: London, 2024.
- Muldoon, J. , Graham, M. & Cant, C. (2024) Feeding the Machine: The Hidden Human Labour Powering AI. Edinburgh, UK: Cannongate.
- Burkhardt, S. & Rieder, B. (2024) Foundation models are platform models: Prompting and the political economy of AI. Big Data & Society. [CrossRef]
- Finnemore, M. & Sikkink, K. (1998) International Norm Dynamics and Political Change. International Organization. 52: 887 - 917.
- Lazar, S. (2024, forthcoming) Connected by Code: Algorithmic Intermediaries and Political Philosophy. Oxford: Oxford University Press.
- Hoeyer, K. (2023) Data Paradoxes: The Politics of Intensified Data Sourcing in Contemporary Healthcare. Cambridge, MA, USA: MIT Press.
- Hughes, T. (2024) The political theory of techno-colonialism. European Journal of Political Theory. [CrossRef]
- Srivastava, S. Algorithmic Governance and the International Politics of Big Tech. Cambridge University Press: Cambridge, USA, 2021.
- Utrata, A. (2024) Engineering territory: Space and colonies in Silicon Valley. American Political Science Review, 1097. [Google Scholar] [CrossRef]
- Waldner, D. & Lust, E. (2018) Unwelcome change: Coming to terms with democratic backsliding. Annual Review of Political Science. 21(1): 93-113.
- Guersenzvaig, A. & Sánchez-Monedero, J. (2024). AI research assistants, intrinsic values, and the science we want. AI & Society. [CrossRef]
- Wachter-Boettcher, S. (2018) Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threat of Toxic Tech. London, UK: WW Norton & Co.
- D’Amato, K. (2024). ChatGPT: towards AI subjectivity. AI & Society. [CrossRef]
- Shavit, Y. , et al. (2023) Practices for governing agentic AI systems. OpenAI.
- Bibri, S.E.; Allam, Z. The Metaverse as a Virtual Form of Data-Driven Smart Urbanism: On Post-Pandemic Governance through the Prism of the Logic of Surveillance Capitalism. Smart Cities 2022, 5, 715–727. [Google Scholar] [CrossRef]
- Bibri, S.E.; Visvizi, A.; Troisi, O. Advancing Smart Cities: Sustainable Practices, Digital Transformation, and IoT Innovations; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Sharifi, A.; Allam, Z.; Bibri, S.E.; Khavarian-Garmsir, A.R. Smart cities and sustainable development goals (SDGs), 2024. [CrossRef]
- Singh, A. Advances in Smart Cities: Smarter People, Governance, and Solutions. Journal of Urban Technology. [CrossRef]
- Reuel, A. , et al. (2024) Open Problems in Technical AI Governance. arXiv preprint. https://arxiv.org/abs/2407.14981.
- Aho, B. Data communism: Constructing a national data ecosystem. Big Data & Society. [CrossRef]
- Valmeekam, K. , et al. (2023). On the Planning Abilities of Large Language Models—A Critical Investigation. arXiv preprint. https://arxiv.org/abs/2305.15771.
- Yao, S. , et al. (2022) ReAct: Synergizing reasoning and acting in language models. arXiv preprint. https://arxiv.org/abs/2210.03629.
- Calzada, I. The Illusion of the Web3 Decentralization: Distributing Power or Creating a New Tech-Savvy Elite? SSRN, 2024. [CrossRef]
- Lazar, S. & Pascal, A. (2024) AGI and Democracy. Allen Lab for Democracy Renovation.
- Ovadya, A. (2023) Reimagining Democracy for AI. Journal of Democracy. 34(4): 162-170.
- Ovadya, A.; Thorburn, L.; Redman, K.; Devine, F.; Milli, S.; Revel, M.; Konya, A.; Kasirzadeh, A.; Toward Democracy Levels for AI. Pluralistic Alignment Workshop at NeurIPS 2024. Available online: https://arxiv.org/abs/2411.09222 (accessed on 14 November 2024).
- Alnabhan, M.Q.; Branco, P. BERTGuard: Two-Tiered Multi- Domain Fake News Detection with Class Imbalance Mitigation. Big Data Cogn. Comput. 2024, 8, 93. [Google Scholar] [CrossRef]
- Gourlet, P. , Ricci, D. and Crépel, M. (2024) Reclaiming artificial intelligence accounts: A plea for a participatory turn in artificial intelligence inquiries. Big Data & Society. [CrossRef]
- Spathoulas, G. , Katsika, A., & Kavallieratos, G. (2024) Privacy preserving and verifiable outsourcing of AI processing for cyber-physical systems. Norwegian University of Science and Technology, University of Thessaly.
- Abhishek, T. & Varda, M. (2024) Data hegemony: The invisible war for digital empires. Internet Policy Review (accessed on 1 September). [Google Scholar]
- Alaimo, C. & Kallinikos, J. (2024) Data Rules: Reinventing the Market Economy. Cambridge, MA, USA: MIT Press.
- OpenAI, GPT-4 Technical Report. 2023.
- Dobbe, R. (2022) System safety and artificial intelligence. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
- Bengio, Y. , et al. (2024) International Scientific Report on the Safety of Advanced AI: Interim Report.
- World Digital Technology Academy (WDTA). 2024. Large Language Model Security Requirements for Supply Chain, -03.
- AI4GOV. Available at: https://ai4gov-project.eu/2023/11/14/ai4gov-d3-1/ (accessed on 1 January).
- Cazzaniga, M. , Jaumotte, F., Li, L., Melina, G., Panton, A.J., Pizzinelli, C., Rockall, E., & Tavares, M.M., 2024. Gen-AI: Artificial Intelligence and the Future of Work, 2024. [Google Scholar]
- ENFIELD (2024) Democracy in the Age of Algorithms: Enhancing Transparency and Trust in AI-Generated Content through Innovative Detection Techniques (PI: Prof Igor Calzada). Call: oc1-2024-TES-01; SGA: oc1-2024-TES-01-01; Grant Agreement Number: 101120657. Available from: https://www.enfield-project.eu/about (accessed on 1 September). https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/competitive-calls-cs/6083
- Palacios, S.; et al. (2022). AGAPECert: An Auditable, Generalized, Automated, Privacy-Enabling Certification Framework with Oblivious Smart Contracts. Journal of Defendable and Secure Computing.
- GPAI Algorithmic Transparency in the Public Sector (2024) A State-of-the-Art Report of Algorithmic Transparency Instruments. Global Partnership on Artificial Intelligence. Available from www.gpai.ai (accessed on 1 September).
- Lazar, S. & Nelson, A. (2023) AI safety on whose terms? Science, 6654. [Google Scholar]
- HAI (2024) Artificial Intelligence Index Report 2024. Palo Alto, USA: HAI.
- Nagy, P. & Neff, G. (2024) Conjuring algorithms: Understanding the tech industry as stage magicians. New Media & Society, 4954. [Google Scholar]
- Kim, E. , Jang, G.Y. & Kim, S.H. (2022) How to apply artificial intelligence for social innovations. Applied Artificial Intelligence. [CrossRef]
- Calzada, I.; Cobo, C. Unplugging: Deconstructing the Smart City. Journal of Urban Technology, 2015, 22, 23–43. [Google Scholar] [CrossRef]
- Visvizi, A.; Godlewska-Majkowska, H. Not Only Technology: From Smart City 1.0 through Smart City 4.0 and Beyond (An Introduction). In Smart Cities: Lock-In, Path-dependence and Non-linearity of Digitalization and Smartification; Visvizi, A., Godlewska-Majkowska, H., Eds.; Routledge: London, UK, 2025; pp. 3–16. [Google Scholar]
- Troisi, O.; Visvizi, A.; Grimaldi, M. The Different Shades of Innovation Emergence in Smart Service Systems: The Case of Italian Cluster for Aerospace Technology. J. Bus. Ind. Mark. 2024, 39, 1105–1129. [Google Scholar] [CrossRef]
- Visvizi, A.; Troisi, O.; Corvello, V. Research and Innovation Forum 2023: Navigating Shocks and Crises in Uncertain Times—Technology, Business, Society; Springer Nature: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Caprotti, F.; Cugurullo, F.; Cook, M.; Karvonen, A.; Marvin, S.; McGuirk, P.; Valdez, A.-M. (2024) Why does urban Artificial Intelligence (AI) matter for urban studies? [CrossRef]
- Caprotti, F.; Duarte, C.; Joss, S. The 15-minute city as paranoid urbanism, 2024. [CrossRef]
- Cugurullo, F.; Caprotti, F.; Cook, M.; Karvonen, A.; McGuirk, P.; Marvin, S. The rise of AI urbanism in post-smart cities: A critical commentary on urban artificial intelligence. Urban Studies. [CrossRef]
- Sanchez, T.W.; Fu, X.; Yigitcanlar, T.; Ye, X. The Research Landscape of AI in Urban Planning: A Topic Analysis of the Literature with ChatGPT. Urban Sci. 2024, 8, 197. [Google Scholar] [CrossRef]
- Kuppler, A.; Fricke, C. Between innovative ambitions and erratic everyday practices: urban planners’ ambivalences towards digital transformation. [CrossRef]
- Eubanks, V. (2019) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. London: Picador.
- Lorinc, J. Dream States: Smart Cities, Technology, and the Pursuit of Urban Utopias. Toronto: Coach House Books, 2022.
- Leffel, B.; Derudder, B.; Acuto, M.; van der Heijden, J. Not so polycentric, 2023. [CrossRef]
- Luccioni, S. , Jernite, Y. & Strubell, E. (2024) Power hungry processing: Watts driving the cost of AI deployment? in The 2024 ACM Conference on Fairness, Accountability, and Transparency.
- Gohdes, A.R. (2023) Repression in the Digital Age: Surveillance, Censorship, and the Dynamics of State Violence. Oxford, UK: Oxford University Press.
- Seger, E. , et al. (2020) Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world.
- Burton, J.W. , Lopez-Lopez, E., Hechtlinger, S. et al. ( 2024) How large language models can reshape collective intelligence. Nat Hum Behav 8, 1643–1655. [CrossRef]
- Lalka, R. (2024) The Venture Alchemists: How Big Tech Turned Profits into Power. New York, NY, USA: Columbia University Press.
- Li, F.-F. (2023) The Worlds I See: Curiosity, Exploration, and Discovery and the Dawn of AI. London, UK: Macmillan.
- Medrado, A. & Verdegem, P. (2024) Participatory action research in critical data studies: Interrogating AI from a South–North approach. Big Data & Society.
- Mejias, U.A.; Couldry, N. Data Grab: The New Colonialism of Big Tech (and How to Fight Back). WH Allen: London, 2024.
- Murgia, M. (2024) Code Dependent: Living in the Shadow of AI. London, UK: Henry Holt and Co.
- Johnson, S. & Acemoglu, D. (2023) Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. London, UK: Basic Books.
- Rella, L.; Hansen, K.B.; Thylstrup, N.B.; Campbell-Verduyn, M.; Preda, A.; Rodima-Taylor, D.; Xu, R.; Straube, T. (2024) Hybrid materialities, power, and expertise in the era of general purpose technologies. Distinktion. [CrossRef]
- Merchant, B. (2023) Blood in the Machine: The Origins of the Rebellion Against Big Tech. London, UK: Little, Brown and Company.
- Sieber, R. , Brandusescu, A., Adu-Daako, A. & Sangiambut, S. (2024) Who are the publics engaging in AI? Public Understanding of Science. [CrossRef]
- Tunç, A. (2024). Can AI determine its own future? AI & Society. [CrossRef]
- Floridi, L. (2024) Why the AI Hype is Another Tech Bubble (18 September 2024). Available at SSRN.
- Batty, M. The New Science of Cities; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
- Batty, M. Inventing Future Cities; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Batty, M. Urban Analytics Defined. Environment and Planning B: Urban Analytics and City Science 2019, 46, 403–405. [Google Scholar] [CrossRef]
- Marvin, S.; Luque-Ayala, A.; McFarlane, C. Smart Urbanism: Utopian Vision or False Dawn? Routledge: New York, NY, USA, 2016. [Google Scholar]
- Marvin, S.; Graham, S. Splintering Urbanism: Networked Infrastructures, Technological Mobilities, and the Urban Condition; Routledge: London, UK, 2001. [Google Scholar]
- Marvin, S.; Bulkeley, H.; Mai, L.; McCormick, K.; Palgan, Y.V. Urban Living Labs: Experimenting with City Futures. European Urban and Regional Studies 2018, 25, 317–333. [Google Scholar] [CrossRef]
- Kitchin, R. Code/Space: Software and Everyday Life; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
- Kitchin, R.; Lauriault, T.P.; McArdle, G. Knowing and Governing Cities through Urban Indicators, City Benchmarking, and Real-Time Dashboards. Regional Studies, Regional Science 2015, 2, 6–28. [Google Scholar] [CrossRef]
- Calzada, I. (2020) Platform and data co-operatives amidst European pandemic citizenship. Sustainability, 8309. [Google Scholar] [CrossRef]
- Monsees, L. Crypto-Politics: Encryption and Democratic Practices in the Digital Era. Routledge, 2020. [Google Scholar]
- Visvizi, A.; Kozlowski, K.; Calzada, I.; Troisi, O. Multidisciplinary Movements in AI and Generative AI: Society, Business, Education. 2025. [Google Scholar]
- Calzada, I. Datafied Democracies Unplugged, 2025.
- Palacios, S.; Ault, A.; Krogmeier, J.V.; Bhargava, B.; Brinton, C.G. AGAPECert: An Auditable, Generalized, Automated, Privacy-Enabling Certification Framework with Oblivious Smart Contracts. IEEE Trans. Dependable Secur. Comput. 2023, 20, 3269–3286. [Google Scholar] [CrossRef]
- Hossain, S.T.; Yigitcanlar, T.; Local Governments Are Using AI without Clear Rules or Policies, and the Public Has No Idea. QUT Newsroom. Available online: https://www.qut.edu.au/news/realfocus/local-governments-are-using-ai-without-clear-rules-or-policies-and-the-public-has-no-idea (accessed on 9 January 2025).
- Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
- Bousetouane, F. Agentic Systems: A Guide to Transforming Industries with Vertical AI Agents. arXiv 2025, arXiv:2501.00881 [cs.MA]. Available online: https://arxiv.org/abs/2501.00881 (accessed on 9 January 2025).
- Fontana, S.; Errico, B.; Tedesco, S.; Bisogni, F.; Renwick, R.; Akagi, M.; Santiago, N. AI and GenAI Adoption by Local and Regional Administrations. European Union, Commission for Economic Policy, 2024. [Google Scholar] [CrossRef]
- Hossain, S.T.; Yigitcanlar, T.; Nguyen, K.; Xu, Y. Cybersecurity in Local Governments: A Systematic Review and Framework of Key Challenges. Urban Governance, in press. [CrossRef]
- Laksito, J.; Pratiwi, B.; Ariani, W. Harmonizing Data Privacy Frameworks in Artificial Intelligence: Comparative Insights from Asia and Europe. PERKARA – Jurnal Ilmu Hukum dan Politik. [CrossRef]
- Nature. Science for Policy: Why Scientists and Politicians Struggle to Collaborate. Nature, 2024. Available online: https://www.nature.com/articles/science4policy (accessed on 9 January 2025).

| Aspect | EU-Wide Application Under AI Act | Country-Specific Focus [3,4] |
|---|---|---|
| Risk Classification | AI systems are classified as unacceptable, high, limited, or minimal risk. | Individual states may prioritize specific sectors (e.g., healthcare in Germany, transportation in the Netherlands) where high-risk AI applications are more prevalent. |
| High-Risk AI Requirements | Mandatory requirements for data quality, transparency, robustness, and oversight. | Enforcement and oversight approaches may vary, with some countries opting for stricter testing and certification processes. |
| Transparency Obligations | Users must be informed when interacting with AI (e.g., chatbots, deepfakes). | Implementation might vary, with some countries adding requirements for specific sectors like finance (France) or public services (Sweden). |
| Data Governance | Data used by AI systems must be free from bias and respect privacy. | States with stronger data protection laws, like Germany, may adopt stricter data governance and audit practices. |
| Human Oversight | High-risk AI requires mechanisms for human intervention and control. | Emphasis may vary, with some states prioritizing human oversight in sectors like education (Spain) or labor (Italy). |
| Compliance and Penalties | Non-compliance can result in fines of up to 7% of global annual turnover for the most serious violations. | While fines are harmonized, enforcement strategies may differ based on each country's regulatory framework. |
| Innovation Sandboxes | Creation of sandboxes to promote safe innovation in AI. | Some countries, like Denmark and Finland, have existing sandbox initiatives and may expand them to further support AI development. |
| National AI Strategies | Member States align their AI strategies with the AI Act's principles. | Countries may adapt strategies to their economic strengths (e.g., robotics in Czechia, AI-driven fintech in Luxembourg). |
| Public Sector AI Applications | Public services using AI must comply with the Act’s requirements. | Some countries prioritize transparency and ethics in government AI applications, with additional guidelines (e.g., Estonia and digital services). |
| Dimension | Key Insights | Implications |
|---|---|---|
| Trustworthiness Definition | Encompasses transparency, accountability, ethical integrity. | Calls for participatory governance to ensure inclusivity and co-construction of trust. |
| Economic Competitiveness | Tension between fostering innovation and maintaining ethical standards. | Uneven playing fields for SMEs and grassroots initiatives; innovation sandboxes as a potential equalizer. |
| High-Stakes Sectors | Focus on healthcare, law enforcement, energy; risks of bias and misuse. | Continuous monitoring and inclusive frameworks to ensure systems empower rather than oppress vulnerable populations. |
| Participatory Governance | Advocates for inclusion via citizen assemblies, living labs, and co-design workshops. | Encourages diverse stakeholder engagement to align technological advancements with democratic values. |
| Regulatory Frameworks | Balances economic growth with societal equity. | Promotes innovation while safeguarding against tech concentration and ethical oversights. |
| Challenges in Decentralization | Risks of bias, misinformation, and reduced accountability in decentralized ecosystems. | Emphasizes blockchain and other tech as solutions to enhance accountability without compromising user privacy. |
| Equitable Innovation | Highlights disparities in economic benefits across industries and societal groups. | Need for policies that ensure AI benefits reach marginalized communities and foster equity. |
| Technological vs. Societal Context | Debate over prioritizing technological robustness vs. societal inclusivity in trustworthiness. | Shift required towards frameworks addressing underrepresented groups. |
| Technique | European Example | Response to the Research Question | Trustworthy AI for Whom? |
|---|---|---|---|
| T1. Federated Learning for Decentralized AI Detection | GAIA-X initiative promoting secure and decentralized data ecosystems (www.gaia-x.eu) | Supports user-centric data sharing and privacy compliance across Europe | End Users and Citizens: Projects like GAIA-X (federated learning) focus on user-centric designs that prioritize transparency and data privacy. |
| T2. Blockchain-Based Provenance Tracking | OriginTrail project ensuring data and product traceability (www.origintrail.io) | Enhances product authenticity and trust in supply chains for consumers and industries | Communities and Organizations: Tools like OriginTrail (blockchain-based provenance tracking) ensure that organizations and consumers can trust the authenticity of data and products. |
| T3. Zero-Knowledge Proofs for Content Authentication | European Blockchain Services Infrastructure (EBSI) for credential verification (https://digital-strategy.ec.europa.eu/en/policies/european-blockchain-services-infrastructure) | Ensures privacy and security for credential verification in education and public services | Regulators and Policymakers: By embedding EU principles into operational frameworks, initiatives like the European Blockchain Services Infrastructure (EBSI) demonstrate that Trustworthy AI aids regulators in enforcing compliance while maintaining transparency and inclusivity across borders. |
| T4. DAOs for Crowdsourced Verification | Aragon platform enabling collaborative decentralized governance (https://www.aragon.org/) | Empowers communities with participatory governance and collaborative decision-making | Communities and Organizations: Tools like Aragon (DAOs) empower decentralized decision-making, fostering collaborative governance among community members. |
| T5. AI-Powered Digital Watermarking | C2PA initiative embedding metadata and watermarks in digital media (https://c2pa.org/) | Improves traceability and content authenticity for media and journalism | Industry and Innovation Ecosystems: Projects like C2PA (digital watermarking) support industrial and media ecosystems by providing robust frameworks. These initiatives promote innovation while adhering to ethical guidelines. |
| T6. Explainable AI (XAI) for Content Detection | Horizon 2020 Trust-AI project developing explainable AI models (www.trustai.eu) | Enhances transparency and trust in AI decision-making for users and professionals | End Users and Citizens: Projects like Trust-AI (XAI) focus on user-centric designs that prioritize transparency and data privacy. Citizens gain trust in AI systems when these systems explain their decisions, safeguard personal data, and remain accountable. |
| T7. Privacy-Preserving Machine Learning (PPML) for Secure Content Verification | MUSKETEER project creating privacy-preserving machine learning frameworks (www.musketeer.eu) | Ensures secure AI training and compliance with privacy laws for industry stakeholders | Industry and Innovation Ecosystems: Projects like MUSKETEER (PPML) support industrial ecosystems by providing robust frameworks for privacy-preserving analysis and content authentication. These initiatives promote innovation while adhering to ethical guidelines. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
