Submitted: 22 June 2025
Posted: 24 June 2025
Abstract
Keywords:
1. Introduction
2. What Alinsky Would Think of AI
2.1. Alinsky and the Logic of Systemic Power
- Who builds these systems, and for what purpose?
- Who is made invisible by them?
- How can those most impacted by AI take action to regain control?
2.2. Related Work: Participatory AI and Civic Tech Scholarship
3. Framing and Previewing the Ten Rules
3.1. Critique of Compliance-Driven Ethics
3.2. Why Rules for Radicals Still Apply to AI
3.3. A Radical AI Is a Democratic AI
4. Rules for Radical AI
1. Reveal the Code
2. Illuminate the Data
3. Disrupt AI Neutrality
4. Organize a Data Strike
5. Invert Default Settings
6. Build Cross-Sector Alliances
7. Elevate Community Voices
8. Practice Reflexive Critique
9. Seize Policy Windows
10. Institutionalize Civic Oversight
5. Methodological Approach: Mapping Alinsky’s Rules to AI
- Textual Analysis: We analyzed Saul Alinsky’s Rules for Radicals, highlighting key tactics such as community organizing, pressure application, and symbolic actions (see Figure 2).
- Contextual Translation: These principles were discussed in a session with experts in AI ethics and civic tech, together with AI enthusiasts, who were tasked with evaluating their relevance to algorithmic power dynamics and proposing changes to improve digital infrastructures and reduce algorithmic opacity.
- Validation and Refinement: We tested the draft rules using current AI cases, such as surveillance bans and data strikes, discussed in the next section, to assess their relevance. Feedback from practitioners helped us consolidate overlapping principles and refine rule definitions to address AI-specific issues such as data asymmetry and model explainability.
6. Illustrative Public Cases
Banning Facial Recognition in San Francisco
Data Strike by Gig Workers in New York City
7. Scope & Limits
- Backlash and Escalation: Radical tactics, such as public shaming or direct disruption, can lead to legal retaliation or increased surveillance from powerful entities. Activists should assess risks, build broad coalitions, and establish exit strategies to minimize harm.
- Ethical Boundaries: Data strikes and unauthorized data releases can pose risks to privacy and consent and cause unintended harm to others. Practitioners must practice ethical reflexivity, consult with affected communities, and adhere to principles of transparency and accountability.
- Reputation and Public Perception: Ridicule and pressure tactics can damage public trust and be perceived as unfair. Activists should focus on clear messaging, seek neutral endorsements, and align their efforts with widely shared societal values.
- Co-optation and Dilution: Institutional actors might use radical language or tactics to justify minor reforms. To maintain momentum, it is essential to continuously engage with grassroots groups, regularly assess outcomes, and be prepared to adjust strategies in response to any potential co-optation.
8. Implications and New Research Directions
8.1. Challenging the Technocratic Ethos
- Who determines ethical priorities in AI development?
- Whose harms are acknowledged, and whose are overlooked?
- What kinds of epistemic justice are ignored in mainstream AI discussions?
8.2. Reclaiming Civic Agency in AI Systems
- Participatory AI Design: How can communities collaboratively design or manage algorithmic systems?
- Algorithmic Dissent: What methods (like reverse engineering, data strikes, and adversarial testing) can citizens use to challenge AI harms? A minimal adversarial-testing sketch follows this list.
- Tech Activism: How do artists, hackers, and organizers disrupt AI systems and narratives?
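To make "adversarial testing" concrete, the sketch below shows one minimal form of it: submitting paired queries that differ only in a protected or proxy attribute and recording divergent decisions as evidence for contestation. The `opaque_scorer` function, its field names, and the sample values are hypothetical stand-ins for a real black-box system, not part of any system discussed in this article.

```python
# Minimal sketch of counterfactual ("adversarial") testing of an opaque decision system.
# The scorer below is a hypothetical placeholder for a black-box API that civic
# auditors cannot inspect directly.

def opaque_scorer(applicant: dict) -> bool:
    """Hypothetical black-box decision system (placeholder, not a real service)."""
    score = 0.4 * applicant["income"] / 1000 + 0.6 * (applicant["zip_code"] not in {"10451", "10452"})
    return score >= 0.5

def counterfactual_probe(applicant: dict, attribute: str, alternative) -> dict:
    """Re-run the same applicant with only `attribute` changed and compare decisions."""
    twin = dict(applicant, **{attribute: alternative})
    original, flipped = opaque_scorer(applicant), opaque_scorer(twin)
    return {"original": original, "counterfactual": flipped, "divergent": original != flipped}

if __name__ == "__main__":
    applicant = {"income": 900, "zip_code": "10451"}
    # A divergent outcome suggests the protected or proxy attribute drives the decision.
    print(counterfactual_probe(applicant, "zip_code", "10001"))
```

Probes of this kind do not prove discrimination on their own, but aggregated divergences give organizers documented, reproducible evidence to bring to regulators or the public.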
8.3. From Ethical AI to Algorithmic Resistance
- Whistleblowing and Professional Ethics: What risks and moral duties do insiders face when opposing unethical AI practices?
- Slow Tech and Degrowth: Can AI development be integrated within post-growth or anti-extractive frameworks?
- Decolonizing AI: How can indigenous, feminist, or postcolonial perspectives reshape our understanding of "intelligence" and "governance"?
8.4. Institutional Realignment
- Build cross-border algorithmic governance coalitions in the Global South.
- Establish community-led audit and redress mechanisms as alternatives to state or platform oversight; a minimal audit sketch follows this list.
- Highlight successful resistance case studies, such as facial recognition bans, data justice movements, and non-state actor algorithmic audits.
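As a concrete illustration of what a community-led audit mechanism might compute, the following sketch derives per-group selection rates from a log of decisions and flags groups falling below the widely used four-fifths ratio. The field names and toy records are assumptions for illustration only; a real audit would draw on disclosed or collectively gathered decision data.

```python
# Minimal sketch of a community-led disparity audit over a log of (group, decision) records.
from collections import defaultdict

def selection_rates(records):
    """Return {group: share of positive decisions} from (group, decision) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_report(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the best-off group's rate."""
    rates = selection_rates(records)
    reference = max(rates.values())
    return {g: {"rate": round(r, 2), "ratio": round(r / reference, 2), "flagged": r / reference < threshold}
            for g, r in rates.items()}

if __name__ == "__main__":
    # Illustrative decision log: (demographic group, approved?)
    log = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
    print(disparity_report(log))
```

Even a simple report of this form can anchor redress claims, because it translates diffuse experiences of harm into figures a coalition can publicize and contest.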
8.5. Emergent Research Questions from the Framework
- How can grassroots organizations influence the design of algorithms and policy in local governance?
- What strategies help communities resist algorithmic surveillance?
- How do civic actors address power imbalances in AI use, particularly in low-resource settings?
- What does practical algorithmic refusal entail, and how can it be legally and institutionally protected?
- Is it possible to create a model of AI accountability that prioritizes counter-power over compliance?
9. Conclusion
References
- Ayana, G.; Dese, K.; Daba Nemomssa, H.; Habtamu, B.; Mellado, B.; Badu, K.; Yamba, E.; Kong, J. D. Decolonizing global AI governance: assessment of the state of decolonized AI governance in Sub-Saharan Africa. Royal Society Open Science 2024, 11(8). [Google Scholar] [CrossRef] [PubMed]
- Birhane, A. Algorithmic injustice: a Relational Ethics Approach. Patterns 2021, 2(2), 100205. [Google Scholar] [CrossRef] [PubMed]
- Calzada, I. The Right to Have Digital Rights in Smart Cities. Sustainability 2021, 13(20), 11438. [Google Scholar] [CrossRef]
- Conger, K.; Fausset, R.; Kovaleski, S. F. San Francisco Bans Facial Recognition Technology. The New York Times. 14 May 2019. Available online: https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.
- Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press, 2021. [Google Scholar]
- Delgado, F.; Yang, S.; Madaio, M.; Yang, Q. The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. 2023. [Google Scholar] [CrossRef]
- Eubanks, V. Automating Inequality: How high-tech tools profile, police, and punish the poor; St. Martin’s Press, 2018. [Google Scholar]
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; Schafer, B.; Valcke, P.; Vayena, E. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 2018, 28(4), 689–707. [Google Scholar] [CrossRef] [PubMed]
- Friedrich Naumann Foundation. How Public Participation Can Improve AI Governance: vTaiwan’s Initiatives. 22 October 2024. Available online: https://www.freiheit.org/taiwan/how-public-participation-can-improve-ai-governance-vtaiwans-initiatives.
- Greene, D.; Hoffmann, A. L.; Stark, L. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences; 2019. [Google Scholar] [CrossRef]
- New America. AI for the People, By the People. 2023. Available online: https://www.newamerica.org/the-thread/artificial-intelligence-governance/.
- Noble, S. U. Algorithms of Oppression: How Search Engines Reinforce Racism; NYU Press, 2018. [Google Scholar] [CrossRef]
- OECD. Tackling Civic Participation Challenges with Emerging Technologies; OECD Publishing, 2025; Available online: https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/04/tackling-civic-participation-challenges-with-emerging-technologies_bbe2a7f5/ec2ca9a2-en.pdf?
- Alinsky, S. D. Rules for Radicals: A Practical Primer for Realistic Radicals; Vintage: New York, 1971. [Google Scholar]
- Tacheva, Z.; Ramasubramanian, S. AI Empire: Unraveling the interlocking systems of oppression in generative AI’s global order. Big Data & Society 2023, 10(2). [Google Scholar] [CrossRef]
- Toupin, S. Shaping feminist artificial intelligence. New Media & Society 2023, 26(1), 146144482211507. [Google Scholar] [CrossRef]
- Venkatachalapathy, R. Participatory Engineering of Algorithmic Social Technologies: An Extended Book Review of David G. Robinson’s Voices in the Code. Harvard Data Science Review 2025, 7(2). [Google Scholar] [CrossRef]
- Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs, 2019.


| Mainstream Framework | Radical Reinterpretation |
| --- | --- |
| Bias and Discrimination | Algorithmic oppression rooted in systemic inequalities |
| Transparency | Strategic exposure for public accountability and counter-power |
| Explainability | Contestability as a civic right, not a corporate design feature |
| Responsible Innovation | Ethical refusal: saying no to harmful systems |
| Human-in-the-Loop | Power re-entry: reclaiming agency, not just oversight |
| Tech Ethics Frameworks | Performative compliance without structural change |
| AI Governance Principles | Technocratic paternalism vs. democratic negotiation |
| Alinsky’s Original Rule | AI Adaptation |
| --- | --- |
| Power is not solely determined by what one possesses, but also by what one's adversary perceives one possesses. | Leverage civic pressure (e.g., audits, exposure, boycotts) to challenge AI development norms. |
| Ridicule is often regarded as the most powerful weapon at the disposal of mankind. | Use satire and public critique to expose myths of AI neutrality and ethical posturing. |
| Hold the adversary accountable to their own stated rules and principles. | Hold corporations accountable to their own AI principles and public ethics declarations. |
| An effective strategy is one that is well-received by your team. | Design accessible, creative resistance strategies (e.g., adversarial art, protest games). |
| Maintain consistent pressure; do not ease your efforts. | Sustain attention to AI harm through campaigns, FOIA requests, and public testimonies. |
| Select the target, freeze it, personalize it, and polarize it. | Target opaque systems or decision-makers (e.g., facial recognition vendors) for scrutiny. |