Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Mitigating Age-related Bias in Predictive Policing Algorithms

Version 1 : Received: 22 November 2023 / Approved: 23 November 2023 / Online: 24 November 2023 (02:42:46 CET)

How to cite: Almasoud, A.S. Mitigating Age-related Bias in Predictive Policing Algorithms. Preprints 2023, 2023111534. https://doi.org/10.20944/preprints202311.1534.v1

Abstract

This study addressed algorithmic bias in predictive policing, focusing on the Chicago Police Department's Strategic Subject List (SSL) dataset. We specifically focused on identifying and mitigating age-related biases, a notably underexplored area in prior research. Our research introduced Conditional Score Recalibration as a bias mitigation strategy alongside the well-established Class Balancing technique. Conditional Score Recalibration involved reassessing and adjusting the risk scores of individuals initially assigned moderately high-risk scores in the dataset. The recalibration marked such individuals as low risk if they met three conditions: no prior arrests for violent offenses, no prior arrests for narcotic offenses, and no involvement in shooting incidents. These fairness strategies were applied to a Random Forest model and evaluated with three fairness metrics: Equality of Opportunity Difference, Average Odds Difference, and Demographic Parity. The results showed a significant improvement in model fairness, particularly with respect to age, without compromising the model's accuracy. These findings challenge the often-assumed trade-off between fairness and accuracy, underscoring that fairness can be achieved without sacrificing predictive performance.
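The recalibration rule described above can be sketched as a simple filter over individual records. The field names, the score band treated as "moderately high," and the low-risk score below are illustrative assumptions for this sketch, not the paper's or the SSL dataset's actual parameters.

```python
# Assumed thresholds for illustration only; the paper does not publish
# these exact values here.
MODERATE_HIGH_BAND = (300, 420)  # assumed "moderately high" SSL score range
LOW_RISK_SCORE = 150             # assumed score assigned after recalibration

def conditional_score_recalibration(record):
    """Return a copy of the record, downgraded to low risk when its score
    falls in the moderately high band and all three conditions hold:
    no violent arrests, no narcotic arrests, no shooting involvement."""
    out = dict(record)
    in_band = MODERATE_HIGH_BAND[0] <= record["ssl_score"] <= MODERATE_HIGH_BAND[1]
    meets_conditions = (
        record["violent_arrests"] == 0
        and record["narcotic_arrests"] == 0
        and record["shooting_incidents"] == 0
    )
    if in_band and meets_conditions:
        out["ssl_score"] = LOW_RISK_SCORE
    return out

# Two hypothetical records: the first satisfies all three conditions,
# the second has prior violent arrests and is left unchanged.
records = [
    {"ssl_score": 410, "violent_arrests": 0, "narcotic_arrests": 0, "shooting_incidents": 0},
    {"ssl_score": 340, "violent_arrests": 2, "narcotic_arrests": 0, "shooting_incidents": 0},
]
adjusted = [conditional_score_recalibration(r) for r in records]
# adjusted[0]["ssl_score"] → 150 (recalibrated); adjusted[1]["ssl_score"] → 340
```

In this reading, recalibration is a pre-processing step applied to the labels/scores before training, which is one plausible way such a rule could coexist with Class Balancing in the same pipeline.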

Keywords

predictive policing; algorithms; fairness; age bias; strategic subject list

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
