Preprint · Review · This version is not peer-reviewed (a peer-reviewed article of this preprint also exists).

Exploring the Integration of a Nuptial Bond Between Neuroprediction and AI in Criminal Justice: A Review Study Conducted for Indian Judiciary

Submitted: 07 May 2024 · Posted: 09 May 2024


Abstract
The prognostic abilities of artificial intelligence (AI) and neuroscience in forensics and the criminal justice system offer a reformatory paradigm for understanding criminal conduct. While AI is credited with transformational data-analytic capabilities, neuropredictive approaches enable an intricate understanding of culpability and criminal propensity. This review analyzes the literature on the complex nature of neuroprediction and AI, their ethical implications, and their usability in curbing recidivism, and elucidates their interplay, their "nuptial" relationship, and their convergence in the quest for justice. The consequences of failing to protect individual rights in the criminal justice system are surveyed using grounded theory, and the degree of acceptability and dependability of AI-generated evidence in legal proceedings is reviewed. These topics have yet to be considered under one roof in a single argumentative view. The author expects to prompt readers and newcomers to pursue further sociolegal and technological research before such tools are incorporated into the Indian judiciary. The review focuses on the quandary of whether to blame the inclusion of such technology wholly, or rather to prioritize the acquisition of bias-free pretrained datasets and processing models.
Subject: Social Sciences - Law

INTRODUCTION

The combination of neuroprediction with artificial intelligence (AI) and machine learning (ML) tools gives rise to significant ethical considerations regarding privacy, autonomy, and the possible improper exploitation of delicate neurological information; their involvement in criminal investigation and justice may carry sociolegal repercussions that outweigh the benefits. Neuroprediction in criminal justice applies neuroscience to predict possible criminal conduct, while AI uses machine learning tools for data analysis and decision-making (Fernando et al., 2023). Such algorithms are designed to transform the criminal justice system by delivering predictive insights into human behavior and decision-making processes (Kanwel et al., 2023). As this convergence develops, integrating technical breakthroughs with ethical concerns and legal protections becomes important for harnessing revolutionary potential while protecting basic rights and ethical norms in the criminal justice realm (Morse, 2015; Jones et al., 2014).
The use of AI and ML algorithms for predictive policing and deterministic judgments is gaining prominence, and predicting the risk of recidivism has become a matter of paramount importance, especially at the pretrial, bail, sentencing, and parole stages (Gijs van Dijck, 2022). This review identifies three pressing issues. First, bias largely resides in the datasets or the training model (Mark MacCarthy, 2017). Second, in-built processing models must be evaluated to prevent additional bias being introduced by subsequent human-machine interaction (HMI) (Jiaming Zeng et al., 2017). Third, defaults in deterministic or predictive models can yield a hallucinating or otherwise imperfect AI system (Anthony W. Flores et al., 2016). Using advanced technologies to understand recidivism risk assessment scales, the general scale (GRRS) versus the violent scale (VRRS) (Northpointe, 2016), together with implementing fairness models and ethical data preservation, presents a significant challenge in achieving a flawless AI algorithm. The use of AI in predictive policing has been the subject of substantial scientific analysis; one study highlighted algorithmic bias in predictive policing models (Lum and Isaac, 2016), revealing possible racial discrepancies in crime predictions and raising issues about the fairness and accuracy of such algorithms.
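To make the dataset-versus-model question concrete, the sketch below audits a hypothetical risk classifier for unequal error rates across two groups, in the spirit of the disparity findings cited above. Everything in it (the group names, the synthetic outcomes, the bias mechanism) is invented for illustration and does not reproduce any vendor's tool or any cited study.

```python
# Hypothetical audit of a recidivism classifier's error rates by group.
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = np.sum((y_pred == 1) & (y_true == 0)) / np.sum(y_true == 0)
    fnr = np.sum((y_pred == 0) & (y_true == 1)) / np.sum(y_true == 1)
    return fpr, fnr

rng = np.random.default_rng(0)
for group, overprediction in (("group_a", 0.30), ("group_b", 0.10)):
    y_true = rng.integers(0, 2, size=500)          # 1 = reoffended (simulated)
    # A biased model over-predicts risk more often for one group:
    y_pred = np.where(rng.random(500) < overprediction, 1, y_true)
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"{group}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

Equalizing such error rates across groups is only one of several competing fairness criteria, which is partly why the debate summarized above remains unresolved.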
Evaluating the precision and efficacy of neuroprediction and AI technologies in forecasting behavior or assisting investigations may substantially influence law enforcement procedures, sentencing, and case outcomes. An essential task is analyzing the biases present in AI algorithms intended for criminal justice systems. Anticipating progress in the amalgamation of neuroprediction and AI will help in planning for possible problems and opportunities, motivating continued research and development in the area. Facilitating cooperation among neuroscientists, ethicists, policymakers, and legal professionals is essential for developing inclusive strategies that harmonize technical advancement with ethical deliberation in the judicial system. This paper surveys literature from 2013 to 2023 to frame a summary of predictive policing, recidivism risk assessment, and the technologies involved, along with their sociolegal and ethical repercussions.

RESEARCH OBJECTIVES

1. Assessing the effectiveness of current Neuroprediction and AI technologies in enhancing criminal investigations and influencing judicial decision-making processes.
2. Analyzing their convergence within the criminal justice system, focusing on aspects such as fairness, bias mitigation, data storage, and processing techniques.
3. Exploring global public perceptions regarding their adoption in predictive policing and deterministic judgments, while analyzing the associated ethical and legal implications.
4. Investigating potential future trajectories and collaborative opportunities for their methodologies and tools within the context of Indian Judiciary.

RESEARCH QUESTIONS

1. What is the optimal prioritization strategy: verifying humanly biased pretrained datasets, or evaluating algorithmic learning/training models?
2. Should processing models in AI technologies undergo scrutiny alongside the algorithms and training datasets, to guarantee freedom from biases likely to be introduced by subsequent human-machine interactions?
3. What contributes more to the increase in false positives and false negatives in deterministic/predictive methods: pretrained datasets, or the default settings of the algorithmic training model?

REVIEW ANALYSIS

State of the Art - AI-Based Neuroimaging Technology: Neuroprediction is the use of structural or functional brain characteristics to forecast treatment outcomes, prognoses, and behavior. Although the use of neurovariables is a new technology, the ethical problems it raises are largely old ones (Morse, 2015). Effective brain-mapping technologies must overcome a number of challenges, such as continually observing and modifying neural activity; unlike simple open-loop neurostimulation devices, closed-loop approaches respond to the moment-to-moment state of the brain (Herron et al., 2017). Novel experimental frameworks are required that leverage clever computational approaches able to rapidly perceive, understand, and modify vast volumes of data from behaviorally important brain circuits (Redish and Gordon, 2016). AI/ML in computational psychiatry and other emerging approaches are such examples.
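As a toy illustration of the open-loop/closed-loop distinction attributed to Herron et al. (2017) above, the sketch below contrasts a fixed stimulation schedule with a policy that reacts to a sensed biomarker. The signal, threshold, and period are invented for the example; real devices rely on validated neural features and safety logic.

```python
# Toy contrast between open-loop and closed-loop neurostimulation logic.
import numpy as np

rng = np.random.default_rng(1)
biomarker = rng.normal(0.0, 1.0, size=100)   # proxy for moment-to-moment brain state

def open_loop_schedule(n_steps, period=10):
    """Stimulate on a fixed schedule, ignoring brain state."""
    return [t % period == 0 for t in range(n_steps)]

def closed_loop_policy(signal, threshold=1.5):
    """Stimulate only when the sensed biomarker crosses a threshold."""
    return [s > threshold for s in signal]

print("open-loop pulses:  ", sum(open_loop_schedule(len(biomarker))))
print("closed-loop pulses:", sum(closed_loop_policy(biomarker)))
```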
Explainable artificial intelligence (XAI), a relatively new set of methodologies, combines sophisticated AI and ML algorithms with potent explanatory methods to produce explainable solutions that have succeeded in a variety of domains (Fellous et al., 2019). Recent research shows that XAI may guide the study of basic brain-circuit changes and therapeutic interventions (Holzinger et al., 2017; Langlotz et al., 2019), and XAI for neurostimulation in mental health is a development of brain-machine interface (BMI) design (Vu et al., 2018). Multivoxel pattern analysis (MVPA) studies multivoxel patterns in the human brain, combining data from several voxels within a region to distinguish between delicate cognitive activities or content domains (Ombao et al., 2017). Noninvasive anatomical and functional neuroimaging has advanced significantly over the last ten years, yielding vast quantities of data and statistical software; high-dimensional modeling and learning approaches are therefore crucial for applying statistical machine learning to enormous volumes of neuronal data with increasing accuracy (Alexandre et al., 2014), and the adoption of MVPA methods has gained popularity in neuroimaging for health and clinical research (Hampshire and Sharp, 2015). In motor decision-making, population-level neural data can be decoded to veto self-initiated movements up to about 200 ms after movement onset (Schultze-Kraft et al., 2016), and lie-detection methods can, to some extent, distinguish between intentions, perceptual states, and healthy versus diseased brains (Blitz, 2017). Clinical applications focus on neurological disorders, given the broad agreement that response inhibition is an emergent property of a network of distinct brain regions (Jiang et al., 2019).
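A minimal MVPA-style decoding sketch, in the spirit of the scikit-learn tooling described by Alexandre et al. (2014), is given below. The "voxel" data and the two cognitive states are synthetic, and the cross-validation stands in for the generalization estimate discussed next; real pipelines add masking, detrending, and subject-level nesting.

```python
# Decode two synthetic "cognitive states" from multivoxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
n_trials, n_voxels = 200, 500
y = rng.integers(0, 2, size=n_trials)        # condition label per trial
X = rng.normal(size=(n_trials, n_voxels))    # simulated voxel activations
X[y == 1, :20] += 0.5                        # weak signal in the first 20 voxels

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)    # out-of-sample decoding accuracy
print(f"decoding accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```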
Behavioral traits can be associated with aspects of the human brain, opening up new opportunities for predictive algorithms that estimate an individual's criminal disposition (Mirabella and Lebedev, 2017). The validity of a prediction model is judged by its ability to generalize; for most learning algorithms, the standard practice is to estimate generalization performance. The adoption of neuroprediction, as defined above, requires approaches that carry inference from group-level findings to individual predictions (Tortora et al., 2020). The progress of neuroimaging in conjunction with AI, particularly ML techniques such as brain mapping, functional magnetic resonance imaging (fMRI), convolutional neural networks (CNNs), natural language processing (NLP), and speech recognition, has produced brain-reading gadgets with cloud-based banks of neuro-biomarkers. Potential future applications include deception detection, neuromarketing, and brain-computer interfaces (BCIs), some of which may be used in forensic psychiatry (Meynen, 2019). fMRI has also been applied prospectively to forecast rates of recidivism among individuals with criminal backgrounds (Aharoni et al., 2013). Such studies have generated interest in using neural data for prediction within criminal justice.
Convergence of AI and Neuroprediction in Forensics: Research has sought structural and functional neuromarkers of personality disorders whose main characteristic is persistent antisocial conduct, such as antisocial personality disorder (ASPD) and psychopathy, as these are most correlated with high rates of recidivism (Umbach et al., 2015). There is a need to collect biomarkers of the "criminal" brain, and such integration of neurobiology and neuroprediction should aid socio-rehabilitation strategies rather than curb individual rights (Coppola, 2018). Various techniques can improve the accuracy of risk evaluations and uncover effective therapies in forensic psychiatry. This method, known as "A.I. Neuroprediction" (Zico Junius Fernando et al., 2023), involves identifying neurocognitive factors that might predict the likelihood of reoffending. It is necessary to identify the enduring effects of these tools while recognizing the contributions of neuroscience and artificial intelligence to the assessment of the risk of violence (Bzdok and Meyer-Lindenberg, 2018).
The combination of neuroprediction and AI shows potential for supporting law enforcement and judicial institutions in early risk assessment, intervention, and rehabilitation initiatives (Gaudet et al., 2016; Jackson et al., 2017; Greely & Farahany, 2019; Hayward & Maas, 2020). However, this confluence also presents ethical, legal, and privacy problems: privacy itself (Farayola et al., 2023), bias and discrimination (Ntoutsi et al., 2020; Srinivasan & Chander, 2021; Belenguer, 2022; Shams et al., 2023), consent and coercion (Ghandour et al., 2013; Klein & Ojemann, 2016; Rebers et al., 2016), and cognitive liberty (Muñoz, 2023; Shah et al., 2021; Daly et al., 2019; Lavazza, 2018; Ienca & Andorno, 2017; Sommaggio et al., 2017; Ienca, 2017). The ethical consequences of anticipating criminal propensities, and the possible exploitation of such insights, underscore the necessity for rigorous ethical frameworks and strict laws (Poldrack et al., 2018; Eickhoff & Langner, 2019). Moreover, guaranteeing openness, accountability, and fairness in the employment of these technologies inside the criminal justice system becomes crucial (Meynen, 2019). The use of AI-powered brain-mapping technology to predict acts of violence and subsequent rearrests is a cause for concern and distress (L. Belenguer, 2022). Such methodologies may be used in the future within forensic psychiatry and criminal justice; however, dilution of the right to privacy (Ligthart SLTJ, 2019) can carry serious ethical and legal consequences.
Technologies used in Crime Detection, Investigation and Prediction: This section covers traditional AI, computer vision, data mining, and AI decision-making models in the criminal justice system. Between 2018 and 2023, a large influx of literature reviews across interdisciplinary domains has discussed the technologies and software instruments used in criminal justice (Varun Mandalapu et al., 2023). Machine learning is a subset of artificial intelligence, while deep learning and data mining methods are subsets of ML. Machine learning uses statistical models and algorithms to analyze a dataset and then predict from it, whereas deep learning uses multi-layer neural networks to capture complex, intricate relationships between inputs and outputs (C. Janiesch et al., 2021; W. Safat et al., 2021). ML techniques involve training on datasets, mainly through supervised and unsupervised learning. Traditional AI and ML technologies such as support vector machines, decision trees, random forests, and logistic regression have been heavily exploited to analyze the facts of a crime and identify patterns for predicting similar criminal activity (S. Kim et al., 2018); such traditional tools also achieve very high accuracy in anomaly detection and crime-data analysis with limited datasets (S. Goel et al., 2021). Notable examples of ML regression techniques include the ARIMAX method applied to motorcycle theft in the city of Yogyakarta, with an RMSE of 6.68 (E. P. Utomo et al., 2018); crime data modeled via ARIMA (C. Catlett et al., 2019); random forest (RF), RepTree, and ZeroR (D. M. Raza et al., 2021); and Chicago crime models with RMSEs of 57.8, 29.85, and 16.19 for crime-dense regions CDR1, CDR2, and CDR3, respectively (C. Catlett et al., 2014). Clustering-related methods including linear regression (LR), logistic regression (LOR), and gradient boosting have been applied to Saint Petersburg (Russia) crime data, with an R-squared of 0.9 (V. Ingilevich and S. Ivanov, 2018). A random forest regressor (RFR) applied to data from the Department of Informatics of the Brazilian Public Health System (DATASUS) achieved up to 97% accuracy, with an adjusted R-squared of 80% on average (L. G. A. Alves et al., 2018). Deep learning algorithms such as convolutional and recurrent neural networks are promising for crime prediction (Sarker, 2021). Predictive policing using these algorithms, trained on crime data with spatial or temporal components, has been found quite accurate in specific US cities (A. Meijer and M. Wessels, 2019). Predictive models often use pretrained data such as the time, location, and type of crime incidents to predict future criminal activity and identify criminal hotspots (S. Hossain et al., 2020).
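As a sketch of the ARIMA-style forecasting several of the cited studies employ (e.g., Utomo, 2018; Catlett et al., 2019), the snippet below fits a model to simulated weekly crime counts and reports the RMSE on a holdout, mirroring how the error figures quoted above are typically produced. The series and the (p, d, q) order are illustrative, not taken from any real dataset.

```python
# ARIMA forecast of simulated weekly crime counts, scored by holdout RMSE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
weeks = 120
trend = np.linspace(50, 70, weeks)                     # slow upward drift
counts = trend + 5 * np.sin(np.arange(weeks) / 4) + rng.normal(0, 3, weeks)

train, test = counts[:100], counts[100:]
model = ARIMA(train, order=(2, 1, 1)).fit()            # illustrative order
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"RMSE over a 20-week holdout: {rmse:.2f}")
```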
With crime prediction using computer vision and video analysis (Neil Shah et al., 2021), technologies analyze footage from surveillance cameras at various locations to detect, identify, and classify criminal activities such as theft, assault, and robbery; drone and aerial technologies extend this surveillance for city safety and security. Deep learning algorithms are used to analyze criminal data from various sources, enhancing the responsiveness of crime prevention in real time (M. Saraiva et al., 2022). Data-mining methods stand as a powerful asset underpinning criminal investigative procedures (T. Chandrakala et al., 2020). In digital forensics, the novel support vector neural network (NSVNN) is currently being developed and is held out as a reliable approach to anomaly detection in criminal investigation (Umar Islam et al., 2023). Other deep learning mechanisms, such as deep belief networks (DBNs) and clustering-based methods, provide further approaches to anomaly identification in digital forensics (Ashraf et al., 2022). Deep neural networks (DNNs) using feature-level data fusion can efficiently fuse multi-modal data from several domains within related environmental contexts (Kang HW, Kang HB, 2017). Researchers have also used Google TensorFlow to forecast crime hotspots, evaluating a recurrent neural network (RNN) architecture on precision, accuracy, and recall (Zhuang Y, 2017). A comparative study of violent crime patterns was carried out using the open-source data-mining software WEKA, implementing three algorithms, namely linear regression, additive regression, and decision stump, to determine the efficiency and efficacy of ML algorithms in predicting violent crime patterns, criminal hotspots, profiles, and trends (McClendon L, Meghanathan N, 2015).
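The NSVNN's internals are not reproduced in the sources reviewed here, so the sketch below only illustrates the generic shape of forensic anomaly detection, substituting scikit-learn's IsolationForest and invented log features for the specialized model.

```python
# Generic anomaly detection over invented forensic log features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(980, 4))     # e.g., session length, bytes moved, ...
anomalous = rng.normal(6, 1, size=(20, 4))   # injected outliers
X = np.vstack([normal, anomalous])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)                 # -1 flags a suspected anomaly
print("flagged records:", int(np.sum(labels == -1)))
```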
Fairness versus Bias: The process models underlying these technologies are often accused of bias, with no demonstrable fairness in their predictive or deterministic algorithms. In the justice system, fairness means the rule of law; when investigation and justice delivery are AI-based, fairness and freedom from bias are of paramount importance, and AI algorithms forecasting recidivism risk must prioritize fairness as their use expands across jurisdictions worldwide. One study measured the discrimination, bias, fairness, and trustworthiness of AI algorithms to ensure the absence of prejudice (Daniel Varona et al., 2022); unchecked discrimination creates unfairness in AI algorithms for predicting recidivism (Ninareh Mehrabi et al., 2021). Scholars have already attributed the logic of GIGO ("garbage in, garbage out") or RIRO ("rubbish in, rubbish out") to the quality of pretrained datasets leading to unfair AI algorithms. Discrimination in AI/ML algorithms has been formally defined (Verma & Rubin, 2018), with bias arising in modelling, training, and usage (Ferrer, 2021). Arguably, algorithms alone cannot eliminate discrimination, since outcomes are shaped by the initial data received; when the underlying data are unfair, AI systems can perpetuate widespread inequality (Chen, 2023). Frameworks have been proposed for discovering and removing two types of discrimination, where indirect discrimination is caused by direct discrimination (Lu Zhang et al., 2016); for example, a classifier trained on historical data (direct discrimination) may tune an apparently neutral, non-protected attribute (indirect discrimination), causing unfairness and inequality. Analyses of direct discrimination have audited black-box algorithms to mitigate bias rooted in pretrained datasets or attributes that cause discrimination, unfairness, and untrustworthiness (Daniel Varona et al., 2022), and for indirect (unintended and not necessarily unfair) discrimination, a novel probabilistic data pre-processing formulation has been introduced to control group discrimination and limit distortion in individual datasets (Flavio du Pin Calmon et al., 2018). Sources of unfairness are not limited to discrimination but extend to bias; the types include data bias, model bias, and model-evaluation bias (Michael Mayowa Farayola et al., 2023). Several studies have found that the use of historical data causes measurement bias (Richard et al., 2023; Dana Pessach et al., 2022; Eike Petersen et al., 2023). Even fair data are not sufficient, as a biased model can produce unfair predictions without justification (Davinder Kaur et al., 2022), and unfairness can also increase through incorrect evaluation metrics, i.e., biased feedback (Arpita Biswas and Suvam Mukherjee, 2021). A fairness pipeline model comprising pre-processing, in-processing, and post-processing steps has been proposed (Mingyang Wan et al., 2023; Felix Petersen, 2021): pre-processing guards the ethical growth of the AI model, the in-processing phase focuses on tuning the algorithm, and the post-processing phase addresses prejudice and bias at the assessment stage of the AI lifecycle.
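A skeleton of that three-stage pipeline is sketched below. The concrete technique chosen for each stage (sample reweighting, weight-aware fitting, group-specific decision thresholds) is a common textbook illustration, assumed here for concreteness rather than drawn from the cited reviews.

```python
# Skeleton of a pre-/in-/post-processing fairness pipeline on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def preprocess_reweight(y, group):
    """Pre-processing: weight samples so each (group, label) cell counts equally."""
    weights = np.ones(len(y))
    cells = [(g, lab) for g in np.unique(group) for lab in np.unique(y)]
    for g, lab in cells:
        mask = (group == g) & (y == lab)
        weights[mask] = len(y) / (len(cells) * max(mask.sum(), 1))
    return weights

def inprocess_fit(X, y, sample_weight):
    """In-processing: the learner consumes fairness-aware weights while fitting."""
    return LogisticRegression().fit(X, y, sample_weight=sample_weight)

def postprocess_decide(scores, group, thresholds):
    """Post-processing: group-specific decision thresholds at assessment time."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, group)])

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 3))
group = rng.integers(0, 2, size=400)
y = (X[:, 0] + 0.5 * group + rng.normal(0, 1, size=400) > 0).astype(int)

weights = preprocess_reweight(y, group)
model = inprocess_fit(X, y, weights)
decisions = postprocess_decide(model.predict_proba(X)[:, 1], group,
                               thresholds={0: 0.5, 1: 0.5})
print("overall positive decision rate:", round(float(decisions.mean()), 3))
```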
AI delivering Justice: Neural data and other neuro-biomarkers used to predict recidivism can clearly be of interest for additional purposes, such as to health insurers or when evaluating potential employees, which also raises consent issues (Caulfield and Murdoch, 2017). Artificial intelligence should not be allowed to hallucinate in critical arenas such as the criminal justice system, and data integrity is imperative: a thorough examination of pretrained data is needed to detect and correct biases at their origin. The admissibility of neurological evidence gathered by neuroimaging methods such as fMRI has been brought into doubt by legal cases in the most developed nations, as in United States v. Jones (2012), where the courts encountered challenges in assessing the dependability and pertinence of the evidence. Algorithmic transparency, likewise, can never be negated and needs to override closed-source risk assessment tools. AI also plays an impactful role in sentencing and decision-making across many nations, and judicial rulings on the use of AI algorithms in sentencing have varied. Wisconsin v. Loomis (2016) in the United States highlighted the need for openness in the use of AI-generated risk assessments within sentence determinations, while Carpenter v. United States (2018) highlighted the constitutional consequences of using people's personal data for predictive objectives, addressing apprehensions around privacy and data gathering.
The COMPAS algorithm (L. Belenguer, 2022), developed by Northpointe (now Equivant), is used in US courts to assess the likelihood that a defendant will commit another offense. It uses risk assessment scales to predict general and violent recidivism, as well as pre-trial offending, and its practitioner's guide draws on behavioral and psychological factors to predict reoffending and criminal paths. The General Recidivism Scale predicts the probability of engaging in new criminal behavior after release, while the Violent Recidivism Scale assesses the probability of violent reoffending after a prior conviction. However, a ProPublica investigation (C. Rudin, 2019) revealed that Black defendants were almost twice as likely to be classified as higher risk by COMPAS even when they did not actually reoffend. And although COMPAS is promoted as offering superior precision, its accuracy has been found to be no better than that of individuals without criminal justice expertise.
Existing AI technologies in India: In India, the Punjab Police, in collaboration with Staqu Technologies, has implemented an AI-powered facial recognition system. The Cuttack Police has used AI-powered devices to help investigating officers adhere to investigative protocols. The Uttar Pradesh police has introduced an AI-powered facial recognition application named 'Trinetra' to help resolve criminal cases. The government of Andhra Pradesh has introduced 'e-Pragati', a database containing electronic Know Your Customer (e-KYC) information for millions of individuals in the state. The Delhi Police, in collaboration with IIT Delhi, has established an artificial intelligence centre to manage criminal activities (Varun VM, 2020). It is important to note that the right to privacy, guaranteed under Article 21 of the Indian Constitution, holds paramount importance; the banking of neuro-biomarkers may not be permissible where it violates that right. Utilizing AI in judicial settings has the potential to affect case outcomes and may lead to disparities in sentencing. Moreover, without well-curated neuro-biobanks, designing AI algorithms for predictive policing, recidivism risk assessment, and deterministic judgments is likely to be impossible. If neuroprediction and AI are incorporated into the Indian criminal justice system, they will likely give rise to ethical concerns about bias and the possibility of prejudice.

CONCLUSION

Summary of Key Findings: The key findings of this review shed light on the optimal prioritization strategy for addressing biases in AI technologies, focusing on humanly biased pretrained datasets and algorithmic learning/training models. Techniques such as model-bias evaluation and in-phase processing checks are needed to identify biases inside learning and training algorithms, guaranteeing that they do not perpetuate or magnify preexisting prejudice. Ongoing assessment is quintessential, demanding consistent evaluation and improvement of both the data and the algorithms to minimize any biases that arise or remain. To ascertain the default cause of inaccurate predictions, it is necessary to comprehend the origins of biases and their dissemination inside the AI system. Ensuring that responsibility and correction mechanisms are in place throughout both the data curation and algorithmic learning phases is also essential for establishing fairness and accuracy in AI-powered decision-making. Thorough cross-validation, recalibration, scrupulous data gathering, and simultaneous verification are essential across the wide range of brain-data sources; this approach ensures privacy, promotes fairness, confronts prejudice, and supports human-machine dependability. Undoubtedly, a fair and unbiased trial demands an equitable and flawless algorithm. Pretrained data previously shaped by human biases will naturally introduce biases into the system, on the same principle that governs logical argumentation: soundness implies validity, but validity does not imply soundness.
While the optimal strategy depends on the specific context, addressing biases in pretrained datasets is foundational, since they directly produce biased outputs regardless of the model used. Once datasets are verified for biases, evaluating the algorithmic learning/training models becomes crucial to ensure they do not introduce additional biases. The review further emphasizes scrutinizing processing models alongside algorithms and training data, to safeguard against biases introduced during human-machine interactions. It also highlights that the increase in false positives and false negatives in deterministic/predictive methods can be influenced by both pretrained datasets and the default settings of training models: biased datasets are the fundamental source of biased predictions, while adjusting model settings such as decision thresholds shifts the balance between false positives and false negatives. These findings underscore the importance of meticulous consideration and calibration of both datasets and model settings to minimize errors, uphold unbiasedness and accuracy, and ensure the delivery of justice and governance by a fair algocracy. The concept of "bias in, bias out" captures the fundamental challenge in AI development, emphasizing the necessity of unbiased, representative data to avoid perpetuating systemic biases. In contexts such as criminal justice, where AI-driven risk assessment tools can exacerbate existing biases, meticulous attention to data collection and processing is imperative to foster fairness and accuracy in AI systems.
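The threshold effect noted above can be shown in a few lines: with simulated risk scores, raising the decision threshold trades false positives for false negatives even before any question of data bias arises. The scores and outcomes here are simulated purely for illustration.

```python
# How a decision threshold trades false positives against false negatives.
import numpy as np

rng = np.random.default_rng(11)
y = rng.integers(0, 2, size=1000)                              # 1 = reoffended (simulated)
scores = np.clip(0.5 * y + rng.normal(0.25, 0.2, 1000), 0, 1)  # imperfect risk scores

for threshold in (0.3, 0.5, 0.7):
    pred = (scores >= threshold).astype(int)
    fp = int(np.sum((pred == 1) & (y == 0)))
    fn = int(np.sum((pred == 0) & (y == 1)))
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```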
Closing Remarks: In conclusion, this review focuses mainly on software currently used across the globe, with its performance analysis and the criticism it has drawn in the public domain. From bytes to bars, the review describes AI algorithms used to send or keep offenders in jail, or at least to predict their likelihood of committing similar crimes. AI algorithms are thus now under public scrutiny, and their deterministic approach is likely to face public challenge. This examination stems from their contested efficacy in predictive policing, crime pattern analysis, and resource allocation, and it highlights the importance of careful calibration to minimize errors and ensure equitable outcomes, since these algorithms use previous crime data to forecast upcoming criminal activity and alert law enforcement. Nevertheless, as discussed above, biases in historical data may perpetuate excessive policing of some groups or classes of citizens. AI now uses advanced algorithms to analyze large datasets and detect trends and irregularities in criminal behavior, but the effectiveness of these methods depends on the precision of the data, the strength of the algorithms, and the capacity to comprehend the results. AI aids resource allocation by forecasting regions that need heightened law enforcement; even so, ethical issues, algorithmic transparency, and accountability remain of utmost importance. The use of AI in judicial courts needs close examination, since it may lead to inconsistencies in sentencing. To realize the promise of AI while ensuring fairness and ethical norms, a comprehensive strategy is crucial, built on collaboration among AI specialists, legal professionals, ethicists, and lawmakers. There is real difficulty in determining the underlying sources of the biases that produce false-positive and false-negative outcomes, as learning and training algorithms may unintentionally magnify these biases, or fail to mitigate them, when training proceeds under an unsupervised learning model. The pursuit of fairness, equality, and equity requires a comprehensive methodology; the key takeaway is to find, address, and remove biases at every stage of an AI algorithm so as to uphold fairness and accuracy in decision-making.

DECLARATIONS

Ethical Approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Disclaimer

This research paper analyzes emerging technologies that cannot be wholly dismissed in today's fast-paced world, and emphasizes the paramount need for interdisciplinary cooperation and a fair, unbiased handshake between law and AI. The paper is intended for educational purposes, and the author's interventions represent student authorship offering an intuition-based educational perspective.

Collaboration

No collaboration has been included.

Future Scope

Multiple studies have been conducted over the last two decades, and more are under way, to diversify and explain the distinctions between artificial intelligence and criminal law; further interdisciplinary research in this direction remains open.

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Acknowledgment

The author acknowledges her research network and peers for helping to prepare the manuscript and for supporting her with their expert guidance in submitting it to the journal.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.

Conflict of Interest

The author declares no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

ABBREVIATIONS

  • AI-Artificial Intelligence
  • ML-Machine Learning
  • HMI- Human Machine Interface
  • GRRS- General Recidivism Risk Assessment Scale
  • VRRS- Violent Recidivism Risk Assessment Scale
  • XAI-Explainable Artificial Intelligence
  • BMI- Brain Machine Interface
  • MVPA- Multi-Voxel Pattern Analysis
  • fMRI- Functional Magnetic Resonance Imaging
  • CNN- Convolutional Neural Network
  • NLP- Natural Language Processing
  • BCI- Brain Computer Interface
  • ASPD- Antisocial Personality Disorder
  • ARIMAX-Autoregressive Integrated Moving Average with Explanatory Variable
  • ARIMA-Autoregressive Integrated Moving Average
  • RF-Random Forest
  • RMSE- Root Mean Square Error
  • CDR- Crime Dense Region
  • LR- Linear Regression
  • LOR- Logistic Regression
  • R2- Coefficient of Determination
  • RFR- Random Forest Regressor
  • RNN- Recurrent Neural Networks
  • NSVNN- Novel Support Vector Neural Network
  • DBN-Deep Belief Network
  • DNNs-Deep Neural Networks
  • WEKA-Waikato Environment for Knowledge Analysis
  • GIGO- Garbage In, Garbage Out
  • RIRO- Rubbish In, Rubbish Out
  • COMPAS-Correctional Offender Management Profiling for Alternative Sanctions


References

  1. A.K. Zakaria, “AI applications in the criminal justice system: the next logical step or violation of human rights,” Journal of Law and Emerging Technologies, vol. 3, no. 2, pp. 233–257, Nov. 2023. [CrossRef]
  2. Aharoni, E., Vincent, G. M., Harenski, C. L., Calhoun, V. D., Sinnott-Armstrong, W., Gazzaniga, M. S., & Kiehl, K. A. (2013). Neuroprediction of future rearrest. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 6223–6228. [CrossRef]
  3. Alexandre, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., Gramfort, A., Thirion, B., & Varoquaux, G. (2014b). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8. [CrossRef]
  4. Ali, S., Abuhmed, T., El–Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805. [CrossRef]
  5. Anthony W. Flores, Kristin Bechtel, & Christopher T. Lowenkamp (2016). "False Positives, False Negatives, and False Analyses: A Rejoinder to 'Machine Bias'." Federal Probation, 80(2). https://www.uscourts.gov/federal-probation-journal/2016/09/false-positives-false-negatives-and-false-analyses-rejoinder.
  6. Arpita Biswas and Suvam Mukherjee. 2021. Ensuring fairness under prior probability shifts. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York, NY, USA, 414–424. [CrossRef]
  7. Ashraf, N.; Mehmood, D.; Obaidat, M.A.; Ahmed, G.; Akhunzada, A. Criminal Behavior Identification Using Machine Learning Techniques Social Media Forensics. Electronics 2022, 11, 3162. [Google Scholar]
  8. Belenguer, L. (2022, February 10). AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and ethics, 2(4), 771-787. [CrossRef]
  9. Blitz, M.J. (2017). Lie Detection, Mind Reading, and Brain Reading. In: Searching Minds by Scanning Brains. Palgrave Studies in Law, Neuroscience, and Human Behavior. Palgrave Macmillan, Cham. [CrossRef]
  10. Bzdok, D., and Meyer-Lindenberg, A. (2018). Machine learning for precision psychiatry: opportunities and challenges. Biol. Psychiatry 3, 223–230.
  11. Catlett, C., Malik, T., Goldstein, B., Giuffrida, J., Shao, Y., Panella, A., Eder, D. N., Van Zanten, E., Mitchum, R. M., Thaler, S., & Foster, I. (2014). Plenario: an open data discovery and exploration platform for urban science. IEEE Data(Base) Engineering Bulletin, 37(4), 27–34. http://sites.computer.org/debull/A14june/p27.pdf.
  12. Catlett, E. Cesario, D. Talia, and A. Vinci, 2019, ‘‘Spatiotemporal crime predictions in smart cities: A data-driven approach and experiments,’’ Pervas. Mobile Comput., vol. 53, pp. 62–74, Feb. 2019.
  13. Caulfield, T., & Murdoch, B. (2017). Genes, cells, and biobanks: Yes, there's still a consent problem. PLoS biology, 15(7), e2002654. [CrossRef]
  14. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities & Social Sciences Communications, 10(1). [CrossRef]
  15. Cognitive neural prosthetics. Annual Review of Psychology, 61, 169–190.
  16. Coppola F. (2018). Mapping the brain to predict antisocial behaviour: new frontiers in neurocriminology, ‘new’challenges for criminal justice. U.C.L. J. Jurisprud. Spec. 1 106–110.
  17. D.M. Raza and D. B. Victor, ‘‘Data mining and region prediction based on crime using random forest,’’ in Proc. Int. Conf. Artif. Intell. Smart Syst. (ICAIS), Mar. 2021, pp. 980–987.
  18. Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., Wang, W W., & Witteborn, S. (2019, January 1). Artificial Intelligence, Governance and Ethics: Global Perspectives. [CrossRef]
  19. Dana Pessach and Erez Shmueli. 2022. A review on fairness in machine learning. ACM Computing Surveys (CSUR) 55, 3 (2022), 1–44.
  20. Daniel Varona and Juan Luis Suárez. 2022. Discrimination, Bias, Fairness, and Trustworthy AI. Applied Sciences 12, 12 (2022), 5826.
  21. Davinder Kaur, Suleyman Uslu, Kaley J Rittichier, and Arjan Durresi. 2022. Trust worthy artificial intelligence: a review. ACM Computing Surveys (CSUR) 55, 2 (2022), 1–38.
  22. Douglas, T., Pugh, J., Singh, I., Savulescu, J., and Fazel, S. (2017). Risk assessment tools in criminal justice and forensic psychiatry: the need for better data. Eur. Psychiatry 42, 134–137. [CrossRef]
  23. E. P. Utomo, ‘‘Prediction the crime motorcycles of theft using ARIMAXTFM with single input,’’ in Proc. 3rd Int. Conf. Informat. Comput. (ICIC), Oct. 2018, pp. 1–7.
  24. Eickhoff, S B., & Langner, R. (2019, November 14). Neuroimaging-based prediction of mental traits: Road to utopia or Or well?. PLoS biology, 17(11), e3000497-e3000497. [CrossRef]
  25. Eike Petersen, Melanie Ganz, Sune Holm, and Aasa Feragen. 2023. On (assessing) the fairness of risk score models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 817–829.
  26. F. Contini, “Artificial intelligence and the transformation of humans, law and technology interactions in judicial proceedings,” Law, Technology and Humans, vol. 2, no. 1, pp. 4–18, May 2020. [CrossRef]
  27. F. Lagioia, R. Rovatti, and G. Sartor, “Algorithmic fairness through group parities? The case of COMPAS-SAPMOC,” AI & SOCIETY, vol. 38, no. 2, pp. 459–478, Apr. 2022. [CrossRef]
  28. Farayola, M M., Tal, I., Bendechache, M., Saber, T., & Connolly, R. (2023, August 29). Fairness of AI in Predicting the Risk of Recidivism: Review and Phase Mapping of AI Fairness Techniques. [CrossRef]
  29. Felix Petersen, Debarghya Mukherjee, Yuekai Sun, and Mikhail Yurochkin. 2021. Postprocessing for individual fairness. Advances in Neural Information Processing Systems 34 (2021), 25944–25955.
  30. Fellous, J., Sapiro, G., Rossi, A. F., Mayberg, H. S., & Ferrante, M. (2019). Explainable artificial intelligence for neuroscience: behavioral neurostimulation. Frontiers in Neuroscience, 13. [CrossRef]
  31. Ferrer, X. (2021, August 9). Bias and Discrimination in AI: A Cross-Disciplinary Perspective - IEEE Technology and Society. IEEE Technology and Society. https://technologyandsociety.org/bias-and-discrimination-in-ai-a-cross-disciplinary-perspective/.
  32. Flavio du Pin Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. 2018. Data preprocessing for discrimination prevention: Information-theoretic optimization and analysis. IEEE Journal of Selected Topics in Signal Processing 12, 5 (2018), 1106–1119. [CrossRef]
  33. G.Van Dijck, “Predicting Recidivism Risk Meets AI Act,” European Journal on Criminal Policy and Research, vol. 28, no. 3, pp. 407–423, Jun. 2022. [CrossRef]
  34. Gaudet, Lyn M. and Kerkmans, Jason and Anderson, Nathaniel and Kiehl, Kent, Can Neuroscience Help Predict Future Antisocial Behavior? (September 29, 2016). Fordham Law Review, Vol. 85, No. 2, 2016, Available at SSRN: https://ssrn.com/abstract=2862083.
  35. Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. 2017. On fairness and calibration. Advances in neural information processing systems 30 (2017).
  36. Ghandour, L., Yasmine, R., & El-Kak, F. (2013, July 1). Giving Consent without Getting Informed: A Cross-Cultural Issue in Research Ethics. [CrossRef]
  37. Greely, H. T., & Farahany, N. A. (2019). Neuroscience and the criminal justice system. Annual Review of Criminology, 2(1), 451–471. [CrossRef]
  38. H. R. S. A. Shamsi and S. Safei, “Artificial intelligence adoption in predictive policing to predict crime mitigation performance,” International Journal of Sustainable Construction Engineering and Technology, vol. 14, no. 3, Sep. 2023. [CrossRef]
  39. Hampshire, A., & Sharp, D. J. (2015). Contrasting network and modular perspectives on inhibitory control. Trends in cognitive sciences, 19(8), 445–452. [CrossRef]
  41. Hassani, X. Huang, E. S. Silva, and M. Ghodsi, “A review of data mining applications in crime,” Statistical Analysis and Data Mining, vol. 9, no. 3, pp. 139–154, Apr. 2016. [CrossRef]
  42. Hayward, K., & Maas, M. M. (2020). Artificial intelligence and crime: A primer for criminologists. Crime, Media, Culture, 17(2), 209–233. [CrossRef]
  43. Herron, J. A., Thompson, M. C., Brown, T., Chizeck, H., Ojemann, J. G., & Ko, A. L. (2017). Cortical Brain–Computer Interface for Closed-Loop Deep Brain Stimulation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(11), 2180–2187. [CrossRef]
  44. Holzinger A., Malle B., Kieseberg P., Roth P. M., Müller H., Reihs R., et al. (2017b). Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv [Preprints] Available at: https://ui.adsabs.harvard.edu/abs/2017arXiv171206657H (accessed December 01, 2017).
  45. Ienca, M. (2017, August 1). Preserving the Right to Cognitive Liberty. https://www.scientificamerican.com/article/preserving-the-right-to-cognitive-liberty/.
  46. Ienca, M., & Andorno, R. (2017, December 25). Towards new human rights in the age of neuroscience and neurotechnology. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5447561/.
  47. Jackson, B. A., Banks, D., Woods, D., & Dawson, J. C. (2017, January 10). Future-Proofing Justice: Building a research agenda to address the effects of technological change on the protection of constitutional rights. RAND. https://www.rand.org/pubs/research_reports/RR1748.html.
  48. Janiesch, P. Zschech, and K. Heinrich, ‘‘Machine learning and deep learning,’’ Electron. Mark., vol. 31, no. 3, pp. 685–695, Apr. 2021.
  49. Jiaming Zeng, Berk Ustun, and Cynthia Rudin. 2017. Interpretable classification models for recidivism prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society) 180, 3 (2017), 689–722. [CrossRef]
  50. Jiang, J., Shang, X., Wang, X., Chen, H., Li, W., Wang, Y., & Xu, J. (2021). Nitrous oxide-related neurological disorders: Clinical, laboratory, neuroimaging, and electrophysiological findings. Brain and Behavior, 11(12). [CrossRef]
  51. Johanson, M., Vaurio, O., Tiihonen, J., & Lähteenvuo, M. (2020). A Systematic Literature Review of Neuroimaging of Psychopathic Traits. Frontiers in Psychiatry, 10. [CrossRef]
  52. Jones, O D., Bonnie, R J., Casey, B J., Davis, A., Faigman, D L., Hoffman, M B., Montague, R., Morse, S J., Raichle, M E., Richeson, J A., Scott, E S., Steinberg, L., Taylor-Thompson, K., Wagner, A D., & Yaffe, G. (2014, June 1). Law and neuroscience: recommendations submitted to the President's Bioethics Commission. Journal of law and the biosciences, 1(2), 224-236. [CrossRef]
  53. Kang HW, Kang HB (2017) Prediction of crime occurrence from multimodal data using deep learning. PLoS One 12(4):e0176244. [CrossRef]
  54. Kanwel, S., Khan, M. I., & Usman, M. (2023). From Bytes to Bars: The Transformative Influence of Artificial Intelligence on Criminal Justice. Qlantic Journal of Social Sciences, 4(4), 84-89. [CrossRef]
  55. Klein, E., & Ojemann, J G. (2016, June 1). Informed consent in implantable BCI research: identification of research risks and recommendations for development of best practices. Journal of neural engineering, 13(4), 043001-043001.
  56. L. Tortora, G. Meynen, J. W. J. Bijlsma, E. Tronci, and S. Ferracuti, “Neuroprediction and A.I. in Forensic Psychiatry and Criminal justice: A Neurolaw perspective,” Frontiers in Psychology, vol. 11, Mar. 2020. [CrossRef]
  57. L.G. A. Alves, H. V. Ribeiro, and F. A. Rodrigues, ‘‘Crime prediction through urban metrics and statistical learning,’’ Phys. A, Stat. Mech. Appl., vol. 505, pp. 435–443, Sep. 2018.
  58. Langlotz C. P., Allen B., Erickson B. J., Kalpathy-Cramer J., Bigelow K., Cook T. S., et al. (2019). A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 291 781–791. [CrossRef]
  59. Lavazza, A. (2018, February 19). Freedom of Thought and Mental Integrity: The Moral Requirements for Any Neural Prosthesis. Frontiers in neuroscience, 12. [CrossRef]
  60. Ligthart SLTJ, ‘Coercive Neuroimaging, Criminal Law, and Privacy: A European Perspective’ (2019) 6 Journal of Law and the Biosciences.
  61. Loll, A. Automated Fingerprint Identification Systems (AFIS). In Encyclopedia of Forensic Sciences, 2nd ed.; Academic Press: Cambridge, MA, USA, 2013; pp. 86–91. [Google Scholar]
  62. Lu Zhang, Yongkai Wu, and Xintao Wu. 2016. A causal framework for discovering and removing direct and indirect discrimination. arXiv preprint arXiv:1611.07509 (2016).
  63. Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. [CrossRef]
  64. M. Saraiva, I. Matijosaitiene, S. Mishra, and A. Amante, ‘‘Crime prediction and monitoring in Porto, Portugal, using machine learning, spatial and text analytics,’’ ISPRS Int. J. Geo-Inf., vol. 11, no. 7, p. 400, Jul. 2022.
  65. Mark MacCarthy. 2017. Standards of fairness for disparate impact assessment of big data algorithms. Cumb. L. Rev. 48 (2017), 67.
  66. McClendon L, Meghanathan N (2015) Using machine learning algorithms to analyze crime data. Mach Lear Appl Int J 2(1):1–12. [CrossRef]
  67. Meijer and M. Wessels, ‘‘Predictive policing: Review of benefits and drawbacks,’’ Int. J. Public Admin., vol. 42, no. 12, pp. 1031–1039, Sep. 2019.
  68. Meynen, G. (2019). Forensic psychiatry and neurolaw: Description, developments, and debates. International Journal of Law and Psychiatry, 65, 101345. [CrossRef]
  69. Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and removing disparate impact. In proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining. 259–268.
  70. Mingyang Wan, Daochen Zha, Ninghao Liu, and Na Zou. 2023. In-processing modeling techniques for machine learning fairness: A survey. ACM Transactions on Knowledge Discovery from Data 17, 3 (2023), 1–27.
  71. Mirabella, G., & Lebedev, M. А. (2017). Interfacing to the brain's motor decisions. Journal of neurophysiology, 117(3), 1305–1319. [CrossRef]
  72. Morse, S. J. (n.d.). Neuroprediction: new technology, old problems. Penn Carey Law: Legal Scholarship Repository. https://scholarship.law.upenn.edu/faculty_scholarship/1619/.
  73. Mugdha Dwivedi, “The Tomorrow of Criminal Law: Investigating the Application of Predictive Analytics and AI in the Field of Criminal Justice” 11 International Journal of Creative Research Thoughts a499-a501 (2023).
  74. Muñoz, J M. (2023, March 17). Achieving cognitive liberty. https://www.science.org/doi/10.1126/science.adf8306.
  75. Neil Shah, Nandish Bhagat, Manan Shah, “Crime forecasting: a machine learning and computer vision approach to crime prediction and prevention”, Visual Computing for Industry, Biomedicine and Art, vol. 4:9, 2021.
  76. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Com puting Surveys (CSUR) 54, 6 (2021), 1–35.
  77. Northpointe Inc., Dieterich, W., Ph. D., Mendoza, C., M. S., & Brennan, T., Ph. D. (2016). COMPAS Risk Scales: Demonstrating accuracy equity and predictive parity performance of the COMPAS risk scales in Broward County. https://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf.
  78. Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernández, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., . . . Staab, S. (2020, February 3). Bias in data-driven artificial intelligence systems—An introductory survey. [CrossRef]
  79. Ombao H., Lindquist M., Thompson W., Aston J. (2017). Handbook of Neuroimaging Data Analysis. New York: Chapman and Hall/CRC.
  80. Poldrack, R A., Monahan, J., Imrey, P B., Reyna, V F., Raichle, M E., Faigman, D L., & Buckholtz, J W. (2018, February 1). Predicting Violent Behavior: What Can Neuroscience Add?. Trends in cognitive sciences, 22(2), 111-123. [CrossRef]
  81. Rebers, S., Aaronson, N K., Leeuwen, F E V., & Schmidt, M K. (2016, February 6). Exceptions to the rule of informed consent for research with an intervention. BMC medical ethics, 17(1). [CrossRef]
  82. Redish, A. D., Gordon, J. A., et al., (2016). Breakdowns and Failure Modes: An Engineer’s View. In Strüngmann Forum Reports (Vol. 20). MIT Press. https://archives.esforum.de/publications/sfr20/chaps/SFR20_02%20Redish%20and%20Gordon.pdf.
  83. Richard A Berk, Arun Kumar Kuchibhotla, and Eric Tchetgen Tchetgen. 2023. Fair Risk Algorithms. Annual Review of Statistics and Its Application 10 (2023), 165–187.
  84. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, vol. 1, no. 5, pp. 206–215, May 2019. [CrossRef]
  85. S. Goel, R. Shroff, J. Skeem, and C. Slobogin, ‘‘The accuracy, equity, and jurisprudence of criminal risk assessment,’’ in Research Handbook on Big Data Law. Cheltenham, U.K.: Edward Elgar Publishing, 2021, pp. 9–28.
  86. S. Hossain, A. Abtahee, I. Kashem, M. M. Hoque, and I. H. Sarker, ‘‘Crime prediction using spatiotemporal data,’’ in Computing Science, Communication and Security. Gujarat, India: Springer, 2020, pp. 277–289.
  87. S. Kim, P. Joshi, P. S. Kalsi, and P. Taheri, ‘‘Crime analysis through machine learning,’’ in Proc. IEEE 9th Annu. Inf. Technol., Electron. Mobile Commun. Conf. (IEMCON), Nov. 2018, pp. 415–420.
  88. Schultze-Kraft, M., Birman, D., Rusconi, M., Allefeld, C., Görgen, K., Dähne, S., Blankertz, B., & Haynes, J. D. (2016). The point of no return in vetoing self-initiated movements. Proceedings of the National Academy of Sciences of the United States of America, 113(4), 1080–1085. https://doi.org/10.1073/pnas.1513569112. [CrossRef]
  89. Shah, U S., Dave, I., Malde, J., Mehta, J., & Kodeboyina, S. (2021, April 2). Maintaining Privacy in Medical Imaging with Federated Learning, Deep Learning, Differential Privacy, and Encrypted Computation. [CrossRef]
  90. Shams, R A., Zowghi, D., & Bano, M. (2023, January 1). Challenges and Solutions in AI for All. [CrossRef]
  92. Sommaggio, P., Mazzocca, M., Gerola, A., & Ferro, F. (2017, November 1). Cognitive liberty. A first step towards a human neuro-rights declaration. BioLaw Journal - Rivista di BioDiritto, 11(3), 27-45. [CrossRef]
  93. Soto, J. M. D., & Borbón, D. (2022). Neurorights vs. neuroprediction and lie detection: The imperative limits to criminal law. Frontiers in Psychology, 13. [CrossRef]
  94. Srinivasan, R., & Chander, A. (2021, July 26). Biases in AI systems. [CrossRef]
  95. T. Chandrakala, S. Nirmala Sugirtha Rajini, K. Dharmarajan, K. Selvam, Development of Crime and Fraud Prediction using Data Mining Approaches. International Journal of Advanced Research in Engineering and Technology, 11(12), 2020, pp. 1450-1470. http://www.iaeme.com/IJARET/issues.asp?JType=IJARET&VType=11&IType=12.
  96. Taylor, “Justice by Algorithm: The limits of AI in criminal Sentencing,” Criminal Justice Ethics, vol. 42, no. 3, pp. 193–213, Sep. 2023. [CrossRef]
  97. U. Islam et al., “Investigating the effectiveness of novel support vector neural network for anomaly detection in digital forensics data,” Sensors, vol. 23, no. 12, p. 5626, Jun. 2023. [CrossRef]
  98. Umbach, R., Berryessa, C. M., & Raine, A. (2015). Brain imaging research on psychopathy: Implications for punishment, prediction, and treatment in youth and adults. Journal of Criminal justice, 43(4), 295–306. [CrossRef]
  99. V. Ingilevich and S. Ivanov, ‘‘Crime rate prediction in the urban environment using social factors,’’ Proc. Comput. Sci., vol. 136, pp. 472–478, Jan. 2018.
  100. V. Mandalapu, L. Elluri, P. Vyas, and N. Roy, “Crime Prediction using Machine Learning and Deep Learning: A Systematic review and Future Directions,” IEEE Access, vol. 11, pp. 60153–60170, Jan. 2023. [CrossRef]
  101. Varun VM, "Role of Artificial Intelligence in Improving the Criminal Justice System in India" 6 JOURNAL of LEGAL STUDIES and RESEARCH, the Law Brigade (Publishing) Group 63-69 (2020).
  102. Vu M. T., Adali T., Ba D., Buzsaki G., Carlson D., Heller K., et al. (2018). A shared vision for machine learning in neuroscience. J. Neurosci. 38 1601–1607. 10.1523/JNEUROSCI.0508-17.2018.
  103. W. Safat, S. Asghar, and S. A. Gillani, ‘‘Empirical analysis for crime prediction and forecasting using machine learning and deep learning techniques,’’ IEEE Access, vol. 9, pp. 70080–70094, 2021.
  104. Z. J. Fernando, R. Rosmanila, L. Ratna, A. Cholidin, and B. P. Nunna, 2023, “The role of Neuroprediction and Artificial intelligence in the Future of Criminal Procedure Support Science: A New Era in Neuroscience and Criminal justice,” Yuridika, vol. 38, no. 3, pp. 593–620, Sep. 2023. [CrossRef]
  105. Zhuang Y, Almeida M, Morabito M, Ding W (2017) Crime hot spot forecasting: a recurrent model with spatial and temporal information. Paper presented at the IEEE international conference on big knowledge. IEEE, Hefei 9-10 August 2017. [CrossRef]