Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Improving Performance of the PRYSTINE Traffic Sign Classification by Using Perturbation-based Explainability Approach

Version 1 : Received: 5 November 2021 / Approved: 9 November 2021 / Online: 9 November 2021 (15:03:27 CET)

A peer-reviewed article of this preprint also exists:

Sudars, K.; Namatēvs, I.; Ozols, K. Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach. J. Imaging 2022, 8, 30.

Abstract

Model understanding is critical in many domains, particularly those involving high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as Convolutional Neural Networks (CNNs). This paper evaluates the explainability of the Deep Neural Network (DNN) traffic sign classifier from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The resulting explanations were then used to compress the PRYSTINE CNN classifier by removing its vague (low-impact) kernels, after which the classifier's precision was evaluated under different pruning scenarios. The proposed methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of each kernel in the network was evaluated for explainability: a post-hoc, local, meaningful perturbation-based forward explainable method was integrated into the model to assess every kernel, enabling high- and low-impact kernels in the CNN to be distinguished. Second, the vague kernels of the last convolutional layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that with this XAI-based approach to kernel compression, pruning 5% of the kernels leads to only a 1% loss in traffic sign and traffic light classification precision. The proposed methodology is valuable where execution time and processing capacity constraints prevail.
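The abstract outlines a three-step procedure: score each kernel by perturbing (suppressing) its feature map and measuring the accuracy drop, then prune the lowest-scoring kernels and re-measure precision. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' actual PRYSTINE code; the names `model.last_conv` and `val_loader` are assumptions, and zeroing weights stands in for structurally withdrawing kernels (true removal would also require adjusting the subsequent fully connected layer).

import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 classification accuracy over a labelled data loader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def score_kernels(model, conv, loader, device="cpu"):
    """Perturbation score per output kernel of `conv`: the accuracy drop
    when that kernel's feature map is zeroed. Low scores mark the
    'vague' (low-impact) kernels described in the abstract."""
    base = accuracy(model, loader, device)
    scores = []
    for k in range(conv.out_channels):
        # Forward hook that suppresses channel k of this layer's output.
        handle = conv.register_forward_hook(
            lambda m, inp, out, k=k: out.index_fill(
                1, torch.tensor([k], device=out.device), 0.0))
        scores.append(base - accuracy(model, loader, device))
        handle.remove()
    return scores

def prune_lowest(model, conv, scores, fraction, loader, device="cpu"):
    """Zero the weights of the lowest-scoring kernels (a stand-in for
    removing them) and report accuracy at this compression level."""
    n_prune = int(fraction * conv.out_channels)
    for k in sorted(range(len(scores)), key=scores.__getitem__)[:n_prune]:
        conv.weight.data[k].zero_()
        if conv.bias is not None:
            conv.bias.data[k] = 0.0
    return accuracy(model, loader, device)

# Example (hypothetical): prune 5% of kernels in the last conv layer,
# mirroring the compression level reported in the abstract.
# scores = score_kernels(model, model.last_conv, val_loader)
# acc_after = prune_lowest(model, model.last_conv, scores, 0.05, val_loader)

Scoring each kernel requires one validation pass per kernel, which is affordable as a one-off offline step; the payoff is a smaller network for deployment settings where execution time and processing capacity dominate.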

Keywords

Explainable AI; Convolutional Neural Network; Network Compression

Subject

Computer Science and Mathematics, Computer Science
