Preprint Article · Version 2 · Preserved in Portico · This version is not peer-reviewed

Are Deep Models Robust against Real Distortions? A Case Study on Document Image Classification

Version 1 : Received: 1 February 2022 / Approved: 3 February 2022 / Online: 3 February 2022 (15:24:28 CET)
Version 2 : Received: 13 June 2022 / Approved: 14 June 2022 / Online: 14 June 2022 (08:43:57 CEST)

A peer-reviewed article of this Preprint also exists.

S. Saifullah, S. A. Siddiqui, S. Agne, A. Dengel and S. Ahmed, "Are Deep Models Robust against Real Distortions? A Case Study on Document Image Classification," 2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 2022, pp. 1628-1635, doi: 10.1109/ICPR56361.2022.9956167.

Abstract

Deep neural networks have been extensively researched in the field of document image classification to improve classification performance and have shown excellent results. However, little research in this area addresses the question of how well these models would perform in a real-world environment, where the data they are confronted with often exhibits various types of noise or distortion. In this work, we present two separate benchmark datasets, namely RVL-CDIP-D and Tobacco3482-D, to evaluate the robustness of existing state-of-the-art document image classifiers to different types of data distortions that are commonly encountered in the real world. The proposed benchmarks are generated by inserting 21 different types of data distortions, at varying severity levels, into the well-known document datasets RVL-CDIP and Tobacco3482, respectively; these are then used to quantitatively evaluate the impact of the different distortion types on the performance of the latest document image classifiers. In doing so, we show that while the higher-accuracy models also exhibit relatively higher robustness, they still severely underperform on some specific distortions, with their classification accuracies dropping from ~90% to as low as ~40% in some cases. We also show that some of these high-accuracy models perform even worse than the baseline AlexNet model in the presence of distortions, with the relative decline in their accuracy sometimes reaching 300-450% of that observed for AlexNet. The proposed robustness benchmarks are made available to the community and may aid future research in this area.
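As a rough illustration of the evaluation protocol described in the abstract, the sketch below shows how a distorted benchmark image might be produced and how a model's accuracy decline could be compared against an AlexNet baseline. This is a minimal, hypothetical sketch only: the actual 21 distortion types and severity definitions used for RVL-CDIP-D and Tobacco3482-D are defined in the paper, and the Gaussian-noise function, severity scale, and placeholder model objects here are assumptions made for illustration.

```python
# Hypothetical sketch of a distortion-robustness evaluation.
# Gaussian noise stands in for one of the paper's 21 distortion types;
# the severity scale and all model/data objects are placeholders.
import numpy as np
from PIL import Image


def add_gaussian_noise(img: Image.Image, severity: int) -> Image.Image:
    """Apply additive Gaussian noise; higher severity means a stronger distortion."""
    sigma = [4, 8, 16, 32, 64][severity - 1]  # assumed 5-level severity scale
    arr = np.asarray(img, dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))


def accuracy(model, images, labels) -> float:
    """Fraction of document images classified correctly by `model` (a callable)."""
    preds = [model(img) for img in images]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))


def relative_decline(clean_acc: float, distorted_acc: float) -> float:
    """Drop in accuracy under distortion, as a percentage of the clean accuracy."""
    return 100.0 * (clean_acc - distorted_acc) / clean_acc


# Usage (placeholders): compare a candidate model with an AlexNet baseline
# on one distortion type at one severity level.
# distorted = [add_gaussian_noise(img, severity=3) for img in clean_images]
# decline_model = relative_decline(accuracy(model, clean_images, labels),
#                                  accuracy(model, distorted, labels))
# decline_alexnet = relative_decline(accuracy(alexnet, clean_images, labels),
#                                    accuracy(alexnet, distorted, labels))
# ratio = decline_model / decline_alexnet  # > 1 means less robust than AlexNet
```

A ratio of 3.0-4.5 in the last line would correspond to the 300-450% relative decline reported in the abstract for some high-accuracy models.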

Keywords

Document Image Classification; Corruption Robustness; Robustness to Distortions; Model Robustness

Subject

Computer Science and Mathematics, Computer Vision and Graphics

Comments (1)

Comment 1
Received: 14 June 2022
Commenter: Saifullah Saifullah
Commenter's Conflict of Interests: Author
Comment: The paper has been updated significantly based on peer review and has been accepted at ICPR 2022.

