Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Self-Attention Autoencoder for Anomaly Segmentation

Version 1 : Received: 28 August 2021 / Approved: 31 August 2021 / Online: 31 August 2021 (11:47:08 CEST)

How to cite: Yang, Y. Self-Attention Autoencoder for Anomaly Segmentation. Preprints 2021, 2021080570. https://doi.org/10.20944/preprints202108.0570.v1

Abstract

Anomaly detection and segmentation aim to distinguish abnormal images from normal images and to further localize the anomalous regions. Feature-reconstruction-based methods have become one of the mainstream approaches for this task. They rest on two assumptions: (1) the features extracted by a neural network are a good representation of the image, and (2) an autoencoder trained solely on the features of normal images cannot reconstruct the features of anomalous regions well. Both assumptions are hard to satisfy in practice. In this paper, we propose a new anomaly segmentation method based on feature reconstruction. Our approach consists of two parts: (1) we use a pretrained vision transformer (ViT) to extract features of the input image, and (2) we design a self-attention autoencoder to reconstruct those features. We argue that the self-attention operation, with its global receptive field, benefits feature-reconstruction-based methods in both feature extraction and reconstruction. Experiments show that our method outperforms state-of-the-art anomaly segmentation approaches on the MVTec dataset while remaining both effective and time-efficient.
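The pipeline the abstract describes — self-attention over patch features, followed by a per-patch reconstruction error that forms the anomaly map — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the random features stand in for ViT outputs, the single untrained attention head stands in for the self-attention autoencoder, and all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(F, Wq, Wk, Wv):
    """Single-head self-attention over patch features F of shape (n_patches, d).
    Every output patch attends to all input patches: a global receptive field."""
    Q, K, V = F @ Wq, F @ Wk, F @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

rng = np.random.default_rng(0)
n_patches, d = 196, 64                            # e.g. a 14x14 ViT patch grid
F = rng.standard_normal((n_patches, d))           # stand-in for ViT patch features
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

F_rec = self_attention(F, Wq, Wk, Wv)             # "reconstructed" features

# Per-patch anomaly score: reconstruction error, reshaped to the patch grid.
# Regions the autoencoder reconstructs poorly score high, i.e. look anomalous.
anomaly_map = np.linalg.norm(F - F_rec, axis=1).reshape(14, 14)
```

In the actual method, the attention weights would be trained on normal images only, so that anomalous patches yield large reconstruction errors at test time.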

Keywords

anomaly detection; anomaly segmentation; self-attention; transformers; autoencoders

Subject

Computer Science and Mathematics, Computer Science

