Version 1: Received: 23 May 2023 / Approved: 23 May 2023 / Online: 23 May 2023 (08:22:57 CEST)
Version 2: Received: 23 May 2023 / Approved: 24 May 2023 / Online: 24 May 2023 (04:58:28 CEST)
Version 3: Received: 24 May 2023 / Approved: 29 May 2023 / Online: 29 May 2023 (07:08:13 CEST)
How to cite:
Rui, B.; Wu, H. ADL: Anomaly Detection and Localization in Crowded Scenes Using Hybrid Methods. Preprints 2023, 2023051617. https://doi.org/10.20944/preprints202305.1617.v1
APA Style
Rui, B., & Wu, H. (2023). ADL: Anomaly Detection and Localization in Crowded Scenes Using Hybrid Methods. Preprints. https://doi.org/10.20944/preprints202305.1617.v1
Chicago/Turabian Style
Rui, B. and Hequn Wu. 2023. "ADL: Anomaly Detection and Localization in Crowded Scenes Using Hybrid Methods." Preprints. https://doi.org/10.20944/preprints202305.1617.v1
Abstract
In recent years, video anomaly detection, which can intelligently analyze massive volumes of video and quickly identify abnormal events, has attracted extensive attention as video surveillance has become widely deployed. To address the complex and diverse problem of detecting abnormal human behavior in surveillance videos, we propose a supervised method for detecting and localizing abnormal behavior that combines a deep network model with traditional techniques. Specifically, we combine AGMM and YOLACT, fusing the foreground maps extracted by each technique to obtain more accurate foreground information. To further improve accuracy, we use PWC-Net to extract features from the foreground images and feed them into an anomaly classification model. The proposed method effectively detects and localizes abnormal behavior in the monitored scene. In addition, we employ the YOLOv5 and DeepSORT networks for object detection and tracking in the video, which allows the detected objects to be tracked for a better understanding of the scene. Experiments on the UCSD benchmark dataset and comparisons with state-of-the-art schemes demonstrate the advantages of our method.
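As a rough illustration of the foreground-fusion step described in the abstract, the sketch below combines an adaptive Gaussian mixture background subtractor (OpenCV's MOG2, standing in for AGMM) with an instance-segmentation mask. The `yolact_person_mask` helper and the OR-style fusion rule are assumptions made for illustration only, not the authors' exact implementation.

```python
# Minimal sketch: fuse an adaptive-GMM motion mask with an
# instance-segmentation mask to get a cleaner foreground map.
# NOTE: yolact_person_mask() is a hypothetical placeholder for a YOLACT
# inference call; the OR-fusion rule is an assumption, not the paper's rule.
import cv2
import numpy as np

# Adaptive Gaussian mixture background subtractor (MOG2).
agmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def yolact_person_mask(frame: np.ndarray) -> np.ndarray:
    """Placeholder: return a binary (H, W) mask of segmented people from YOLACT."""
    return np.zeros(frame.shape[:2], dtype=np.uint8)

def fused_foreground(frame: np.ndarray) -> np.ndarray:
    """Fuse the AGMM motion mask with the YOLACT instance mask."""
    motion = agmm.apply(frame)                        # values 0 / 127 (shadow) / 255
    motion = cv2.threshold(motion, 200, 255, cv2.THRESH_BINARY)[1]
    instances = yolact_person_mask(frame)             # binary 0 / 255 mask
    # Keep pixels flagged by either detector, then remove small speckles.
    fused = cv2.bitwise_or(motion, instances)
    fused = cv2.morphologyEx(fused, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return fused
```

In this reading of the method, the fused map would then be cropped around foreground regions and passed to PWC-Net for motion features before classification.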
Subject: Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.