Version 1
Received: 3 June 2023 / Approved: 5 June 2023 / Online: 5 June 2023 (08:09:38 CEST)
How to cite:
Ogunrinde, I.O.; Bernadin, S. Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Semantic Labels and Fused Appearance Feature Network. Preprints 2023, 2023060262. https://doi.org/10.20944/preprints202306.0262.v1
APA Style
Ogunrinde, I.O., & Bernadin, S. (2023). Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Semantic Labels and Fused Appearance Feature Network. Preprints. https://doi.org/10.20944/preprints202306.0262.v1
Chicago/Turabian Style
Ogunrinde, I.O., and Shonda Bernadin. 2023. "Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Semantic Labels and Fused Appearance Feature Network." Preprints. https://doi.org/10.20944/preprints202306.0262.v1
Abstract
The presence of fog in the background can prevent small and distant objects from being detected, let alone tracked. Under safety-critical conditions, multi-object tracking models require faster tracking speeds while maintaining high object-tracking accuracy. The original DeepSORT algorithm used YOLOv4 for the detection phase and a simple neural network as the deep appearance descriptor. Consequently, the generated feature map loses relevant details about the track being matched with a given detection in fog. Targets with a high degree of appearance similarity in the detection frame are more likely to be mismatched, resulting in identity switches or track failures in heavy fog. We propose an improved multi-object tracking model based on the DeepSORT algorithm to improve tracking accuracy and speed under foggy weather conditions. First, we employed our camera-radar fusion network (CR-YOLOnet) in the detection phase for faster and more accurate object detection. Second, we proposed an appearance feature network to replace the basic convolutional neural network: we incorporated GhostNet in place of the traditional convolutional layers to generate more features while reducing computational complexity and cost, and we adopted a segmentation module that feeds the semantic labels of the corresponding input frame into the low-level appearance feature maps to add rich semantic information. Our proposed method outperformed YOLOv5 + DeepSORT: multi-object tracking accuracy increased by 35.15%, multi-object tracking precision by 32.65%, and tracking speed by 37.56%, while identity switches decreased by 46.81%.
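The appearance-matching step that the abstract builds on — comparing a detection's appearance embedding against a track's gallery of recent embeddings via cosine distance — can be sketched as follows. This is a minimal illustration of the generic DeepSORT matching cost under stated assumptions, not the authors' implementation; the function names and the toy embeddings are hypothetical, and a real descriptor network (here, the proposed fused appearance feature network) would produce the embeddings.

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two appearance embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def appearance_cost_matrix(track_galleries, detections):
    # DeepSORT-style cost: each track keeps a gallery of past embeddings,
    # and the cost of matching a detection to a track is the smallest
    # cosine distance to any embedding in that track's gallery.
    return [
        [min(cosine_distance(g, d) for g in gallery) for d in detections]
        for gallery in track_galleries
    ]

# Toy example: two tracks with one gallery embedding each, two detections.
galleries = [[[1.0, 0.0]], [[0.0, 1.0]]]
detections = [[1.0, 0.0], [0.0, 1.0]]
cost = appearance_cost_matrix(galleries, detections)
```

The resulting cost matrix would then be combined with a motion (Mahalanobis) gate and passed to an assignment solver; a richer, fog-robust embedding lowers the cost for true matches and so reduces identity switches.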
Keywords
Multi-object tracking; DeepSORT; object detection; sensor fusion; deep learning; autonomous vehicles; radar; adverse weather; fog
Subject
Computer Science and Mathematics, Computer Vision and Graphics
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.