Preprint Article, Version 1 (preserved in Portico). This version is not peer-reviewed.

SDC-Net++: End-to-End Crash Detection and Action Control for Self-Driving Car Deep-IoT-Based System

Version 1: Received: 7 May 2024 / Approved: 7 May 2024 / Online: 8 May 2024 (15:51:48 CEST)

How to cite: Tolba, M. A.; Kamal, H. A. SDC-Net++: End-to-End Crash Detection and Action Control for Self-Driving Car Deep-IoT-Based System. Preprints 2024, 2024050468. https://doi.org/10.20944/preprints202405.0468.v1

Abstract

Few prior works study self-driving cars that combine deep learning with IoT collaboration. SDC-Net, an end-to-end multitask camera-cocoon IoT-based system for self-driving cars, is one of the works that tackles this direction. However, by design SDC-Net cannot identify accident locations; it only classifies whether a scene contains a crash or not. In this work, we introduce an enhanced design of the SDC-Net system by 1) replacing the classification network with a detection one, 2) adapting the labels of our benchmark dataset, built on the CARLA simulator, to include vehicle bounding boxes while keeping the same training, validation, and testing samples, and 3) modifying the information shared via IoT to include accident locations. We keep the same path planning and automatic emergency braking network, digital automation platform, and input representations to formulate the comparative study. The proposed SDC-Net++ system 1) outputs the relevant control actions, especially in case of accidents: accelerate, decelerate, maneuver, and brake, and 2) shares the most critical information with connected vehicles via IoT, especially accident locations. A comparative study is also conducted between SDC-Net and SDC-Net++ with the same input representations: front camera only, panorama, and bird's-eye view (BEV), and with single-task (crash avoidance only) and multitask networks. The multitask network with the BEV input representation outperforms the nearest representation in precision, recall, F1-score, and accuracy by more than 15.134%, 12.046%, 13.593%, and 5%, respectively. The SDC-Net++ multitask network with BEV outperforms the SDC-Net multitask network with BEV in precision, recall, F1-score, accuracy, and average MSE by more than 2.201%, 2.8%, 2.505%, 2%, and 18.677%, respectively.
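To make the abstract's architecture concrete, the following is a minimal sketch (not the authors' implementation) of the two ideas it describes: a shared backbone over a BEV input feeding both a detection head (vehicle/crash bounding boxes) and a control head (accelerate, decelerate, maneuver, brake), plus an illustrative IoT payload carrying an accident location. All layer sizes, the number of box proposals, the message fields, and the GPS coordinates are assumptions made for illustration only.

```python
# Illustrative sketch of an SDC-Net++-style multitask head; sizes and fields are assumed.
import json
import torch
import torch.nn as nn


class SDCNetPPSketch(nn.Module):
    def __init__(self, num_boxes: int = 16):
        super().__init__()
        self.num_boxes = num_boxes
        # Shared convolutional backbone over the BEV input (3 x 256 x 256 assumed).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        feat_dim = 64 * 8 * 8
        # Detection head: per box -> (x, y, w, h, crash confidence).
        self.det_head = nn.Linear(feat_dim, num_boxes * 5)
        # Control head: accelerate, decelerate, maneuver, brake.
        self.ctrl_head = nn.Linear(feat_dim, 4)

    def forward(self, bev):
        feats = self.backbone(bev)
        boxes = self.det_head(feats).view(-1, self.num_boxes, 5)
        actions = self.ctrl_head(feats)
        return boxes, actions


def accident_message(box, ego_gps=(30.0444, 31.2357)):
    """Illustrative IoT payload sharing a detected accident location with connected vehicles."""
    x, y, w, h, conf = box.tolist()
    return json.dumps({
        "event": "crash_detected",
        "bbox_bev": [x, y, w, h],
        "confidence": conf,
        "ego_gps": list(ego_gps),  # placeholder coordinates, not from the paper
    })


if __name__ == "__main__":
    model = SDCNetPPSketch()
    bev = torch.randn(1, 3, 256, 256)   # dummy bird's-eye-view frame
    boxes, actions = model(bev)
    print(actions.shape)                 # torch.Size([1, 4])
    print(accident_message(boxes[0, 0]))
```

In this sketch both heads share one backbone, which is what makes the setup multitask; replacing the classification output of SDC-Net with the box-regressing detection head is what lets the system localize accidents rather than only flag their presence.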

Keywords

Autonomous driving; deep learning; computer vision; multitask learning; crash detection; path planning; automatic emergency braking; camera-cocoon; IoT; system

Subject

Engineering, Automotive Engineering
