Version 1: Received: 4 October 2023 / Approved: 4 October 2023 / Online: 5 October 2023 (07:34:02 CEST)
How to cite:
Park, S.; You, J. Development of Driver State Monitoring System based on Multi-Sensor Data Fusion. Preprints 2023, 2023100239. https://doi.org/10.20944/preprints202310.0239.v1
APA Style
Park, S., & You, J. (2023). Development of Driver State Monitoring System based on Multi-Sensor Data Fusion. Preprints. https://doi.org/10.20944/preprints202310.0239.v1
Chicago/Turabian Style
Park, S., and Jongwon You. 2023. "Development of Driver State Monitoring System based on Multi-Sensor Data Fusion." Preprints. https://doi.org/10.20944/preprints202310.0239.v1
Abstract
A stable driver state is essential during the manual transfer of control that inevitably occurs in Level 3 automated driving. To this end, this paper proposes a driver state monitoring system based on a convolutional neural network (CNN) that uses multi-sensor data as input: the driver's face image, biometric information, and vehicle behavior information. The system calculates the probability of drowsiness over each of four time periods using a CNN fed with eye-blink data from a ToF camera, ECG information (pulse rate) from sensors embedded in the steering wheel, and vehicle information (steering-angle data). To build a reliable, high-quality training dataset (ground truth) for the CNN, a baseline was established by matching the driver's face images with electrocardiogram (ECG) and electroencephalogram (EEG) changes in the drowsy and normal states. In a simulation test of the proposed CNN using more than 20,000 driver images acquired with a driving simulator, the true negative rate (TNR) was 94.8% and the accuracy was 94.2%. The proposed method is expected to minimize the human errors that can occur during control transfer by monitoring inappropriate driver states (drowsiness) in real time.
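The fusion pipeline summarized above (per-window features from the ToF camera, steering-wheel ECG, and steering-angle data combined into a drowsiness probability) can be sketched roughly as follows. The paper's trained CNN is not reproduced here, so a hypothetical logistic scorer with illustrative weights stands in for the classifier; all feature definitions, function names, and weight values are assumptions made for illustration, not the authors' implementation.

```python
import math
from statistics import mean, pstdev

def window_features(eye_closures_s, pulse_bpm, steering_deg):
    """Summarize one time window of multi-sensor data into a feature vector.

    eye_closures_s: eye-closure durations (s), e.g. from a ToF camera
    pulse_bpm:      pulse-rate samples, e.g. from a steering-wheel ECG sensor
    steering_deg:   steering-angle samples from the vehicle
    """
    return [
        mean(eye_closures_s),   # longer eye closures suggest drowsiness
        mean(pulse_bpm),        # pulse rate tends to drop when drowsy
        pstdev(steering_deg),   # erratic corrections raise angle variance
    ]

def drowsiness_probability(features, weights, bias):
    """Logistic stand-in for the CNN classifier (hypothetical)."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights only: positive on eye-closure time and steering
# variance, negative on pulse rate, reflecting the fusion direction.
WEIGHTS = [4.0, -0.05, 0.3]
BIAS = 0.0

alert = window_features([0.15, 0.20, 0.18], [72, 74, 73], [0.5, -0.3, 0.2])
drowsy = window_features([0.80, 1.10, 0.90], [58, 57, 59], [4.0, -5.0, 3.5])
p_alert = drowsiness_probability(alert, WEIGHTS, BIAS)
p_drowsy = drowsiness_probability(drowsy, WEIGHTS, BIAS)
```

With these illustrative inputs, the simulated drowsy window scores a markedly higher probability than the alert one, which is the qualitative behavior the paper's CNN is reported to learn from labeled ECG/EEG-matched data.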
Keywords
Driver state monitoring system; Multi-sensor data fusion; CNN model; ToF (Time of Flight) camera; Hands-on detection; EEG; ECG
Subject
Engineering, Automotive Engineering
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.