Preprint
Review

This version is not peer-reviewed.

Advancements in the Technology of Car Automatization: A Review

Submitted:

26 April 2026

Posted:

27 April 2026


Abstract
The purpose of this paper is to investigate, collect, and analyze the different technologies that are being integrated into vehicle automation systems. These technologies can range from LIDAR/RADAR sensors, voice recognition, and AI models. With the continued push for the development of AI and au- tonomous vehicles in both the economy and among the populace, designers and engineers are more incentivized than ever to break new ground. As technology in the industry changes, so must the priorities of its developers. First, data and analysis on the safety of autonomous vehicles will be provided, providing context for the importance of the topic. Second, an overview of the research and development of the technology used to address the previous concerns is provided. Third, an examination of the successes and failures of the technology in regard to those concerns will be made. Lastly, this paper will explore the emerging breakthroughs and future advancements that will drive the mass adoption of autonomous vehicles, specifically those that can be scaled up to civilian automobiles.
Keywords: 

1. Introduction

Autonomous vehicles were once only postulated in the realm of science fiction, but now, with all the technological advancements in the past 20 years, they have become reality. The act of getting into a car and having it silently, efficiently, and safely carry you to your destination is becoming more and more commonplace. However, much of the public still holds some hesitation and skepticism towards the safety of these vehicles, often citing it as one of the major reasons why they don’t intend to adopt them [1].
This fear of adoption persists despite data showing that modern Autonomous Vehicles (AVs) operate more cautiously, even though they may slow down traffic [25]. With this in mind, vehicle control systems must have the highest robustness and reliability. Sensors that determine the presence of obstacles, pedestrians, and signage must provide accurate data to the controller [4]. This involves recognizing not only stationary objects but also moving ones. Distinctions must be made between structures, such as telephone poles, and readable signage, like stop signs and one-way signs. Additionally, distinctions must be made between other moving automobiles and moving pedestrians, as they move differently relative to the vehicle itself. Once the data is collected, it must be processed, either on board via a hardware controller or remotely via cloud computing. Once processed, the appropriate commands must be transmitted back to the motor system, sometimes within milliseconds. Mechanical components, such as the steering wheel, accelerator, and brakes, must be effectively connected so that the sensors and controller can provide the necessary reaction timing. This timing can be the difference between life and death, not only for those in the vehicle but also for pedestrians.
This review provides a detailed look at studies, research, and reports on the various advances made in the pursuit of automobile automation. Relevant articles were searched via Google Scholar, ScienceDirect, and PubMed using the criteria "Autonomous", "Automation", "Vehicle", "Car", "Radar", "LIDAR", and "Automotive". From there, each article's abstract was used to determine its relevance. Each article's contents must pertain to the autonomous operation of automobiles. This also includes research and experiments on technologies tested in scaled-down automobiles, whose findings can then be scaled up to full-sized civilian automobiles with further development; this latter approach allows autonomous vehicle technology to be developed in more controlled environments. The structure of this report is as follows: the first section shares advances in hardware sensors used in the automation of vehicles, specifically LIDAR- and RADAR-based systems. Cameras can also be featured in reports in this section, but they are not required for a study to be reviewed. The second half covers advances made through Artificial Intelligence (AI), Deep Learning (DL), and Machine Learning (ML), including the utilization of AI/DL/ML in the processing and infrastructure related to autonomous vehicle operations. The final section shares the conclusions drawn from all evaluations, experiments, and research done heretofore, including current challenges and areas of focus for future research.

2. Sensors and Hardware

Two of the most frequently adopted technologies for autonomous vehicles are LIDAR and RADAR sensors. Laser Imaging, Detection, and Ranging methodology, hereafter referred to as LIDAR, is a system developed in 1961, shortly after the invention of the laser [3]. The system involves targeting the surface of an object with a laser and measuring the amount of time it takes for the laser to return to its point of origin.
d = ct / 2
By taking this measurement of time (t), multiplying it by the speed of light (c), and dividing by 2, the system can determine how far away an object is.
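The time-of-flight relation above can be expressed directly in code; this is a minimal sketch of the calculation, not any particular sensor's firmware:

```python
# Time-of-flight distance: d = c * t / 2 (divide by 2 for the round trip).
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(round_trip_s: float) -> float:
    """Distance to a target from a LIDAR pulse's round-trip time, in meters."""
    return C * round_trip_s / 2.0

# A pulse that returns after 200 nanoseconds indicates a target ~30 m away.
print(round(lidar_distance(200e-9), 2))  # ~29.98
```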
In contrast, radio detection and ranging, here referred to as Radar, is a similar system that, instead of lasers, utilizes radio waves to determine the distance of an object. A transmitter produces electromagnetic waves within the radio wavelength domain that bounce off objects and return to a receiver. This provides the receiver with information regarding the object’s speed and location.
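Radar range uses the same round-trip relation, while radial speed follows from the Doppler shift of the returned wave. A hedged sketch, using the standard v = f_d · c / (2 f_c) relation (the 77 GHz carrier and shift values below are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_s: float) -> float:
    """Target range from a radar pulse's round-trip time (same c*t/2 relation as LIDAR)."""
    return C * round_trip_s / 2.0

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial speed from the Doppler shift: v = f_d * c / (2 * f_c).
    Positive values indicate an approaching target."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A 77 GHz automotive radar seeing a ~5.13 kHz Doppler shift implies a target
# closing at roughly 10 m/s.
print(round(radial_velocity(5133.0, 77e9), 1))
```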
D. Göhring et al. [2] have worked on a combined LIDAR/Radar system specifically used to follow cars on the highway. They positioned six 110-degree LIDAR sensors around their autonomous car to achieve a near-360-degree field of view. These LIDAR sensors could give very accurate positional data but had imprecise velocity measurements. An additional RADAR sensor was attached to the front of the car. The RADAR sensor only had a 12-degree field of view, but could provide very accurate and precise velocity data. The team utilized a sensor fusion system that combined the positional and velocity data from the LIDAR and RADAR sensors and refined the processing of the data to create a system that could determine whether the object in front of the car was moving or not. Figure 1 shows the set-up of the sensors on the surface of the car along with their respective angles.
From that determination, a stationary object would cause the system to provide a negative acceleration to the car to force it to stop. A dynamic object, however, would cause the system to decelerate the car to a distance determined to be safe based on the two cars’ velocities. The system would then have the autonomous car match its velocity to the car in front of it. The team experimented with this autonomous car model with another car on the Autobahn in Germany and found it to be able to comfortably brake and accelerate to match its companion car. The team has mentioned that future work would be done to include RADAR sensors in common car blind spots and to implement means of passing cars on the highway.
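The following behavior can be sketched as a simple rule: brake for stationary objects, otherwise close to a safe gap and match the lead car's speed. This is an illustrative controller in the spirit of the description above, not the authors' implementation; the gains, time-gap, and limits are assumptions:

```python
def follow_command(own_v: float, lead_v: float, gap_m: float,
                   moving: bool, time_gap_s: float = 1.8,
                   min_gap_m: float = 5.0) -> float:
    """Return a commanded acceleration in m/s^2 (negative = brake)."""
    if not moving:
        return -4.0  # stationary obstacle ahead: brake firmly
    safe_gap = min_gap_m + time_gap_s * own_v   # constant time-gap policy
    gap_error = gap_m - safe_gap                # positive if gap is larger than needed
    speed_error = lead_v - own_v                # positive if lead car is faster
    # Proportional control on gap and relative speed, clamped to comfort limits.
    return max(-4.0, min(2.0, 0.1 * gap_error + 0.5 * speed_error))

# Following a slower car with too small a gap -> hard deceleration.
print(follow_command(own_v=30.0, lead_v=25.0, gap_m=40.0, moving=True))
```

Once the gap and speeds converge, the commanded acceleration settles near zero, which corresponds to the velocity-matching behavior described above.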
I. Bilik et al. [4] have outlined the challenges of radar applications in autonomous vehicle development, the state-of-the-art technological advancements made to mitigate these challenges, and future technologies that could further increase the quality of autonomous vehicular radar systems.
The variety of environments and scenarios that an autonomous vehicle can encounter is a particular challenge. From slow and busy urban environments to sparse and unpredictable rural and highway ones, the radar system must be able to qualify and quantify its many obstacles. High radar resolution (particularly in Doppler, range, azimuth, and elevation) is needed to provide accurate and precise information on an obstacle's extent and size and to properly characterize it. This high resolution necessitates a large antenna array aperture and a large number of receiver and transmitter channels. Clutter within the received radar signals must be properly classified as to whether or not it represents a target of interest, as any object above ground level constitutes a potential hazard. These vehicular radar systems necessitate mitigation algorithms for the types of interference frequently encountered in their mission, including self-interference from the car itself, cross-interference from the other sensors aboard the vehicle, and interference from the radars of other vehicles on the road.
In terms of currently utilized technology, state-of-the-art autonomous vehicle radar systems have adopted the multiple-input, multiple-output (MIMO) operation concept for higher angular resolution with a high update rate and wide field of view (FOV). Future radar systems will need efficient antenna calibration methods that can keep pace with the current high volumes of antenna production, as well as a mission-adaptive waveform that can switch between long-range radar (LRR) and short-range radar (SRR) operation depending on the environment. Promising future technological advancements in radar include cognitive radar, extended target detection, Doppler ambiguity resolution, multipath mitigation, clustering, angular super-resolution, and waveform optimization. Many of the proposed systems outlined by the team have yet to be implemented in autonomous vehicular radar systems, but show much promise for improving the current quality and safety of radar systems.
Y. Li et al. [5] have outlined several challenges related to LIDAR adoption in autonomous cars. The current cost of developing LIDAR systems, meeting automotive and safety standards, developing a longer measuring distance for better highway application, robustness against adverse weather conditions, and the need for higher image resolution are all current issues with LIDAR technology that are holding it back from its potential. Important future directions for LIDAR technology include developing algorithms to extract more physical information and increasing its semantic estimation. As the development of new LIDARs quickens, newer, better algorithms will emerge as well.
J. Gu [13] et al. have worked on developing a dataset collection framework for use with autonomous vehicle systems that use three types of sensors: cameras, radar, and LIDAR. The framework aims not only to collect data from these sensors, but also to fuse the sensory data and synchronize said sensors while offering scalability and end-to-end implementation. The framework fuses the reliable and robust data from the radar sensors with the moving objects sensed by the LIDAR, which then gets projected onto the camera image, taking advantage of the strengths of each sensor to achieve a more precise prediction. The team prototyped their framework using a civilian automobile mounted with a camera, two radar sensors, and a LIDAR sensor with processing units to initiate the sensors and provide post-processing of the data. Figure 2 shows the vehicle with the framework affixed.
The software infrastructure of the framework was adapted from that of the iseAuto, Estonia's first autonomous shuttle, which is based on Autoware and ROS. A cloud server was utilized for hosting the database module and storing the post-processed data. The prototype was able to successfully perform its tasks of collecting, combining, processing, and storing the data received through its many sensors. The team notes that the framework has potential for application in a variety of robotic or autonomous systems with large-scale deployment opportunities due to its modular nature. Further development could improve the framework by integrating more advanced deep fusion techniques and high-level sensor fusion.
J. Shi et al. [14] have worked on developing a method for multitarget tracking based on the fusion of millimeter-wave radar and LIDAR sensors for autonomous vehicles. By developing a distributed multi-sensor multitarget tracking (DMMT) system, the team aims to improve upon the accuracy and reliability of target tracking, sensor registration, track association, and data fusion that current millimeter-wave radar and LIDAR fusion systems struggle with during high-speed conditions (higher than 80 km/h). The framework utilizes four millimeter-wave radar sensors configured on the front, back, left, and right sides of the car, with a 360-degree, 32-track LIDAR sensor mounted on top of the vehicle.
These sensors form target tracks corresponding to each individual sensor. The sensor data then undergoes temporal and spatial registration through the Kalman filter method and the residual bias estimation registration (RBER) method, respectively. To achieve precise track association, a sequential m-best track association algorithm is used. Due to the different configurations of the sensors, the IF heterogeneous sensor fusion algorithm is used to complete the data fusion. Figure 3 shows a block diagram of this architecture. For verification, the system was used within a simulation of a high-speed driving scenario with one autonomous host vehicle and four target vehicles. The performance of the framework was verified through the Generalized Optimal SubPattern Assignment (GOSPA) metric. In comparison with a single-radar tracker, the position, velocity, size, and direction estimation errors of the track fusion tracker were reduced by 85.5%, 64.6%, 75.3%, and 9.5%, respectively, and the GOSPA indicators were reduced by 19.8%. Regarding future work, the framework could be tested in scenarios involving adverse weather conditions and/or environmental changes such as tunnels. Different sensor configurations will also be considered to further verify the effectiveness of the model.
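To illustrate the kind of temporal registration step the Kalman filter performs, here is a minimal one-dimensional constant-velocity filter. The DMMT system's actual state, motion, and noise models are not given in the source; the matrices and noise values below are illustrative:

```python
import numpy as np

def kalman_step(x, P, z, dt, q=0.1, r=0.5):
    """One predict/update cycle. x = [position, velocity], z = measured position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                   # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # process noise covariance
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                          # measurement noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:                   # target moving ~1 m per step
    x, P = kalman_step(x, P, np.array([z]), dt=1.0)
print(float(x[1]))  # estimated velocity approaches ~1 m/s
```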
R. Zheng et al. [20] have proposed a dual-stream enhancement network, called DFEOcc, focused on improving foreground-category and periphery awareness and detection within autonomous vehicle systems. The network consists of a dual-stream occupancy pipeline with a dedicated branch for foreground-category object feature enhancement. The data taken from both streams is integrated via an adaptive fusion module. To improve peripheral feature sparsity and object prediction, a Mamba-based spiral-scanning mechanism is used. The Mamba block performs center-to-edge feature enhancement by extending structural priors from dense regions to peripheral areas in clockwise and counter-clockwise directions. The proposed approach was tested on the autonomous-driving-focused Occ3D-nuScenes and SurroundOcc-nuScenes datasets. Performance is evaluated using mean Intersection over Union (mIoU), which measures the quality of occupancy prediction. mIoU is calculated as follows, where C is the number of semantic classes, TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives. Several other methods were tested for comparison with the proposed method's results.
IoU_c = TP_c / (TP_c + FP_c + FN_c)
mIoU = (1/C) Σ_{c=1}^{C} IoU_c
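The mIoU metric can be computed directly from these definitions; this sketch evaluates per-class IoU over flat prediction/label arrays (a real occupancy benchmark would operate on 3D voxel grids):

```python
# Per-class IoU_c = TP_c / (TP_c + FP_c + FN_c), averaged over the classes.
def miou(preds, labels, num_classes):
    """preds/labels: flat lists of per-voxel (or per-pixel) class ids."""
    ious = []
    for c in range(num_classes):
        tp = sum(1 for p, l in zip(preds, labels) if p == c and l == c)
        fp = sum(1 for p, l in zip(preds, labels) if p == c and l != c)
        fn = sum(1 for p, l in zip(preds, labels) if p != c and l == c)
        if tp + fp + fn:  # skip classes absent from both prediction and label
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

preds  = [0, 0, 1, 1, 2, 2]
labels = [0, 1, 1, 1, 2, 0]
print(round(miou(preds, labels, 3), 3))  # 0.5
```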
DFEOcc was able to provide state-of-the-art mIoU results in comparison with these other methods and, in addition, was able to substantially increase the detail of foreground objects necessary for object detection. It was also able to maintain accurate prediction throughout the peripheral regions. Both of these results show improvement from the model in integral aspects of autonomous driving. The team notes that future work will be focused on enhancing model robustness through improving dataset annotation quality and applying additional awareness from action recognition tasks.

3. AI for Autonomous Vehicles

With the rapid advancement and subsequent embrace of artificial intelligence technologies, the automotive industry and science advocacy groups have begun the discussion of, proposals for, and research into its use within the realm of autonomous automobile operation. Although consumers have shown some apprehension about autonomous vehicle adoption, the hypothetical inclusion of AI in AVs grants some potential customers more security [21]. In addition to the local AI/DL/ML frameworks utilized by autonomous vehicles, the infrastructure of the Internet of Vehicles (IoV) has also grown in recent years. The IoV enables the autonomous vehicles connected to it, as well as the learning frameworks on board them, to communicate and share data with pedestrians, environments, and other vehicles [18]. The development of the IoV shows tremendous promise for the effectiveness of autonomous vehicle infrastructure, and its relevance cannot be overlooked.
H. Thadeshwar et al. [6] have worked on a proposal for a self-driving automobile that relies on a fusion of hardware sensors and artificial intelligence to operate. The team went through several sources of literature related to hardware sensor and AI network technologies, reviewed their findings, and combined several of the best-performing technologies for the autonomous driving operation of a 1/10-scale remote control car.
The scaled car utilizes input data read from an ultrasonic sensor and a Raspberry Pi camera. This input data is sent to a convolutional neural network (CNN) server that makes use of the COCO dataset and runs faster via quantization. The CNN processing handles the interpretation of road signs, lane detection, and steering predictions and commands. The server also utilizes the input data to further train the neural network. The server then sends commands to an Arduino module, which in turn commands the RC car. With this integration of multiple technologies onto a scale model for research and development, a cohesive and comprehensive approach to car automation can be made instead of one focused on only a single aspect of the technology. This approach can, with further research, be scaled up to an actual car in the future.
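The sensor-to-actuator loop above can be sketched as a single decision function mapping one perception frame to drive commands. Everything here (the `Perception` fields, thresholds, and gains) is hypothetical, standing in for the CNN outputs and Arduino commands described in the paper:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    sign: str           # recognized road sign, e.g. "stop" or "none"
    lane_offset: float  # lateral offset from lane center, in [-1, 1]
    obstacle_cm: float  # ultrasonic range to the nearest obstacle

def decide(p: Perception) -> dict:
    """Map one perception frame to steering/throttle commands for the RC car."""
    if p.sign == "stop" or p.obstacle_cm < 20.0:
        return {"steer": 0.0, "throttle": 0.0}        # halt for signs/obstacles
    steer = max(-1.0, min(1.0, -0.8 * p.lane_offset))  # steer back toward center
    return {"steer": steer, "throttle": 0.4}

print(decide(Perception(sign="none", lane_offset=0.5, obstacle_cm=150.0)))
```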
S. Mishra [7] et al. have worked on developing a fully autonomous robot car model using the AlexNet AI model. Utilizing a NVIDIA Jetson Nano board and an AlexNet model to train their neural network, the team’s robot car was able to navigate a city simulation with real-time object detection. The input data from the city simulation was also useful for their deep learning and neural network training with the cost-efficient Jetson Nano board. The team plans to expand the AI functionality beyond just collision and object detection to include text-to-speech and speech-to-action commands.
G. Ghanhi et al. [8] have worked on the integration of artificial intelligence with blockchain technology to better train autonomous vehicle operation and safety. They have outlined a system whereby autonomous, AI-driven vehicles utilize a public ledger in which the data taken in from a number of AI-driven vehicles is available to the other vehicles connected to the blockchain. Each individual AI-driven car can be trained using this data, allowing for cross-training across a whole network of cars. This would help eliminate the need to train each car separately, providing modularity in the AI training process, saving money, time, and resources, and reducing the need for human effort in the training process. Future work towards this goal will be aimed at embedding the training algorithms within the proposed blockchain network and exploring the extent of the modularity that the system can achieve.
X. Du et al. [9] have worked on proposing a merged LIDAR and vision fusion system that makes use of an AI deep learning framework for vehicle detection. This system consists of three major portions: the generation of potential car location seeds taken via a LIDAR point cloud, refinement of the proposed locations by exploring the information within the network, and lastly a final location detection from the network.
C. Casetti et al. [10] have surveyed recent developments in AI and Machine Learning (ML) applications and services for autonomous vehicle navigation, with a particular focus on advancements in the 5G and 6G mobile ecosystems. The team emphasizes the necessity of combining obstacle detection and classification hardware with infrastructure that combines data about the environment from a number of different automobile sensor systems (referred to as "sensor fusion"). While traditional sensor data fusion frameworks are often rigid and lack potential for growth, the integration of machine learning can provide much more flexibility and adaptability in the object detection process. One particular ML framework, HydraFusion, outperformed older object detection approaches by 14 percent. Another featured ML-centered technique, feature engineering, has been suggested to combine observed and simulated data into a shared data pool, allowing a more complete and polished dataset for ML applications that could therefore increase the accuracy of predictions.
With the advent of 5G and 6G technology, the possibility of a Vehicle-to-Everything (V2X) model provides additional optimism. These architectures carry the possibility of increased robustness and availability of data sharing across multiple autonomous vehicles on the road. The messages shared among the vehicles have been standardized by the European Telecommunications Standards Institute (ETSI) and can be used to build maps of local environments referred to as Local Dynamic Maps (LDMs). The LDM, located server-side, can be an integral asset for an up-to-date and detailed real-time representation of the road. With the integration of AI with these mobile technologies (particularly 6G, due to its high speeds and low latency), an even more reliable and adaptable flow of information can be achieved. 6G technology and AI/ML frameworks have both shown effectiveness in several autonomous vehicle applications such as adaptive cruise control, trajectory prediction, and cooperative lane changing. While there is still work to be done to fully integrate the AI/ML and 6G solutions, the combined framework shows substantial potential for increasing the effectiveness, reliability, and safety of autonomous car control.
X. Jia et al. [11] have worked on developing an improved image object detection algorithm based on a modified version of the YOLOv5 AI vision model. The team reworked the existing YOLOv5 algorithm by integrating structural re-parameterization and using training-inference decoupling, resulting in a higher level of accuracy in the training phase of the model and higher speeds in the inference phase. The team tested their improved YOLOv5 model on the KITTI dataset, a widely used dataset in the autonomous driving field. To evaluate the performance of their modified model, the team utilized mathematical formulas for Precision (P), Recall rate (R), accuracy (mAP, mean average precision), and frames per second (FPS). Calculations are done as follows:
P = TP / (TP + FP)
R = TP / (TP + FN)
Where TP indicates a positive sample predicted as positive, FP indicates a negative sample predicted as positive, and FN indicates a positive sample predicted as negative.
AP = ∫₀¹ P(R) dR
mAP = (1/N) Σ_{i=1}^{N} AP_i × 100%
Where AP is the area enclosed by the P-R curve, N is the number of categories, and AP_i is the AP of the i-th category. mAP indicates model accuracy.
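These metrics follow directly from the counts; the sketch below computes P and R, and approximates the AP integral with a trapezoidal rule over (recall, precision) points (the assumption P(0) = 1 for the first trapezoid is a common convention, not from the paper):

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def average_precision(p_at_r):
    """Approximate AP = integral of P(R) dR via the trapezoidal rule over
    (recall, precision) points, sorted by recall, assuming P(0) = 1."""
    ap, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in sorted(p_at_r):
        ap += (r - prev_r) * (p + prev_p) / 2.0
        prev_r, prev_p = r, p
    return ap

print(round(precision(80, 20), 2), round(recall(80, 10), 2))  # 0.8 0.89
```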
Their model shows a clear improvement over the other models used within the study, showing that with continued iteration and development, existing models can be further improved to deliver better results for object detection in autonomous vehicles.
D. Silva et al. [12] have worked on developing a modified, recurrent YOLOv8 framework for object detection. The team modified the original YOLOv8 model to integrate a recurrent C2f block into the framework, allowing further refinement of the feature maps as they move on to downsampling. Figure 4 shows the structure of this model.
Three distinct variants of ReYOLOv8 were made, corresponding to the previous scales set by YOLOv8: ReYOLOv8n (nano scale), ReYOLOv8s (small scale), and ReYOLOv8m (medium scale). ReYOLOv8 was implemented with event-based cameras and a novel, lightweight memory encoding called Volume of Ternary Event Images (VTEI), which minimized latency and bandwidth while increasing sparsity and compression ratio. The ReYOLOv8 models were tested on the GEN1 and PEDRo datasets, which are commonly used for testing autonomous driving scenarios, against other state-of-the-art models used for object detection. After the testing, the ReYOLOv8 models showed a noticeable improvement in mean Average Precision over other models of similar scale, specifically a 0.7% improvement on GEN1 and 4.5% on PEDRo. The team has outlined that future work can be done to establish benchmarks for system-level impacts of this method and to include evaluation on other datasets such as 1MegaPixel.
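To give a feel for what a ternary event-image encoding looks like, here is a hypothetical sketch in the spirit of VTEI: each pixel in each temporal bin stores -1, 0, or +1 according to event polarity. The binning and "last polarity wins" rule are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def ternary_volume(events, height, width, bins, t_end):
    """events: iterable of (t, x, y, polarity) with polarity in {-1, +1}.
    Returns a (bins, height, width) int8 volume of ternary values."""
    vol = np.zeros((bins, height, width), dtype=np.int8)
    for t, x, y, pol in events:
        b = min(int(t / t_end * bins), bins - 1)  # temporal bin index
        vol[b, y, x] = pol                        # last polarity wins (assumption)
    return vol

events = [(0.1, 2, 3, +1), (0.6, 2, 3, -1), (0.9, 5, 5, +1)]
vol = ternary_volume(events, height=8, width=8, bins=2, t_end=1.0)
print(vol[0, 3, 2], vol[1, 3, 2], vol[1, 5, 5])  # 1 -1 1
```

An int8 volume that is mostly zeros compresses well, which is consistent with the sparsity and bandwidth benefits described above.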
I. Ogunrinde et al. [15] have worked on developing an improved DeepSORT-based object detection sensor fusion network for use in autonomous vehicles in foggy weather conditions. The original DeepSORT is intended for use with the YOLOv4 object detection model, but showed errors when targets were under heavy fog, switching identities and losing distinction in predictions. The team made use of their previously proposed, deep learning-based CR-YOLOnet sensor fusion network for camera and radar object detection. To further increase the network's accuracy in harsh visual scenarios, the convolutional neural network in the original DeepSORT was replaced with an appearance feature extraction model. GhostNet was also utilized in place of traditional convolutional layers in the network, reducing computational cost and complexity while increasing performance. The method was tested with and without the previously developed CIoU and GIoU loss functions using the CARLA real-time autonomous driving simulator. The testing occurred across short, medium, and long distances and low, medium, and heavy fog levels to provide a comprehensive result. Figure 5 shows some of the object detection data from the CARLA simulation.
The improved network shows improvement over the YOLOv5+DeepSORT combination. Specifically, multi-object tracking accuracy increased by 35.15%, multi-object tracking precision increased by 32.65%, speed increased by 37.65%, and identity switches decreased by 46.81%. Future research for the project will focus on improving the sensor fusion techniques, improving real-time performance, and integrating state-of-the-art deep learning models to improve its real-world applications.
F. Nesti [16] et al. have worked on the use of Simplex architecture within autonomous vehicle systems to provide a safer and more secure layer within the vehicle’s neural network. The architecture is composed of two execution domains running in strong isolation, the safe domain and the rich domain. The safe domain has high criticality and is responsible for safety-critical tasks such as sensing, actuation, communication, and safety monitoring, powered by a real-time operating system. The rich domain has low criticality and oversees all of the high performance processing and computations, powered by a rich operating system. The rich domain is also considered "untrustworthy" compared to the safe domain, meaning any safety decisions made by the rich domain are superseded by the safe domain. CLARE, a type-1 real-time hypervisor, facilitates the communication between the domains which grants the system better security. A diagram of the overall architecture of the system is presented in Figure 6.
Figure 5. Foggy weather object detection results for the DeepSort-based object detection model. Row 1 is clear weather. Row 2 is medium fog level. Row 3 is heavy fog level [15].
In addition, a safety monitor module (not shown in Figure 6) runs constantly during runtime. By default, the rich domain controls the system. However, if the safety monitor detects a potential anomaly or dangerous situation, the high-performance controller is disconnected and the safe controller takes over operation decisions. The team evaluated the architecture with two case studies, one involving a Furuta pendulum and another an AgileX Scout Mini rover. For the pendulum study, the rich domain handled swinging the pendulum up from its resting position to the top position, while the safe domain activated in response to outside disturbances acting on the upright pendulum, such as a push, responding with actions to readjust the pendulum to the upright position. This first study was used to confirm the architecture's basic function, and the architecture responded successfully according to the proposed model. The rover study consisted of the rover autonomously navigating an environment via a camera and a LIDAR sensor. The rover's safe domain captured the data through the LIDAR sensor, ran the safety monitor module, and managed motor actuation. The rover's rich domain read LIDAR distance measurements, captured camera image data, and sent commands to the safe domain based on the sensor data. The rover was able to successfully navigate the environment while promptly and accurately responding to stationary and sudden obstacles. Due to the general nature of the proposed architecture, it can be iterated upon and used across a variety of applications. Future work on the architecture will focus on integration with deeper neural networks, implementation of vehicle-in-the-loop simulation, and further benchmarking of the real-time and power properties of the system.
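The Simplex switching rule described above reduces to a small decision per control cycle: actuate the rich domain's command unless the monitor flags an anomaly. This is a minimal sketch of that logic only; the controllers, monitor rule, and state fields are placeholders, and the real system's isolation is enforced by the hypervisor, not by Python:

```python
def simplex_step(state, rich_controller, safe_controller, is_anomalous):
    """Pick which domain's command to actuate for this control cycle."""
    if is_anomalous(state):
        return ("safe", safe_controller(state))  # safe domain supersedes the rich one
    return ("rich", rich_controller(state))

rich = lambda s: {"steer": s["plan_steer"], "throttle": 0.5}  # high-performance plan
safe = lambda s: {"steer": 0.0, "throttle": 0.0}              # e.g. stop safely
monitor = lambda s: s["obstacle_m"] < 2.0                     # illustrative anomaly rule

print(simplex_step({"plan_steer": 0.2, "obstacle_m": 10.0}, rich, safe, monitor)[0])  # rich
print(simplex_step({"plan_steer": 0.2, "obstacle_m": 1.0}, rich, safe, monitor)[0])   # safe
```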
Y. Li et al. [17] have worked on developing an AI-driven hierarchical routing framework with Q-learning for use in 6G-enabled Internet of Vehicles (IoV). The framework, referred to as Hierarchical Routing with Q-learning and Structured representation (HRQSR), seeks to confront the challenges present in 6G-enabled IoV, such as multi-objective optimization and high network dynamics. The architecture of the HRQSR framework consists of five core functional modules. First, the Vehicle Node layer collects mobility-based states (position, velocity, heading, and residual energy), which an RSU/EdgeAI controller uses to perform dynamic clustering and cluster prediction. Second, an AI-enabled multitask Graph Attention Network (GAT) based prediction module creates estimations for route decisions. Third, a hierarchical Q-learning optimization module couples the GAT module's predictions to routing decisions in order to provide robust and adaptive routing. Fourth, a route-selection module determines the final route from multiple generated candidate routes evaluated on delay, energy efficiency, and reliability. The final module then handles the execution of the routing back to the vehicle network, closing the architecture's loop. The framework's effectiveness was evaluated using both real-world maps and large-scale simulation environments with various levels of traffic density and heterogeneous network conditions. The experiments were designed to emphasize HRQSR's effectiveness in future 6G vehicular communication systems. The HRQSR framework achieved success rates of 94.680%, 98.870%, and 98.920% in low, medium, and high traffic scenarios, respectively, while showing improvement over other methods in the same experiment.
With its resulting high performance, the HRQSR framework shows much potential if adopted within 6G IoV infrastructure, leading to more robust autonomous vehicle V2X communication. Future work for the team includes integrating energy and carbon efficiency objectives, extending the Q-learning mechanism to a multi-agent setting for collaborative intelligence with other points of interest, and validating the framework on a physical vehicular network.
H. Yang et al. [19] have worked on improving target detection accuracy during low-visibility environmental conditions utilizing a hybrid backbone network and multi-feature fusion. The method integrates the MobileNetV4 neural network with the YOLO11 backbone feature network. This hybrid backbone module also incorporates the Single-Head Self-Attention (SHSA) module, reducing computational complexity and enhancing the model’s sensitivity, which is necessary in low-light conditions such as fog, rain, and nighttime. The neck portion of the architecture utilizes GDC-Down, a novel downsampling structure, along with a redesigned SG-C3k2 module built from the combination of the C3k and Bottleneck modules. This allows for better object recognition within low-visibility scenarios. The detection head then operates on the feature maps created by the neck. The complete architecture is shown in Figure 7.
To test the proposed architecture, the KITTI, Real-Time Traffic Surveillance (RTTS), and BDD100K datasets were used, with a particular focus on data consisting of low-visibility scenarios such as rain, haze, fog, and nighttime. The proposed method was tested against several other YOLO models, and each method was evaluated on precision (P), recall (R), and mean average precision (mAP). Precision quantifies the proportion of correctly predicted targets among all object detections, recall quantifies the proportion of all true positive targets successfully identified by the model relative to all actual positive instances, and mean average precision is the arithmetic mean of the area under the precision-recall curve. The proposed method showed an improvement over the other object detection methods, namely a 2.68%, 1.52%, and 1.64% improvement in mAP on the BDD100K, KITTI, and RTTS datasets, respectively. Future work includes further reducing the model’s complexity and improving real-time performance, specifically under adverse weather conditions.
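The precision and recall definitions above reduce to simple count ratios over true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch, with illustrative counts rather than the paper’s results:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP/(TP+FP); recall = TP/(TP+FN), as used to score detectors."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts for one detector on one class (not the paper's data):
p, r = precision_recall(tp=80, fp=20, fn=10)
assert abs(p - 0.8) < 1e-9          # 80 / (80 + 20)
assert abs(r - 80 / 90) < 1e-9      # 80 / (80 + 10)
```

mAP then averages, over classes, the area under the precision-recall curve traced as the detector’s confidence threshold is swept.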
X. He et al. [22] have worked on developing a neurologically inspired framework for use during AI-driven autonomous vehicle operation. The team based their model on the amygdala, the region of the brain involved in processing and responding to fear, so that the system may operate as cautiously as a human driver would. The Fear-Neuro-Inspired Reinforcement Learning (FNI-RL) framework is composed of an adversarial imagination technique, which simulates worst-case situations within the model, and the custom Fear-Constrained Actor-Critic (FC-AC) algorithm. The framework was tested against several state-of-the-art AI agents and 30 human participants. Testing was done through the Simulation of Urban MObility (SUMO) package. Overall, the FNI-RL framework outperformed the AI models and matched the performance of the human drivers in several of the critical safety scenarios, showing much promise. Future work involves further improving the framework’s responsiveness and eventually testing it outside of simulations.
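The intuition behind a fear-constrained policy, biasing action choice away from options with high estimated risk, can be sketched as a risk-penalized score. This is a loose analogy to FC-AC, not the paper’s algorithm; the candidate actions, values, risks, and fear weight are all illustrative:

```python
# Sketch of a "fear"-shaped action choice: each candidate action has a task
# value and an estimated risk; a fear weight penalizes risk, mimicking the
# cautious bias FNI-RL is designed to encode. All numbers are illustrative.
def choose_action(candidates, fear_weight=2.0):
    # candidates: list of (name, task_value, estimated_risk)
    return max(candidates, key=lambda c: c[1] - fear_weight * c[2])[0]

candidates = [
    ("overtake", 1.0, 0.4),   # high task value but risky
    ("follow",   0.6, 0.05),  # modest value, nearly safe
]
assert choose_action(candidates) == "follow"             # caution dominates
assert choose_action(candidates, fear_weight=0.0) == "overtake"
```

In the actual framework the risk estimate would come from the adversarial imagination of worst-case outcomes, and the constraint is learned rather than a fixed linear penalty.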
R. Gutiérrez-Moreno et al. [23] have worked on developing a deep learning algorithm that seeks to provide an enhanced Decision Making (DM) module within the autonomous driving stack. The DM module has four layers: Layer 1 (Perception) receives data via the sensors and generates positions and velocities of surrounding objects. Layer 2 (Tactical) defines a tactical trajectory based on the 3D map made by the previous layer and lays the foundation for routing and navigation. Layer 3 (Strategy) carries out behavioral planning, handling high-level decision making. Layer 4 (Operative) combines the predicted trajectory and the decided actions, calculating the driving commands. The algorithm was tested across four key scenarios: crossroads, merges, roundabouts, and lane changes. It was also tested against the CARLA Autopilot and the Techs4AgeCar AD stack architecture. The proposed architecture outperformed the Techs4AgeCar architecture but could not outperform the CARLA Autopilot. The team emphasizes that this architecture is a proof of concept, showing that a classical implementation of deep reinforcement learning can be integrated into an Autonomous Driving architecture. Further work will focus on refining the architecture’s accuracy and responsiveness.
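The four-layer flow above is essentially a pipeline in which each layer consumes the previous layer’s output. A minimal sketch follows; the data structures, the "yield on slow object" rule, and the command values are illustrative assumptions, not the paper’s module:

```python
# Minimal sketch of the four-layer decision-making stack described above.
# Each layer is a pure function here; real layers consume sensor streams,
# maps, and learned policies. All structures are illustrative.
def perception(raw):                       # Layer 1: objects with pos/vel
    return [{"pos": o[0], "vel": o[1]} for o in raw]

def tactical(objects):                     # Layer 2: tactical trajectory
    return {"route": "lane_keep", "objects": objects}

def strategy(plan):                        # Layer 3: high-level decision
    slow = any(o["vel"] < 1.0 for o in plan["objects"])
    return {"action": "yield" if slow else "proceed", **plan}

def operative(decision):                   # Layer 4: driving commands
    return {"throttle": 0.0 if decision["action"] == "yield" else 0.5,
            "steer": 0.0}

raw_sensor = [(10.0, 0.5)]                 # one slow object 10 m ahead
cmd = operative(strategy(tactical(perception(raw_sensor))))
assert cmd == {"throttle": 0.0, "steer": 0.0}   # ego yields to the slow object
```

Structuring the stack this way is what lets a learned policy be swapped into Layer 3 while the surrounding layers remain classical.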
S. Grigorescu et al. [24] have worked on developing an AI-based operating system for use with autonomous applications, called CyberCortex. The operating system allows several operating nodes to communicate with one another and with a centralized high-performance server. Sensory and control data is streamed to the server for use in training the AI algorithms; the trained algorithms are then deployed back to the nodes for improved autonomous operation. The OS has two main components: an inference system, which runs in real time on the embedded hardware utilizing DataBlocks, and the Dojo, which runs on a high-powered computer in the cloud and handles the design, training, and deployment of the AI algorithms. The OS’s performance was measured on GPS data via the CARLA driving simulator. CyberCortex was able to outperform its competitor, the industry-standard Robot Operating System (ROS), showing a lower Root Mean Square Error (RMSE) across the experiments. Further work will focus on improving the modules’ interdependencies and sampling rate.
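The RMSE metric used to compare the two stacks is a standard quantity: the square root of the mean squared difference between predicted and reference values. A minimal sketch on illustrative one-dimensional traces (not the paper’s GPS data):

```python
import math

def rmse(predicted, actual):
    """Root Mean Square Error between two equal-length numeric sequences."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# Illustrative traces; a lower RMSE means the stack tracked the route closer.
assert rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
assert abs(rmse([1.0, 2.0, 4.0], [1.0, 2.0, 3.0]) - math.sqrt(1 / 3)) < 1e-9
```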
H. Zhang et al. [26] have worked on developing an improved noise-robust framework. The Noise-Robust Mixture of Experts (NRMoE) framework makes use of a noise-injection pipeline that injects noise during training, allowing the model to become more noise-robust. Additionally, an adaptive gating network is combined with an expert network to encode input sequences using a 2D convolutional block followed by a Squeeze-and-Excitation module for feature readability. The expert network, made of two heterogeneous Gated Recurrent Unit (GRU) models, then provides weighted outputs used for the prediction results. The noise-resistant training methods and the gating/expert networks combine to form the proposed model. The NRMoE framework was tested on several datasets, both noise-free and with Gaussian noise, with RMSE measured as the performance metric. The NRMoE performed with a much lower RMSE than several state-of-the-art frameworks, including CNN-LSTM-MA and Seq2seq-Att. Future work will focus on optimizing the model structure, improving prediction performance, and designing further noise-robust control algorithms.
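The noise-injection idea, perturbing training inputs with Gaussian noise so the model learns to tolerate sensor noise, can be sketched in a few lines. The sigma, seed, and sequence below are illustrative assumptions, not the paper’s pipeline parameters:

```python
import random

def inject_gaussian_noise(sequence, sigma=0.1, seed=0):
    """Add zero-mean Gaussian noise to a training sequence; sigma and seed
    are illustrative choices, not values from the NRMoE paper."""
    rng = random.Random(seed)                 # seeded for reproducibility
    return [x + rng.gauss(0.0, sigma) for x in sequence]

clean = [1.0, 2.0, 3.0]
noisy = inject_gaussian_noise(clean, sigma=0.1)
assert len(noisy) == len(clean)               # same shape as the clean input
assert noisy != clean                         # but perturbed
```

Training on such perturbed copies alongside clean data is what pushes the learned predictor toward noise robustness.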
A. Abiko et al. [27] have worked on developing a novel generative AI framework for determining vulnerabilities within Connected Autonomous Electric Vehicle (CAEV) software components and communication protocols. The framework, called GenSecure-CAEV, has its architecture based on the transformer-style Large Language Model GPT-3, tuned on a corpus built specifically for CAEV cyber-security scenarios and contexts. The corpus is derived from three primary sources: AUTOSAR Adaptive, CAN/V2X communication logs, and security advisories and exploit write-ups. To evaluate the framework, the team measured effectiveness in detecting memory-safety and protocol-logic vulnerabilities, decrease in time-to-detection, scalability-accuracy trade-offs for CAEV codebases, and integration into automotive security networks. Three datasets were used in the simulations: the AUTOSAR vulnerability dataset, the CAN Bus Intrusion Detection Dataset, and the CARLA Autonomous Driving Simulation Data. The framework performed above several peer frameworks, achieving F1 scores of 96.3% and 95.8% for AUTOSAR and CAN Bus IDD, respectively, and reducing time-to-detection to 3.8 h for AUTOSAR and 2.5 h for CAN Bus IDD. Future work on the framework could expand its scope to machine-learning vulnerabilities and incorporate continuous learning via real-time threat intelligence.
B. Lamichhane et al. [28] have worked on developing a novel roadside sensor network for object and threat perception and detection for autonomous vehicles in adverse environments. Utilizing infrastructure-based sensors such as cameras, LIDAR, radar, and weather sensors, the network uses context-sensitive fusion methodologies to dynamically assess and determine sensor reliability in real time. The network then communicates with autonomous vehicles to provide threat detection support in heavy weather conditions where their onboard sensors might encounter issues. The network was assessed in a CARLA autonomous driving simulation environment, where it improved camera-LIDAR object detection accuracy by 74.4% and reduced collision rates during heavy rain conditions by 28%. Future work on the network will involve adding more sensor modalities, such as infrared and ultrasonic, for better system resilience. Additionally, incorporating AI-driven sensor fusion techniques could further improve the network’s performance.
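One common way to realize context-sensitive fusion is to weight each sensor’s estimate by a reliability score that changes with conditions. The sketch below illustrates that idea only; the readings, reliability values, and weather scenarios are invented for the example and are not the paper’s fusion method:

```python
def fuse(estimates, reliabilities):
    """Reliability-weighted average of per-sensor estimates; each reading is
    weighted by a context-dependent reliability score in [0, 1]."""
    total = sum(reliabilities)
    return sum(e * r for e, r in zip(estimates, reliabilities)) / total

# Illustrative distance-to-obstacle estimates (m) from three modalities.
camera, lidar, radar = 12.0, 10.0, 10.2

# In heavy rain the camera is down-weighted and radar dominates the fusion.
clear_weather = fuse([camera, lidar, radar], [0.9, 0.9, 0.8])
heavy_rain    = fuse([camera, lidar, radar], [0.2, 0.5, 0.9])
assert heavy_rain < clear_weather     # fused estimate leans on radar in rain
```

The reliability scores themselves would come from live context (weather sensors, sensor self-diagnostics) rather than fixed constants.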
S. Sheng et al. [29] have worked on developing a low-cost pipeline that provides real-time updates of HD roadwork maps for autonomous driving vehicles. The pipeline consists of three major parts. First, a roadworks sign recognition model passes the input image of a road sign through text and image encoders fine-tuned from a pre-trained CLIP model, which then hands the new image to the prediction component. Next, the detected roadworks signs are converted to real-world coordinates using distortion coefficients, an external reference matrix, and coordinate transformation formulas. Once the real-world coordinates are determined, the pipeline updates the OpenDRIVE file with the new environment information. The pipeline was tested on a dataset of 3752 images. The model showed a 97% recognition rate and an RMSE of less than 1.2 m in positional accuracy. Future work will focus on incorporating more detailed roadworks information into the system and testing the real-time map-updating capabilities on a real road.
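The coordinate-conversion step is, at its core, a rigid transform from the camera frame into the world frame using the external reference (extrinsic) matrix. A minimal sketch follows; the rotation, translation, and camera-frame point are invented for illustration, and a real pipeline would first undistort the pixel and back-project it with the camera intrinsics and depth:

```python
# Map a sign detected in the camera frame into world coordinates:
# world = R @ point_cam + t, done with plain lists for clarity.
def camera_to_world(point_cam, R, t):
    return [sum(R[i][j] * point_cam[j] for j in range(3)) + t[i]
            for i in range(3)]

R = [[0.0, -1.0, 0.0],     # illustrative 90-degree yaw between frames
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [100.0, 200.0, 0.0]    # illustrative camera position in the world frame

world = camera_to_world([5.0, 0.0, 1.5], R, t)
assert world == [100.0, 205.0, 1.5]   # sign placed 5 m ahead of the camera
```

The resulting world coordinates are what get written back into the OpenDRIVE description of the roadworks zone.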
T. Yufei et al. [30] have worked on a multi-modal combinatorial approach to mitigating traffic congestion and increasing safety and efficiency for autonomous vehicles. The model performs speed prediction by way of a Sparrow Search Algorithm (SSA) optimized Long Short-Term Memory (LSTM) network. The SSA-LSTM model can accurately determine vehicle speed trends in the near future and can also provide input to the system’s Adaptive Cruise Control (ACC). Using the data from the SSA-LSTM, the ACC can provide finer, more accurate distance control with regard to other vehicles on the road. This framework was tested across three simulations: one focusing on regular weekday morning traffic patterns; another on high-density congestion scenarios, to better determine efficiency in tight traffic; and a third on accident or temporary closure areas, used to determine how the model responds to irregular traffic patterns and speeds. The SSA-LSTM framework was able to drive smoothly in the simulations by effectively determining the best distance to maintain from its leader vehicle. When traffic was congested and irregular, the framework reduced the average queue length by 92.86% and the maximum queue length by 78.57%. Future work involves integrating more complex sequence modeling networks for prediction and adding a reinforcement learning mechanism to the controller for better self-learning and self-tuning capabilities.
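The coupling of a speed forecast into distance control can be sketched with a textbook constant-time-headway ACC law. The headway, standstill gap, and gains below are common illustrative choices, not the paper’s tuned values, and the "predicted lead speed" input stands in for the SSA-LSTM forecast:

```python
# Sketch of an ACC distance controller fed by a predicted leader speed.
def acc_command(gap, ego_speed, predicted_lead_speed,
                headway=1.5, standstill=2.0, kp=0.4, kv=0.6):
    desired_gap = standstill + headway * ego_speed   # constant time headway
    gap_error = gap - desired_gap                    # positive: gap is ample
    speed_error = predicted_lead_speed - ego_speed   # anticipate the leader
    return kp * gap_error + kv * speed_error         # acceleration command

# Leader predicted to slow while the gap is already tight -> brake (a < 0).
a = acc_command(gap=18.0, ego_speed=12.0, predicted_lead_speed=9.0)
assert a < 0.0
# Ample gap and a leader pulling away -> accelerate (a > 0).
assert acc_command(gap=40.0, ego_speed=12.0, predicted_lead_speed=14.0) > 0.0
```

Feeding a forecast rather than the current lead speed into the speed-error term is what lets the controller react before the gap actually closes.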

4. Conclusions

The technologies involved in autonomous vehicle operation have progressed extensively, and their use is becoming more widespread internationally. While safety remains one of the most significant public barriers to autonomous vehicle adoption, technology is closing the gap further with each innovation. Table 1 shows a summary of the documents used in this review.
The hardware providing the sensory data and the evaluation of other cars, environments, pedestrians, and hazards has grown more sophisticated. Advancements in sensor fusion allow multiple types of sensors to work in parallel to accurately and reliably map the autonomous vehicle’s surroundings. These systems have been tested in a variety of driving conditions, including slow urban navigation and high-speed freeway navigation. The underlying processing frameworks have also seen steady progress with the integration of novel methods utilizing deep learning, artificial intelligence, and machine learning. These methodologies allow the hardware sensors to communicate better while minimizing errors and producing more accurate predictions during navigation. While technological improvements are vital to autonomous vehicle success, there are still hurdles to address: specific methods must be refined across a multitude of driving conditions to comply with general automotive and safety standards. While there is still ground to cover before mainstream adoption of autonomous car operation, the work done so far has shown how quickly progress can be made.

References

  1. Othman, K. Public acceptance and perception of autonomous vehicles: a comprehensive review. AI Ethics 2021, 1(3), 355–387. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  2. Göhring, D.; Wang, M.; Schnürmacher, M.; Ganjineh, T. Radar/Lidar sensor fusion for car-following on highways. The 5th International Conference on Automation, Robotics and Applications, Wellington, New Zealand, 2011; pp. 407–412. [Google Scholar] [CrossRef]
  3. New Radar System; Odessa American, 28 Feb 1961.
  4. Bilik, I.; et al. The rise of radar for autonomous vehicles: Signal processing solutions and future research directions. IEEE Signal Process. Mag. 2019, 36(5), 20–31. [Google Scholar] [CrossRef]
  5. Li, Y.; Ibanez-Guzman, J. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 2020, 37(4), 50–61. [Google Scholar] [CrossRef]
  6. Thadeshwar, H.; Shah, V.; Jain, M.; Chaudhari, R.; Badgujar, V. Artificial Intelligence based Self-Driving Car. 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP), Chennai, India, 2020; pp. 1–5. [Google Scholar] [CrossRef]
  7. Mishra, S.; Minh, C. S.; Thi Chuc, H.; Long, T. V.; Nguyen, T. T. Automated Robot (Car) using Artificial Intelligence. 2021 International Seminar on Machine Learning, Optimization, and Data Science (ISMODE), Jakarta, Indonesia, 2022; pp. 319–324. [Google Scholar] [CrossRef]
  8. Gandhi, G. M.; Salvi. Artificial Intelligence Integrated Blockchain For Training Autonomous Cars. 2019 Fifth International Conference on Science Technology Engineering and Mathematics (ICONSTEM), Chennai, India, 2019; pp. 157–161. [Google Scholar] [CrossRef]
  9. Du, X.; Ang, M. H.; Rus, D. Car detection for autonomous vehicle: LIDAR and vision fusion approach through deep learning framework. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 2017; pp. 749–754. [Google Scholar] [CrossRef]
  10. Casetti, C.; Chiasserini, C.F.; Dressler, F.; Memedi, A.; Gasco, D.; Schiller, E.M. AI/ML-based services and applications for 6G-connected and autonomous vehicles. Comput. Netw. 2024, 255, 110854. [CrossRef]
  11. Jia, X.; Tong, Y.; Qiao, H.; Li, M.; Tong, J.; Liang, B. Fast and accurate object detector for autonomous driving based on improved YOLOv5. Sci. Rep. 2023, 13(1), 9711. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  12. Silva, D.A.; Smagulova, K.; Elsheikh, A.; Fouda, M.E.; Eltawil, A.M. A recurrent YOLOv8-based framework for event-based object detection. Front Neurosci. 2025, 18, 1477979. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  13. Gu, J.; Lind, A.; Chhetri, T.R.; Bellone, M.; Sell, R. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles. Sensors 2023, 23(15), 6783. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  14. Shi, J.; Tang, Y.; Gao, J.; Piao, C.; Wang, Z. Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles. Sensors 2023, 23(15), 6920. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  15. Ogunrinde, I.; Bernadin, S. Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Sematic Labels and Fused Appearance Feature Network. Sensors 2024, 24(14), 4692. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  16. Nesti, F.; Salamini, N.; Marinoni, M.; Cicero, G.; Serra, G.; Biondi, A.; Buttazzo, G. The use of the Simplex architecture to enhance safety in deep-learning-powered autonomous systems. Eng. Appl. Artif. Intell. 2026, 174, 114583. [CrossRef]
  17. Li, Y.; Xu, X.; Xu, J. AI-enhanced hierarchical routing with Q-learning and graph neural networks for 6G-enabled internet of vehicles. Comput. Netw. 2026, 281, 112206. [CrossRef]
  18. Abdul Hamid, Umar Zakir; Zamzuri, Hairi; Limbu, Dilip. Internet of Vehicle (IoV) Applications in Expediting the Implementation of Smart Highway of Autonomous Vehicle: A Survey; 2019. [Google Scholar] [CrossRef]
  19. Yang, H.; Zhang, Z.; Chen, Z.; Ge, S.; Yu, X. HBMF-YOLO: Target detection in harsh environments based on a hybrid backbone network and multi-feature fusion. Image Vis. Comput. 2026, 169, 105958. [CrossRef]
  20. Zheng, R.; Liu, N.; Guo, Y.; Deng, C.; Zhao, Z.; Liu, Z.; Li, J. A dual-stream foreground-aware enhancement network with spiralscan-Mamba for vision-based occupancy prediction in autonomous driving. Eng. Appl. Artif. Intell. 2026, 173, 114448. [CrossRef]
  21. Liang, Y.; Qian, L.; Lu, Y.; Bektaş, T. The effects of risk preferences on consumers’ reference-dependent choices for autonomous vehicles. Risk Anal. 2025, 45(12), 4157–4176. [Google Scholar] [CrossRef] [PubMed]
  22. He, X.; Wu, J.; Huang, Z.; Hu, Z.; Wang, J.; Sangiovanni-Vincentelli, A.; Lv, C. Fear-Neuro-Inspired Reinforcement Learning for Safe Autonomous Driving. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46(1), 267–279. [Google Scholar] [CrossRef] [PubMed]
  23. Gutiérrez-Moreno, R.; Barea, R.; López-Guillén, E.; Arango, F.; Sánchez-García, F.; Bergasa, L.M. Enhancing Autonomous Driving in Urban Scenarios: A Hybrid Approach with Reinforcement Learning and Classical Control. Sensors 2024, 25(1), 117. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  24. Grigorescu, S.; Zaha, M.V. CyberCortex.AI: An AI-based operating system for autonomous robotics and complex automation. J. Field Robot. 2024, 42, 474–492. [Google Scholar] [CrossRef]
  25. Hu, X.; Zheng, Z.; Chen, D.; Sun, J. Autonomous Vehicle’s Impact on Traffic: Empirical Evidence From Waymo Open Dataset and Implications From Modelling. IEEE Trans. Intell. Transp. Syst. 2023, 24(6), 6711–6724. [Google Scholar] [CrossRef]
  26. Zhang, H.; Bao, Q.; Qin, Z.; Shen, Y. Enhancing car-following risk prediction reliability: A noise-robust mixture of experts framework. Reliab. Eng. Syst. Saf. 2026, 272(Part 3), 112647. [Google Scholar] [CrossRef]
  27. Tirulo Abiko, A.; Chauhan, S.; Vasilakos, A. GenSecure-CAEV: A generative AI framework for proactive vulnerability discovery in connected autonomous electric vehicles. Future Gener. Comput. Syst. 2026, 182, 108445. [Google Scholar] [CrossRef]
  28. Lamichhane, B.R.; Aueawatthanaphisut, A.; Srijuntongsiri, G.; Horanont, T. Enhancing autonomous vehicle resilience: Roadside sensor networks for robust perception and decision-making in challenging environments. Array 2026, 30, 100784. [CrossRef]
  29. Sheng, S.; Formosa, N.; Feng, Y.; Quddus, M. Real-time roadworks detection and high definition (HD) map updates for autonomous vehicles. Eng. Appl. Artif. Intell. 2026, 171, 114321. [CrossRef]
  30. Yufei, T.; Neamah, H.A. A predictive cellular automata framework with SSA-LSTM and ACC for safe and efficient autonomous driving. Transp. Res. Interdiscip. Perspect. 2026, 36, 101828. [CrossRef]
Figure 1. The placement of the sensors on the LIDAR/Radar sensor fusion highway vehicle [2].
Figure 2. Prototype of the Multimodal Data Collection Framework. (a) the testing vehicle with mounted sensors; (b) a close-up of the sensor locations; (c) the inside of the shell holding the sensors [13].
Figure 3. Flow chart of the multisensor multitarget tracking architecture [14].
Figure 4. Dataflow Architecture of recurrent ReYOLOv8 object detection model [12].
Figure 6. Functional block diagram of the Simplex Dual-layer system architecture [16].
Figure 7. The overall architecture of the object detection model for low-visibility scenarios and conditions [19].
Table 1. Research Reviewed.
Citation Number Citation Title Subject
2 Radar/Lidar sensor fusion for car-following on highways LIDAR/RADAR Sensor Fusion
4 The rise of radar for autonomous vehicles: Signal processing solutions and future research directions. RADAR Sensors
5 Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems LIDAR Sensors
6 Artificial Intelligence based Self-Driving Car Artificial Intelligence (AI)
7 Automated Robot (Car) using Artificial Intelligence AI
8 Artificial Intelligence Integrated Blockchain For Training Autonomous Cars AI/Blockchain
9 Car detection for autonomous vehicle: LIDAR and vision fusion approach through deep learning framework LIDAR Sensor Fusion
10 AI/ML-based services and applications for 6G-connected and autonomous vehicles AI/Machine Learning
11 Fast and accurate object detector for autonomous driving based on improved YOLOv5 AI
12 A recurrent YOLOv8-based framework for event-based object detection AI
13 End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles Sensor Fusion
14 Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles LIDAR/RADAR Sensor Fusion
15 Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Sematic Labels and Fused Appearance Feature Network AI
16 The use of the Simplex architecture to enhance safety in deep-learning-powered autonomous systems AI
17 AI-enhanced hierarchical routing with Q-learning and graph neural networks for 6G-enabled internet of vehicles AI
18 Internet of Vehicle (IoV) Applications in Expediting the Implementation of Smart Highway of Autonomous Vehicle: A Survey AI/Internet of Vehicles (IoV)
19 HBMF-YOLO: Target detection in harsh environments based on a hybrid backbone network and multi-feature fusion AI/Sensor Fusion
20 A dual-stream foreground-aware enhancement network with spiralscan-Mamba for vision-based occupancy prediction in autonomous driving AI
22 Fear-Neuro-Inspired Reinforcement Learning for Safe Autonomous Driving AI
23 Enhancing Autonomous Driving in Urban Scenarios: A Hybrid Approach with Reinforcement Learning and Classical Control AI
24 CyberCortex.AI: An AI-based operating system for autonomous robotics and complex automation AI
25 Autonomous Vehicle’s Impact on Traffic: Empirical Evidence From Waymo Open Dataset and Implications From Modelling AI
26 Enhancing car-following risk prediction reliability: A noise-robust mixture of experts framework AI
27 A generative AI framework for proactive vulnerability discovery in connected autonomous electric vehicles AI
28 Enhancing autonomous vehicle resilience: Roadside sensor networks for robust perception and decision-making in challenging environments AI/IoV
29 Real-time roadworks detection and high definition (HD) map updates for autonomous vehicles AI
30 A predictive cellular automata framework with SSA-LSTM and ACC for safe and efficient autonomous driving AI
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.