ARTICLE | doi:10.20944/preprints202102.0569.v1
Online: 25 February 2021 (10:00:59 CET)
Potholes on roads pose a major threat to motorists and autonomous vehicles. Driving over a pothole can cause serious damage to a vehicle, which in turn may result in fatal accidents. Many pothole detection methods currently exist; however, they do not use deep learning techniques to detect a pothole in real time, determine its location and display it on a map. The success of an effective pothole detection method built on such deep learning techniques depends on acquiring a large amount of data, including images of potholes. Once adequate data had been gathered, the images were processed and annotated. The next step was to determine which deep learning algorithms could be used. Three models, Faster R-CNN, SSD and YOLOv3, were trained on the custom dataset of pothole images to determine which network produces the best results for real-time detection. YOLOv3 produced the most accurate results and performed best in real time, with an average detection time of only 0.836 s per image. The final results showed that a real-time pothole detection system, integrated with a cloud and maps service, can be created to allow drivers to avoid potholes.
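In comparisons like this one, a predicted box usually counts as a correct pothole detection when its intersection-over-union (IoU) with an annotated box exceeds a threshold, 0.5 being the conventional choice. A minimal sketch of that criterion (the function name and example boxes are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection is a true positive when IoU with a ground-truth pothole
# box exceeds the threshold (here two boxes overlapping by a quarter).
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # → 0.143
```

The same criterion underlies the accuracy comparison between Faster R-CNN, SSD and YOLOv3, independent of which framework produced the boxes.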
ARTICLE | doi:10.20944/preprints202102.0364.v1
Subject: Engineering, Automotive Engineering Keywords: generating missed hydrograph; genetic algorithm; Reverse Flood Routing; Karun River; numerical FASTER model
Online: 17 February 2021 (10:09:36 CET)
Flood routing is applicable to flood forecasting, calculating the height of flood bands, determining river boundaries, and estimating protective works for flood-exposed buildings. In many cases, owing to the lack of measuring stations, the upstream flood-generating hydrograph is unknown. The purpose of this study is to present an integrated method, comprising an optimization model and a hydrodynamic numerical model, to determine the upstream hydrograph from the hydrograph provided at a downstream measuring station of a river. The routing procedure consists of three steps: (1) generating a hypothetical upstream hydrograph using a genetic algorithm; (2) hydrodynamic modeling, using a numerical simulation model, to route the flood according to the hypothetical hydrograph generated in the first step; (3) comparing the calculated and observed downstream hydrographs using a fitness function. This procedure was named the Reverse Flood Routing Method (RFRM) and was applied to the Karun River, the largest river in Iran. Comparing the upstream hydrograph generated by the RFRM with the corresponding measured hydrograph at the Ahvaz hydrometric station, treated as an ungauged location, shows the high accuracy of the recommended model.
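The three-step loop can be sketched in miniature. The hydrodynamic model is replaced here by a toy linear-reservoir routing, and the genetic algorithm is reduced to a mutate-and-select loop (a full GA would add a population and crossover); all names and parameters are illustrative assumptions, not the paper's implementation:

```python
import random

def route(upstream, k=0.5):
    """Toy linear-reservoir routing; a stand-in for the paper's
    hydrodynamic numerical model."""
    storage, downstream = 0.0, []
    for inflow in upstream:
        storage += inflow
        outflow = k * storage      # release proportional to storage
        storage -= outflow
        downstream.append(outflow)
    return downstream

def fitness(candidate, observed):
    """Step (3): RMSE between the routed candidate and the observed
    downstream hydrograph; lower is fitter."""
    routed = route(candidate)
    return (sum((r - o) ** 2 for r, o in zip(routed, observed))
            / len(observed)) ** 0.5

def rfrm(observed, steps=2000, seed=1):
    """Steps (1)-(3): propose upstream hydrographs, route them, keep
    whichever best reproduces the downstream record."""
    random.seed(seed)
    best = list(observed)                    # crude initial guess
    best_err = fitness(best, observed)
    for _ in range(steps):
        # (1) perturb the current best into a hypothetical hydrograph
        cand = [max(q + random.gauss(0.0, 1.0), 0.0) for q in best]
        err = fitness(cand, observed)        # (2) route, (3) compare
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

true_upstream = [0.0, 5.0, 20.0, 35.0, 25.0, 12.0, 4.0, 1.0]
observed_down = route(true_upstream)         # synthetic "gauged" record
estimated_up, err = rfrm(observed_down)
```

The selection loop only accepts improvements, so the recovered hydrograph fits the downstream record at least as well as the initial guess, mirroring how the RFRM converges toward the unknown upstream hydrograph.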
ARTICLE | doi:10.20944/preprints202110.0319.v1
Subject: Life Sciences, Other Keywords: YOLOv4; Faster RCNN; Deep-SORT; pig posture detection; object tracking; greenhouse gas; animal welfare
Online: 21 October 2021 (23:06:30 CEST)
Pig behavior is an integral part of health and welfare management, as pigs usually reflect their inner state through changes in behavior. The livestock environment plays a key role in pigs' health and wellbeing, and a poor farm environment increases toxic greenhouse gases (GHGs), which may deteriorate pigs' health and welfare. In this study, a computer-vision-based automatic monitoring and tracking model was proposed to detect pigs' short-term physical activities in a compromised environment. The ventilators of the livestock barn were closed for an hour, three times a day (07:00-08:00, 13:00-14:00, and 20:00-21:00), to create a compromised environment that significantly increases the GHG level. The corresponding pig activities were observed before, during, and after each hour of treatment. Two widely used object detection models (YOLOv4 and Faster R-CNN) were trained and their performances compared in terms of pig localization and posture detection. YOLOv4, which outperformed the Faster R-CNN model, was then coupled with a Deep-SORT tracking algorithm to detect and track pig activities. The results showed that the pigs became more inactive as GHG concentration increased, reducing their standing and walking activities. Moreover, at higher GHG concentrations the pigs also shortened their sternal-lying posture, increasing the duration of lateral lying. The high detection accuracy (mAP: 98.67%) and tracking accuracy (MOTA: 93.86% and MOTP: 82.41%) signify the models' efficacy in monitoring and tracking pigs' physical activities non-invasively.
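Deep-SORT combines a Kalman motion model with appearance embeddings and Hungarian matching; stripped to its core, the tracking step is an association problem between detections in consecutive frames. A toy sketch using greedy IoU matching only (names, thresholds and boxes are illustrative, not the paper's implementation):

```python
def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def track(frames, thresh=0.3):
    """Greedy frame-to-frame association: a detection inherits a track
    id when it overlaps that track's previous box by more than thresh,
    otherwise it starts a new track."""
    prev, next_id, history = {}, 0, []
    for dets in frames:
        assigned = {}
        for det in dets:
            best_id, best = None, thresh
            for tid, box in prev.items():
                score = iou(det, box)
                if tid not in assigned.values() and score > best:
                    best_id, best = tid, score
            if best_id is None:                    # unmatched -> new track
                best_id, next_id = next_id, next_id + 1
            assigned[det] = best_id
        prev = {tid: det for det, tid in assigned.items()}
        history.append(list(assigned.values()))
    return history

# One pig drifting right across three frames keeps a single track id,
# which is what lets per-pig posture durations be accumulated over time.
frames = [[(0, 0, 10, 10)], [(1, 0, 11, 10)], [(2, 0, 12, 10)]]
ids = track(frames)  # → [[0], [0], [0]]
```

Stable identities are what turn per-frame posture classifications into the per-pig standing, walking and lying durations reported in the study.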
ARTICLE | doi:10.20944/preprints201907.0181.v2
Subject: Physical Sciences, Particle & Field Physics Keywords: particle tracks; monopoles; tachyons; superluminal; faster-than-light; elliptical orbits; photographic emulsion; Kepler orbits; LENR; strange radiation
Online: 3 November 2019 (15:18:08 CET)
In the literature of Low-Energy Nuclear Reactions (LENR), particle tracks in photographic emulsions (and other materials) associated with certain electrical discharges have been reported. Some Russian and French researchers have considered these particles to be magnetic monopoles. These tracks correspond directly to tracks created with a simple uniform exposure to photons, without an electrical-discharge source. This simpler method of producing tracks supports a comprehensive exploration of particle track properties. Out of 750 exposures with this method, elliptical particle tracks were detected, 22 of which were compared to Bohr-Sommerfeld electron orbits. Ellipses fitted to the tracks were found to have quantized semi-major axis sizes with ratios of ≈ n²/α² to the corresponding Bohr-Sommerfeld hydrogen ellipses. This prompts inquiry into magnetic monopoles, given the n²/α² force difference between magnetic and electric charge under the Schwinger quantization condition. A model using analogy with the electron indicates that the elliptical tracks could be created by a bound magnetically charged particle with mass m_m = 1.45 × 10⁻³ eV/c², yet with superluminal velocities. Using a modified extended-relativity model, m_m becomes the relativistic mass of a superluminal electron with m_0 = 5.11 × 10⁻³ eV/c², the fine-structure constant becomes a mass ratio, and charge quantization is the result of two states of the electron.
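The n²/α² scaling can be made concrete. Reading the abstract's Schwinger quantization condition as a magnetic charge g = n·e/α (this reading is an assumption inferred from the text), the Coulomb-type force between two such charges exceeds the electric-charge case by (g/e)² = n²/α². A quick numerical check of the magnitude:

```python
ALPHA = 7.2973525693e-3  # fine-structure constant (CODATA 2018)

def force_ratio(n):
    """Magnetic-to-electric Coulomb force ratio for a charge g = n*e/ALPHA:
    (g/e)**2 = n**2 / ALPHA**2."""
    return (n / ALPHA) ** 2

# Even n = 1 gives a force nearly 19,000 times the electric case,
# which is what makes the quantized semi-major axis ratios suggestive.
print(round(force_ratio(1)))  # → 18779
```

The same 1/α² factor is the ratio reported between the fitted track ellipses and the corresponding Bohr-Sommerfeld hydrogen ellipses.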
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Car Detection; Convolutional Neural Networks; Deep Learning; Faster R-CNN; Unmanned Aerial Vehicles; You Only Look Once (YOLO).
Online: 12 March 2020 (08:57:09 CET)
In this paper, we address the problem of car detection from aerial images using Convolutional Neural Networks (CNN). This problem presents additional challenges as compared to car (or any object) detection from ground images, because features of vehicles in aerial images are more difficult to discern. To investigate this issue, we assess the performance of two state-of-the-art CNN algorithms, namely Faster R-CNN, which is the most popular region-based algorithm, and YOLOv3, which is known to be the fastest detection algorithm. We analyze two datasets with different characteristics to check the impact of various factors, such as the UAV's altitude, camera resolution, and object size. The objective of this work is to conduct a robust comparison between these two cutting-edge algorithms. Using a variety of metrics, we show that YOLOv3 yields better performance in most configurations, except that it exhibits a lower recall and less confident detections when object sizes and scales in the testing dataset differ greatly from those in the training dataset.
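The "variety of metrics" in such comparisons centers on precision and recall computed over confidence-ranked detections, with average precision (AP) as the usual single-number summary: the area under the precision-recall curve. A minimal sketch (the example ranking is invented for illustration, and real evaluations first match detections to ground truth via an IoU threshold):

```python
def precision_recall(ranked_hits, n_gt):
    """ranked_hits: per-detection True (matched a ground-truth car) or
    False, sorted by descending confidence; n_gt: ground-truth count."""
    tp = fp = 0
    curve = []
    for hit in ranked_hits:
        tp += hit
        fp += not hit
        curve.append((tp / (tp + fp), tp / n_gt))  # (precision, recall)
    return curve

def average_precision(ranked_hits, n_gt):
    """Area under the precision-recall curve (step integration)."""
    ap, prev_recall = 0.0, 0.0
    for precision, recall in precision_recall(ranked_hits, n_gt):
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# 4 ground-truth cars; detections ranked by confidence: TP, TP, FP, TP.
print(average_precision([True, True, False, True], 4))  # → 0.6875
```

Lower recall, as observed for YOLOv3 under train/test scale mismatch, shows up here directly: missed cars cap the recall axis and so cap the attainable AP.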
ARTICLE | doi:10.20944/preprints201910.0195.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: car detection; convolutional neural networks; deep learning; you only look once (yolo); faster r-cnn; unmanned aerial vehicles
Online: 17 October 2019 (12:29:29 CEST)
In this paper, we address the problem of car detection from aerial images using Convolutional Neural Networks (CNN). This problem presents additional challenges as compared to car (or any object) detection from ground images, because features of vehicles in aerial images are more difficult to discern. To investigate this issue, we assess the performance of two state-of-the-art CNN algorithms, namely Faster R-CNN, which is the most popular region-based algorithm, and YOLOv3, which is known to be the fastest detection algorithm. We analyze two datasets with different characteristics to check the impact of various factors, such as the UAV's altitude, camera resolution, and object size. The objective of this work is to conduct a robust comparison between these two cutting-edge algorithms. Using a variety of metrics, we show that neither algorithm outperforms the other in all cases.
ARTICLE | doi:10.20944/preprints202003.0313.v3
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: object detection; faster region-based convolutional neural network (FRCNN); single-shot multibox detector (SSD); super-resolution; remote sensing imagery; edge enhancement; satellites
Online: 29 April 2020 (13:33:56 CEST)
The detection performance for small objects in remote sensing images has not been satisfactory compared to that for large objects, especially in low-resolution and noisy images. A generative adversarial network (GAN)-based model, the enhanced super-resolution GAN (ESRGAN), showed remarkable image enhancement performance, but reconstructed images usually miss high-frequency edge information; object detection performance therefore degrades for small objects in recovered noisy and low-resolution remote sensing images. Inspired by the success of the edge-enhanced GAN (EEGAN) and ESRGAN, we applied a new edge-enhanced super-resolution GAN (EESRGAN) to improve the quality of remote sensing images, and used different detector networks in an end-to-end manner in which the detector loss was backpropagated into the EESRGAN to improve detection performance. We proposed an architecture with three components: the ESRGAN, an edge-enhancement network (EEN), and a detection network. We used residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN; for the detector network, we used a faster region-based convolutional network (FRCNN, a two-stage detector) and a single-shot multibox detector (SSD, a one-stage detector). Extensive experiments on a public (car overhead with context) dataset and a self-assembled (oil and gas storage tank) satellite dataset showed the superior performance of our method compared to standalone state-of-the-art object detectors.
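The end-to-end idea reduces to the training objective: the super-resolution network is not optimized for pixel fidelity alone, but for a weighted sum of pixel, edge, and detector losses, so the detector's gradient also shapes the reconstructed image. A toy sketch of such an objective, with a 3×3 Laplacian standing in for the EEN and invented weights (the actual system uses RRDB generators and FRCNN/SSD losses):

```python
def laplacian_edges(img):
    """3x3 Laplacian over the interior of a 2-D list of floats; a toy
    stand-in for the edge-enhancement branch."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                         - img[y][x - 1] - img[y][x + 1])
    return out

def joint_loss(sr, hr, det_loss, w_pix=1.0, w_edge=0.5, w_det=0.1):
    """End-to-end objective: pixel L1 + edge L1 + detector loss. The
    w_det term is what backpropagates detection error into the GAN."""
    n = len(sr) * len(sr[0])
    pix = sum(abs(a - b)
              for ra, rb in zip(sr, hr) for a, b in zip(ra, rb)) / n
    e_sr, e_hr = laplacian_edges(sr), laplacian_edges(hr)
    edge = sum(abs(a - b)
               for ra, rb in zip(e_sr, e_hr) for a, b in zip(ra, rb)) / n
    return w_pix * pix + w_edge * edge + w_det * det_loss
```

A perfectly reconstructed image with zero detector loss yields a zero objective; any residual detection error keeps pushing gradient into the generator, which is the mechanism the abstract credits for the improved small-object results.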