Submitted: 29 November 2023
Posted: 30 November 2023
Abstract
Keywords:
1. Introduction
- Increased productivity in field operations through the use of highly automated vehicles capable of continuous operation without breaks and able to work at night;
- Reduction of labor costs by decreasing the number of drivers needed to operate agricultural machinery;
- Lowering the overhead costs of producing agricultural crops, leading to a reduction in the price of the final product and strengthening the company's market position;
- Minimization of harmful effects on machine operators during the application of agrochemicals.
- The BEV projection provides a top view of the environment, allowing the system to more accurately determine the depth and distance to objects. This is crucial for safe maneuvering and collision avoidance.
- The BEV projection offers a wide view of the surroundings, enabling the control system to perceive more than is possible with a front-facing camera alone. This improves decision-making in complex traffic situations.
- With the depth and distance to objects taken into account, the system can more effectively plan paths and maneuver in complex conditions, such as at intersections or in heavy traffic.
- Utilizing only cameras with the capability of creating a BEV projection potentially reduces the need for additional sensors like LiDAR or radar, which can decrease the cost and complexity of the system.
- The BEV projection may be more adaptable to changes in the environment, such as variations in lighting or weather conditions, compared to other perception methods.
2. Obstacle detection and avoidance method
- Module for detecting obstacles and determining their coordinates.
- Module for determining the space free for movement.
- Module for constructing a local map of the surrounding space of the unmanned vehicle.
- Module for controlling the linear and angular velocity of the unmanned vehicle.
- The boundaries of the area are determined: $x_{\min} = x_c - w/2$, $x_{\max} = x_c + w/2$, $y_{\min} = y_c - h/2$, $y_{\max} = y_c + h/2$.
- For each pixel $(x, y)$ in this area, if the pixel belongs to class $c$, its value is replaced with the obstacle marker: $S(x, y) = 100$, where $S$ is the segmentation map (a sketch of this step is given below).
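A minimal NumPy sketch of this replacement step; the array name `seg`, the function signature, and the use of 100 as the marker value (taken from the BEV description below) are assumptions for illustration:

```python
import numpy as np

OBSTACLE_MARKER = 100  # marker value for obstacle pixels, as used in the BEV step below

def mark_obstacle_pixels(seg: np.ndarray, x_c: float, y_c: float,
                         w: float, h: float, cls: int) -> np.ndarray:
    """Replace pixels of class `cls` inside the bounding box with the obstacle marker."""
    x_min, x_max = int(x_c - w / 2), int(x_c + w / 2)
    y_min, y_max = int(y_c - h / 2), int(y_c + h / 2)
    # Clip the box to the image boundaries before indexing.
    x_min, y_min = max(x_min, 0), max(y_min, 0)
    x_max, y_max = min(x_max, seg.shape[1]), min(y_max, seg.shape[0])
    roi = seg[y_min:y_max, x_min:x_max]
    roi[roi == cls] = OBSTACLE_MARKER  # only pixels of the detected class are replaced
    return seg
```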
- Let $I$ be the original image and $I_{BEV}$ be the transformed image.
- For each point $(x, y)$ in the image $I$, the homography matrix $H$ is applied to find the corresponding point in the image $I_{BEV}$: $(x', y', w')^T = H \cdot (x, y, 1)^T$.
- After applying the homography matrix, the coordinates are normalized to convert them from homogeneous into Euclidean coordinates: $x_{BEV} = x'/w'$, $y_{BEV} = y'/w'$.
- The pixel value at the new coordinates $(x_{BEV}, y_{BEV})$ is assigned to the corresponding pixel in the image $I_{BEV}$. This process is repeated for each point of the original image to form the complete image in the Birds-Eye-View projection.
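In practice this per-pixel loop is typically delegated to OpenCV, whose `cv2.warpPerspective` applies exactly this homogeneous transform and normalization. A sketch, assuming the homography is estimated from four ground-plane calibration points (the coordinates below are illustrative, not from the paper):

```python
import cv2
import numpy as np

# Four points on the ground plane in the camera image and their target
# positions in the BEV plane; these coordinates are illustrative only.
src_pts = np.float32([[420, 500], [860, 500], [1280, 720], [0, 720]])
dst_pts = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src_pts, dst_pts)  # 3x3 homography matrix

frame = cv2.imread("frame.png")                    # original image I
bev = cv2.warpPerspective(frame, H, (400, 600))    # BEV image I_BEV
```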
- Let $I_{BEV}$ be the Birds-Eye-View (BEV) projection image, where areas occupied by recognized obstacles have already been replaced with the value 100.
- Each recognized object is defined by a bounding rectangle with parameters: the rectangle's center $(x_c, y_c)$, width $w$, height $h$, and the object class $c$.
- For each bounding rectangle, the coordinates of the two lower points in the BEV projection are calculated by applying $H$ to the lower corners $(x_c - w/2, y_c + h/2)$ and $(x_c + w/2, y_c + h/2)$ and normalizing by the homogeneous coordinate, giving $(x_1, y_1)$ and $(x_2, y_2)$.
- The area for replacement is calculated as follows: $x \in [\min(x_1, x_2), \max(x_1, x_2)]$, $y \in [\min(y_1, y_2), \max(y_1, y_2)]$.
- For each point $(x, y)$ in the specified area on the image $I_{BEV}$, if it is within the boundaries of the area, the pixel value is replaced with the object class: $I_{BEV}(x, y) = c$.
- These steps are repeated for each recognized object detected in the original image $I$ (a combined sketch of the projection and replacement follows below).
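A sketch of the lower-point projection and class stamping, using `cv2.perspectiveTransform`, which performs the homogeneous multiplication and normalization for point arrays; the function name and signature are illustrative:

```python
import cv2
import numpy as np

def project_box_to_bev(bev: np.ndarray, H: np.ndarray, x_c: float, y_c: float,
                       w: float, h: float, cls: int) -> None:
    """Project the two lower corners of a bounding box into BEV and stamp the class id."""
    lower = np.float32([[[x_c - w / 2, y_c + h / 2]],
                        [[x_c + w / 2, y_c + h / 2]]])
    # perspectiveTransform applies H and divides by the homogeneous coordinate.
    pts = cv2.perspectiveTransform(lower, H).reshape(2, 2)
    (x1, y1), (x2, y2) = pts
    x_min, x_max = int(min(x1, x2)), int(max(x1, x2))
    y_min, y_max = int(min(y1, y2)), int(max(y1, y2))
    # Clip to the BEV image and fill the occupied region with the object class.
    x_min, y_min = max(x_min, 0), max(y_min, 0)
    x_max, y_max = min(x_max, bev.shape[1] - 1), min(y_max, bev.shape[0] - 1)
    if x_min <= x_max and y_min <= y_max:
        bev[y_min:y_max + 1, x_min:x_max + 1] = cls
```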
3. Development of software and results
- node_sensors_webots (Vehicle Sensor Driver) – a node responsible for collecting data from sensors in the Webots environment and publishing messages, after pre-processing, to the appropriate topics:
- node_localmap – a node that collects sensor data to implement safety functions and calls methods of the MapBuilder class to build a local map and transmit control commands to the vehicle (a skeleton of such a node is sketched after this list).
- Vehicle controller, which is responsible for sending vehicle control commands to the Webots simulation environment.
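The node-and-topic terminology suggests a ROS 2 / rclpy setup, although the text does not state the middleware explicitly. A hypothetical skeleton of node_localmap; the topic names, message types, and the MapBuilder interface (update/plan) are all assumptions, since the paper only names the class:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

class MapBuilder:
    """Stub of the paper's MapBuilder class; its real interface is not shown in the text."""
    def update(self, image_msg):
        return None  # would rebuild the local map from the latest sensor data

    def plan(self, local_map) -> Twist:
        return Twist()  # would derive linear/angular velocity from the local map

class LocalMapNode(Node):
    def __init__(self):
        super().__init__('node_localmap')
        self.builder = MapBuilder()
        # Topic names are illustrative; node_sensors_webots would publish them.
        self.create_subscription(Image, '/camera/front', self.on_image, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_image(self, msg: Image):
        local_map = self.builder.update(msg)                # rebuild the local map
        self.cmd_pub.publish(self.builder.plan(local_map))  # send a control command

def main():
    rclpy.init()
    rclpy.spin(LocalMapNode())

if __name__ == '__main__':
    main()
```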
- The map of static obstacles and the road contour (RM).
- The map of dynamic obstacles, together with models for behavioral analysis of these obstacles (BM).
- The map of objects affecting the behavior of the autonomous vehicle itself (traffic lights, road signs).
- The local map is necessary for building a high-precision global map which, in addition to the layer with waypoint coordinates for the task at hand, stores data about objects that affect the behavior of the autonomous vehicle itself (traffic lights, road signs) along the entire route, so that this data can be reused (a minimal container for these layers is sketched below).
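A minimal container for the three layers listed above, assuming grid rasters for the RM and BM layers; all field names are illustrative:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class LocalMap:
    """Illustrative layered local map; layer names follow the list above."""
    rm: np.ndarray                 # static obstacles and road contour (RM)
    bm: np.ndarray                 # dynamic obstacles for behavioral analysis (BM)
    regulatory: list = field(default_factory=list)  # traffic lights, road signs, etc.
```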
- A tree consisting of a single vertex - the starting point - is created.
- On each iteration of the algorithm, a random point in the state space is generated. This point will be a potential new node in the tree.
- It is necessary to find the vertex in the existing tree that is closest to the generated random point. This vertex will be called the "nearest vertex" or "nearest neighbor."
- A new vertex is created, connecting the nearest vertex with the generated point. This creates a new edge in the tree.
- After adding the new vertex, the cost and distance to all vertices in the tree that can be reached through the new edge are recalculated, including the nearest neighbors and their potential neighbors. If the recalculated cost of a vertex is lower than its current cost, that vertex's cost is updated and the costs of its neighbors are recalculated to reflect the new information.
- Steps 2 through 5 are repeated until the target vertex is reached (or until the maximum number of iterations is reached).
- Once the target vertex is reached, the optimal path is reconstructed by moving from the target vertex back to the starting point along the lowest-cost edges, using the recalculated costs.
- The algorithm completes execution when the optimal path is found or the maximum number of iterations is performed; a compact sketch of this procedure follows.
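The steps above describe an RRT*-style planner (random sampling, nearest-vertex extension, and cost rewiring). A compact sketch under stated assumptions: a 2D state space with illustrative bounds, a user-supplied segment collision check `is_free`, and no cost propagation to descendants after rewiring:

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rrt_star(start, goal, is_free, bounds=(0.0, 100.0),
             step=1.0, radius=2.5, max_iter=5000, goal_tol=1.0):
    """RRT*-style search; is_free(p, q) must report whether segment p-q is collision-free."""
    nodes, parent, cost = [start], {0: None}, {0: 0.0}  # step 1: tree with the start vertex
    for _ in range(max_iter):
        rnd = (random.uniform(*bounds), random.uniform(*bounds))          # step 2: sample
        near = min(range(len(nodes)), key=lambda i: dist(nodes[i], rnd))  # step 3: nearest
        d = dist(nodes[near], rnd)
        if d == 0.0:
            continue
        s = min(step, d)  # steer from the nearest vertex toward the sample
        new = (nodes[near][0] + s * (rnd[0] - nodes[near][0]) / d,
               nodes[near][1] + s * (rnd[1] - nodes[near][1]) / d)
        if not is_free(nodes[near], new):
            continue
        idx = len(nodes)                                  # step 4: add vertex and edge
        nodes.append(new)
        parent[idx], cost[idx] = near, cost[near] + dist(nodes[near], new)
        for i in range(idx):                              # step 5: rewire nearby vertices
            if dist(nodes[i], new) < radius and is_free(new, nodes[i]):
                c = cost[idx] + dist(new, nodes[i])
                if c < cost[i]:
                    parent[i], cost[i] = idx, c
        if dist(new, goal) < goal_tol:                    # step 6: target reached
            path, i = [], idx
            while i is not None:                          # step 7: backtrack the cheapest path
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None                                           # iteration budget exhausted
```

A trivially permissive collision check lets the sketch run standalone, e.g. `rrt_star((0, 0), (90, 90), lambda p, q: True)`; in the system described here, `is_free` would query the local map built from the BEV projection.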
4. Discussion
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).