Preprint
Article

This version is not peer-reviewed.

Robotic Harvesting of Apples Using ROS2

A peer-reviewed version of this preprint was published in:
Machines 2026, 14(4), 433. https://doi.org/10.3390/machines14040433

Submitted: 13 February 2026
Posted: 27 February 2026


Abstract
Rising global food demand, increasing labor costs, and persistent labor shortages have created significant challenges for specialty crop production, particularly in labor-intensive tasks such as fruit harvesting. Robotic harvesting offers a promising long-term solution, yet its adoption in orchard environments remains limited due to unstructured conditions, variable lighting, and difficulties in fruit recognition and manipulation. This study presents an improved robotic fruit harvesting system, Orchard roBot (OrBot), developed by the Robotics Vision Lab at Northwest Nazarene University, with the goal of advancing autonomous apple harvesting toward greater practicality and economic viability. The updated OrBot platform integrates a dual-camera vision system consisting of an eye-to-hand stereo camera with a wide field of view for fruit target detection and an eye-in-hand RGB-D camera for precise manipulation. The control architecture was redesigned using Robot Operating System 2 (ROS2) and Python, enabling modular subsystem development and improved system coordination. Fruit detection was performed using a YOLOv5 deep learning model, and visual servoing was employed to guide the robotic manipulator toward the target fruit. System performance was evaluated through laboratory experiments using artificial trees and field tests conducted in a commercial apple orchard in Idaho. OrBot achieved a 100% harvesting success rate in indoor tests and a 75–80% success rate in outdoor orchard conditions, with improved performance observed following orchard pruning. Experimental results demonstrate that the dual-camera approach significantly enhances fruit search efficiency and harvesting reliability. Identified limitations include sensitivity to lighting conditions, end effector performance with varying fruit sizes, and depth estimation errors.
Overall, the results indicate that OrBot represents a meaningful step toward effective robotic fruit harvesting and highlights key areas for future improvement in vision, manipulation, and system robustness.

1. Introduction

Global food systems are facing increasing pressure from population growth, rising production costs, climate variability, and persistent labor shortages. It is projected that global food production will need to double by 2050 to meet the demands of a population expected to reach nearly 10 billion [1]. At the same time, agriculture—particularly specialty crop production—remains heavily dependent on manual labor for tasks such as harvesting. In the United States, and especially in Idaho, one of the nation’s top five producers of specialty crops, growers of fruits and vegetables are increasingly challenged by a shrinking labor force and rising wages. Although the H-2A Temporary Agricultural Worker Program [2] has helped mitigate labor shortages, it represents a short-term and increasingly costly solution, highlighting the need for sustainable, long-term alternatives.
Technological advancements in precision agriculture, automation, and robotics offer promising alternatives to address these challenges [3,4]. While robotics has been successfully integrated into highly structured environments such as automotive manufacturing, its adoption in agriculture—particularly in orchards—remains complex. Orchards present unstructured and dynamic environments characterized by variable lighting, occlusion from foliage, and natural variability in fruit location and tree geometry. These factors make robotic harvesting significantly more challenging than industrial automation, where workpieces are consistently positioned and lighting conditions are controlled [5].
Despite these challenges, robotic fruit harvesting has been an active area of research for more than three decades. Early systems demonstrated feasibility using black-and-white cameras and simple robotic manipulators, and subsequent studies have explored a wide range of sensing technologies, including color, stereo, thermal, multispectral, and depth cameras [6]. Among the key technical challenges are reliable fruit recognition, precise visual servoing of robotic manipulators, and the development of end effectors capable of removing fruit without damaging either the fruit or the tree [7]. Fruit recognition, in particular, has received significant research attention, as accurate detection is a prerequisite for successful harvesting [8].
The Robotics Vision Lab of Northwest Nazarene University has been developing a robotic harvesting platform, Orchard roBot (OrBot), to investigate these challenges in real-world orchard environments [9]. OrBot integrates robotic manipulation with machine vision systems to locate and harvest apples and has demonstrated promising results in commercial orchards. Although its harvesting speed remains slower than that of human labor, robotic systems offer the potential to operate continuously and complement human workers, including during nighttime hours when labor is unavailable. The first iteration of OrBot [10] used MATLAB for its control system and relied on a single camera for finding and locating fruit. This study builds on that work by exploring improved vision and control systems and harvesting strategies, with the goal of advancing robotic fruit harvesting toward a more practical and economically viable solution to the growing labor challenges faced by specialty crop producers.
The objectives of this study are:
1. To improve the control system of OrBot using Robot Operating System 2 (ROS2),
2. To improve the fruit vision system by adding another camera with a wider field of view,
3. To evaluate the performance of OrBot in fruit harvesting.

2. Materials and Methods

2.1. Fruit Harvesting Robot—Orchard Robot (OrBot)

For the study of robotic fruit harvesting, the Robotics Vision Lab of Northwest Nazarene University developed a robotic platform called Orchard roBot (OrBot). OrBot is composed of a manipulator, an eye-in-hand vision system, an eye-to-hand vision system, an end effector, a control system, a power supply, and a tank-tread-based navigation platform (Figure 1).

2.1.1. Manipulator

The manipulator is a third-generation Kinova robotic arm [11]. The arm has six degrees of freedom and a reach of 902 mm. Its full-range continuous payload of 2 kg is well above the weight of a typical apple, and its maximum Cartesian translation speed is 500 mm/s. For this harvesting work, the translation speed was set to half its maximum as a safety consideration. The average power consumption of the arm is 36 W.

2.1.2. Eye-in-Hand Vision System

The first version of OrBot had only one vision system [10], referred to in this paper as the eye-in-hand vision system. This vision system is integrated with the end effector. Its color sensor has a resolution of 1920 × 1080 pixels with a field of view of up to 65 degrees. Its depth sensor, which combines stereo vision and active IR sensing, has a resolution of 480 × 270 pixels with a field of view of 72 degrees. The color sensor detects the fruits, and the depth sensor estimates the distance of the fruit from the end effector.

2.1.3. Eye-to-Hand Vision System

In the current version of OrBot, a second vision system was added. This vision system is fixed and does not move with the end effector; in this paper, we refer to it as the eye-to-hand vision system. The ZED 2 stereo camera is used as the eye-to-hand vision system [12]. Unlike the eye-in-hand vision system, the ZED 2 has a wider field of view (FOV) of 110° (H) × 70° (V) and a resolution of 1920 × 1080 pixels. It has a depth range of 0.3 m to 20 m.

2.1.4. End Effector

The end effector is a standard two-finger gripper with a stroke of 85 mm and a grip force of 20 to 235 N, which can be adjusted depending on the target. Customized 3D-printed fingers were designed and fabricated to provide a larger stroke for larger fruit and to conform to the spherical shape of the fruit.

2.1.5. Control System

The control system runs on a ZED Box Orin NX, which is powered by an NVIDIA Jetson Orin NX with an Ampere GPU, an Arm Cortex CPU, and LPDDR5 memory, and provides multiple USB ports, Ethernet, HDMI, and camera ports [13]. The control system uses Ubuntu as the operating system and Python as the main programming language. A foldable keyboard and monitor were mounted on the chassis of the robot for easy access in the field.

2.1.6. Tank-Tread Navigation Platform

The tank-tread navigation platform is an HD2 Treaded ATR Tank Robot Platform. It is driven by two 24 VDC gear motors with encoders and has a payload capacity of 45 kg. The platform has been configured to mount OrBot, its components, the power supply, and other materials as needed.

2.2. Object Detection Algorithm

YOLO (You Only Look Once) is a series of real-time object detection models built on PyTorch and developed by Ultralytics [14]. This project used YOLO version 5 (YOLOv5). As a brief overview, the YOLOv5 architecture consists of a backbone, a neck, and a head. The backbone, the main body of the model, is a convolutional neural network based on the CSPDarknet53 structure. The head generates the final output, and the neck connects the backbone to the head.
YOLOv5 provides tools and Python scripts for training the model on a custom dataset, producing weights tailored to a custom purpose. These new weights can then be loaded into the YOLOv5 architecture to perform inference.
When evaluating an object detection model like YOLO, it is common to use the mean average precision. The average precision is the average precision level of the model over every recall level. Precision measures the accuracy of the positive predictions made by the model, calculated as the ratio of true positives to the sum of true positives and false positives. Recall measures the model’s ability to identify all relevant instances, calculated as the ratio of true positives to the sum of true positives and false negatives. To determine whether a predicted object is a true positive or a false positive, an intersection over union (IoU) threshold must be chosen. The IoU measures the overlap between the prediction and the true label. If the overlap of an object prediction is above the IoU threshold, it is counted as a true positive; otherwise, it is counted as a false positive [15].
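The quantities above can be computed directly. As a minimal self-contained sketch (the box format and function names are our own illustration, not YOLOv5 internals):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(tp, fp, fn):
    """Precision and recall from true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# A prediction overlapping a ground-truth label by a 5x5 corner region:
score = iou((0, 0, 10, 10), (5, 5, 15, 15))   # 25 / 175, below a typical 0.5 threshold
p, r = precision_recall(tp=8, fp=2, fn=2)      # both 0.8 for these counts
```

With an IoU threshold of 0.5, the example prediction above would be counted as a false positive.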

2.3. Control System for Fruit Harvesting

2.3.1. Robot Operating System

The Robot Operating System (ROS) is a set of software libraries and tools for building robot applications. From drivers and state-of-the-art algorithms to developer tools, ROS has the open source tools needed for any robotics development [16]. One of the core advantages of ROS2 is that it is centered around modularity. ROS2 allows the robot to be broken down into subsystems that can be developed and run asynchronously.
In a typical ROS2 environment, each subsystem is organized into a node, which can be tested and run asynchronously with the rest of the system [17]. Each of these subsystems, or nodes, can communicate through topics, which are unidirectional asynchronous communication channels. Nodes can either publish to a topic (send messages to the topic) or subscribe to a topic (listen for messages on the topic). This communication can be established through publishers or subscribers created within each node. Topics can be used in one-to-one, one-to-many, many-to-one, or many-to-many relationships.
For more complicated communication, a service or an action may be used. In these communications methods, one node must host a service or action server, and other nodes may become clients of that service or action. Services and actions may create one-to-one or one-to-many relationships as illustrated in Figure 2.

2.3.2. Designing a ROS2 Application for OrBot

The first task of the project was to redesign OrBot's software to utilize ROS2. In doing so, the primary language of the software was switched from MATLAB, which was used for the first version of OrBot's control system, to Python. While MATLAB has some ROS2 capabilities, Python offers greater flexibility with the available tools and is open source.
To integrate ROS2 into the control system of OrBot, subsystems needed to be identified and programmed as nodes. The major subsystems identified were:
1. Cameras
2. Image processing algorithms
3. The robotic arm
4. The tank treads
5. Apple picking algorithm
6. Display
7. Gamepad connection
Each node was written as a class inheriting from the Node class of the rclpy library. Using methods inherited from Node, publishers, subscribers, and service and action servers and clients were added in the constructor. Timers were also created in the constructor for continuously repeated actions. Class methods were then defined to be called when a message was received or when a timer fired. For example, a 0.1-second timer in each camera node would call a method to retrieve a frame and publish it, and a subscriber in the tank-tread node would call a method to change the motor speed whenever a message was received on the motor speed topic. To use such a class, an instance was created and the spin() function was called. A basic outline of this format can be seen in Figure 3.

2.4. Two-Camera System

2.4.1. Eye-in-Hand and Eye-to-Hand Systems

The next objective of the project was to add a second camera to the system with a wider field of view that would remain stationary with respect to the navigation platform. This meant that the camera needed to have eye-to-hand coordination with the robotic arm, which was a departure from the eye-in-hand coordination used by the first camera located behind the gripper.
The camera attached to the gripper is an Intel RealSense camera. This camera and its eye-in-hand coordination are excellent for the precise motions needed to pick apples. However, the eye-in-hand camera has a narrow field of view (at most 72 degrees) and sees only a small portion of the apples on a tree. A search algorithm could move the arm so the camera scans the entire tree, but this process would be tedious and inefficient. A new camera was needed with a field of view that could encompass the entire tree.
The ZED 2, a stereo vision camera developed by Stereolabs, was selected for this purpose. Its field of view (110° horizontal) is much wider than that of the eye-in-hand camera. The ZED 2 requires an NVIDIA graphics card.
The ZED SDK provides a 3-D point cloud in which each pixel is mapped to a Cartesian coordinate relative to the camera. Using the point cloud, the location of an apple identified in the image can be found with reference to the camera, enabling three-dimensional localization of the fruit by the eye-to-hand camera.
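The lookup itself reduces to indexing the point cloud at the detection's pixel coordinates. A hedged sketch, with a NumPy array standing in for the SDK's point-cloud object (the (H, W, 3) layout and row/column indexing are our assumptions):

```python
import numpy as np

def apple_position(point_cloud, u, v):
    """XYZ coordinate, relative to the camera, of the pixel at column u, row v."""
    return np.asarray(point_cloud[v, u], dtype=float)  # row index = image y

# Toy (H, W, 3) cloud: a flat scene 0.5 m in front of the camera.
h, w = 4, 6
cloud = np.zeros((h, w, 3))
cloud[..., 2] = 0.5
pos = apple_position(cloud, u=3, v=2)  # XYZ of the detected apple's centre pixel
```

In practice, (u, v) would be the centre of the YOLOv5 bounding box for the detected apple.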

2.4.2. Eye-To-Hand Coordination

Creating eye-to-hand coordination meant translating what the camera saw into coordinates for the robotic arm. The vector from the camera to the apple can be found using the 3-D point cloud. However, to move the arm in front of the apple, the vector from the base of the robotic arm to the apple must be known. This can be calculated from the vector from the camera to the apple, r_CA, and the vector from the base of the robot to the eye-to-hand camera, r_RC, as shown in Figure 4.
The vector from the robot base to the camera was measured physically and added to the vector obtained from the point cloud. By translating the coordinates, eye-to-hand coordination was established using the following equation:
r_RC + r_CA = r_RA
The position vector r_RA was used to direct the end effector toward the apple, and the precise picking movement was controlled by the eye-in-hand camera, as shown by the yellow position vectors.
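Numerically, the translation is a single vector addition. The figures below are illustrative values only (metres, robot base frame), not OrBot's measured offsets:

```python
import numpy as np

# Hypothetical camera mounting offset, measured once: base -> eye-to-hand camera.
r_RC = np.array([0.10, -0.25, 0.40])

# Hypothetical detection from the point cloud: camera -> apple.
r_CA = np.array([0.55, 0.30, 0.12])

# r_RC + r_CA = r_RA: apple position in the robot base frame,
# used to direct the end effector toward the fruit.
r_RA = r_RC + r_CA  # approximately [0.65, 0.05, 0.52]
```

This assumes the camera axes are aligned with the robot base axes; a rotated camera mount would additionally require rotating r_CA into the base frame before adding.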

2.5. Evaluation of Fruit Harvesting

The performance of OrBot for fruit harvesting was evaluated both in the laboratory and in a commercial orchard. In the laboratory, two artificial trees with five artificial fruits randomly positioned on them were used for the test. Lighting was provided by the room's fluorescent lights. During the test, the fruit picking success rate and the picking time were measured. OrBot was programmed to start at the first tree, harvest its fruits, and move to the second tree to harvest the remaining fruits. The fruits were then re-attached to the trees, and the test was replicated five times.
OrBot was also tested in a commercial apple orchard, Symms Fruit Ranch, located in Caldwell, Idaho, USA. The apple variety in the orchard is Pink Lady. The apple trees, which are semi-dwarf, are trained in a spindle-type structure. Twenty trees in the same row were used for the harvesting test. The tests were conducted at midday under fine weather conditions in November 2024 and November 2025.
Figure 5. Evaluation of Fruit Harvesting with a) Laboratory Test and b) Commercial Orchard Test.

3. Results

3.1. Indoor Test

In all replications of the indoor test, OrBot successfully recognized and picked all the fruits. Figure 6 shows the harvesting sequence of finding a fruit, centering on the fruit, and harvesting it. The figure also shows the difference in field of view between the two cameras: the eye-to-hand camera has a wide field of view, while the eye-in-hand camera has a narrow one. In one of the harvesting tests, the trajectory of the end effector was recorded; it followed a Cartesian motion from the home position to centering on the fruit and then moving toward it. Harvesting cycle times were also recorded in the laboratory tests, and the average cycle time for fruit harvesting was 14.1 seconds.
Table 1. Harvesting operation cycle time.
Operation Time (s)
Moving to target fruit 3.9
Centering target fruit 2.7
Picking target fruit 7.5
Total 14.1

3.2. Outdoor Test

For the outdoor test, harvesting was considered a success if the target fruit was recognized and removed from the tree. Figure 7 shows (a) the eye-to-hand camera view when searching for a target fruit and (b) the eye-in-hand camera view during the precision movement to pick the fruit; Figure 7c and Figure 7d show the posture of the robot during fruit search and fruit picking. OrBot moved autonomously along the row and stopped when a target fruit was recognized. OrBot harvested 120 apples from the row of twenty trees with a success rate of 75%. The average size and weights of the fruits were. Several fruits had indentations caused by the gripper, but these were still counted as successful harvests.
Figure 7. Fruit harvesting test in a commercial orchard showing in a) Eye-to-hand view, b) eye-in-hand view, c) Orbot in search mode, and d) Orbot picking fruit.

4. Discussion

The 100% harvesting success rate of OrBot in the indoor test demonstrates its capability to pick fruit autonomously. In the indoor test, OrBot moved from one tree to the other without any issues. The indoor test also showed that the two-camera system facilitated finding the fruits: the eye-to-hand vision system searched for fruits while the platform was moving. Once a target fruit was identified, OrBot stopped and transferred control to the eye-in-hand vision system, which performed a more precise motion by centering on the target fruit before approaching and picking it. After the fruit was picked, control was transferred back to the eye-to-hand vision system.
When moving to the commercial orchard, the harvesting success rate of OrBot decreased to about 77%. The failed attempts were cases where (a) the fruit was not recognized, (b) the fruit was recognized but not picked by the end effector, (c) the fruit was recognized but the end effector could not approach it, and (d) the fruit was recognized but OrBot tried to pick another fruit.
In the failure to recognize the fruit, there were a few instances where OrBot was in front of a tree with fruits, yet no fruit was recognized. The main reason was the lighting condition. A review of robotic fruit harvesting performance listed lighting variation as one of the exogenous disturbances that affect harvesting performance [18]. When the positions of the robot and the fruit created a backlighting condition, the fruit appeared dark in the image; in the indoor tests, where the fruits were always frontlit, there were no recognition issues. When the robot was repositioned in the field, it was able to recognize the fruit. One solution to be tried in future tests is to include a range of lighting conditions, such as cloudy, sunny, frontlit, and backlit images, in the training of the fruit recognition algorithm.
In the instances where the fruit was recognized but not picked, the issue was the end effector design [19]. The current end effector uses a standard two-finger gripper with customized fingers designed to capture the curvature of an apple. As mentioned in the results, the harvested apples had a large variation in size, and the robot had difficulty harvesting smaller fruits, which slipped out of the fingers during rotation. A study of an end effector for tomatoes noted that the fruit's major diameter affected fruit grasping [20]. The end effector needs to be modified to ensure that the fruit is picked. The use of vacuum pickers [21] was also tested during this harvesting season, as shown in Figure 8; initial tests showed that it improved picking efficiency to more than 90%, but further tests and design modifications are needed to validate its performance.
There were also a few times where a fruit was recognized but the manipulator did not move because the robot estimated that the fruit was outside its work envelope. The issue was with the eye-to-hand vision system. During the search phase, the control system combined the point cloud data with the color image to estimate the distance of the fruit. If a single point on the fruit was selected for distance estimation and that point fell outside the work envelope, the robot would not move. Instead, the distance should be estimated by averaging over the whole area of the fruit. In addition, it would be useful to use the infrared distance sensing of the eye-in-hand camera to double-check the target fruit distance.
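The proposed fix can be sketched as follows: rather than trusting a single point-cloud sample, average the range over the fruit's bounding box, discarding invalid points. The array layout and box format are our assumptions for illustration:

```python
import numpy as np

def fruit_distance(point_cloud, box):
    """Mean Euclidean distance of valid points inside box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    region = point_cloud[y1:y2, x1:x2]      # (h, w, 3) slice of XYZ points
    dists = np.linalg.norm(region, axis=2)  # per-pixel range from the camera
    valid = dists[np.isfinite(dists)]       # drop NaN/inf (unmatched stereo pixels)
    return float(valid.mean()) if valid.size else float('nan')

# Toy cloud: a surface 1.0 m away along z, with one invalid (NaN) sample
# inside the fruit's bounding box.
pc = np.zeros((10, 10, 3))
pc[..., 2] = 1.0
pc[5, 5] = np.nan
d = fruit_distance(pc, (4, 4, 7, 7))  # averages the 8 valid points, ignoring the NaN
```

With single-point sampling, hitting the NaN pixel (or an outlier) would have produced an unusable distance; the averaged estimate is robust to such dropouts.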
The other failure mode occurred when the robot harvested a fruit other than the target. This was counted as a failed harvest because the target fruit was not harvested, although after the other fruit was picked, the robot moved toward the target fruit and harvested it. There were also instances where the robot would center between the target fruit and a nearby fruit and remain stuck in this loop. Currently, the size of the fruit in the image is used for object tracking; the relative position of the fruit with respect to the robot origin should be added to make tracking more robust.

5. Conclusions

This study presented an improved robotic fruit harvesting system, Orchard roBot (OrBot), designed to address labor shortages in specialty crop production through automation. By integrating a dual-camera vision system, transitioning the control architecture to Robot Operating System 2 (ROS2), and implementing a modular, Python-based software framework, the updated OrBot platform demonstrated enhanced fruit detection, system coordination, and harvesting performance compared to previous iterations.
Experimental evaluation in both laboratory and commercial orchard environments confirmed the effectiveness of the proposed approach. OrBot achieved a 100% harvesting success rate in controlled indoor conditions and a 75–80% success rate in outdoor orchard tests, with improved performance observed after orchard pruning. The dual-camera configuration proved particularly effective, enabling efficient wide-area fruit search using the eye-to-hand vision system and precise manipulation using the eye-in-hand system. These results validate the feasibility of combining complementary vision modalities to overcome the limited field of view and precision constraints inherent in single-camera harvesting systems.
Field testing also revealed key challenges that must be addressed to further improve reliability and commercial viability. Variable lighting conditions, particularly backlighting, reduced fruit recognition accuracy, highlighting the need for more robust training datasets and adaptive vision algorithms. Limitations in the current gripper design affected harvesting performance for smaller fruits, suggesting that alternative end effector designs, such as vacuum-based systems, may offer improved consistency. Additionally, depth estimation and object tracking errors in cluttered environments emphasize the importance of integrating multi-sensor fusion and improved distance estimation strategies.
Overall, the results of this study demonstrate that robotic fruit harvesting is a viable and promising solution for augmenting human labor in orchard environments. While further refinement is required to increase speed, robustness, and economic feasibility, the advancements presented in this work represent a significant step toward practical deployment of autonomous harvesting systems capable of supporting sustainable food production in the face of growing labor constraints.

Author Contributions

Conceptualization, D.M.B.; methodology, D.M.B., C.R., and C.S.; software, C.R. and C.S.; validation, C.R., C.S., and D.M.B.; formal analysis, C.R., C.S., and D.M.B.; investigation, C.R., C.S., and D.M.B.; resources, D.M.B.; data curation, C.R., C.S., and D.M.B.; writing—original draft preparation, D.M.B., C.R., and C.S.; writing—review and editing, D.M.B., C.R., and C.S.; supervision, D.M.B.; project administration, D.M.B.; funding acquisition, D.M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported in part by the Idaho State Department of Agriculture through the Specialty Crop Block Grant Program.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to acknowledge the support of Symms Fruit Ranch for allowing the use of OrBot in their commercial orchards.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. van Dijk, M.; Morley, T.; Rau, M.L.; et al. A meta-analysis of projected global food demand and population at risk of hunger for the period 2010–2050. Nat Food 2021, 2, 494–501. [Google Scholar] [CrossRef]
  2. Wei, X.; Campbell, B. L.; Khachatryan, H.; Brumfield, R. G. What Firms Hire H-2A Workers? Evidence from the US Ornamental Horticulture Industry. HortScience 2023, 58(4), 375–382. [Google Scholar] [CrossRef]
  3. Karunathilake, E.M.B.M.; Le, A.T.; Heo, S.; Chung, Y.S.; Mansoor, S. The Path to Smart Farming: Innovations and Opportunities in Precision Agriculture. Agriculture 2023, 13, 1593. [Google Scholar] [CrossRef]
  4. Botta, A.; Cavallone, P.; Baglieri, L.; Colucci, G.; Tagliavini, L.; Quaglia, G. A Review of Robots, Perception, and Tasks in Precision Agriculture. Appl. Mech. 2022, 3, 830–854. [Google Scholar] [CrossRef]
  5. Kootstra, G.; Wang, X.; Blok, P.M.; et al. Selective Harvesting Robotics: Current Research, Trends, and Future Directions. Curr Robot Rep 2021, 2, 95–104. [Google Scholar] [CrossRef]
  6. Hou, G.; Chen, H.; Jiang, M.; Niu, R. An Overview of the Application of Machine Vision in Recognition and Localization of Fruit and Vegetable Harvesting Robots. Agriculture 2023, 13, 1814. [Google Scholar] [CrossRef]
  7. Williams, H.A.M.; Jones, M.H.; Nejati, M.; Seabright, M.J.; Bell, J.; Penhall, N.D.; Barnett, J.J.; Duke, M.D.; Scarfe, A.J.; Ahn, H.S.; Lim, J.; MacDonald, B.A. Robotic kiwifruit harvesting using machine vision, convolutional neural networks, and robotic arms. Biosystems Engineering 2019, 181, 140–156. [Google Scholar] [CrossRef]
  8. Zhao, Y.; Gong, L.; Huang, Y.; Liu, C. A review of key techniques of vision-based control for harvesting robot. Computers and Electronics in Agriculture 2016, 127, 311–323. [Google Scholar] [CrossRef]
  9. Waltman, J.; Buchanan, E.; Bulanon, D.M. Nighttime Harvesting of OrBot (Orchard RoBot). AgriEngineering 2024, 6, 1266–1276. [Google Scholar] [CrossRef]
  10. Bulanon, D.M.; Burr, C.; DeVlieg, M.; Braddock, T.; Allen, B. Development of a Visual Servo System for Robotic Fruit Harvesting. AgriEngineering 2021, 3, 840–852. [Google Scholar] [CrossRef]
  11. Discover our Gen3 robotic arm. Available online: https://www.kinovarobotics.com/product/gen3-robots (accessed on 10 February 2026).
  12. ZED 2 Versatile stereo camera for spatial perception. Available online: https://www.stereolabs.com/products/zed-2 (accessed on 10 February 2026).
  13. ZED Box Orin. Available online: https://www.stereolabs.com/docs/embedded/zed-box-orin (accessed on 10 February 2026).
  14. Ultralytics YOLOv5. Available online: https://docs.ultralytics.com/models/yolov5/ (accessed on 10 February 2026).
  15. Ruybalid, Connor. Improving OrBot. Bachelor of Arts in Computer Science, Northwest Nazarene University, April 2025. [Google Scholar]
  16. ROS—Robot Operating System. Available online: https://www.ros.org/ (accessed on 10 February 2026).
  17. ROS 2 Documentation. Available online: https://docs.ros.org/en/foxy/index.html (accessed on 10 February 2026).
  18. Zhou, H.; Wang, X.; Au, W.; et al. Intelligent robots for fruit harvesting: recent developments and future challenges. Precision Agric 2022, 23, 1856–1907. [Google Scholar] [CrossRef]
  19. Silwal, A.; Davidson, J. R.; Karkee, M.; Mo, C.; Zhang, Q.; Lewis, K. Design, integration, and field evaluation of a robotic apple harvester. Journal of Field Robotics 2017, 34(6), 1140–1159. [Google Scholar] [CrossRef]
  20. Li, Z.; Miao, F.; Yang, Z.; Chai, P.; Yang, S. Factors affecting human hand grasp type in tomato fruit-picking: A statistical investigation for ergonomic development of harvesting robot. Computers and Electronics in Agriculture 2019, 157, 90–9. [Google Scholar] [CrossRef]
  21. You, K. Development of an adaptable vacuum based orange picking end effector. Agricultural Engineering International: CIGR Journal 2019, 21(1), 58–66. [Google Scholar]
Figure 1. OrBot and its components.
Figure 2. Example of ROS2 Nodes with Associated Communication.
Figure 3. ROS2 Nodes for fruit harvesting using OrBot.
Figure 4. Position Vector Relationship Between Camera, Fruit, and Manipulator.
Figure 6. Fruit harvesting sequence in the laboratory test.
Figure 8. OrBot harvesting apples using a vacuum based end effector.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.