The relative position of an orchard robot with respect to the rows of fruit trees is a key parameter for autonomous navigation. Existing methods for estimating this inter-row position achieve low accuracy; to address this problem, this paper proposes a machine-vision method for detecting the relative position between an orchard robot and the fruit tree rows. First, tree trunks are detected with an improved YOLOv4 model; second, the camera coordinates of each trunk are computed by the principle of binocular triangulation and converted into ground-projection coordinates; finally, the midpoints of paired projections from opposite rows are fitted with a least-squares line to obtain the navigation path, from which the robot's position parameters are calculated. Experimental results show that the improved YOLOv4 model achieves an average precision of 97.05% and an average recall of 95.42% for trunk detection, 5.92 and 7.91 percentage points higher than the original YOLOv4 model. The average errors of the heading-angle and lateral-deviation estimates obtained with the proposed method are 0.57° and 0.02 m, respectively. The method accurately computes heading angle and lateral deviation at different positions between rows and can provide a reference for autonomous visual navigation of orchard robots.
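The geometric steps described above (binocular depth recovery, midpoint pairing, least-squares path fitting, and position-parameter computation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the robot-frame convention (x lateral, z forward), and the pairing of trunks by index across the two rows are all assumptions for the sake of the example.

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Binocular triangulation: depth Z = f * B / d,
    where f is the focal length in pixels, B the stereo baseline
    in meters, and d the disparity in pixels."""
    return focal_px * baseline_m / disparity_px

def row_position(left_trunks, right_trunks):
    """Estimate heading angle (degrees) and lateral deviation (meters)
    from ground-projected trunk coordinates of the left and right rows.

    left_trunks, right_trunks: sequences of (x, z) ground coordinates
    in the robot frame (x: lateral offset, z: forward distance).
    Trunks are paired by index here purely for illustration.
    """
    n = min(len(left_trunks), len(right_trunks))
    left = np.asarray(left_trunks[:n], dtype=float)
    right = np.asarray(right_trunks[:n], dtype=float)

    # Midpoints of paired trunks approximate the inter-row center line.
    mids = (left + right) / 2.0

    # Least-squares fit of the path as x = a*z + b through the midpoints.
    a, b = np.polyfit(mids[:, 1], mids[:, 0], 1)

    heading_deg = np.degrees(np.arctan(a))  # angle between path and forward axis
    lateral_dev = b                         # x-offset of the path at the robot (z = 0)
    return heading_deg, lateral_dev
```

For example, with two trunks per row placed symmetrically at x = ±1 m, the fitted path coincides with the forward axis, so both the heading angle and the lateral deviation come out as zero.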