Submitted:
02 August 2024
Posted:
06 August 2024
Abstract

Keywords:
1. Introduction
2. Problem Statement and General Approach
3. Methods

3.1. Forming the Model Object
- The operator selects one or more of the most informative views of the object from the sequence of photographs of the overview trajectory (with their respective stereo-pairs of frames); together, these views should support recognition of the object from any position of the working (inspection) trajectory when the AUV is positioned above the object.
- The operator fixes a rectangular area of the object’s location in the seabed plane (using the VNM, which computes point coordinates in the external CS with some accuracy). This enables a rough object search at the recognition stage (while the AUV moves along the working trajectory), before the object recognition algorithm based on camera-image processing is activated.
- The operator creates a 3D model of the object that combines the 3D models of several views of the overview trajectory. The spatial geometric elements making up the model are formed by processing the original stereo-pairs of frames of the views used; this processing applies algorithmic procedures with explicit operator involvement in CE formation. The element types used, as noted above, are points, rectilinear segments, and macro-elements built from segment lines (such as the “corner” type). For each selected kth view of the AUV’s overview trajectory:
- The point features in the left and right frames of the stereo-pair are matched using the SURF detector. The points belonging to the object, as specified by the operator, are filtered and selected; additional points can be generated manually, and terrain points near the object can also be included, since the scene is static. By matching the sets M_2D_POINT_VkVk_LR and M_2D_POINT_VkVk_RL, a set of 3D points M_3D_POINT_VkVk_CSview_k visible to the camera in this view is constructed using the ray triangulation method. The points are coordinated in CSview_k, the CS of this view of the overview trajectory.
- The operator generates a set of 3D edge lines visible to the camera in CSview_k of this view using the original frames of the stereo-pair. Each spatial segment line is described by its two 3D endpoints, with pointers to the 2D images of these points in the 2D sets indicated above. The 3D reconstruction of the matched 2D images of segment lines in the model stereo-pair of frames is solved in the traditional way: the endpoints are matched in the stereo-pair of frames by computing correlation estimates while scanning epipolar lines, and the 3D coordinates of the segment endpoints in CSview_k are then calculated by the ray triangulation method. The endpoints can also be matched by the operator explicitly indicating the matched images. The formed segments are coordinated in CSview_k and stored in the set M_3D_LINE_VkVk_CSview_k.
- “Corner” type CEs are formed on the basis of the set of spatial segments obtained for this view. The formed “corner” CEs are coordinated in CSview_k and stored in the set M_3D_CORNER_VkVk_CSview_k.
- To obtain a model description independent of the AUV’s CS, the operator explicitly determines the object’s intrinsic CS. This CS is built on a segment line indicated by the operator (referred to as the base segment line) such that the Z'-axis is oriented along the Z-axis of the external CS (see Figure 2); this can be done using data from the AUV’s standard navigation system. For each used kth view of the object, a coordinate transformation matrix is calculated that links the object’s CS with the CS of the camera of this view of the overview trajectory (in the standard way, by expressing the unit vectors of one CS in the other CS). If the camera does not see the selected base segment line in one of the views, the VNM method is applied: it provides a matrix for coordinate transformation from the CS of one trajectory position into the CS of the other position, so the coordinates of the base segment line are calculated in the CS of the considered view implicitly, without matching in the stereo-pair of frames. Afterwards, the object’s intrinsic CS is built on the base segment line, the same as for the other views, and the matrix linking the CS of this view with the object’s CS is calculated. Thus, a single CS of the object is built by using the same spatial base segment line in all the views used. For each kth view, its own transformation matrix H_CSview_k,CSobject_n into this object’s CS is calculated (where n is the object’s sequential number).
- The coordinates of all three CE types of the used kth view are transformed with the calculated matrix into the object’s intrinsic CS. The obtained coordinate representations are stored, respectively, in the sets M_3D_POINT_Vk_CSobject_n, M_3D_LINE_Vk_CSobject_n, and M_3D_CORNER_Vk_CSobject_n, specified in CSobject_n. Simultaneously, these representations are added to the accumulated sets representing the object’s complete 3D model in CSobject_n over all processed views: M_3D_POINT_CSobject_n, M_3D_LINE_CSobject_n, and M_3D_CORNER_CSobject_n. Note that if a CE is present in several views, it is represented in the complete model by averaged coordinates.
- The operator explicitly determines the CS of the SPS. The CS of one of the objects is used as this CS, and the intrinsic CSs of all objects are coordinated in the CS of the SPS.
Thus, the object’s 3D model is stored in two representations:
- as a set of models of the several views used, where the model of a view is a combination of three CE sets (points, lines, and corners) specified in the CS of this view (CSview_k); the matrix H_CSview_k,CSobject_n for coordinate transformation from this view’s CS into the object’s intrinsic CS (CSobject_n) is calculated for each view;
- as a combination of sets of the three CE types (points, lines, and corners) specified in the object’s intrinsic CS (CSobject_n), which represents the spatial structure of the object. Here, the set of each type is formed by summing the CEs from the several views used. Note that each edge line is linked with a face plane to which it belongs; the normal to this plane and its position relative to the segment line (the face on the right or left) are indicated. This information is needed at the object recognition stage for correct calculation of the correlation estimate when matching the images of the segment line (two of its points) in the frames of the model’s stereo-pair and of the working (inspection) trajectory’s stereo-pair: the rectangular area adjacent to the edge and used to compute the correlation coefficient is specified only on this plane.
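The ray triangulation step used above, for both point CEs and segment endpoints, can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the intrinsics `K`, the 0.3 m baseline, and the test point are assumed values.

```python
# Sketch of ray triangulation: matched 2D features in a stereo-pair are
# back-projected into rays, and the 3D point is taken as the midpoint of
# the common perpendicular of the two rays. All numeric values are
# illustrative assumptions.
import numpy as np

def pixel_to_ray(K, uv):
    """Back-project a pixel into a unit ray direction in the camera CS."""
    d = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    return d / np.linalg.norm(d)

def triangulate(c1, d1, c2, d2):
    """Midpoint of the closest approach of two rays c_i + t_i * d_i."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # near zero for almost-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Illustrative setup: left camera at the origin, right camera offset by a
# 0.3 m baseline along X, identical intrinsics for both frames.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
cL, cR = np.zeros(3), np.array([0.3, 0.0, 0.0])
P = np.array([0.1, 0.2, 2.0])                  # ground-truth 3D point

uvL = (K @ P)[:2] / P[2]                       # project into the left frame
uvR = (K @ (P - cR))[:2] / (P - cR)[2]         # and into the right frame
P_rec = triangulate(cL, pixel_to_ray(K, uvL), cR, pixel_to_ray(K, uvR))
```

Since the synthetic rays intersect exactly, the reconstructed point coincides with the ground truth; with real, noisy matches the midpoint construction gives the least-squares compromise between the two rays.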
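The correlation-based endpoint matching along epipolar lines mentioned above can be sketched as below, assuming a rectified stereo-pair so that the epipolar line is an image row. The window size, frame sizes, and synthetic texture are illustrative assumptions.

```python
# Sketch of matching a segment endpoint in the right frame by scanning
# the epipolar line (here, an image row of a rectified pair) and taking
# the position with the highest normalized cross-correlation.
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_on_epipolar(left, right, u, v, half=2):
    """Find the column in row v of the right frame best matching the left patch at (u, v)."""
    patch = left[v - half:v + half + 1, u - half:u + half + 1]
    best_u, best_score = -1, -1.0
    for uc in range(half, right.shape[1] - half):
        cand = right[v - half:v + half + 1, uc - half:uc + half + 1]
        s = ncc(patch, cand)
        if s > best_score:
            best_u, best_score = uc, s
    return best_u, best_score

# Synthetic check: the same textured patch placed at two columns,
# emulating a 9-pixel disparity between the frames.
rng = np.random.default_rng(0)
texture = rng.random((5, 5))
left = np.zeros((20, 40))
right = np.zeros((20, 40))
left[8:13, 20:25] = texture        # endpoint at (u=22, v=10) in the left frame
right[8:13, 11:16] = texture       # the same patch shifted to u=13
u_match, score = match_on_epipolar(left, right, 22, 10)
```

In the method above this scan is further constrained by the face plane linked to the edge, so that the correlation window lies only on that plane; the sketch omits this for brevity.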
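The construction of the object’s intrinsic CS from the base segment line can be sketched as follows, under the convention described above: Z' follows the Z-axis of the external CS, X' follows the horizontal projection of the segment, and the origin is placed at the segment’s first endpoint. The endpoint coordinates are illustrative, and this particular axis assignment is an assumption consistent with, but not stated in, the text.

```python
# Sketch of building the segment-based object CS and the 4x4 homogeneous
# matrix H that maps coordinates of the current (view) CS into it.
import numpy as np

def object_cs_from_segment(p0, p1):
    """Homogeneous transform from the current CS into the object CS built on segment p0-p1."""
    z = np.array([0.0, 0.0, 1.0])          # Z' aligned with the external Z-axis
    d = p1 - p0
    x = d - (d @ z) * z                    # horizontal projection of the segment
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                     # right-handed Y'
    R = np.stack([x, y, z])                # rows: object axes expressed in the current CS
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = -R @ p0                     # origin at the segment's first endpoint
    return H

def transform(H, p):
    """Apply a homogeneous transform to a 3D point."""
    return (H @ np.append(p, 1.0))[:3]

# Illustrative base segment in some view's CS (length 5, horizontal).
p0 = np.array([1.0, 2.0, 0.5])
p1 = np.array([4.0, 6.0, 0.5])
H = object_cs_from_segment(p0, p1)
```

With this construction, p0 maps to the origin and p1 lands on the X'-axis at a distance equal to the segment length, so the same base segment yields the same intrinsic CS from every view, as the method requires.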
3.2. Recognition of SPS Objects

The per-view matching statistics accumulated during recognition can be summarized as follows:
| Object view | S_3D_POINT_CSwork (points matched to model / points in model) | S_3D_LINE_CSwork (lines matched to model / lines in model) | S_3D_CORNER_CSwork (corners matched to model / corners in model) |
|---|---|---|---|
| View 1 in CS1 * | np_1 / mp | nl_1 / ml | nc_1 / mc |
| … | … | … | … |
| View k in CSk | np_k / mp | nl_k / ml | nc_k / mc |
| … | … | … | … |
| View last in CSlast | np_last / mp | nl_last / ml | nc_last / mc |
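One way the per-view ratios in the table above can be folded into a single score for selecting the best-matching view is sketched below. The equal weighting, the counts, and the view names are illustrative assumptions, not values from the paper.

```python
# Sketch: combine the matched-to-model ratios for points (np/mp),
# lines (nl/ml) and corners (nc/mc) of each view into one weighted
# score, then pick the view with the highest score.

def view_score(np_k, mp, nl_k, ml, nc_k, mc, w=(1.0, 1.0, 1.0)):
    """Weighted average of the matched fractions for the three CE types."""
    ratios = (np_k / mp, nl_k / ml, nc_k / mc)
    return sum(wi * ri for wi, ri in zip(w, ratios)) / sum(w)

# Illustrative per-view counts of matched CEs: (points, lines, corners).
views = {
    "view_1": (8, 3, 1),
    "view_2": (12, 5, 2),
    "view_3": (6, 2, 0),
}
mp, ml, mc = 15, 6, 2      # model totals for each CE type

scores = {name: view_score(p, mp, l, ml, c, mc)
          for name, (p, l, c) in views.items()}
best_view = max(scores, key=scores.get)
```

A recognition threshold on the best score (and separate weights for the CE types, e.g. favoring corners as the most distinctive elements) would be natural extensions of this scheme.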
3.2.2. Recognition and Calculation of 3D Coordinates of “Segment Line” and “Corner” Type CEs (Stage 2)


4. Experiments
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References





Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).