Hoskere, V.; Narazaki, Y.; Spencer, B.F., Jr. Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds. Sensors 2022, 22, 532.
Manual visual inspections typically conducted after an earthquake are high-risk, subjective, and time-consuming. Delays resulting from these inspections often exacerbate the social and economic impact of the disaster on affected communities. Rapid and autonomous inspection using images acquired from unmanned aerial vehicles (UAVs) offers the potential to reduce such delays. Indeed, a vast amount of research has been conducted toward developing automated vision-based methods to assess the health of infrastructure at the component and structure levels. Most proposed methods rely on images of the damaged structure, but seldom consider how the images were acquired. To achieve autonomous inspections, methods must be evaluated in a comprehensive end-to-end manner, incorporating both data acquisition and data processing. In this paper, we leverage recent advances in computer-generated imagery (CGI) to construct a 3D synthetic environment for the simulation of post-earthquake inspections that allows for comprehensive evaluation and validation of autonomous inspection strategies. A critical issue is how to simulate and subsequently render the damage to the structure after an earthquake. To this end, a high-fidelity nonlinear finite element model is incorporated in the synthetic environment to provide a representation of earthquake-induced damage; this finite element model, combined with photo-realistic rendering of the damage, is termed herein a physics-based graphics model (PBGM).
The 3D synthetic environment with PBGMs provides a comprehensive end-to-end approach for the development and validation of autonomous post-earthquake inspection strategies using UAVs, including: (i) simulation of path planning of virtual UAVs and image capture under different environmental conditions; (ii) automatic labeling of captured images, potentially providing a virtually unlimited amount of data for training deep neural networks; (iii) availability of the ground-truth damage state from the results of the finite element simulation; and (iv) direct comparison of different approaches to autonomous assessments. Moreover, the synthetic data generated have the potential to be used to augment field datasets. To demonstrate the efficacy of PBGMs, models of reinforced concrete moment-frame buildings with masonry infill walls are examined. The 3D synthetic environment employing PBGMs is shown to provide an effective testbed for the development and validation of autonomous vision-based post-earthquake inspections and can serve as an important building block for advancing autonomous data-to-decision frameworks.
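The automatic labeling in (ii) can be illustrated with a minimal sketch: because the synthetic environment controls both the simulation and the renderer, the element visible at each pixel is known exactly, so a per-pixel segmentation label map can be derived mechanically from the finite-element damage results rather than by human annotation. The function name, array layout, and damage classes below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Hypothetical damage classes a synthetic renderer could assign per pixel
# (class ids are illustrative only).
CLASSES = {0: "undamaged", 1: "crack", 2: "spalling"}

def auto_label(render_ids, damage_by_element):
    """Derive a per-pixel label map from render metadata.

    render_ids        : 2D int array, the structural element id visible
                        at each pixel (available because we render the scene)
    damage_by_element : dict mapping element id -> damage class id,
                        taken from the finite-element simulation results
    """
    labels = np.zeros_like(render_ids)  # default: undamaged
    for elem_id, cls in damage_by_element.items():
        labels[render_ids == elem_id] = cls
    return labels

# Toy 4x4 "rendered" id buffer: element 7 is cracked, element 9 is spalled
ids = np.array([[7, 7, 1, 1],
                [7, 9, 9, 1],
                [1, 9, 9, 1],
                [1, 1, 1, 1]])
labels = auto_label(ids, {7: 1, 9: 2})
```

Since every rendered image carries such a label map for free, the same scene can be re-rendered under many camera poses and lighting conditions to grow a training set without any manual annotation effort.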
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.