Preprint Article, Version 1 (not peer-reviewed). Preserved in Portico.

Physics-Based Graphics Models in 3D Synthetic Environments Enabling Autonomous Vision-Based Structural Inspections

Version 1: Received: 5 November 2021 / Approved: 8 November 2021 / Online: 8 November 2021 (15:06:45 CET)

A peer-reviewed article of this preprint also exists:

Hoskere, V.; Narazaki, Y.; Spencer, B.F., Jr. Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds. Sensors 2022, 22, 532.

Abstract

Manual visual inspections typically conducted after an earthquake are high-risk, subjective, and time-consuming. Delays from inspections often exacerbate the social and economic impact of the disaster on affected communities. Rapid and autonomous inspection using images acquired from unmanned aerial vehicles (UAVs) offers the potential to reduce such delays. Indeed, a vast amount of research has been conducted toward developing automated vision-based methods to assess the health of infrastructure at the component and structure levels. Most proposed methods rely on images of the damaged structure but seldom consider how the images were acquired. To achieve autonomous inspections, methods must be evaluated in a comprehensive end-to-end manner, incorporating both data acquisition and data processing. In this paper, we leverage recent advances in computer generated imagery (CGI) to construct a 3D synthetic environment for the simulation of post-earthquake inspections that allows for comprehensive evaluation and validation of autonomous inspection strategies. A critical issue is how to simulate and subsequently render the damage in the structure after an earthquake. To this end, a high-fidelity nonlinear finite element model is incorporated in the synthetic environment to provide a representation of earthquake-induced damage; this finite element model, combined with photo-realistic rendering of the damage, is termed herein a physics-based graphics model (PBGM). The 3D synthetic environment with PBGMs provides a comprehensive end-to-end approach for the development and validation of autonomous post-earthquake inspection strategies using UAVs, including: (i) simulation of path planning of virtual UAVs and image capture under different environmental conditions; (ii) automatic labeling of captured images, potentially providing a virtually unlimited amount of data for training deep neural networks; (iii) availability of the ground-truth damage state from the results of the finite element simulation; and (iv) direct comparison of different approaches to autonomous assessments. Moreover, the synthetic data generated have the potential to augment field datasets. To demonstrate the efficacy of PBGMs, models of reinforced concrete moment-frame buildings with masonry infill walls are examined. The 3D synthetic environment employing PBGMs is shown to provide an effective testbed for the development and validation of autonomous vision-based post-earthquake inspections, one that can serve as an important building block for advancing autonomous data-to-decision frameworks.
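To make items (i) and (ii) of the abstract concrete, the snippet below is a minimal sketch (not the authors' implementation) of how a synthetic testbed can pair a planned virtual-UAV flight with automatically labeled renders. All names here (CameraPose, orbit_path, render_rgb, render_label_mask) and the example damage classes are hypothetical placeholders for whatever graphics engine and finite-element post-processing host the PBGM; only the overall image/label-pair workflow is taken from the abstract.

```python
# Minimal sketch: planning a simple UAV orbit around a building and collecting
# image/label pairs from a synthetic scene. The render_* functions are stubs
# standing in for the actual PBGM renderer, which is not specified here.
import math
from dataclasses import dataclass

import numpy as np


@dataclass
class CameraPose:
    """Position and yaw of a virtual UAV camera (simplified planar orbit)."""
    x: float
    y: float
    z: float
    yaw_deg: float


def orbit_path(center_xy, radius, altitude, n_views):
    """Plan a circular path around a building footprint, camera facing inward."""
    poses = []
    for k in range(n_views):
        theta = 2.0 * math.pi * k / n_views
        x = center_xy[0] + radius * math.cos(theta)
        y = center_xy[1] + radius * math.sin(theta)
        yaw = math.degrees(math.atan2(center_xy[1] - y, center_xy[0] - x))
        poses.append(CameraPose(x, y, altitude, yaw))
    return poses


def render_rgb(pose, resolution=(480, 640)):
    """Placeholder for the photo-realistic render of the damaged structure."""
    return np.zeros((*resolution, 3), dtype=np.uint8)


def render_label_mask(pose, resolution=(480, 640)):
    """Placeholder for per-pixel damage labels (e.g., 0 = undamaged, 1 = crack,
    2 = spalling) derived automatically from the finite-element damage state."""
    return np.zeros(resolution, dtype=np.uint8)


if __name__ == "__main__":
    dataset = []
    for pose in orbit_path(center_xy=(0.0, 0.0), radius=25.0, altitude=10.0, n_views=36):
        image = render_rgb(pose)
        labels = render_label_mask(pose)  # ground truth comes directly from the simulation
        dataset.append((image, labels))
    print(f"Generated {len(dataset)} automatically labeled image/label pairs.")
```

Because every rendered view is generated from a known camera pose and a known simulated damage state, the labels require no manual annotation, which is what makes the synthetic environment attractive for training and validating deep networks end to end.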

Keywords

Computer Vision; Synthetic Data; Physics-based Graphics Models; Deep Learning; Post-earthquake Inspections

Subject

Engineering, Civil Engineering
