Working Paper, Version 1 (this version is not peer-reviewed)

Evaluation of Depth Images in the Real Environment Generated by Double-GAN

Version 1 : Received: 4 November 2020 / Approved: 5 November 2020 / Online: 5 November 2020 (08:40:25 CET)

How to cite: Maldonado Romo, J.; Suárez Ruiz, F.; Aldape Pérez, M.; Rodríguez Molina, A. Evaluation of Depth Images in the Real Environment Generated by Double-GAN. Preprints 2020, 2020110202.

Abstract

Unmanned aerial vehicles (UAVs) are used to explore 3D environments, which requires technologies capable of perceiving the surroundings in order to map them and to estimate the location of objects that could cause collisions. RGB-D sensors can measure depth directly, but adding accessories to a UAV increases its energy consumption and size, making it difficult to operate in indoor environments. However, UAVs typically carry a conventional camera, whose images can be used to map 3D indoor environments. Generative Adversarial Networks (GANs) are a powerful tool for this kind of image processing because they can generate realistic images from a noise source and have demonstrated the ability to synthesize images from a set of samples. We therefore propose to use GANs to estimate the depth and segmentation of a real picture from a virtual-environment representation, enriching a conventional camera so that it can estimate depth. Because there are three sample domains involved, a single GAN architecture is not sufficient; we instead propose a Double-GAN architecture with noise reduction that emulates an RGB-D sensor using only a few samples of a real scenario. Finally, we compare its performance against a physical RGB-D sensor, the Kinect, showing that low-cost visual depth perception can be achieved.
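The abstract does not give implementation details, but the chained two-stage idea can be illustrated with a minimal PyTorch sketch. Everything here is an assumption for illustration, not the authors' method: the `Generator` topology, the channel counts, the tensor sizes, and the ordering (real RGB to segmentation, then segmentation to depth) are hypothetical, and the noise-reduction step, discriminators, and training loop are omitted.

```python
# Minimal sketch of a two-stage ("double") GAN inference pipeline.
# Hypothetical illustration: topology, stage order, and shapes are assumed.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder used as a stand-in for each GAN's generator."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
            nn.Sigmoid(),  # outputs normalized to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Stage 1: RGB image -> segmentation map (bridging real and virtual domains).
seg_gan = Generator(in_ch=3, out_ch=1)
# Stage 2: segmentation map -> depth map (emulating the RGB-D sensor).
depth_gan = Generator(in_ch=1, out_ch=1)

rgb = torch.randn(1, 3, 128, 128)   # placeholder camera frame
depth = depth_gan(seg_gan(rgb))     # chained "double GAN" inference
print(depth.shape)                  # torch.Size([1, 1, 128, 128])
```

In a sketch like this, the intermediate segmentation map acts as the shared representation between the real-image domain and the virtual-environment domain, which is one plausible reading of why two generators are chained rather than using a single GAN.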

Subject Areas

Computer vision; Robotics-Perception; 3D Mapping; Machine learning
