Article
Preserved in Portico. This version is not peer-reviewed.
Diversity-Generated Image Inpainting with Style Extraction
Version 1: Received: 2 December 2019 / Approved: 3 December 2019 / Online: 3 December 2019 (12:04:01 CET)
A peer-reviewed article of this Preprint also exists.
W. Cai and Z. Wei, "PiiGAN: Generative Adversarial Networks for Pluralistic Image Inpainting," in IEEE Access, vol. 8, pp. 48451-48463, 2020.
Abstract
Recent deep learning methods have achieved impressive results on the challenging task of inpainting large missing regions of an image. These methods generally attempt to generate a single "optimal" inpainting result, ignoring the many other plausible candidates. However, given the inherent uncertainty of the inpainting task, a single result can hardly be regarded as the desired restoration of the missing region. To address this weakness in the design of previous algorithms, we propose a novel deep generative model equipped with a new style extractor that extracts the style noise (a latent vector) from the ground-truth image. The extracted style noise and the ground-truth image are then both fed into the generator. We also design a consistency loss that guides the generated image to approximate the ground truth; at the same time, the same extractor captures the style noise from the generated image, which the consistency loss forces to approach the input noise. After training, our generator learns the styles corresponding to multiple sets of noise, so the proposed model can generate an arbitrarily large number of inpainting results consistent with the semantic context of the image. We evaluate the effectiveness of our model on three datasets: CelebA, Agricultural Disease, and MauFlex. Compared with state-of-the-art inpainting methods, our model produces inpainting results of both higher quality and greater diversity. The code and model will be made available at https://github.com/vivitsai/SEGAN.
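To make the style-consistency idea in the abstract concrete, below is a minimal PyTorch sketch of the two coupled objectives: the generated image should approximate the ground truth, and the style re-extracted from the generated image should match the style extracted from the ground truth. The network architectures, latent dimension, and the 0.1 loss weight are illustrative assumptions, not the authors' released implementation (see the GitHub link above).

```python
# Minimal sketch (assumed PyTorch setup), not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleExtractor(nn.Module):
    """Maps an image to a style noise vector (latent code)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling to (B, 128, 1, 1)
        )
        self.fc = nn.Linear(128, z_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class Generator(nn.Module):
    """Toy encoder-decoder stand-in that conditions on a style vector
    by broadcasting it across spatial locations."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + z_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x_masked, z):
        b, _, h, w = x_masked.shape
        z_map = z.view(b, -1, 1, 1).expand(b, z.size(1), h, w)
        return self.net(torch.cat([x_masked, z_map], dim=1))

def consistency_step(G, E, x_real, mask):
    """One training step combining the two consistency terms."""
    x_masked = x_real * (1 - mask)        # zero out the missing region
    z = E(x_real)                         # style noise from ground truth
    x_fake = G(x_masked, z)               # inpaint conditioned on that style
    z_rec = E(x_fake)                     # style noise re-extracted from output
    loss_img = F.l1_loss(x_fake, x_real)  # generated image approximates ground truth
    loss_style = F.l1_loss(z_rec, z)      # re-extracted noise approaches input noise
    return loss_img + 0.1 * loss_style    # 0.1 weight is an assumption
```

At inference time, sampling different style vectors (for example, z drawn from a standard normal) for the same masked input would yield the diverse inpainting results the abstract describes.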
Keywords
deep learning; generative adversarial networks; image inpainting; diversity inpainting
Subject
MATHEMATICS & COMPUTER SCIENCE, Artificial Intelligence & Robotics
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.