Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Diversity-Generated Image Inpainting with Style Extraction

Version 1 : Received: 2 December 2019 / Approved: 3 December 2019 / Online: 3 December 2019 (12:04:01 CET)

A peer-reviewed article of this Preprint also exists.

W. Cai and Z. Wei, "PiiGAN: Generative Adversarial Networks for Pluralistic Image Inpainting," in IEEE Access, vol. 8, pp. 48451-48463, 2020.

Abstract

The latest deep-learning-based methods have achieved impressive results on the challenging task of inpainting large missing regions of an image. Such methods generally attempt to generate a single "optimal" inpainting result, ignoring many other plausible completions. However, given the inherent uncertainty of the inpainting task, a single result can hardly be regarded as the desired restoration of the missing region. To address this weakness in the design of previous algorithms, we propose a novel deep generative model equipped with a new style extractor that extracts the style noise (a latent vector) from the ground truth image. The extracted style noise and the ground truth image are then both fed into the generator. We also craft a consistency loss that guides the generated image to approximate the ground truth; at the same time, the same extractor captures the style noise from the generated image, which the consistency loss forces to approach the input noise. After training iterations, our generator learns the styles corresponding to multiple sets of noise. The proposed model can generate a (sufficiently large) number of inpainting results consistent with the contextual semantics of the image. Moreover, we evaluate the effectiveness of our model on three datasets, i.e., CelebA, Agricultural Disease, and MauFlex. Compared with state-of-the-art inpainting methods, our model offers inpainting results with both higher quality and greater diversity. The code and model will be made available at https://github.com/vivitsai/SEGAN.
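To make the style-consistency idea in the abstract concrete, the following is a minimal PyTorch-style sketch of a single training step: a style noise vector is extracted from the ground truth, the generator is conditioned on it, and the loss combines a reconstruction term with a term that forces the noise re-extracted from the generated image back toward the input noise. The module names (extractor, generator), the use of a masked input image, the L1 distances, and the loss weights are all assumptions for illustration and are not taken from the authors' released implementation.

    import torch
    import torch.nn.functional as F

    def consistency_step(extractor, generator, gt_image, masked_image,
                         lambda_rec=1.0, lambda_style=1.0):
        # Extract the style noise (a latent vector) from the ground truth image.
        z = extractor(gt_image)

        # Condition the generator on the image with missing regions and the
        # extracted style noise. (Feeding the masked image here is an assumption;
        # the abstract describes inputting the noise together with the image.)
        generated = generator(masked_image, z)

        # Consistency/reconstruction term: the generated image should
        # approximate the ground truth.
        rec_loss = F.l1_loss(generated, gt_image)

        # Style-consistency term: re-extracting the style noise from the
        # generated image should recover the noise fed to the generator.
        z_hat = extractor(generated)
        style_loss = F.l1_loss(z_hat, z.detach())

        return lambda_rec * rec_loss + lambda_style * style_loss

Under this setup, repeating the step with different extracted or sampled noise vectors is what lets the generator associate distinct styles with distinct latent codes, yielding diverse inpainting results at test time.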

Keywords

deep learning; generative adversarial networks; image inpainting; diversity inpainting

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

Comments (2)

Comment 1
Received: 4 December 2019
Commenter: Dinesh Datla
The commenter has declared there is no conflict of interests.
Comment: Hi there, I'm also working on inpainting, and I find diversity inpainting very interesting. Is there any plan on releasing your code? Thanks a lot.
Comment 2
Received: 5 December 2019
Commenter:
The commenter has declared there is no conflict of interests.
Comment: Where can I download the Agricultural Disease dataset mentioned in your paper? Thank you very much!


