Philippe Schwaller, Daniel Probst, et al.
ACS Fall 2020
Despite the recent success of GANs in synthesizing images conditioned on inputs such as a user sketch, text, or semantic labels, manipulating the high-level attributes of an existing natural photograph with GANs is challenging for two reasons. First, it is hard for GANs to precisely reproduce an input image. Second, after manipulation, the newly synthesized pixels often do not fit the original image. In this paper, we address these issues by adapting the image prior learned by GANs to the image statistics of an individual image. Our method can accurately reconstruct the input image and synthesize new content consistent with the appearance of the input image. We demonstrate our interactive system on several semantic image editing tasks, including synthesizing new objects consistent with the background, removing unwanted objects, and changing the appearance of an object. Quantitative and qualitative comparisons against several existing methods demonstrate the effectiveness of our method.
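The abstract's core idea, adapting a pretrained GAN's image prior to the statistics of a single photograph, can be illustrated with a minimal two-stage sketch: first invert the image into the generator's latent space, then lightly fine-tune the generator weights on that one image so newly synthesized content matches its appearance. The toy generator, function names, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed, not the authors' code): GAN inversion followed by
# per-image adaptation of the generator weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGenerator(nn.Module):
    """Stand-in for a pretrained GAN generator: latent (B, 64) -> image (B, 3, 32, 32)."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, 32, 32)


def adapt_to_image(generator: nn.Module, target: torch.Tensor,
                   latent_dim: int = 64, invert_steps: int = 200,
                   finetune_steps: int = 100):
    # Stage 1: inversion -- optimize only the latent code z so the
    # (frozen) generator approximately reproduces the input photo.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt_z = torch.optim.Adam([z], lr=1e-2)
    for _ in range(invert_steps):
        opt_z.zero_grad()
        loss = F.mse_loss(generator(z), target)  # simple reconstruction loss
        loss.backward()
        opt_z.step()

    # Stage 2: adapt the generator's weights to this single image so that
    # edits made around z synthesize pixels matching its appearance.
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(finetune_steps):
        opt_g.zero_grad()
        loss = F.mse_loss(generator(z.detach()), target)
        loss.backward()
        opt_g.step()
    return z.detach(), generator


if __name__ == "__main__":
    photo = torch.rand(1, 3, 32, 32) * 2 - 1  # placeholder "input photograph"
    gen = ToyGenerator()
    z_star, gen_adapted = adapt_to_image(gen, photo)
    print(F.mse_loss(gen_adapted(z_star), photo).item())
```

In practice a real system would use a large pretrained generator and a perceptual rather than pixel-wise loss; the two-stage structure (invert, then adapt) is the point of the sketch.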
Daniel Karl I. Weidele, Mauro Martino, et al.
IUI 2024
David Bau, Jun-Yan Zhu, et al.
DGS@ICLR Workshop 2019
Benjamin Hoover, Yuchen Liang, et al.
NeurIPS 2023