Introduction – Artificial intelligence and image editing
Artificial intelligence and image editing have come a long way! The GANPaint Studio tool takes a natural image of a specific category, e.g. churches or kitchens, and allows modifications with brushes that do not just draw simple strokes but actually draw semantically meaningful units, such as trees, brick texture, or domes. This is a joint project by researchers from MIT CSAIL, IBM Research, and the MIT-IBM Watson AI Lab.
The core of GANPaint Studio is a generative adversarial network (GAN) that can produce its own images of a certain category, e.g. kitchen images. In previous work (project GANDissect), we analyzed which internal units of the network are responsible for producing which features. This allowed us to modify images that the network produced by “drawing” neurons.
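The idea of “drawing” neurons can be sketched as follows: a GAN layer's activations form a (units, height, width) tensor, and forcing the units associated with a concept on or off inside a brushed region changes what the generator renders there. This is only a toy illustration of the GANDissect-style intervention; the unit IDs, layer shape, and activation value below are all hypothetical, not taken from the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((512, 8, 8))  # stand-in layer activations

tree_units = [12, 47, 301]         # hypothetical unit IDs linked to "tree"
brush = np.zeros((8, 8), dtype=bool)
brush[2:5, 2:5] = True             # region the user painted

on_value = 10.0                    # strong activation to "draw" the concept
for u in tree_units:
    activations[u][brush] = on_value   # add the concept in the region
    # activations[u][brush] = 0.0      # ...or zero it out to remove it

assert np.all(activations[12][brush] == on_value)
```

Feeding the modified activations through the rest of the generator would then render the painted concept in the brushed region.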
The novelty we added for GANPaint Studio is that a natural image (of this category) can now be ingested and modified with semantic brushes that add or remove units such as trees, brick texture, or domes. The demo is currently in low resolution and not perfect, but it shows that something like this is possible. Please check the video below.
Semantic Photo Manipulation with a Generative Image Prior
To perform a semantic edit on an image x, we take three steps. (1) First, compute a latent vector z = E(x) representing x. (2) Then apply a semantic vector-space operation ze = edit(z) in the latent space; this could add, remove, or alter a semantic concept in the image. (3) Finally, regenerate the image from the modified ze. Unfortunately, as can be seen in (b), the input image x usually cannot be precisely generated by the generator G, so (c) using G to create the edited image G(ze) will lose many attributes and details of the original image (a). Therefore we propose a new last step: (d) learn an image-specific generator G′ which can produce x′e = G′(ze) that is faithful to the original image x in the unedited regions. Photo from the LSUN dataset.
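The pipeline above can be sketched numerically. Everything here is a toy stand-in, not the paper's actual models: E and G are random linear maps, `edit` is a latent-space shift, and the image-specific G′ is produced by a single least-squares gradient step on the reconstruction error, standing in for the paper's fine-tuning procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

W_e = rng.standard_normal((16, 64))   # toy encoder weights (hypothetical)
W_g = rng.standard_normal((64, 16))   # toy generator weights (hypothetical)

def E(x):                  # (1) compute a latent vector z = E(x)
    return W_e @ x

def G(z, W=W_g):           # generator; G(E(x)) only approximates x
    return W @ z

def edit(z, direction, strength=1.0):  # (2) semantic vector-space operation
    return z + strength * direction

x = rng.standard_normal(64)            # input photo, flattened to a vector
z = E(x)
ze = edit(z, direction=rng.standard_normal(16))

x_naive = G(ze)            # (3) naive regeneration loses detail of x

# (d) image-specific generator G': nudge G's weights so its output matches
# x on the original latent (here: everywhere, for simplicity). One gradient
# step on ||G(z) - x||^2, with a step size that halves the residual.
residual = G(z) - x
lr = 0.5 / (z @ z)
W_g_prime = W_g - lr * np.outer(residual, z)
x_edited = G(ze, W=W_g_prime)

err_before = np.linalg.norm(G(z) - x)
err_after = np.linalg.norm(G(z, W=W_g_prime) - x)
assert err_after < err_before   # the adapted G' reconstructs x more faithfully
```

In the paper, the fine-tuning additionally preserves the unedited regions of x specifically, rather than the whole image; the single linear step above is just the smallest example of adapting the generator to one photo.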
Semantic Photo Manipulation with a Generative Image Prior (to appear at SIGGRAPH 2019)
David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, Antonio Torralba
Citation
@article{Bau:Ganpaint:2019,
  author = {David Bau and Hendrik Strobelt and William Peebles and
            Jonas Wulff and Bolei Zhou and Jun{-}Yan Zhu and
            Antonio Torralba},
  title = {Semantic Photo Manipulation with a Generative Image Prior},
  journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH)},
  volume = {38},
  number = {4},
  year = {2019},
}
https://ganpaint.io/