NVIDIA Research has demonstrated GauGAN, a deep learning model that converts simple doodles into photorealistic images. The tool crafts images nearly instantaneously, and can intelligently adjust elements within images, such as adding reflections to a body of water when trees or mountains are placed near it.
The tool is built on generative adversarial networks (GANs). With GauGAN, users select image elements like 'snow' and 'sky,' then draw lines to segment the canvas into regions. The AI automatically generates appropriate imagery for each region, such as a cloudy sky, grass, or trees.
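Under the hood, tools like GauGAN typically condition the generator on a semantic label map: each pixel of the user's doodle carries a class ID, which is expanded into one channel per class before being fed to the network. The sketch below illustrates that encoding step only, with hypothetical label IDs (the real GauGAN class set and model are not shown here).

```python
import numpy as np

# Hypothetical label IDs for illustration; GauGAN's actual class set differs.
LABELS = {"sky": 0, "grass": 1, "water": 2}

def one_hot_segmentation(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Convert an (H, W) integer label map into a (num_classes, H, W)
    one-hot array, the conditioning input a segmentation-driven
    generator would consume."""
    h, w = label_map.shape
    one_hot = np.zeros((num_classes, h, w), dtype=np.float32)
    for c in range(num_classes):
        one_hot[c][label_map == c] = 1.0
    return one_hot

# A 4x4 "doodle": top half sky, bottom-left grass, bottom-right water.
doodle = np.array([
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 2, 2],
    [1, 1, 2, 2],
])

cond = one_hot_segmentation(doodle, num_classes=len(LABELS))
print(cond.shape)  # (3, 4, 4): one channel per class
```

In a full system, this per-class encoding is what lets the generator synthesize different textures for each region of the doodle.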
As NVIDIA reveals in its demonstration video, GauGAN maintains a realistic image by dynamically adjusting parts of the render to match new elements. For example, transforming a grassy field to a snow-covered landscape will result in an automatic sky change, ensuring the two elements are compatible and realistic.
GauGAN was trained using millions of images of real environments. In addition to generating photorealistic landscapes, the tool allows users to apply style filters, including ones that give the appearance of sunset or a particular painting style. According to NVIDIA, the technology could be used to generate images of other environments, including buildings and people.
Bryan Catanzaro, NVIDIA's VP of applied deep learning research, explained:
This technology is not just stitching together pieces of other images, or cutting and pasting textures. It's actually synthesizing new images, very similar to how an artist would draw something.
NVIDIA envisions that a tool based on GauGAN could one day be used by architects and other professionals who need to quickly fill a scene or visualize an environment. Similar technology may eventually appear in image editing applications, enabling users to add or adjust elements in photos.
The company offers online demos of other AI-based tools on its AI Playground.