Thursday, March 28, 2019

A.I. Is Flying Drones (Very, Very Slowly)

A drone from the University of Zurich is an engineering and technical marvel. It also moves slower than someone taking a Sunday morning jog.

At the International Conference on Intelligent Robots and Systems in Madrid last October, the autonomous drone, which navigates using artificial intelligence, raced through a complicated series of turns and gates, buzzing and moving like a determined and oversized bumblebee. It bobbed to duck under a bar that swooshed like a clock hand, yawed left, pitched forward and raced toward the finish line. The drone, small and covered in sensors, demolished the competition, blazing through the course twice as fast as its nearest competitor. Its top speed: 5.6 miles per hour.

A few weeks earlier, in Jeddah, Saudi Arabia, a different drone, flown remotely by its pilot, Paul Nurkkala, shot through a gate at the top of a 131-foot-high tower, inverted into a roll and then dove toward the earth. Competitors trailed behind or crashed into pieces along the course, but this one swerved and corkscrewed through twin arches, hit a straightaway and then blasted into the netting that served as the finish line for the Drone Racing League’s world championship. The winning drone, a league-standard Racer3, reached speeds over 90 miles per hour, but it needed a human to guide it. Mr. Nurkkala, known to fans as Nurk, wore a pair of goggles that beamed him a first-person view of his drone as he flew it.

https://www.nytimes.com/2019/03/26/technology/alphapilot-ai-drone-racing.html

Thursday, March 21, 2019

NVIDIA Research project uses AI to instantly turn drawings into photorealistic images

NVIDIA Research has demonstrated GauGAN, a deep learning model that converts simple doodles into photorealistic images. The tool crafts images nearly instantaneously, and can intelligently adjust elements within images, such as adding reflections to a body of water when trees or mountains are placed near it.

The new tool is built on generative adversarial networks (GANs). With GauGAN, users select image elements like 'snow' and 'sky,' then draw lines to segment the canvas into regions. The AI automatically generates appropriate imagery for each region, such as a cloudy sky, grass, or trees.
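
To make the segmentation-to-image idea concrete, here is a minimal sketch, in PyTorch, of the data flow the article describes: the user's doodle becomes a per-pixel class map, which is one-hot encoded and fed to a convolutional generator that outputs an RGB image. The label set, network, and shapes below are illustrative assumptions, not NVIDIA's model; the real GauGAN generator is a large GAN trained on millions of photographs, which is what makes its output photorealistic.

# Minimal sketch of the GauGAN-style data flow: a "doodle" is a per-pixel
# semantic label map, which a (here untrained) convolutional generator
# turns into an RGB image. Label names and network shape are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

LABELS = {"sky": 0, "grass": 1, "water": 2, "mountain": 3}  # assumed label set

class TinyGenerator(nn.Module):
    """Untrained stand-in for a semantic-map-conditioned GAN generator."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, label_map):
        # label_map: (H, W) integer class ids -> one-hot tensor (1, C, H, W)
        one_hot = F.one_hot(label_map, num_classes=len(LABELS))
        one_hot = one_hot.permute(2, 0, 1).unsqueeze(0).float()
        return self.net(one_hot)

# "Doodle": top half sky, bottom half grass, a water patch in one corner.
doodle = torch.zeros(256, 256, dtype=torch.long)   # class 0 = "sky"
doodle[128:, :] = LABELS["grass"]
doodle[192:, 192:] = LABELS["water"]

generator = TinyGenerator(num_classes=len(LABELS))
image = generator(doodle)   # (1, 3, 256, 256) synthesized RGB image
print(image.shape)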

As NVIDIA reveals in its demonstration video, GauGAN maintains a realistic image by dynamically adjusting parts of the render to match new elements. For example, transforming a grassy field to a snow-covered landscape will result in an automatic sky change, ensuring the two elements are compatible and realistic.

GauGAN was trained using millions of images of real environments. In addition to generating photorealistic landscapes, the tool allows users to apply style filters, including ones that give the appearance of sunset or a particular painting style. According to NVIDIA, the technology could be used to generate images of other environments, including buildings and people.

Bryan Catanzaro, NVIDIA's VP of applied deep learning research, explained:

This technology is not just stitching together pieces of other images, or cutting and pasting textures. It's actually synthesizing new images, very similar to how an artist would draw something.

NVIDIA envisions that a tool based on GauGAN could one day be used by architects and other professionals who need to quickly fill a scene or visualize an environment. Similar technology may eventually be offered as a tool in image editing applications, enabling users to add or adjust elements in photos.

The company offers online demos of other AI-based tools on its AI Playground.

https://www.dpreview.com/news/7387722427/nvidia-research-project-uses-ai-to-instantly-turn-drawings-into-photorealistic-images