
Thursday, July 4, 2019

Line tracking and following using computer vision - Robotex CY 2019



The Intelligent Systems Lab, the research laboratory of the Department of Computer Science at Neapolis University Paphos, in cooperation with the Cypriot start-up Robotics Lab, has designed, built and presented at the Robotex Cyprus 2019 robotics competition a new type of robotic vehicle that can autonomously track and follow a black line using computer vision and infrared sensors. The robot was designed, implemented and programmed to showcase a new and innovative idea: fusing the input from a camera with the raw readings of infrared sensors.

The proposed algorithm combines methods from closed-loop control, open-loop control, and fuzzy logic control. The significance of the experiment lies in the fact that the researchers implemented the new technology on an autonomous robotic vehicle using a low-end 16 MHz microcontroller on an Arduino Nano board, a 60-frames-per-second Pixy camera, and an array of eight analog infrared sensors made by Pololu. The remaining parts were either designed from scratch and 3D printed, or designed and manufactured specifically for this project.
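As a rough illustration of how such a fusion might work, the sketch below blends a camera-reported line position with a weighted average of the infrared readings and feeds the result into a simple PID steering update. The helper names, weights, and gains are assumptions chosen for illustration; this is not the team's actual controller.

```python
# Minimal sketch of fusing a camera line estimate with an 8-sensor IR array
# into one PID steering command. All values below are illustrative guesses.

KP, KI, KD = 0.8, 0.01, 0.2    # assumed PID gains
ALPHA = 0.6                    # assumed weight given to the camera estimate

def line_error(camera_x, ir_readings):
    """Blend two estimates of how far the line is from the robot's centre.

    camera_x    : line centre reported by the camera, normalised to [-1, 1]
    ir_readings : list of 8 analog IR values, higher meaning darker (on the line)
    """
    positions = [i / 3.5 - 1.0 for i in range(8)]   # map sensor indices 0..7 to [-1, 1]
    total = sum(ir_readings) or 1
    ir_x = sum(p * r for p, r in zip(positions, ir_readings)) / total
    return ALPHA * camera_x + (1 - ALPHA) * ir_x

def pid_step(error, state):
    """One closed-loop update; returns a steering correction clamped to [-1, 1]."""
    state["integral"] += error
    derivative = error - state["prev"]
    state["prev"] = error
    out = KP * error + KI * state["integral"] + KD * derivative
    return max(-1.0, min(1.0, out))

# Example update with made-up sensor values.
state = {"integral": 0.0, "prev": 0.0}
steer = pid_step(line_error(0.2, [10, 20, 40, 90, 120, 60, 30, 10]), state)
```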

The robot competed in and won the Line Following challenge at the ROBOTEX CY 2019 robotics competition in the Universities category, on a very difficult 20-meter-long track. The robot also achieved the third fastest overall time in the competition, which earned it third place in the Best of the Best category.

The ROBOTEX CY 2019 robotics competition took place this past weekend at the University of Cyprus Sports Center. This was the third year in a row that the competition has been held in Cyprus, with more than 1,000 competitors taking part this year.

Thursday, March 28, 2019

A.I. Is Flying Drones (Very, Very Slowly)

A drone from the University of Zurich is an engineering and technical marvel. It also moves slower than someone taking a Sunday morning jog.

At the International Conference on Intelligent Robots and Systems in Madrid last October, the autonomous drone, which navigates using artificial intelligence, raced through a complicated series of turns and gates, buzzing and moving like a determined and oversized bumblebee. It bobbed to duck under a bar that swooshed like a clock hand, yawed left, pitched forward and raced toward the finish line. The drone, small and covered in sensors, demolished the competition, blazing through the course twice as fast as its nearest competitor. Its top speed: 5.6 miles per hour.

A few weeks earlier, in Jeddah, Saudi Arabia, a different drone, flown remotely by its pilot, Paul Nurkkala, shot through a gate at the top of a 131-foot-high tower, inverted into a roll and then dove toward the earth. Competitors trailed behind or crashed into pieces along the course, but this one swerved and corkscrewed through twin arches, hit a straightaway and then blasted into the netting that served as the finish line for the Drone Racing League’s world championship. The winning drone, a league-standard Racer3, reached speeds over 90 miles per hour, but it needed a human to guide it. Mr. Nurkkala, known to fans as Nurk, wore a pair of goggles that beamed him a first-person view of his drone as he flew it.

https://www.nytimes.com/2019/03/26/technology/alphapilot-ai-drone-racing.html?fbclid=IwAR22BfFdl1QdgYUXwvecK8N1lScV2oO05m_bLm1tkTEJMxjy0H15LaqYoXM

Thursday, March 21, 2019

NVIDIA Research project uses AI to instantly turn drawings into photorealistic images

NVIDIA Research has demonstrated GauGAN, a deep learning model that converts simple doodles into photorealistic images. The tool crafts images nearly instantaneously, and can intelligently adjust elements within images, such as adding reflections to a body of water when trees or mountains are placed near it.

The new tool is built on a type of neural network called a generative adversarial network (GAN). With GauGAN, users select image elements like 'snow' and 'sky,' then draw lines to segment an image into regions. The AI automatically generates appropriate imagery for each region, such as a cloudy sky, grass, or trees.
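To make the "label map in, image out" idea concrete, the toy sketch below one-hot encodes a segmentation map and passes it through a small convolutional generator. The class labels, sizes, and architecture are stand-ins chosen for illustration; GauGAN's actual model is far larger and more sophisticated.

```python
# Toy sketch of semantic-map-to-image generation: a per-pixel label map is
# one-hot encoded and fed to a small convolutional generator. Illustrative
# only; this is not GauGAN's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LABELS = 4  # e.g. 0=sky, 1=water, 2=grass, 3=mountain (assumed classes)

class ToyGenerator(nn.Module):
    def __init__(self, num_labels=NUM_LABELS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_labels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # 3 RGB output channels
            nn.Tanh(),
        )

    def forward(self, label_map):
        # label_map: (batch, H, W) integer class ids -> one-hot (batch, C, H, W)
        one_hot = F.one_hot(label_map, NUM_LABELS).permute(0, 3, 1, 2).float()
        return self.net(one_hot)

# A 64x64 "doodle" where the top half is sky and the bottom half is water.
doodle = torch.zeros(1, 64, 64, dtype=torch.long)
doodle[:, 32:, :] = 1
image = ToyGenerator()(doodle)   # (1, 3, 64, 64) generated RGB image
```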

As NVIDIA reveals in its demonstration video, GauGAN maintains a realistic image by dynamically adjusting parts of the render to match new elements. For example, transforming a grassy field to a snow-covered landscape will result in an automatic sky change, ensuring the two elements are compatible and realistic.

GauGAN was trained using millions of images of real environments. In addition to generating photorealistic landscapes, the tool allows users to apply style filters, including ones that give the appearance of sunset or a particular painting style. According to NVIDIA, the technology could be used to generate images of other environments, including buildings and people.

Bryan Catanzaro, NVIDIA's VP of applied deep learning research, explained:

This technology is not just stitching together pieces of other images, or cutting and pasting textures. It's actually synthesizing new images, very similar to how an artist would draw something.

NVIDIA envisions that a tool based on GauGAN could one day be used by architects and other professionals who need to quickly fill a scene or visualize an environment. Similar technology may one day be offered as a tool in image editing applications, enabling users to add or adjust elements in photos.

The company offers online demos of other AI-based tools on its AI Playground.

https://www.dpreview.com/news/7387722427/nvidia-research-project-uses-ai-to-instantly-turn-drawings-into-photorealistic-images?fbclid=IwAR2FvvIk-RT_Ow_-0m6pl_Sl1Y-v-_YLpvcBj63a8-D0XOBxGjv2LRG9sVU

Friday, February 15, 2019

ThisPersonDoesNotExist.com uses AI to generate endless fake faces

The ability of AI to generate fake visuals is not yet mainstream knowledge, but a new website — ThisPersonDoesNotExist.com — offers a quick and persuasive education.

The site is the creation of Philip Wang, a software engineer at Uber, and uses research released last year by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a generative adversarial network (or GAN) to fabricate new examples.
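To make the mechanism concrete, here is a minimal, toy-scale sketch of adversarial training: a generator maps random noise to "images" while a discriminator learns to separate real from fake, and each network is updated against the other. The sizes and architectures are placeholders; this is nothing like the scale of Nvidia's StyleGAN.

```python
# Toy GAN training step illustrating the adversarial objective only.
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64          # assumed toy sizes
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, img_dim)  # stand-in for a batch of real images

# Discriminator step: real images should score 1, generated images 0.
fake = G(torch.randn(32, latent_dim))
d_loss = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator score fakes as real.
fake = G(torch.randn(32, latent_dim))
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```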

“Each time you refresh the site, the network will generate a new facial image from scratch,” wrote Wang in a Facebook post. He added in a statement to Motherboard: “Most people do not understand how good AIs will be at synthesizing images in the future.”

The underlying AI framework powering the site was originally invented by a researcher named Ian Goodfellow. Nvidia’s take on the algorithm, named StyleGAN, was made open source recently and has proven to be incredibly flexible. Although this version of the model is trained to generate human faces, it can, in theory, mimic any source. Researchers are already experimenting with other targets, including anime characters, fonts, and graffiti.

https://www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan?fbclid=IwAR1Wdm9r_ImUdQiY7QsVSYtdjLOxEqJ0JjnWwlmnFzAJbuEVx0Ynm9gP97w

Thursday, February 14, 2019

How artificial intelligence is shaking up the job market


The future of work is usually discussed in theoretical terms. Reports and opinion pieces cover the full spectrum of views, from a dystopian landscape that leaves millions unemployed to new opportunities for social and economic mobility that could transform society for the better.

The World Economic Forum’s The Future of Jobs 2018 aims to base this debate on facts rather than speculation. By tracking the acceleration of technological change as it gives rise to new job roles, occupations and industries, the report evaluates the changing contours of work in the Fourth Industrial Revolution.

One of the primary drivers of change identified is the role of emerging technologies, such as artificial intelligence (AI) and automation. The report seeks to shed more light on the role of new technologies in the labour market, and to bring more clarity to the debate about how AI could both create and limit economic opportunity. With 575 million members globally, LinkedIn’s platform provides a unique vantage point into global labour-market developments, enabling us to support the Forum's examination of the trends that will shape the future of work.

Our analysis uncovered two concurrent trends: the continued rise of tech jobs and skills and, in parallel, a growth in what we call “human-centric” jobs and skills, that is, those that depend on intrinsically human qualities.

https://www.weforum.org/agenda/2018/09/artificial-intelligence-shaking-up-job-market?fbclid=IwAR0FVmPAgcivWKx5D-68S9oW7E-EIY9LCg8qdfWpP3XIXlXqeTm0tSxK6pc

Thursday, January 10, 2019

UBTECH's Walker Robot

Walker is one of the newest robots from UBTECH Robotics. Below are just a few of the features and technologies used in its development.

1. Flexible walking on complex terrain: With gait planning and control, Walker can walk stably on different surfaces including carpet, floor, marble, and more. Walker can also adapt to complex environments with obstacles, slopes, steps, and uneven ground.

2. Self-balancing: When Walker is disturbed by an external impact or by inertia, it can automatically adjust its center of gravity to maintain balance (a toy feedback sketch of this idea appears after this list).

3. Hand-eye coordination: Walker’s hands offer seven degrees of freedom for flexibly manipulating objects. By combining its hands with its own perception, Walker can also track and position dynamic external objects while adapting to uncertain conditions in real time.

4. U-SLAM navigation and obstacle avoidance: UBTECH Simultaneous Localization and Mapping (U-SLAM) uses environmental information to avoid obstacles and determine Walker’s best path through a dynamic environment.

5. Face and object recognition: Walker has powerful machine-vision capabilities that detect and recognize faces and objects against complex backgrounds.

6. Smart home control: Walker can help users control common household equipment such as lighting, electrical appliances, and electrical sockets, enhancing safety, convenience, and comfort.
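As a toy illustration of the self-balancing idea in item 2, the sketch below reads a body tilt measurement and commands a corrective action proportional to how far and how fast the robot is tipping. The gains, units, and interface are assumptions for illustration only, not UBTECH's implementation.

```python
# Toy tilt-feedback controller: push back against the measured lean.
# All names and gains below are hypothetical placeholders.

KP, KD = 30.0, 4.0            # assumed proportional and derivative gains

def balance_step(tilt_deg, tilt_rate_deg_s):
    """Return a corrective torque command from tilt angle and tilt rate."""
    return -(KP * tilt_deg + KD * tilt_rate_deg_s)

# Example: robot pushed forward 3 degrees and still tipping at 5 deg/s.
torque = balance_step(3.0, 5.0)   # negative value => lean backwards to recover
```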

With so much innovative technology packed into its humanoid robot body, Walker has the intelligence and capabilities to make a helpful impact in any home or business in the very near future.

Founded in 2012, UBTECH is a leading global AI and humanoid robotics company. In 2018, UBTECH achieved a valuation of US$5 billion following the single largest funding round ever for an artificial intelligence company, underscoring the company’s technological leadership.

Wednesday, January 9, 2019

Finally, a Do-It-All Robot Arm That’s Actually Affordable

If you want a versatile robot arm, today’s market really only offers two options: expensive industrial robots, or glorified toys. Low-end models may look similar to “real” robot arms, but they don’t usually have the accuracy or repeatability to do actual work. The new Hexbot, however, is designed to give you the best of both worlds.

Hexbot just launched on Kickstarter but has already reached more than three times its $50,000 funding goal. It’s easy to see why: Hexbot is a small but capable modular robot arm that costs just $299 through the Kickstarter Special. That price puts it near the bottom of the market, but it has the kinds of features and specs you’d normally only find on mid-level robot arms.

https://blog.hackster.io/finally-a-do-it-all-robot-arm-thats-actually-affordable-df6252e838e6

Machine learning leads mathematicians to unsolvable problem

A team of researchers has stumbled on a question that is mathematically unanswerable because it is linked to logical paradoxes, discovered by the Austrian mathematician Kurt Gödel in the 1930s, that can’t be resolved using standard mathematics.

The mathematicians, who were working on a machine-learning problem, show that the question of ‘learnability’ (whether an algorithm can extract a pattern from limited data) is linked to a statement known as the continuum hypothesis. Gödel showed that the hypothesis cannot be disproved from the standard axioms of mathematics, and Paul Cohen later showed that it cannot be proved from them either. The latest result appeared on 7 January in Nature Machine Intelligence.
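For reference, the continuum hypothesis at the heart of the result can be stated in one line: it asserts that no set has a cardinality strictly between that of the natural numbers and that of the real numbers.

```latex
% Continuum hypothesis (CH): there is no set S whose cardinality lies strictly
% between the countable infinity of the naturals and the cardinality of the reals.
\neg \exists S \;:\; \aleph_0 < |S| < 2^{\aleph_0}
```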

“For us, it was a surprise,” says Amir Yehudayoff at the Technion–Israel Institute of Technology in Haifa, who is a co-author on the paper. He says that although there are a number of technical maths questions that are known to be similarly ‘undecidable’, he did not expect this phenomenon to show up in a relatively simple problem in machine learning.

John Tucker, a computer scientist at Swansea University, UK, says that the paper is “a heavyweight result on the limits of our knowledge”, with foundational implications for both mathematics and machine learning.

https://www.nature.com/articles/d41586-019-00083-3?fbclid=IwAR2B5ZH9S4jZF4eLs4hRERF_H0OlzyrhbzQlIV9hzeNcfM-VdZZloqnOj-I

Monday, January 7, 2019

Machine Learning for Kids

This tool introduces machine learning by providing hands-on experience in training machine learning systems and building things with them. It offers an easy-to-use, guided environment for training machine learning models to classify text, numbers, or images. It builds on existing efforts to introduce and teach coding to children by adding these models to Scratch (a widely used educational coding platform), allowing children to create projects and build games with the machine learning models they have trained.
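To give a feel for the kind of model such a tool trains behind the scenes, here is a minimal text-classification sketch: label a handful of example sentences, fit a classifier, then predict the label of new text. The labels and training sentences are made up, and this is an illustration of the concept, not the tool's actual implementation.

```python
# Tiny text classifier: count words, fit a naive Bayes model, predict a label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = ["I love this game", "this is brilliant", "great fun",
            "I hate this", "this is terrible", "so boring"]
labels   = ["happy", "happy", "happy", "sad", "sad", "sad"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(examples, labels)

print(model.predict(["what a brilliant game"]))   # expected: ['happy']
```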