Tuesday, December 17, 2013
Monday, December 16, 2013
Mobile Robotics offers comprehensive coverage of the essentials of the field suitable for both students and practitioners. Adapted from Alonzo Kelly's graduate and undergraduate courses, the content of the book reflects current approaches to developing effective mobile robots. Professor Kelly adapts principles and techniques from the fields of mathematics, physics, and numerical methods to present a consistent framework in a notation that facilitates learning and highlights relationships between topics. This text was developed specifically to be accessible to senior level undergraduates in engineering and computer science, and includes supporting exercises to reinforce the lessons of each section. Practitioners will value Kelly's perspectives on practical applications of these principles. Complex subjects are reduced to implementable algorithms extracted from real systems wherever possible, to enhance the real-world relevance of the text.
To Jump From The Web To The Real World
Original Article: techcrunch
Why does Google need robots? Because it already rules your pocket. The mobile market, except for the slow rise of wearables, is saturated. There are millions of handsets around the world, each one connected to the Internet and most are running either Android or iOS. Except for incremental updates to the form, there will be few innovations coming out of the mobile space in the next decade.
Then there’s Glass. These devices bring the web to the real world by making us the carriers. Google is already in front of us on our small screens but Glass makes us a captive audience. By depending on Google’s data for our daily interactions, mapping, and restaurant recommendations – not to mention the digitization of our every move – we become some of the best Google consumers in history. But that’s still not enough.
Google is limited by, for lack of a better word, meat. We are poor explorers and poor data gatherers. We tend to follow the same paths every day and, like ants, we rarely stray far from the nest. Google is a data company and needs far more data than humans alone can gather. Robots, then, will be the drivers of a number of impressive feats in the next few decades, including space exploration, improved mapping techniques, and massive changes in the manufacturing workspace.
Robots like Baxter will replace millions of expensive humans – a move that I suspect will instigate a problematic rise in unemployment in the manufacturing sector – and companies like manufacturing giant Foxconn are investing in robotics at a rapid clip. Drones, whether human-controlled or autonomous, are a true extension of our senses, placing us in and keeping us apprised of situations far from home base. Home helpers will soon lift us out of bed when we’re sick, help us clean, and assist us near the end of our lives. Smaller hardware projects will help us lose weight and patrol our streets. The tech company not invested in robotics today will find itself far behind the curve in the coming decade.
That’s why Google needs robots. They will place the company at the forefront of man-machine interaction in the same way that Android put them in front of millions of eyeballs. Many pundits saw no reason for Google to start a mobile arm back when Android was still young. They were wrong. The same will be the case for these seemingly wonky experiments in robotics.
Did Google buy Boston Dynamics and seven other robotics companies so it could run a thousand quadrupedal Big Dogs through our cities? No, but I could see them using BD’s PETMAN, a bipedal robot that can walk and run over rough terrain, to assist in mapping difficult-to-reach areas. It could also become a sort of Google Now for the real world, appearing at our elbows in the form of an assistant that follows us throughout the day, keeping us on track, helping with tasks, and becoming our avatars when we can’t be in two places at once. The more Google can mediate our day-to-day experience, the more valuable it becomes.
Need more proof?
Saturday, December 14, 2013
Google has acquired robotics engineering company Boston Dynamics, best known for its line of quadrupeds with funny gaits and often mind-blowing capabilities. Products that the firm has demonstrated in recent years include BigDog, a motorized robot that can handle ice and snow, the 29 mile-per-hour Cheetah, and an eerily convincing humanoid known as PETMAN. News of the deal was reported on Friday by The New York Times, which says that the Massachusetts-based company's role in future Google projects is currently unclear.
Specific details about the price and terms of the deal are currently unknown, though Google told the NYT that existing contracts — including a $10.8 million contract inked earlier this year with the US Defense Advanced Research Projects Agency (DARPA) — would be honored. Despite the DARPA deal, Google says it doesn't plan to become a military contractor "on its own," according to the Times.
Boston Dynamics began as a spinoff from the Massachusetts Institute of Technology in 1992, and quickly started working on projects for the military. Besides BigDog, its projects include Cheetah, an animal-like robot developed to run at high speeds, which was followed up by a more versatile model called WildCat. It has also worked on Atlas, a humanoid robot designed to work outdoors.
In a tweet, Google's Andy Rubin — who formerly ran Google's Android division — said the "future is looking awesome."
Tuesday, December 10, 2013
I saw this job posting from EyeEm, a photo sharing app / service, in which they express their wish/plan to build a search engine that can ‘identify and understand beautiful photographs’. That got me thinking about how I would approach building a system like that.
Here is how I would start:
1. Define what you are looking for
EyeEm already has a search engine based on tags and geo-location. So I assume they want to prevent low-quality pictures from appearing in the results and to add missing tags to pictures based on the image’s content. One could also group similar-looking pictures or rank lower those pictures which “don’t contain their tags”. For instance, for the Brandenburger Tor there are a lot of similar-looking pictures and even some that don’t contain the gate at all.
But for which concepts should one train the algorithms? Modern image retrieval systems are trained for hundreds of concepts, but I don’t think it is wise to start with that many. Even the most sophisticated, fine-tuned systems have high error rates for most of the concepts, as can be seen in this year’s results of the Large Scale Visual Recognition Challenge.
For instance, the team from EUVision / University of Amsterdam, which placed 6th in the classification challenge, selected only 16 categories for their consumer app Impala. For a consumer application I think their tags are a good choice:
- Cats (sorry, no dogs)
- Party life
- Sunsets and sunrises
But of course EyeEm has the luxury of looking at their log files to find out what their users are actually searching for.
And on a comparable task of classifying pictures into 15 scene categories, a team from MIT led by Antonio Torralba showed that even with established algorithms one can achieve nearly 90% accuracy [Xiao10]. So I think it’s a good idea to start with a limited number of standard and EyeEm-specific concepts, which allows for usable recognition accuracy even with less sophisticated approaches.
But what about identifying beautiful photographs? I think in image retrieval there is no other concept which is more desirable and challenging to master. What does beautiful actually mean? What features make a picture beautiful? How do you quantify these features? Is beautiful even a sensible concept for image retrieval? Might it be more useful to try to predict which pictures will be `liked` or `hearted` a lot? These questions have to be answered before one can even start experimenting. I think for now it is wise to start with just filtering out low quality pictures and to try to predict what factors make a picture popular.
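To make the "start with a limited number of concepts" idea concrete, here is a minimal nearest-centroid classifier sketch in plain Python. The feature vectors and concept names are invented for illustration; a real system would first extract descriptors such as color histograms or GIST features from the images and would use far better models.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Toy illustration of limiting a classifier to a few concepts: each
# concept is represented by the mean ("centroid") of its training
# feature vectors, and a new image is assigned to the nearest centroid.

def train_centroids(labeled_features):
    """Compute one centroid per concept from (label, vector) pairs."""
    sums, counts = {}, {}
    for label, vec in labeled_features:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Return the concept whose centroid is closest to vec."""
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Made-up training features (e.g. three color statistics per image):
training = [
    ("sunset", [0.9, 0.4, 0.1]),   # warm, low-light colors
    ("sunset", [0.8, 0.5, 0.2]),
    ("cat",    [0.4, 0.4, 0.4]),   # neutral grays
    ("cat",    [0.5, 0.3, 0.3]),
]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 0.45, 0.15]))  # → sunset
```

With only a handful of well-separated concepts, even a crude model like this behaves predictably, which is the point of starting small before scaling to hundreds of categories.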
Article from tombone's blog
This year, at ICCV 2013 in Sydney, Australia, the vision community witnessed lots of grand new ideas, excellent presentations, and gained new insights which are likely to influence the direction of vision research in the upcoming decade.
3D data is everywhere. Detectors are not only getting faster, but getting stylish. Edges are making a comeback. HOGgles let you see the world through the eyes of an algorithm. Computers can automatically make your face pictures more memorable. And why ever stop learning, when you can learn all day long?
Here is a breakdown of some of the must-read ICCV 2013 papers which I'd like to share with you:
From Large Scale Image Categorization to Entry-Level Categories, Vicente Ordonez, Jia Deng, Yejin Choi, Alexander C. Berg, Tamara L. Berg, ICCV 2013.
This paper is the Marr Prize winning paper from this year's conference. It is all about entry-level categories - the labels people will use to name an object - which were originally defined and studied by psychologists in the 1980s. In the ICCV paper, the authors study entry-level categories at a large scale and learn the first models for predicting entry-level categories for images. The authors learn mappings between concepts predicted by existing visual recognition systems and entry-level concepts that could be useful for improving human-focused applications such as natural language image description or retrieval. NOTE: If you haven't read Eleanor Rosch's seminal 1978 paper, The Principles of Categorization, do yourself a favor: grab a tall coffee, read it and prepare to be rocked.
Monday, December 9, 2013
OpenROV is an open-source underwater robot. But it's so much more. It's also a community of people who are working together to create more accessible, affordable, and awesome tools for underwater exploration.
The backbone of the project is the global community of DIY ocean explorers who are working, tinkering and improving the OpenROV design. The community ranges from professional ocean engineers to hobbyists, software developers to students. It's a welcoming community and everyone's feedback and input is valued.
Saturday, December 7, 2013
PALO ALTO, Calif. — In an out-of-the-way Google office, two life-size humanoid robots hang suspended in a corner.
If Amazon can imagine delivering books by drones, is it too much to think that Google might be planning to one day have one of the robots hop off an automated Google Car and race to your doorstep to deliver a package?
Google executives acknowledge that robotic vision is a “moonshot.” But it appears to be more realistic than Amazon’s proposed drone delivery service, which Jeff Bezos, Amazon’s chief executive, revealed in a television interview the evening before one of the biggest online shopping days of the year.
Over the last half-year, Google has quietly acquired seven technology companies in an effort to create a new generation of robots. And the engineer heading the effort is Andy Rubin, the man who built Google’s Android software into the world’s dominant force in smartphones.
The company is tight-lipped about its specific plans, but the scale of the investment, which has not been previously disclosed, indicates that this is no cute science project.
At least for now, Google’s robotics effort is not something aimed at consumers. Instead, the company’s expected targets are in manufacturing — like electronics assembly, which is now largely manual — and competing with companies like Amazon in retailing, according to several people with specific knowledge of the project.
A realistic case, according to several specialists, would be automating portions of an existing supply chain that stretches from a factory floor to the companies that ship and deliver goods to a consumer’s doorstep.
“The opportunity is massive,” said Andrew McAfee, a principal research scientist at the M.I.T. Center for Digital Business. “There are still people who walk around in factories and pick things up in distribution centers and work in the back rooms of grocery stores.”
Google has recently started experimenting with package delivery in urban areas with its Google Shopping service, and it could try to automate portions of that system. The shopping service, available in a few locations like San Francisco, is already making home deliveries for companies like Target, Walgreens and American Eagle Outfitters.
Perhaps someday, there will be automated delivery to the doorstep, which for now is dependent on humans.
“Like any moonshot, you have to think of time as a factor,” Mr. Rubin said. “We need enough runway and a 10-year vision.”
Mr. Rubin, the 50-year-old Google executive in charge of the new effort, began his engineering career in robotics and has long had a well-known passion for building intelligent machines. Before joining Apple Computer, where he initially worked as a manufacturing engineer in the 1990s, he worked for the German manufacturing company Carl Zeiss as a robotics engineer.
“I have a history of making my hobbies into a career,” Mr. Rubin said in a telephone interview. “This is the world’s greatest job. Being an engineer and a tinkerer, you start thinking about what you would want to build for yourself.”
He used the example of a windshield wiper that has enough “intelligence” to operate when it rains, without human intervention, as a model for the kind of systems he is trying to create. That is consistent with a vision put forward by the Google co-founder Larry Page, who has argued that technology should be deployed wherever possible to free humans from drudgery and repetitive tasks.
The veteran of a number of previous Silicon Valley start-up efforts and twice a chief executive, Mr. Rubin said he had pondered the possibility of a commercial effort in robotics for more than a decade. He has only recently come to think that a range of technologies have matured to the point where new kinds of automated systems can be commercialized.
Earlier this year, Mr. Rubin stepped down as head of the company’s Android smartphone division. Since then he has convinced Google’s founders, Sergey Brin and Mr. Page, that the time is now right for such a venture, and they have opened Google’s checkbook to back him. He declined to say how much the company would spend.
Wednesday, December 4, 2013
Google Compute Engine is now Generally Available with expanded OS support, transparent maintenance, and lower prices
Google Cloud Platform gives developers the flexibility to architect applications with both managed and unmanaged services that run on Google’s infrastructure. We’ve been working to improve the developer experience across our services to meet the standards our own engineers would expect here at Google.
Today, Google Compute Engine is Generally Available (GA), offering virtual machines that are performant, scalable, reliable, and offer industry-leading security features like encryption of data at rest. Compute Engine is available with 24/7 support and a 99.95% monthly SLA for your mission-critical workloads. We are also introducing several new features and lower prices for persistent disks and popular compute instances.
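For context, the downtime budget implied by a 99.95% monthly SLA is easy to compute (assuming a 30-day month):

```python
# Downtime budget implied by a 99.95% monthly uptime SLA.
MINUTES_PER_MONTH = 30 * 24 * 60          # 43,200 minutes in a 30-day month
allowed_downtime = MINUTES_PER_MONTH * (1 - 0.9995)
print(round(allowed_downtime, 1))  # → 21.6 minutes per month
```

In other words, the SLA permits roughly 22 minutes of unavailability per month before credits apply.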
Expanded operating system support
During Preview, Compute Engine supported two of the most popular Linux distributions, Debian and CentOS, customized with a Google-built kernel. This gave developers a familiar environment to build on, but some software that required specific kernels or loadable modules (e.g. some file systems) was not supported. Now you can run any out-of-the-box Linux distribution (including SELinux and CoreOS) as well as any kernel or software you like, including Docker, FOG, xfs and aufs. We’re also announcing support for SUSE and Red Hat Enterprise Linux (in Limited Preview) and FreeBSD.
Transparent maintenance with live migration and automatic restart
At Google, we have found that regular maintenance of hardware and software infrastructure is critical to operating with a high level of reliability, security and performance. We’re introducing transparent maintenance that combines software and data center innovations with live migration technology to perform proactive maintenance while your virtual machines keep running. You now get all the benefits of regular updates and proactive maintenance without the downtime and reboots typically required. Furthermore, in the event of a failure, we automatically restart your VMs and get them back online in minutes. We’ve already rolled out this feature to our US zones, with others to follow in the coming months.
New 16-core instances
Developers have asked for instances with even greater computational power and memory for applications that range from silicon simulation to running high-scale NoSQL databases. To serve their needs, we’re launching three new instance types in Limited Preview with up to 16 cores and 104 gigabytes of RAM. They are available in the familiar standard, high-memory and high-CPU shapes.
Faster, cheaper Persistent Disks
Building highly scalable and reliable applications starts with using the right storage. Our Persistent Disk service offers you strong, consistent performance along with much higher durability than local disks. Today we’re lowering the price of Persistent Disk by 60% per Gigabyte and dropping I/O charges so that you get a predictable, low price for your block storage device. I/O available to a volume scales linearly with size, and the largest Persistent Disk volumes have up to 700% higher peak I/O capability. You can read more about the improvements to Persistent Disk in our previous blog post.
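The "I/O scales linearly with size" model described above can be sketched as follows. The per-GB rates below are hypothetical placeholders chosen for illustration, since the post does not state the actual figures; only the linear-scaling shape comes from the announcement.

```python
# Linear I/O scaling model for block storage: a volume's available IOPS
# grows in proportion to its size. Rates here are invented, not Google's
# published Persistent Disk numbers.
HYPOTHETICAL_READ_IOPS_PER_GB = 0.3   # assumed rate, for illustration only
HYPOTHETICAL_WRITE_IOPS_PER_GB = 1.5  # assumed rate, for illustration only

def provisioned_iops(size_gb):
    """Under a linear model, larger volumes get proportionally more I/O."""
    return (size_gb * HYPOTHETICAL_READ_IOPS_PER_GB,
            size_gb * HYPOTHETICAL_WRITE_IOPS_PER_GB)

# Doubling the volume size doubles the available I/O in both dimensions:
small = provisioned_iops(100)
large = provisioned_iops(200)
print(small, large)
```

The practical consequence is that you can provision more I/O simply by creating a larger volume, rather than paying separate per-operation charges.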
10% Lower Prices for Standard Instances
We’re also lowering prices on our most popular standard Compute Engine instances by 10% in all regions.
Customers and partners using Compute Engine
In the past few months, customers like Snapchat, Cooladata, Mendelics, Evite and Wix have built complex systems on Compute Engine, and partners like SaltStack, Wowza, Rightscale, Qubole, Red Hat, SUSE, and Scalr have joined our Cloud Platform Partner Program, with new integrations with Compute Engine.
“We find that Compute Engine scales quickly, allowing us to easily meet the flow of new sequencing requests… Compute Engine has helped us scale with our demands and has been a key component to helping our physicians diagnose and cure genetic diseases in Brazil and around the world.”
- David Schlesinger, CEO of Mendelics
"Google Cloud Platform provides the most consistent performance we’ve ever seen. Every VM, every disk, performs exactly as we expect it to and gave us the ability to build fast, low-latency applications."
- Sebastian Stadil, CEO of Scalr
We’re looking forward to this next step for Google Cloud Platform as we continue to help developers and businesses everywhere benefit from Google’s technical and operational expertise. Below is a short video that explains today’s launch in more detail.
Tuesday, December 3, 2013
This robot is looking pretty pleased with itself – and wouldn't you be, if you were off to the International Space Station? Prototype cosmobot SAR-401, with its human-like torso, is designed to service the outside of the ISS by mimicking the arm and finger movements of a human puppet-master indoors.
In this picture, that's the super-focussed guy in the background but in space it would be a cosmonaut operating from the relative safety of the station's interior and so avoiding a risky spacewalk. You can watch the Russian android mirroring a human here.
SAR-401 joins a growing zoo of robots in space. NASA already has its own Robonaut on board the ISS to carry out routine maintenance tasks. It was recently joined by a small, cute Japanese robot, Kirobo, but neither of the station's droids is designed for outside use.
Until SAR-401 launches, the station's external Dextre and Canadarm2 rule the orbital roost. They were commemorated on Canadian banknotes this year – and they don't even have faces.
Marketers recognize that emotion drives brand loyalty and purchase decisions. Yet, traditional ways of measuring emotional response - surveys and focus groups - create a gap by requiring viewers to think about and say how they feel. Neuroscience provides insight into how the mind works, but it typically requires expensive, bulky equipment and lab-type settings that limit and influence the experience.
Affdex is an award-winning neuromarketing tool that reads emotional states such as liking and attention from facial expressions using an ordinary webcam, giving marketers faster, more accurate insight into consumer response to brands, advertising and media. It uses automated facial expression recognition, also called facial coding, to analyze your face and interpret your emotional state. Offered as cloud-based software-as-a-service, Affdex is fast, easy and affordable to add to existing studies. MIT spinoff Affectiva has some of the best and brightest emotion experts behind the Affdex platform's science, providing the most accurate measurement available today. This ongoing investment in research and development is focused not just on measuring, but also on predicting which ads will really work to drive sales and build brands.