Sunday, May 15, 2011

Google I/O 2011: Cloud Robotics, ROS for Java and Android

Yesterday at Google I/O, developers at Google and Willow Garage announced a new rosjava library that is the first pure-Java implementation of ROS. This new library was developed at Google with the goal of enabling advanced Android apps for robotics.

The library, tools, and hardware that come with Android devices are well-suited for robotics. Smartphones and tablets are sophisticated computation devices with useful sensors and great user-interaction capabilities. Android devices can also be extended with additional sensors and actuators thanks to the Open Accessory and Android @ Home APIs that were announced at Google I/O.

The new rosjava is currently in alpha release mode and is still under active development, so there will be changes to the API moving forward. For early adopters, there are Android tutorials to help you send sensor data to, and receive data from, a robot.
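
The alpha rosjava API is still in flux, but the model the tutorials build on (nodes exchanging messages over named topics) can be sketched in plain Java. The sketch below is an in-memory toy, not the actual rosjava API; all class, method, and topic names are illustrative only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory sketch of ROS-style topic pub/sub. The real rosjava
// stack (nodes, a master, typed binary messages) is far more involved.
class TopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // A subscriber registers a callback for a named topic.
    public void subscribe(String topic, Consumer<String> callback) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(callback);
    }

    // A publisher delivers a message to every subscriber of that topic.
    public void publish(String topic, String message) {
        for (Consumer<String> cb : subscribers.getOrDefault(topic, List.of())) {
            cb.accept(message);
        }
    }
}

public class SensorBridgeSketch {
    public static void main(String[] args) {
        TopicBus bus = new TopicBus();
        List<String> received = new ArrayList<>();

        // The "robot" side subscribes to a phone-sensor topic.
        bus.subscribe("/android/accelerometer", received::add);

        // The Android app side publishes a reading (serialized as text
        // here; real ROS messages are typed and binary).
        bus.publish("/android/accelerometer", "x=0.12 y=-0.03 z=9.81");

        System.out.println(received.get(0)); // prints the published reading
    }
}
```

The same decoupling is what makes the cloud angle work: a publisher does not care whether its subscribers run on the robot, the phone, or a remote server.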

This announcement was part of a broader talk on Cloud Robotics given by Ryan Hickman and Damon Kohler of Google, as well as Ken Conley and Brian Gerkey of Willow Garage. The talk discussed the many possibilities of harnessing the cloud for robotics applications: providing capabilities like object recognition and voice services, reducing the cost of robotics hardware, and enabling the development of user interfaces in the cloud that connect to robots remotely. With the new rosjava library, ROS developers can now take advantage of the Android platform to connect more easily to cloud services.

Friday, May 13, 2011

Call for ICPR Contest Proposals

The ICPR 2012 Contest Co-Chairs invite proposals for contests whose results will be presented at the 21st International Conference on Pattern Recognition. The aim of the contests is to encourage better scientific development through comparing competing approaches on a common dataset. Each contest needs a contest organizer, who will be responsible for providing the dataset and setting clear competition tasks. The contest organizer should advertise the contest by Oct 1, 2011 (the schedule below is set so as to encourage contest participants to submit papers to the main conference); the ICPR web site will also publicize the contest. The contest meeting will be a quarter- or half-day session (organizer's choice) at the conference on November 11, 2012, immediately before the start of the main conference. The format of the meeting is up to the contest organizer, but we anticipate it will include a description of the challenge and dataset and a summary of performance on the different competition tasks; there could also be presentations by the top-scoring teams.

ICPR 2012 will be responsible for the following:

- Providing a meeting venue with necessary equipment and logistical support for the contests including support staff in each room.
- Duplicating contest notes and distributing them to the participants at the conference venue. Contest organizers are encouraged to make summary notes of their contest results for stimulating discussions in their sessions.

Important Contest Session Dates

Contest Proposal Deadline July 15, 2011

Notification of Acceptance August 15, 2011

Contest Announcement to Participants (Tasks, Dataset and Metrics) Oct 1, 2011

Submission of contest results, together with a 2-3 page report for copying and distribution to the participants Oct 1, 2012

Results of the competition are announced at the conference Nov 11, 2012

How to Submit Proposal

To propose a contest, send a PDF file containing the following information to the ICPR 2012 Contest Co-Chairs, Yasuyo Kita <y.kita(at)> and Robert Fisher <rbf(at)>, by July 15, 2011 (all submissions will be acknowledged by email):

1. Contest title and abstract
2. Name and contact information of the main organizer and at least 2 other expert committee members
3. General description of the problem
4. Description of the dataset to be used
5. Description of the actual competition tasks
6. Evaluation metrics
7. Plan of how to organize the contest
8. Estimated number of participants
9. Preference of a quarter day session or a half day session

Other points that should be taken into account are as follows:
* The dataset should be interesting, available, and sufficiently large.
* The tasks should be interesting and novel, but also accessible without too much domain specific knowledge.
* The evaluation metrics should be clear and easy to apply.

Tuesday, May 10, 2011

Sort by subject in Google Images

When you’re searching for images, sometimes it can be hard to come up with exactly the right words to describe what you have in mind. For example, when you think of London, you might picture the iconic clock tower or the big Ferris wheel. You may not always remember the names of those landmarks, but you can visualize them in your mind. To make it easier to find images in situations like these, Google Images now lets you sort results by subject.

When you search for [london], by default you’ll see image results ranked by relevance. Click on “Sort by subject” in the left-hand panel and you’ll see images organized into categories that will narrow down your search and help you find the exact image of London that you want.

Sorting by subject shows that some of the most popular images associated with London are the London Eye, Big Ben, Tower Bridge and the city at night. This organized view helps you find the images you were visualizing more quickly, so you might realize, “Ah, that big clock tower is called Big Ben, that’s what I was looking for.” You can then click on the Big Ben group to find the best image within that subject group.

You can also use this feature to explore categories of a general topic that may be easier to learn about visually, like flower varieties or dog breeds. For example, if you want to get flowers for someone but you only know what their favorite kind looks like, not the name of it, you can sort by subject to learn different flower types and discover the name of the type you’re looking for. Watch the accompanying video to learn more about how sorting can help you find the image you’re seeking.
Sorting by subject uses algorithms that identify relationships among images found on the web and presents those images in visual groups, expanding on the technology developed for Google Similar Images and Google Image Swirl. By looking at multiple sources of similarities, such as pixel values and semantic relationships, and by mining massive amounts of data, we can make meaningful connections and groupings among images.
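
The grouping idea can be sketched in a few lines. Assuming each image has already been reduced to a feature vector (Google's production system mines far more signals than this, and its clustering is much more sophisticated), a toy single-pass grouping might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of grouping images by feature similarity. Each
// "image" is just a small feature vector; two images land in the same
// group when the query is within a distance threshold of the group's
// first member. This is NOT Google's algorithm, only the general idea.
public class ImageGroupingSketch {
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Greedy single-pass grouping: assign each image to the first group
    // whose representative (first member) is close enough, else start a
    // new group.
    static List<List<double[]>> group(List<double[]> images, double threshold) {
        List<List<double[]>> groups = new ArrayList<>();
        for (double[] img : images) {
            List<double[]> home = null;
            for (List<double[]> g : groups) {
                if (distance(g.get(0), img) < threshold) { home = g; break; }
            }
            if (home == null) { home = new ArrayList<>(); groups.add(home); }
            home.add(img);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<double[]> images = List.of(
            new double[]{1.0, 1.0},   // e.g. two similar "Big Ben" shots...
            new double[]{1.1, 0.9},
            new double[]{8.0, 8.0});  // ...and one "London Eye" shot
        List<List<double[]>> groups = group(images, 1.0);
        System.out.println(groups.size()); // prints 2: two subject groups
    }
}
```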

Sorting will be rolling out globally to nearly every domain and language over the next week. Whether you have a particular image in mind or you’re just exploring a general topic, sort by subject can help you find the image you need—even if you don’t have the exact words to describe it.


Leafsnap

By Columbia University, University of Maryland, and Smithsonian Institution

Leafsnap is the first in a series of electronic field guides being developed by researchers from Columbia University, the University of Maryland, and the Smithsonian Institution. This free mobile app uses visual recognition software to help identify tree species from photographs of their leaves.

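As a rough illustration of this kind of match-against-a-collection recognition, a nearest-neighbor lookup over leaf-shape features could be sketched as follows. The feature values and species entries below are invented for the example; Leafsnap's actual pipeline segments the photographed leaf and compares shape descriptors against its labeled image collection.

```java
import java.util.Map;

// Toy nearest-neighbor species lookup. Each species is reduced to a tiny
// hand-made feature vector (imagine e.g. aspect ratio and margin
// roughness); the query is matched to the closest reference entry.
public class LeafMatchSketch {
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Return the species whose reference features are closest to the query.
    static String classify(double[] query, Map<String, double[]> reference) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : reference.entrySet()) {
            double d = distance(query, e.getValue());
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> reference = Map.of(
            "Quercus alba",   new double[]{1.6, 0.8},  // white oak
            "Acer saccharum", new double[]{1.1, 0.5}); // sugar maple
        // A query leaf close to the oak entry matches the oak.
        System.out.println(classify(new double[]{1.5, 0.75}, reference));
        // prints Quercus alba
    }
}
```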
Leafsnap contains beautiful high-resolution images of leaves, flowers, fruits, petioles, seeds, and bark. Leafsnap currently includes the trees of New York City and Washington, D.C., and will soon grow to include the trees of the entire continental United States.

This website shows the tree species included in Leafsnap, the collections of its users, and the team of research volunteers working to produce it.