
Wednesday, December 26, 2012

Season's Greetings to everyone!

Sunday, December 16, 2012

LuminAR bulb lights path to augmented reality

(Phys.org)—Are we moving closer to a computer age where the "touchscreen" is the room itself: the counter, the desktop, the wall, our new digital work areas? Are we moving into a new form factor called Anywhere? Do we realize how locked up we are in on-screen prisons, and that options are coming? The drive for options is strong at the MIT Media Lab, whose Fluid Interfaces Group has been working on AR options such as the "Augmented Product Counter" and the "LuminAR." The latter is a bulb that makes any surface a touchscreen: swap the bulb in an ordinary desk lamp for the MIT group's "bulb" and it projects images onto the surface below. The LuminAR bulb is small enough to fit into a standard light fixture.


The LuminAR team, Natan Linder, Pattie Maes and Rony Kubat, described what they have done as redefining the traditional incandescent bulb and desk lamp as a new category of "robotic, digital information devices." This will be one of the new looks in AR interfaces. The LuminAR lamp system looks similar to a conventional desk lamp, but its arm is a robotic arm with four degrees of freedom, terminating in a lampshade with an Edison socket. Each degree of freedom has a motor, positional and torque sensors, and motor control and power circuitry. The arm is designed to interface with the LuminAR bulb. The "bulb," which fits into a standard lightbulb socket, combines a pico-projector, a camera, and a wireless computer, and can make any surface interactive. The team uses the special spelling "LuminAR" to signal its place among the group's other Augmented Reality initiatives.

Read more at: http://phys.org/news/2012-12-luminar-bulb-path-augmented-reality.html#jCp

Saturday, December 15, 2012

www.lire-project.net


The site for the upcoming book "Visual Information Retrieval Using Java and LIRE" is now online:

http://www.lire-project.net/

Improving SURF Image Matching Using Supervised Learning

(Suggested Article)

Hatem Mousselly-Sergieh [LIRIS], Elod Egyed-Zsigmond [LIRIS], Mario Döller [FH Kufstein Tirol - University of Applied Sciences], David Coquil [University of Passau], Jean-Marie Pinon [LIRIS], Harald Kosch [University of Passau]

In: The 8th International Conference on Signal Image and Internet Systems (SITIS 2012), Naples, Italy.

Abstract

Keypoint-based image matching algorithms have proven very successful in recent years. However, their execution time makes them unsuitable for online applications: identifying similar keypoints requires comparing a large number of high-dimensional descriptor vectors. Previous work has shown that matching can still be performed accurately when only a few highly significant keypoints are considered. In this paper, we investigate reducing the number of generated SURF features to speed up image matching while keeping the matching recall high. We propose a machine learning approach that uses a binary classifier to identify keypoints that are useful for the matching process. Furthermore, we compare the proposed approach to another keypoint-pruning method based on saliency maps. The two approaches are evaluated on ground-truth datasets. The evaluation shows that the proposed classification-based approach outperforms the saliency-based method in terms of the trade-off between matching recall and the percentage of removed keypoints. Additionally, the evaluation demonstrates the ability of the proposed approach to effectively reduce the matching runtime.
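To make the general idea more concrete for readers of this blog: the core of such an approach is a binary classifier that predicts, from per-keypoint attributes, whether a keypoint is worth keeping for matching. The Java sketch below is only a generic illustration of that idea, not the authors' method; the chosen features (detector response and scale), the logistic-regression model, and the synthetic training data are all assumptions made for the example.

import java.util.ArrayList;
import java.util.List;

// Toy illustration of classifier-based keypoint pruning: a hand-rolled
// logistic-regression "usefulness" classifier over simple per-keypoint
// attributes. Features and training data are placeholders for illustration.
public class KeypointPruningSketch {

    // Minimal stand-in for a SURF keypoint: detector response and scale,
    // plus a label (1 = produced a correct match in the training ground truth).
    static class Keypoint {
        final double response, scale;
        final int label;
        Keypoint(double response, double scale, int label) {
            this.response = response; this.scale = scale; this.label = label;
        }
        double[] features() { return new double[]{1.0, response, scale}; } // bias + 2 features
    }

    private final double[] w = new double[3]; // weights for bias, response, scale

    private static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Predicted probability that a keypoint is useful for matching.
    private double predict(Keypoint k) {
        double z = 0;
        double[] x = k.features();
        for (int i = 0; i < w.length; i++) z += w[i] * x[i];
        return sigmoid(z);
    }

    // Plain batch gradient descent on the logistic loss.
    void train(List<Keypoint> data, int epochs, double lr) {
        for (int e = 0; e < epochs; e++) {
            double[] grad = new double[w.length];
            for (Keypoint k : data) {
                double err = predict(k) - k.label;
                double[] x = k.features();
                for (int i = 0; i < w.length; i++) grad[i] += err * x[i];
            }
            for (int i = 0; i < w.length; i++) w[i] -= lr * grad[i] / data.size();
        }
    }

    // Keep only keypoints the classifier considers likely to be useful.
    List<Keypoint> prune(List<Keypoint> candidates, double threshold) {
        List<Keypoint> kept = new ArrayList<>();
        for (Keypoint k : candidates)
            if (predict(k) >= threshold) kept.add(k);
        return kept;
    }

    public static void main(String[] args) {
        // Synthetic training set: high-response keypoints tend to match correctly.
        List<Keypoint> train = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            double r = Math.random(), s = 1 + 3 * Math.random();
            train.add(new Keypoint(r, s, r > 0.5 ? 1 : 0));
        }
        KeypointPruningSketch model = new KeypointPruningSketch();
        model.train(train, 2000, 0.5);
        List<Keypoint> pruned = model.prune(train, 0.5);
        System.out.println("Kept " + pruned.size() + " of " + train.size() + " keypoints");
    }
}

In the paper's setting the pruned keypoint set would then feed into the usual SURF descriptor matching, trading a small loss in recall for a large reduction in descriptor comparisons.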

Thursday, December 13, 2012

News on LIRE performance

[Article from http://www.semanticmetadata.net/]

In the course of finishing the book, I reviewed several aspects of the LIRE code and came across some bugs, including one in the Jensen-Shannon divergence. This dissimilarity measure had never been used actively in any of the features, as it never performed in retrieval evaluations the way it was meant to. After two hours of staring at the code, the realization finally came: in Java, the conditional (ternary) operator "x ? y : z" has lower precedence than almost every other operator, including '+'. Hence,

System.out.print(true ? 1 : 0 + 1) prints '1',

while

System.out.print((true ? 1 : 0) + 1) prints '2'.

With this problem identified I was finally able to fix the Jensen-Shannon divergence implementation and obtained new retrieval evaluation results on the SIMPLIcity data set:

[Table: retrieval evaluation results (precision at ten and error rate) on the SIMPLIcity data set]


Note that the color histogram in the first row now performs similarly to the "good" descriptors in terms of precision at ten and error rate. Also note that a new feature crept in: Joint Histogram, a histogram combining pixel rank and RGB-64 color.
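For readers who want to see what a correctly parenthesized Jensen-Shannon divergence looks like, here is a minimal Java sketch. It is not the LIRE code itself, and LIRE's exact formula and normalization may differ; it only illustrates the measure, with every conditional expression wrapped in parentheses so the precedence pitfall described above cannot strike.

// Minimal sketch of a Jensen-Shannon divergence between two histograms.
// Not the LIRE implementation; just the textbook measure with explicit parentheses.
public class JsdSketch {

    // JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2.
    static double jensenShannon(double[] p, double[] q) {
        double[] pn = normalize(p);
        double[] qn = normalize(q);
        double sum = 0d;
        for (int i = 0; i < pn.length; i++) {
            double m = (pn[i] + qn[i]) / 2d;
            if (pn[i] > 0d) sum += 0.5d * pn[i] * Math.log(pn[i] / m);
            if (qn[i] > 0d) sum += 0.5d * qn[i] * Math.log(qn[i] / m);
        }
        return sum;
    }

    // Scale a histogram so its bins sum to one; the ternary is fully parenthesized.
    static double[] normalize(double[] h) {
        double total = 0d;
        for (double v : h) total += v;
        double[] result = new double[h.length];
        for (int i = 0; i < h.length; i++) result[i] = (total > 0d) ? (h[i] / total) : 0d;
        return result;
    }

    public static void main(String[] args) {
        double[] a = {4, 2, 1, 1};
        double[] b = {1, 1, 2, 4};
        System.out.println("JSD(a, b) = " + jensenShannon(a, b)); // > 0 for different histograms
        System.out.println("JSD(a, a) = " + jensenShannon(a, a)); // identical histograms give 0
    }
}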

All the new stuff can be found in SVN and in the nightly builds (starting tomorrow ;))

Google’s decision to block explicit images is a huge win for Bing & Search.xxx


Google has modified its popular image search to block many explicit pictures, a move that could be a big win for competing search engines.

While you used to be able to turn SafeSearch off to easily find questionable material, Google now only lets you “filter explicit images” or “report offensive images.” As you can see in the image above, a search for the word “porn” brings up some questionable material but nothing explicit.

Users on Reddit first noticed the changes this morning, and several were quick to label the move as “censorship.” VentureBeat can confirm that common searches in the U.S. and U.K. have blocked steamy images from showing up in image results and that SafeSearch is on permanently.

A Google spokesperson told us and other outlets the following statement about the changes:

We are not censoring any adult content, and want to show users exactly what they are looking for — but we aim not to show sexually explicit results unless a user is specifically searching for them. We use algorithms to select the most relevant results for a given query. If you’re looking for adult content, you can find it without having to change the default setting — you just may need to be more explicit in your query if your search terms are potentially ambiguous. The image search settings now work the same way as in web search.

Essentially, Google’s decision makes it much harder to find porn using Google. This is a big win for competing search engines, especially Microsoft’s Bing and ICM Registry’s Search.xxx. If Google doesn’t want the traffic, the underdogs certainly will take it.

Microsoft’s Bing, the No. 2 search engine on the web, still offers a robust image search, and we can confirm that it works perfectly well for looking at all kinds of explicit images. (Which is sort of funny considering how Microsoft has serious problems with nudity and pornography being hosted on its servers.)

Search.xxx is another winner. While it does not offer a full-fledged image search, Search.xxx does offer a safe browsing experience when you are looking for adult material. Plus, you know exactly what you’ll find when looking for video or images on it. As we’ve written before, Search.xxx only crawls online pages with the .xxx domain and it claims to be “safer” than using other search engines to find porn because all sites found through it are scanned daily by McAfee.

“We are still digesting exactly what this will mean in real-world search queries for the porn-searching consumer, but this seems to continue a trend we have seen in recent months by the major search engines towards adult content,” ICM Registry CEO Stuart Lawley told us via email. “Google’s decision only serves to reinforce the purpose and usefulness of what ICM Registry has been building: a destination for those adult consumers looking for high quality content.”

Read more at http://venturebeat.com/2012/12/12/google-bing-search-xxx-porn/#fBQ3bmWtieGbfELY.99

World's most anatomically correct musculoskeletal robot is presented in Japan


Most human-like robots don't even attempt biological accuracy, because replicating every muscle in the body isn't necessary for a functional humanoid. Even biomimetic robots based on animals don't attempt to replicate every anatomical detail of the animals they imitate, because that would needlessly complicate things. That said, there is much to be learned from how muscle groups move and interact with the skeleton, which is why a team at Tokyo University's JSK Lab has developed what could be considered the world's most anatomically correct robot to date.

Researchers there have been developing increasingly complex musculoskeletal robots for more than a decade. Their first robot, Kenta, was built in 2001, followed by Kotaro in 2005, Kojiro in 2007, and Kenzoh (an upper-body only robot) in 2010. Their latest robot, Kenshiro, was presented at the annual Humanoids conference this month.

Kenshiro models the average 12-year-old Japanese boy, standing 158 cm (5 feet, 2 inches) tall and weighing 50 kg (110 pounds). According to Yuto Nakanishi, the project leader, keeping the robot's weight down was a difficult balancing act. Nonetheless, the team managed to create muscles that reproduce nearly the same joint torque as real muscles and are roughly five times more powerful than Kojiro's.

Muscle and bone

Its artificial muscles – which are a bit like pulleys – replicate 160 major muscles: each leg has 25, each shoulder has 6, the torso has 76, and the neck has 22. Most of these muscles are redundant to Kenshiro's actual degrees of freedom (64), which is why other humanoids don't bother with them. By way of comparison, mechanical robots like Samsung's Roboray typically have just six servos per leg, and often don't contain any in the torso/spine (the human body actually contains around 650 muscles).


A detailed look at Kenshiro's knee joint, which contains artificial ligaments and a floating patella

Equally important to the muscles is Kenshiro's bone structure. Unlike its predecessors, Kenshiro's skeleton was made out of aluminum, which is less likely to break under stress compared to plastic. Also, its knee joints contain artificial ligaments and a patella to better imitate the real thing. These are just some of the details considered in its construction, which far surpasses the work done on the upper-torso Eccerobot cyclops, whose creators claimed it to be the world's most anatomically accurate robot a few years ago.

As you'll see in the following video, programming all of those muscles to work in tandem is proving a difficult task – a bit like playing QWOP multiplied by about a hundred. The robot is able to perform relatively simple tasks, like bending its arms and legs, but more complex actions such as walking remain primitive. However, the team has made significant strides over the years, and with Kenshiro they continue to push the limits of musculoskeletal robots further.

 

[Article from http://www.gizmag.com/kenshiro-musculoskeletal-robot/25415/]

OpenArch Adds A “Digital Layer” To The Average Room

Creating a workable Minority Report-like screen isn't very hard, but what about an entire room or building that responds to touch, voice, and movement? Now that's hard. That, however, is the goal of OpenArch, a project by designer Ion Cuervas-Mons that uses projectors, motion sensors, and light to create interactive spaces.

“This project started 3 years ago when I had the opportunity to buy a small apartment in the north of Spain, in the Basque Country. I decided to start my own research in the small apartment. I am architect and I was really interested on integrating physical and digital layers,” said Cuervas-Mons. “Our objective was to create a Domestic Operating System (D.OS) integrating physical and digital realities.”

The project as seen here is about 40% done and there is still more to do. Cuervas-Mons sees a deep connection between how space defines digital interaction and vice-versa. The goal, in the end, is to create a digital component that can live in any space and enliven it with digital information, feedback, and sensors.

He’s not just stopping at projectors and some computing power. His goal is the creation of truly smart environments.

“I think we need smart homes: first because of energy efficiency, visualization of consumptions on real time will help us not to waste energy. If we introduce physical objects into the interaction with digital information everything will be easier and simpler. They are going to be the center of the future smart cities,” he said.

Cuervas-Mons also runs a design consultancy called Think Big Factory, where he brings the things he has learned from the OpenArch project to market. The project itself uses off-the-shelf components like Kinect sensors and projectors.

The group will launch a Kickstarter project in January to commercialize the product and make it available to experimenters. How this technology will eventually work in “real life” is anyone’s guess, but it looks like the collective of technologists, architects, and designers is definitely making some waves in the smart home space.

 

Openarch || FILM from Openarch on Vimeo.

[Article from http://techcrunch.com/2012/12/12/openarch-adds-a-digital-layer-to-the-average-room/]

Tuesday, December 4, 2012

Master Theses and SW-Internship @ Sensory Experience Lab

SELab is offering a number of Master's thesis topics and an SW internship. The following gives an overview of the different offers.

Interested students should contact SELab for additional information via selab [at] itec [dot] uni-klu [dot] ac [dot] at

The Sensory Experience Lab (SELab) comprises a small team of experts working in the field of Quality of Multimedia Experience (QoMEx) with a focus on Sensory Experience. That is, traditional multimedia content is annotated with so-called sensory effects that are rendered on special devices such as ambient lights, fans, vibration devices, scent emitters, water sprayers, etc.

The sensory effects are represented as Sensory Effects Metadata (SEM) which are standardized within Part 3 of MPEG-V entitled “Information technology — Media context and control – Part 3: Sensory information”. Further details about MPEG-V and Sensory Information can be found in our Standardization section.

Our software and services are publicly available here and the interested reader is referred to our publications. The media section provides some videos of SELab.

The aim of the research within the SELab is to enhance the user experience, resulting in a unique, worthwhile sensory experience that potentially stimulates all human senses (e.g., olfaction, mechanoreception, thermoreception), going beyond the traditional ones (i.e., hearing and vision).

The SELab is guided by an advisory board comprising well-recognized experts in the field of QoE from both industry and academia.

In terms of funding, the SELab acknowledges the following institutions and projects: Alpen-Adria-Universität Klagenfurt, ICT FP7 IP ALICANTE, COST IC1003 QUALINET, and ICT FP7 IP SocialSensor.

Reconstructing the World's Museums

http://mit.edu/jxiao/museum/

Article from: Jianxiong Xiao and Yasutaka Furukawa

Proceedings of the 12th European Conference on Computer Vision (ECCV2012)

ECCV 2012 Best Student Paper Award

Abstract

Photorealistic maps are a useful navigational guide for large indoor environments, such as museums and businesses. However, it is impossible to acquire photographs covering a large indoor environment from aerial viewpoints. This paper presents a 3D reconstruction and visualization system to automatically produce clean and well-regularized texture-mapped 3D models for large indoor scenes, from ground-level photographs and 3D laser points. The key component is a new algorithm called "Inverse CSG" for reconstructing a scene in a Constructive Solid Geometry (CSG) representation consisting of volumetric primitives, which imposes powerful regularization constraints to exploit structural regularities. We also propose several techniques to adjust the 3D model to make it suitable for rendering the 3D maps from aerial viewpoints. The visualization system enables users to easily browse a large scale indoor environment from a bird's-eye view, locate specific room interiors, fly into a place of interest, view immersive ground-level panorama views, and zoom out again, all with seamless 3D transitions. We demonstrate our system on various museums, including the Metropolitan Museum of Art in New York City -- one of the largest art galleries in the world.
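To make the CSG representation mentioned in the abstract more tangible, here is a small, hedged Java sketch: a scene modeled as boolean operations over axis-aligned box primitives, with a point-membership test. It only illustrates the representation idea; it is not the paper's Inverse CSG reconstruction algorithm, and the primitives and example dimensions are made up.

// Generic illustration of a Constructive Solid Geometry (CSG) representation:
// a scene as a tree of boolean operations over simple volumetric primitives.
public class CsgSketch {

    // A solid is anything that can answer "is this point inside?".
    interface Solid { boolean contains(double x, double y, double z); }

    // Axis-aligned box primitive.
    static class Box implements Solid {
        final double x0, y0, z0, x1, y1, z1;
        Box(double x0, double y0, double z0, double x1, double y1, double z1) {
            this.x0 = x0; this.y0 = y0; this.z0 = z0; this.x1 = x1; this.y1 = y1; this.z1 = z1;
        }
        public boolean contains(double x, double y, double z) {
            return x >= x0 && x <= x1 && y >= y0 && y <= y1 && z >= z0 && z <= z1;
        }
    }

    // Boolean combinations of solids, expressed as lambdas over the Solid interface.
    static Solid union(Solid a, Solid b)        { return (x, y, z) -> a.contains(x, y, z) || b.contains(x, y, z); }
    static Solid intersection(Solid a, Solid b) { return (x, y, z) -> a.contains(x, y, z) && b.contains(x, y, z); }
    static Solid difference(Solid a, Solid b)   { return (x, y, z) -> a.contains(x, y, z) && !b.contains(x, y, z); }

    public static void main(String[] args) {
        // Two overlapping rooms minus a notch cut out around their shared wall.
        Solid roomA = new Box(0, 0, 0, 10, 8, 3);
        Solid roomB = new Box(8, 0, 0, 20, 8, 3);
        Solid notch = new Box(9, 3, 0, 11, 5, 3);
        Solid floorPlan = difference(union(roomA, roomB), notch);
        System.out.println(floorPlan.contains(5, 4, 1));   // true: inside room A
        System.out.println(floorPlan.contains(10, 4, 1));  // false: carved out by the notch
    }
}

In the paper's setting, the "inverse" problem is to recover such a tree of primitives automatically from ground-level photographs and laser points, which is what enables the clean, well-regularized aerial-view models.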

 

Jianxiong Xiao and Yasutaka Furukawa
Reconstructing the World's Museums
Proceedings of the 12th European Conference on Computer Vision (ECCV2012)
Oral Presentation

This work was done when Jianxiong Xiao interned at Google under the supervision of Yasutaka Furukawa.

Lire 0.9.3_alpha – first alpha release for Lucene 4.0

Article from http://www.semanticmetadata.net/

I just submitted my code to the SVN and created a download for Lire 0.9.3_alpha. This version features support for Lucene 4.0, which changed quite a bit in its API. I did not have the time to test the Lucene 3.6 version against the new one, so I actually don’t know which one is faster. I hope the new one, but I fear the old one ;)

This is a pre-release of Lire for Lucene 4.0.

Global features (like CEDD, FCTH, ColorLayout, AutoColorCorrelogram and the like) have been tested and are considered working. Filters like the ReRankFilter and the LSAFilter also work. The image shows a search for 10 images with ColorLayout and the results of re-ranking the result list with (i) CEDD and (ii) LSA. Visual words (local features), metric indexes and hashing have not been touched yet, besides making them compile, so I strongly recommend not using them. However, due to a new weighting approach, I assume that the visual word implementation based on Lucene 4.0 will, as soon as it is done, be much better in terms of retrieval performance.
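For those who want to try the alpha: indexing and searching with a global feature looks roughly like the sketch below. The factory and class names follow the pre-Lucene-4.0 LIRE API and might have moved or changed in this release, and the image paths are just placeholders, so take it as a rough, assumption-based starting point rather than reference code.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

import net.semanticmetadata.lire.DocumentBuilder;
import net.semanticmetadata.lire.DocumentBuilderFactory;
import net.semanticmetadata.lire.ImageSearchHits;
import net.semanticmetadata.lire.ImageSearcher;
import net.semanticmetadata.lire.ImageSearcherFactory;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LireCeddSearchSketch {
    public static void main(String[] args) throws Exception {
        // Index a single image with the CEDD global feature.
        // (In a real setting you would loop over a whole image directory.)
        DocumentBuilder builder = DocumentBuilderFactory.getCEDDDocumentBuilder();
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_40,
                new WhitespaceAnalyzer(Version.LUCENE_40));
        IndexWriter writer = new IndexWriter(FSDirectory.open(new File("lire-index")), config);
        BufferedImage img = ImageIO.read(new File("images/example.jpg")); // placeholder path
        writer.addDocument(builder.createDocument(img, "images/example.jpg"));
        writer.close();

        // Search the index for the 10 images most similar to a query image.
        IndexReader reader = DirectoryReader.open(FSDirectory.open(new File("lire-index")));
        ImageSearcher searcher = ImageSearcherFactory.createCEDDImageSearcher(10);
        BufferedImage query = ImageIO.read(new File("images/query.jpg")); // placeholder path
        ImageSearchHits hits = searcher.search(query, reader);
        for (int i = 0; i < hits.length(); i++) {
            String id = hits.doc(i).getValues(DocumentBuilder.FIELD_NAME_IDENTIFIER)[0];
            System.out.println(hits.score(i) + ": " + id);
        }
        reader.close();
    }
}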

Links