Friday, August 30, 2013

ICME libdash Demo

This is the official MPEG-DASH Reference Software.

Wednesday, August 28, 2013

Top-notch AI system about as smart as a four-year-old, lacks commonsense

Article from http://www.gizmag.com/ai-system-iq-four-year-old/28321/

Researchers have found that an AI system has an average IQ of a four-year-old child (Image...

Those who saw IBM’s Watson defeat former winners on Jeopardy! in 2011 might be forgiven for thinking that artificially intelligent computer systems are brighter than they really are. While Watson could cope with the highly stylized questions posed during the quiz, AI systems are still left wanting when it comes to commonsense. That gap led researchers to put one of the best available AI systems to the test, where it scored the average IQ of a four-year-old.

To see just how intelligent AI systems are, a team of artificial and natural knowledge researchers at the University of Illinois at Chicago (UIC) subjected ConceptNet 4 to the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, which is a standard IQ test for young children. ConceptNet 4 is an AI system developed at MIT that relies on a commonsense knowledge base created from facts contributed by thousands of people across the Web.

While the UIC researchers found that ConceptNet 4 is on average about as smart as a four-year-old child, the system performed much better at some portions of the test than others. While it did well on vocabulary and in recognizing similarities, its overall score was brought down dramatically by a bad result in comprehension, or commonsense “why” questions.

“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC and lead author on the study. “We’re still very far from programs with commonsense, AI that can answer comprehension questions with the skill of a child of eight.”

Sloan says AI systems struggle with commonsense because it relies not only on a large collection of facts, which computers can access easily through a database, but also on obvious things that we don’t even know we know – things that Sloan calls “implicit facts.” For example, a computer may know that water freezes at 32° F (0° C), but it won’t necessarily know that ice is cold, which is something that even a four-year-old child will know.
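Sloan’s distinction between explicit and implicit facts can be sketched with a toy knowledge base (this is invented illustrative code, not ConceptNet’s actual API — the relation names are made up):

```python
# Toy knowledge base: explicit facts are easy to store and query,
# but implicit facts (things "everyone knows") were never written down.
explicit_facts = {
    ("water", "freezes_at"): "32 F / 0 C",
    ("water", "boils_at"): "212 F / 100 C",
}

def query(subject, relation):
    """Return the stored fact, or None when the knowledge is implicit/missing."""
    return explicit_facts.get((subject, relation))

print(query("water", "freezes_at"))  # the explicit fact is retrievable
print(query("ice", "feels"))         # the implicit fact ("cold") is simply absent
```

The second query fails not because the system reasons badly, but because nobody thought the fact worth stating — which is exactly what makes commonsense hard to collect.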

“All of us know a huge number of things,” says Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled.”

Sloan and his colleagues hope their study will help identify areas for AI research to focus on to improve the intelligence of AI systems. They will present their study on July 17 at the US Artificial Intelligence Conference in Bellevue, Washington.


20 Historic Black and White Photos Colorized

Article from TwistedSifter

One of the greatest facets of reddit is its thriving subreddits, niche communities of people who share a passion for a specific topic. One of the Sifter’s personal favourites is r/ColorizedHistory. The major contributors are a mix of professional and amateur colorizers who bring historic photos to life through color. All of them are highly skilled digital artists who use a combination of historical reference material and a natural eye for colour.

When we see old photos in black and white, we sometimes forget that life back then was experienced in the same vibrant colours that surround us today. This gallery of talented artists helps us remember that :)

Below you will find a collection of some of the highest rated colorized images to date on r/ColorizedHistory.


[Images: Albert Einstein, summer 1939, Nassau Point, Long Island, NY; abandoned boy holding a stuffed toy animal; Baltimore, 1938]

Read more at TwistedSifter

Artist gets a tattoo only visible by smartphone

Article from Dvice

Presuming your mother is anything like mine, you’ve heard that if you got a tattoo you’d later regret it. Not to mention the whole it-being-visible-during-job-interviews thing. But what if you had a tattoo that was only visible sometimes? And I don’t mean because it’s in an unmentionable spot. Instead, because it was literally only visible in certain conditions.

That’s what Anthony Antonellis set out to do when he had a small RFID chip implanted inside the fleshy back of his hand between his thumb and forefinger. The chip, stored in a glass capsule, has 1KB of storage and is completely invisible.

That is, completely invisible until you hold a smartphone to his hand and see the small GIF that’s actually there. At the time of this writing, the GIF is a small rectangle with a rainbow of colors passing through it (click through to the story below to see), but Antonellis has the ability to change it at any time, to anything.

While he’s using it for artistic ends, it wouldn’t be difficult to imagine this as a way to store really, really, really, really, really important data. Though, outside of a movie universe, I can’t imagine what that data would actually be. Maybe it’d be a way to keep it away from the NSA, at least.

Automated image-based diagnosis

Article from Cris's Image Analysis Blog

Nowhere is it as difficult to get a fully automatic image analysis system accepted and used in practice as in the clinic. Not only are physicians sceptical of technology that makes them irrelevant, but an automated system has to produce a perfect result, a correct diagnosis in 100% of cases, to be trusted without supervision. And of course this is impossible to achieve. In fact, even if the system has a better record than an average (or a good) physician, it is unlikely that the system and the physician are wrong on the same cases. The combination of machine and physician is therefore better than the machine alone, and thus the machine should not be used without the physician.

What often happens then is that the system is tuned to yield near-100% sensitivity (to miss only very few positives), and thus has a very low specificity (that is, it marks a lot of negative tests as positive). The system is heavily biased towards positives. The samples marked by the system as negative are almost surely negative, whereas the samples marked as positive (or, rather, suspect) are reviewed by the physician. This is supposed to lighten the workload of the physician. This seems nice and useful, no? What is the problem?
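The trade-off can be made concrete with a small numerical sketch (the scores, labels, and threshold below are invented for illustration, not from any real screening system):

```python
# Sketch: lowering the decision threshold pushes sensitivity towards 100%
# at the cost of specificity. Labels: 1 = diseased, 0 = healthy.

def sensitivity_specificity(scores, labels, threshold):
    """A score >= threshold is flagged as positive (suspect)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.35, 0.7, 0.6, 0.4, 0.3, 0.2, 0.5, 0.45]
labels = [1,   1,   1,    0,   0,   0,   0,   0,   0,   0]

sens, spec = sensitivity_specificity(scores, labels, threshold=0.3)
# All three true positives are caught (sensitivity 1.0), but most healthy
# samples are flagged as "suspect" too, so specificity is very low.
```

With the threshold set this low, the physician still has to review nearly every sample — which is why the promised workload reduction is smaller than it first appears.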

One example where automated systems are routinely used in the clinic (at least in the rich, western world) is screening for cervical cancer. This has been done with the so-called Pap smear since the 1940s. It takes about 10 minutes to manually examine one smear, which is made on a microscope glass, stained, and looked at through the microscope. Even before digital computers became common, there were attempts to automate the analysis of the smear. My colleague Ewert Bengtsson wrote his PhD thesis on the subject in 1977, and is still publishing in the field today. This gives an idea of how hard it is to replicate something that is quite easy, though time consuming, for a trained person.

The solution, as is often the case, was to change how the sample is prepared. Instead of smearing the sample on a microscope glass, liquid-based cytology systems were invented that clean the sample (removing mucus, blood cells, etc.) and produce a neat deposition of cells on the glass, such that they are nicely separated and not likely to overlap each other. Such a preparation makes the automated image analysis much easier. However, these automated systems still do not produce a perfect result, and are therefore only approved for use together with a trained cytologist. That is, the cytologist still needs to review all the tests. This means that the Pap smear test has become more expensive, rather than cheaper (the liquid-based sample preparation uses expensive consumables).

Read More

3-D mapping in real time, without the drift

New technique creates highly detailed, accurate 3-D maps in real time.

Article from MITnews

Computer scientists at MIT and the National University of Ireland (NUI) at Maynooth have developed a mapping algorithm that creates dense, highly detailed 3-D maps of indoor and outdoor environments in real time.
The researchers tested their algorithm on videos taken with a low-cost Kinect camera, including one that explores the serpentine halls and stairways of MIT’s Stata Center. Applying their mapping technique to these videos, the researchers created rich, three-dimensional maps as the camera explored its surroundings.
As the camera circled back to its starting point, the researchers found that after returning to a location recognized as familiar, the algorithm was able to quickly stitch images together to effectively “close the loop,” creating a continuous, realistic 3-D map in real time.
The technique solves a major problem in the robotic mapping community that’s known as either “loop closure” or “drift”: As a camera pans across a room or travels down a corridor, it invariably introduces slight errors in the estimated path taken. A doorway may shift a bit to the right, or a wall may appear slightly taller than it is. Over relatively long distances, these errors can compound, resulting in a disjointed map, with walls and stairways that don’t exactly line up.
In contrast, the new mapping technique determines how to connect a map by tracking a camera’s pose, or position in space, throughout its route. When a camera returns to a place where it’s already been, the algorithm determines which points within the 3-D map to adjust, based on the camera’s previous poses.
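The loop-closure idea can be sketched in one dimension (a toy illustration, not the authors’ actual algorithm, which optimizes full 6-DoF camera poses): when the camera recognizes its starting point, the accumulated drift is known exactly, and it can be distributed back along the estimated trajectory.

```python
# Toy 1-D illustration of drift correction at loop closure.
# Each odometry step adds a small systematic error; when the loop closes,
# the final pose should equal the known start pose, so the residual error
# is spread proportionally over all poses. Real systems solve a pose-graph
# optimization instead of this simple linear interpolation.

def correct_loop(poses, true_end):
    """Distribute the end-pose error linearly along the trajectory."""
    drift = poses[-1] - true_end
    n = len(poses) - 1
    return [p - drift * i / n for i, p in enumerate(poses)]

# Estimated 1-D poses with +0.02 drift per step, for a loop that
# truly returns to position 0.0:
estimated = [0.0, 1.02, 2.04, 1.06, 0.08]
corrected = correct_loop(estimated, true_end=0.0)
# corrected[-1] lands back on 0.0, and the intermediate poses are
# gently "bent" into place rather than snapped only at the end.
```

This is the "warps and bends into place" behaviour Whelan describes: the correction is applied along the whole path, not just at the point where the loop was detected.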
“Before the map has been corrected, it’s sort of all tangled up in itself,” says Thomas Whelan, a PhD student at NUI. “We use knowledge of where the camera’s been to untangle it. The technique we developed allows you to shift the map, so it warps and bends into place.”
The technique, he says, may be used to guide robots through potentially hazardous or unknown environments. Whelan’s colleague John Leonard, a professor of mechanical engineering at MIT, also envisions a more benign application.
“I have this dream of making a complete model of all of MIT,” says Leonard, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.  “With this 3-D map, a potential applicant for the freshman class could sort of ‘swim’ through MIT like it’s a big aquarium. There’s still more work to do, but I think it’s doable.”
Leonard, Whelan and the other members of the team — Michael Kaess of MIT and John McDonald of NUI — will present their work at the 2013 International Conference on Intelligent Robots and Systems in Tokyo.

Read More

Sunday, August 18, 2013

New LIRE web demo based on Apache Solr

Article from http://www.semanticmetadata.net/

The new LIRE web demo is based on Apache Solr and features an index of the MIRFLICKR data set. The new architecture allows for extremely fast retrieval. Moreover, there’s a new walk-through video with some short peeks behind the scenes. The source of the plugin will be released in the near future.


Tuesday, August 13, 2013

Information Retrieval Models: Foundations and Relationships

(New Book)

Information Retrieval Models: Foundations and Relationships

Thomas Roelleke, Queen Mary University of London

Synthesis Lectures on Information Concepts, Retrieval, and Services

Paperback: 9781627050784 / $40.00 / £24.99

eBook ISBN: 9781627050791

July 2013, 163 pages
http://www.morganclaypool.com/doi/abs/10.2200/S00494ED1V01Y201304ICR027

Information Retrieval (IR) models are a core component of IR research and IR systems. The past decade brought a consolidation of the family of IR models, which by 2000 consisted of relatively isolated views on TF-IDF (Term-Frequency times Inverse-Document-Frequency) as the weighting scheme in the vector-space model (VSM), the probabilistic relevance framework (PRF), the binary independence retrieval (BIR) model, BM25 (Best-Match Version 25, the main instantiation of the PRF/BIR), and language modelling (LM). Also, the early 2000s saw the arrival of divergence from randomness (DFR). Read More
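As a refresher on the first of the models mentioned, TF-IDF weighting can be sketched as follows (this is the standard textbook formulation tf(t, d) · log(N / df(t)), not necessarily the book’s exact notation; the toy corpus is invented):

```python
import math

# Minimal TF-IDF sketch over a tiny toy corpus.
docs = [
    "information retrieval models",
    "probabilistic retrieval framework",
    "language modelling for retrieval",
]

def tfidf(term, doc, docs):
    """weight = term frequency in doc * log(N / document frequency)."""
    tf = doc.split().count(term)
    df = sum(1 for d in docs if term in d.split())
    return tf * math.log(len(docs) / df) if df else 0.0

# "retrieval" occurs in every document, so its IDF (and weight) is zero;
# "models" occurs in only one document and gets a positive weight.
w_common = tfidf("retrieval", docs[0], docs)
w_rare = tfidf("models", docs[0], docs)
```

The PRF, BIR, BM25, and LM models mentioned above can all be read as principled answers to the question this heuristic raises: why should rarity in the collection translate into exactly this weight?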