
Thursday, April 28, 2011

CEDD descriptor for the ImageCLEF's Wikipedia Retrieval task images

Update your bookmarks: Read more about the CEDD descriptor and download an open source implementation from my personal website http://chatzichristofis.info/?page_id=15

The CEDD descriptor is available for the ImageCLEF's Wikipedia Retrieval task images. Download the compressed file from here. For each image, a .cedd file contains the CEDD descriptor. To calculate the distance between descriptors, you can use either the Euclidean or the Tanimoto distance.
The visual features of the image examples are now available HERE.
This descriptor aims to aid participants who would like to exploit the visual modality without performing feature extraction themselves. The organizers also provide cime, tlep, and surf features.

Descriptors that incorporate more than one type of feature in a compact histogram can be regarded as belonging to the family of Compact Composite Descriptors (CCDs). A typical example of a CCD is the CEDD descriptor. The structure of CEDD consists of 6 texture areas; each texture area is separated into 24 sub-regions, with each sub-region describing a color. CEDD's color information results from 2 fuzzy systems that map the colors of the image onto a 24-color custom palette. To extract texture information, CEDD uses a fuzzy version of the five digital filters proposed by the MPEG-7 EHD. The CEDD extraction procedure is outlined as follows: when an image block (a rectangular part of the image) interacts with the system that extracts a CCD, this section of the image simultaneously goes through 2 units. The first unit, the color unit, classifies the image block into one of the 24 shades used by the system; let this classification be the color $m, m \in [0,23]$. The second unit, the texture unit, classifies this section of the image into the texture area $a, a \in [0,5]$. The image block is then counted in the bin $a \times 24 + m$. The process is repeated for all the image blocks of the image. On completion of the process, the histogram is normalized within the interval [0,1] and quantized to three bits per bin for the binary representation.
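In code, the accumulation and quantization steps described above look roughly as follows. This is a minimal Java sketch: the classifyColor and classifyTexture methods are hypothetical stand-ins for the fuzzy color and texture units, and the uniform 3-bit quantizer is a simplification of the actual quantization tables.

// Sketch of the CEDD accumulation step: each image block is classified by a
// color unit (24 shades) and a texture unit (6 areas) and counted in one of
// the 6 x 24 = 144 bins. The two classify* methods are hypothetical stand-ins
// for the fuzzy systems described in the text.
public class CeddAccumulator {

    static final int COLORS = 24;   // custom color palette size
    static final int TEXTURES = 6;  // texture areas (fuzzy MPEG-7 EHD filters)

    public static double[] extract(int[][] blocks) {
        double[] histogram = new double[TEXTURES * COLORS]; // 144 bins
        for (int[] block : blocks) {
            int m = classifyColor(block);    // m in [0, 23]
            int a = classifyTexture(block);  // a in [0, 5]
            histogram[a * COLORS + m]++;     // bin = a * 24 + m
        }
        // Normalize the histogram to the interval [0, 1].
        double sum = 0;
        for (double v : histogram) sum += v;
        if (sum > 0) {
            for (int i = 0; i < histogram.length; i++) histogram[i] /= sum;
        }
        return histogram;
    }

    // 3-bit quantization: map each normalized bin to an integer in [0, 7].
    // The real implementation uses non-uniform quantization tables; a uniform
    // quantizer is used here only to keep the sketch short.
    public static int[] quantize(double[] histogram) {
        int[] quantized = new int[histogram.length];
        for (int i = 0; i < histogram.length; i++) {
            quantized[i] = Math.min(7, (int) Math.floor(histogram[i] * 8));
        }
        return quantized;
    }

    // Hypothetical placeholders for the fuzzy color and texture units.
    static int classifyColor(int[] block) { return 0; }
    static int classifyTexture(int[] block) { return 0; }
}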
The most important attribute of CEDD is that it achieves very good results on several well-known benchmarking image databases.
Example:
File: 1.jpg.CEDD
101001111000000000000000001000222000000000000000101001554000000000000....
File: 2.jpg.CEDD
1110000000000011014110001110000000010010001000002620000000010033137240.....
The descriptor consists of 144 integer values in the interval [0, 7].
To calculate the Tanimoto distance, you can use the following source code.
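A minimal Java sketch, assuming each descriptor is read as a string of 144 digits in [0, 7] as in the example files above; the distance is one minus the Tanimoto coefficient of the two vectors:

// Minimal sketch: Tanimoto distance between two CEDD descriptors stored as
// strings of 144 digits in [0, 7] (as in the example .CEDD files above).
// The file handling in main is illustrative only.
public class TanimotoDistance {

    // Parse a 144-character string of digits into a double vector.
    static double[] parseCedd(String line) {
        double[] v = new double[line.length()];
        for (int i = 0; i < line.length(); i++) {
            v[i] = Character.getNumericValue(line.charAt(i));
        }
        return v;
    }

    // Tanimoto distance: 1 - (a.b) / (a.a + b.b - a.b).
    static double tanimotoDistance(double[] a, double[] b) {
        double dotAB = 0, dotAA = 0, dotBB = 0;
        for (int i = 0; i < a.length; i++) {
            dotAB += a[i] * b[i];
            dotAA += a[i] * a[i];
            dotBB += b[i] * b[i];
        }
        double denominator = dotAA + dotBB - dotAB;
        if (denominator == 0) return 0; // both descriptors are all-zero
        return 1.0 - dotAB / denominator;
    }

    public static void main(String[] args) throws java.io.IOException {
        String s1 = new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get("1.jpg.CEDD"))).trim();
        String s2 = new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get("2.jpg.CEDD"))).trim();
        System.out.println(tanimotoDistance(parseCedd(s1), parseCedd(s2)));
    }
}

A distance of 0 means identical descriptors; larger values indicate less similar images.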

If you use this descriptor please cite:
S. A. Chatzichristofis and Y. S. Boutalis, "CEDD: Color and Edge Directivity Descriptor - A Compact Descriptor for Image Indexing and Retrieval", 6th International Conference in Advanced Research on Computer Vision Systems (ICVS 2008), May 12-15, 2008, Santorini, Greece. [Download]
OR
S. A. Chatzichristofis, K. Zagoris, Y. S. Boutalis and N. Papamarkos, "Accurate Image Retrieval Based on Compact Composite Descriptors and Relevance Feedback Information", International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI), Volume 24, Number 2, February 2010, pp. 207-244, World Scientific.
[Download the Descriptors]

Saturday, April 9, 2011

Two papers accepted at SIGIR 2011

1: The TREC Files: the (ground) truth is out there (No kidding :))

The results of a retrieval system for a certain benchmark image database can be evaluated by several methods, each one employing various evaluation criteria. Traditional tools for information retrieval (IR) evaluation, such as TREC’s trec_eval, have outdated (command-line) interfaces with many unused features or ‘switches’ accumulated over the years. They are usually seen as cumbersome applications by new IR researchers, steepening the learning curve. We introduce a new, platform independent application for IR evaluation with a graphical easy-to-use interface: The TREC Files Evaluator. The application supports all standard measures used for evaluation in TREC, CLEF, and elsewhere, such as MAP, P10, P20, and bpref, as well as the Averaged Normalized Modified Retrieval Rank (ANMRR) proposed by MPEG for image retrieval evaluation [1]. Additional features include a batch mode and significance testing of the results against a pre-selected baseline run.
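As a rough illustration of the measures involved, here is a minimal Java sketch of precision at cutoff k (P10, P20) and average precision for a single query, given a ranked result list and the set of relevant document IDs; the evaluator itself computes these, along with MAP, bpref, and ANMRR, from full TREC-style run and qrels files.

import java.util.List;
import java.util.Set;

// Minimal sketch of two standard IR measures for one query: precision at
// cutoff k (e.g. P10, P20) and average precision (averaged over queries,
// this gives MAP).
public class IrMeasures {

    // Fraction of the top-k ranked documents that are relevant.
    static double precisionAtK(List<String> ranked, Set<String> relevant, int k) {
        int hits = 0;
        int cutoff = Math.min(k, ranked.size());
        for (int i = 0; i < cutoff; i++) {
            if (relevant.contains(ranked.get(i))) hits++;
        }
        return (double) hits / k;
    }

    // Sum of the precision values at the ranks where relevant documents occur,
    // divided by the total number of relevant documents.
    static double averagePrecision(List<String> ranked, Set<String> relevant) {
        if (relevant.isEmpty()) return 0;
        int hits = 0;
        double sum = 0;
        for (int i = 0; i < ranked.size(); i++) {
            if (relevant.contains(ranked.get(i))) {
                hits++;
                sum += (double) hits / (i + 1);
            }
        }
        return sum / relevant.size();
    }
}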

2: Bag-of-Visual-Words vs Global Image Descriptors on Two-Stage Multimodal Retrieval

The Bag-of-Visual-Words (BOVW) model is fast becoming a widely used representation for content-based image retrieval, mainly because of its better retrieval effectiveness over global feature representations on collections whose images are near-duplicates of the test queries. In this experimental study we demonstrate that this advantage of BOVW is diminished when visual diversity is enhanced by using a secondary modality, such as text, to pre-filter images. In detail, the TOP-SURF descriptor is evaluated against Compact Composite Descriptors on a two-stage image retrieval system, which first uses the text modality to rank the collection and then performs CBIR only on the top-K items.
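A minimal Java sketch of the two-stage scheme, with hypothetical textScore and visualDistance placeholders standing in for the text retrieval model and the image descriptor comparison:

import java.util.*;

// Sketch of two-stage multimodal retrieval: stage 1 ranks the whole collection
// by a text score, stage 2 re-ranks only the top-K items by visual distance to
// the query image. textScore and visualDistance are hypothetical placeholders.
public class TwoStageRetrieval {

    static List<String> retrieve(String textQuery, double[] queryDescriptor,
                                 Collection<String> collection, int k) {
        // Stage 1: rank by the text modality (higher score = better).
        List<String> textRanked = new ArrayList<>(collection);
        textRanked.sort(Comparator.comparingDouble(
                (String id) -> textScore(textQuery, id)).reversed());

        // Stage 2: CBIR only on the top-K items (lower distance = better).
        List<String> topK = textRanked.subList(0, Math.min(k, textRanked.size()));
        List<String> reRanked = new ArrayList<>(topK);
        reRanked.sort(Comparator.comparingDouble(
                (String id) -> visualDistance(queryDescriptor, id)));
        return reRanked;
    }

    // Hypothetical placeholders for the two modalities.
    static double textScore(String query, String docId) { return 0; }
    static double visualDistance(double[] queryDescriptor, String docId) { return 0; }
}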

Wednesday, April 6, 2011

Hackers Turn a Gmail April Fool’s Joke Into a Reality

Article from http://bits.blogs.nytimes.com/2011/04/04/hackers-turn-a-gmail-april-fools-joke-into-a-reality/?partner=rss&emc=rss

If you happened upon the Internet Friday, you would have been faced with what has now become an annual tradition online: Technology companies trying to one-up each other with April Fool’s jokes posted online.

Google, for one, takes its April Fool’s gags very seriously. This year was no exception. Users of Gmail, Google’s e-mail service, were told about a new product, Gmail Motion, which would allow people to “control Gmail with your body!”

The company created an in-depth video explaining how this new mock service would work, where users could literally bounce around in front of their computer to sift through their inbox. Swinging a fist backward through the air would allow you to reply to a message; swinging two fists would reply-to-all; licking your hand — intended to be a stamp — and then tapping your right knee would send the message.

Of course this was all a joke. But hackers at the University of Southern California Institute for Creative Technologies wanted to make it a reality. To do this, a team of developers took a Microsoft Kinect sensor and some software they had built for previous projects and tied them together to create a fully working version of Gmail Motion.

As you can see from the video above, the same motions Google jokingly presented in its mock Web site all work with the hacked version of the product. The students also took a moment to poke a little fun at Google with the following product description posted with their video:

This morning, Google introduced Gmail Motion, allowing users to control Gmail using gestures and body movement. However, for whatever reason, their application doesn’t appear to work. So, we demonstrate our solution — the Software Library Optimizing Obligatory Waving (SLOOW) — and show how it can be used with a Microsoft Kinect sensor to control Gmail using the gestures described by Google.

Below is the original video Google posted Friday, with Gmail employees explaining how Gmail Motion works.

Tuesday, April 5, 2011

7th INTERNATIONAL SUMMER SCHOOL ON PATTERN RECOGNITION (ISSPR)

4-9 SEPTEMBER 2011, Plymouth, UK
http://pro.expressemail.in/link.php?M=1048309&N=1516&L=707&F=T
Early registration deadline: 22 April, 2011


It is a pleasure to announce the Call for Participation for the 7th International Summer School on Pattern Recognition. I write to invite you, your colleagues, and students within your department to attend this event. In 2010, the 6th ISSPR School held at Plymouth was a major success with over 90 participants. The major focus of the 2011 summer school includes:

- A broad coverage of pattern recognition areas which will be taught in a tutorial style over five days by leading experts. The areas covered include statistical pattern recognition, Bayesian techniques, non-parametric and neural network approaches including Kernel methods, String matching, Evolutionary computation, Classifiers, Decision trees, Feature selection and Dimensionality reduction, Clustering, Reinforcement learning, and Markov models. For more details visit the event website.
- A number of prizes sponsored by Microsoft and Springer for best research demonstrated by participants and judged by a panel of experts. The prizes will be presented to the winners by Prof. Chris Bishop from Microsoft Research.
- Providing participants with knowledge and recommendations on how to develop and use pattern recognition tools for a broad range of applications.

The early bird registration fee for the 2011 event is available until 22nd April 2011, so this is an excellent opportunity for participants to register at an affordable cost. The fee includes registration and accommodation plus meals at the event. The registration process is online through the school website www.patternrecognitionschool.com, which has further details on registration fees. Please note that the number of participants registering each year at the summer school is high, with a limited number of seats available, and therefore early registration is highly recommended. Should you need any help, please do not hesitate to contact the school secretariat at enquiries@patternrecognitionschool.com

Comparative Performance Evaluation of Image Descriptors Over IEEE 802.11b Noisy Wireless Networks

My paper "Comparative Performance Evaluation of Image Descriptors Over IEEE 802.11b Noisy Wireless Networks" has been accepted for inclusion in the proceedings of the 5th FTRA International Conference on Multimedia and Ubiquitous Engineering (MUE 2011).

In this paper we evaluate the image retrieval procedure over an IEEE 802.11b ad hoc network, operating at 2.4 GHz and using the IEEE Distributed Coordination Function (CSMA/CA) as the multiple access scheme. IEEE 802.11 is a widely used network standard, implemented and supported by a variety of devices, such as desktops, laptops, notebooks, and mobile phones, and capable of providing a variety of services, such as file transfer and Internet access. Therefore, we consider IEEE 802.11b to be a suitable technology for investigating the case of conducting image retrieval over a noisy wireless channel. The model we use to simulate the noisy environment is based on the scenario in which the wireless network is located in an outdoor noisy environment, or in an indoor environment with partial line-of-sight (LOS) power. We used a large number of descriptors reported in the literature in order to evaluate which one performs best under these circumstances. Experimental results on a well-known benchmarking database show that the majority of the descriptors exhibit decreased performance when transmitted and used in such noisy environments.
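As a generic illustration of the underlying idea (not the channel model used in the paper), the following Java sketch flips bits of a 3-bit-per-bin quantized descriptor at a given bit error rate; comparing distances between the original and corrupted descriptors (for example with the Tanimoto distance above) then gives a feel for how retrieval rankings can degrade as the error rate grows.

import java.util.Random;

// Generic bit-error illustration: flip each bit of a 3-bit-per-bin quantized
// descriptor with a given bit error rate, producing the descriptor a receiver
// might obtain over a noisy channel.
public class NoisyDescriptorDemo {

    static int[] corrupt(int[] descriptor, double bitErrorRate, Random rng) {
        int[] noisy = descriptor.clone();
        for (int i = 0; i < noisy.length; i++) {
            for (int bit = 0; bit < 3; bit++) {        // 3 bits per bin
                if (rng.nextDouble() < bitErrorRate) {
                    noisy[i] ^= (1 << bit);            // flip this bit
                }
            }
        }
        return noisy;
    }
}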