
Wednesday, October 7, 2015

SIMPLE Descriptors

SIMPLE [Searching Images with MPEG-7 (& MPEG-7 like) Powered Localized dEscriptors] began as a collection of four descriptors [Simple-SCD, Simple-CLD, Simple-EHD and Simple-CEDD (or LoCATe)]. The main idea behind SIMPLE is to use global descriptors as local ones. To do this, the SURF detector is employed to define regions of interest on an image and, instead of the SURF descriptor, one of the MPEG-7 SCD, MPEG-7 CLD, MPEG-7 EHD or CEDD descriptors is used to extract the features of those image patches. Finally, the Bag-of-Visual-Words framework is used to test the performance of these descriptors in CBIR tasks. More recently, SIMPLE was extended from a collection of descriptors into a scheme, i.e. any combination of a detector and a global descriptor. Tests have been carried out with other detectors [the SIFT detector and two random image patch generators, of which the random generator has produced the best results and is the preferred choice], and the performance of the scheme with more global descriptors is currently being tested.
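
To make the idea more concrete, the following is a minimal sketch of the SIMPLE pipeline in Java, using the random-patch variant of the detector and a coarse RGB histogram as a stand-in for a global descriptor such as CEDD or the MPEG-7 SCD. The class and method names are illustrative only and are not taken from the released implementation; the point is simply that a "global" descriptor is computed on each sampled patch, and the resulting per-patch vectors are what the BOVW stage consumes.

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch of the SIMPLE idea: sample patches (here with a random
// patch generator, standing in for the SURF/SIFT detectors) and describe each
// patch with a global descriptor (here a coarse RGB histogram standing in for
// CEDD / MPEG-7 SCD). The per-patch vectors then feed a BOVW pipeline.
public class SimpleSketch {

    // Sample numPatches square patches of size patchSize at random positions.
    // Assumes the image is larger than patchSize in both dimensions.
    static List<float[]> extractLocalFeatures(BufferedImage img, int numPatches, int patchSize) {
        Random rng = new Random(42);
        List<float[]> features = new ArrayList<>();
        for (int p = 0; p < numPatches; p++) {
            int x = rng.nextInt(Math.max(1, img.getWidth() - patchSize));
            int y = rng.nextInt(Math.max(1, img.getHeight() - patchSize));
            features.add(describePatch(img, x, y, patchSize));
        }
        return features;
    }

    // Stand-in "global" descriptor applied locally: a 4x4x4 RGB histogram of the patch.
    static float[] describePatch(BufferedImage img, int x0, int y0, int size) {
        float[] hist = new float[64];
        for (int y = y0; y < y0 + size; y++) {
            for (int x = x0; x < x0 + size; x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) >> 6; // quantize each channel to 4 bins
                int g = ((rgb >> 8) & 0xFF) >> 6;
                int b = (rgb & 0xFF) >> 6;
                hist[(r << 4) | (g << 2) | b] += 1f;
            }
        }
        float total = (float) (size * size);
        for (int i = 0; i < hist.length; i++) hist[i] /= total; // L1 normalization
        return hist;
    }
}
```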

Searching Images with MPEG-7 (& MPEG-7 Like) Powered Localized dEscriptors (SIMPLE)
A set of local image descriptors specifically designed for image retrieval tasks

Image retrieval problems were first confronted with algorithms that tried to extract the visual properties of a depiction in a global manner, following the human instinct of evaluating an image's content as a whole. Experimenting with retrieval systems and evaluating their results, especially on verbose images and images where objects appear with partial occlusions, showed that correctly ranked results owe more to the extraction of an image's salient regions than to its overall depiction. Thus, representing an image by its points of interest proved to be a more robust solution. The SIMPLE descriptors emphasize and incorporate the characteristics that allow a more abstract but retrieval-friendly description of the image's salient patches.

Experiments were conducted on two well-known benchmarking databases. Initially, experiments were performed using the UKBench database. The UKBench image database consists of 10200 images, separated into 2550 groups of four images each. Each group includes images of a single object captured from different viewpoints and under different lighting conditions. The first image of every object is used as a query image. In order to evaluate our approach, the first 250 query images were selected. The search was executed over all 10200 images. Since each ground truth includes only four images, the P@4 measure was used to evaluate the early positions of the ranked lists.
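
For reference, a minimal sketch of the P@4 computation for this protocol is given below; the ranking and ground-truth structures are hypothetical placeholders, not part of the released code.

```java
import java.util.List;
import java.util.Set;

// Hypothetical P@4 evaluation for the UKBench protocol: each query has a
// ground-truth group of four relevant images (the query itself plus three more views).
public class PrecisionAtFour {

    // rankedIds: retrieval result for one query, best match first.
    // relevantIds: the four image ids of the query's ground-truth group.
    static double precisionAt4(List<Integer> rankedIds, Set<Integer> relevantIds) {
        int hits = 0;
        for (int i = 0; i < 4 && i < rankedIds.size(); i++) {
            if (relevantIds.contains(rankedIds.get(i))) hits++;
        }
        return hits / 4.0; // fraction of the top 4 results that are relevant
    }
}
```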

Subsequently, experiments were performed using the UCID database. This database consists of 1338 images on a variety of topics, including natural scenes and man-made objects, both indoors and outdoors. All the UCID images were subjected to manual relevance assessments against 262 selected query images.
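
Since the UCID experiments are scored with mean average precision (MAP, reported in the tables below), a hedged sketch of the average-precision computation for a single query could look like the following; the data structures are again hypothetical placeholders.

```java
import java.util.List;
import java.util.Set;

// Hypothetical average precision for one UCID query: precision is accumulated at
// each rank where a relevant image appears, then divided by the number of relevant
// images in the ground truth. MAP is the mean of this value over all 262 queries.
public class AveragePrecision {

    static double averagePrecision(List<Integer> rankedIds, Set<Integer> relevantIds) {
        double sum = 0.0;
        int hits = 0;
        for (int rank = 0; rank < rankedIds.size(); rank++) {
            if (relevantIds.contains(rankedIds.get(rank))) {
                hits++;
                sum += hits / (double) (rank + 1); // precision at this relevant hit
            }
        }
        return relevantIds.isEmpty() ? 0.0 : sum / relevantIds.size();
    }
}
```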

In the tables that present the results, wherever the BOVW model is employed, only the best result achieved by each descriptor with every codebook size is presented. In other words, for each local feature and each codebook size, the experiment was repeated for all 8 weighting schemes, but only the best result is listed in the tables. Next to each result, the weighting scheme with which it was achieved is noted, using the SMART (System for the Mechanical Analysis and Retrieval of Text) notation.
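
For readers unfamiliar with the SMART notation, the following sketch shows one common weighting of a BOVW histogram, logarithmic term frequency with inverse document frequency and cosine normalization (often written "ltc"). It is only an illustration of how such schemes transform the raw visual-word counts, not a reproduction of the exact eight schemes tested.

```java
// Illustrative "ltc"-style SMART weighting of a raw BOVW histogram:
// logarithmic term frequency, inverse document frequency, cosine normalization.
public class BovwWeighting {

    // tf: raw visual-word counts for one image; df: number of images in the
    // collection containing each visual word; n: number of images in the collection.
    static double[] ltcWeight(int[] tf, int[] df, int n) {
        double[] w = new double[tf.length];
        double norm = 0.0;
        for (int i = 0; i < tf.length; i++) {
            double l = tf[i] > 0 ? 1.0 + Math.log(tf[i]) : 0.0;          // log term frequency
            double idf = df[i] > 0 ? Math.log((double) n / df[i]) : 0.0; // inverse document frequency
            w[i] = l * idf;
            norm += w[i] * w[i];
        }
        norm = Math.sqrt(norm);
        if (norm > 0) {
            for (int i = 0; i < w.length; i++) w[i] /= norm;             // cosine normalization
        }
        return w;
    }
}
```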

Experimental results of all 16 SIMPLE descriptors on the UKBench and UCID datasets. MAP values in bold mark performances that surpass the baseline. Grey-shaded results mark the highest performance achieved per detector.

Read more and download the open source implementation of the SIMPLE descriptors (C#, Java and MATLAB)

http://chatzichristofis.info/?page_id=1479
