
Wednesday, September 25, 2013

Matlab implementation of CEDD

Finally, the MATLAB implementation of CEDD is available on-line. 


The source code is simple and easy for any user to handle. A main function extracts the CEDD descriptor from a given image.

Download the Matlab implementation of CEDD (For academic purposes only)

Few words about the CEDD descriptor:

Descriptors that combine more than one type of feature in a compact histogram can be regarded as belonging to the family of Compact Composite Descriptors (CCDs). A typical example of a CCD is the CEDD descriptor. The structure of CEDD consists of 6 texture areas. In particular, each texture area is separated into 24 sub-regions, with each sub-region describing a color. CEDD's color information results from 2 fuzzy systems that map the colors of the image to a 24-color custom palette. To extract texture information, CEDD uses a fuzzy version of the five digital filters proposed by the MPEG-7 EHD.

The CEDD extraction procedure is outlined as follows: when an image block (a rectangular part of the image) interacts with the system that extracts a CCD, this section of the image simultaneously passes through 2 units. The first unit, the color unit, classifies the image block into one of the 24 shades used by the system. Let the classification be the color $m, m \in [0,23]$. The second unit, the texture unit, classifies this section of the image into the texture area $a, a \in [0,5]$. The image block is then classified into the bin $a \times 24 + m$. The process is repeated for all the blocks of the image. On completion of the process, the histogram is normalized within the interval [0,1] and quantized for binary representation using three bits per bin.
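The per-block bookkeeping described above can be sketched as follows. This is only an illustration: `classify_color` and `classify_texture` stand in for the two fuzzy units (assumed given, not implemented here), and the uniform 3-bit quantizer is a simplification of the quantization actually used by the descriptor.

```python
import numpy as np

def extract_cedd(blocks, classify_color, classify_texture):
    """Sketch of CEDD histogram accumulation (illustrative, not the reference code).

    blocks                 : iterable of image blocks
    classify_color(block)  : returns m in [0, 23] (fuzzy 24-color unit, assumed given)
    classify_texture(block): returns a in [0, 5]  (fuzzy EHD-style texture unit, assumed given)
    """
    hist = np.zeros(6 * 24)
    for block in blocks:
        m = classify_color(block)     # color bin, m in [0, 23]
        a = classify_texture(block)   # texture area, a in [0, 5]
        hist[a * 24 + m] += 1         # combined bin index: a * 24 + m
    # Normalize the histogram into [0, 1].
    if hist.sum() > 0:
        hist = hist / hist.sum()
    # Quantize to 3 bits per bin (8 levels); a uniform quantizer is used
    # here for illustration only.
    quantized = np.minimum((hist * 8).astype(int), 7)
    return quantized
```

The resulting 144-bin vector (6 texture areas × 24 colors) is the descriptor that gets compared between images.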


The most important attribute of CEDD is the very good retrieval results it achieves on various well-known benchmarking image databases. The following table shows the ANMRR results on 3 image databases. The ANMRR ranges from 0 to 1, and the smaller its value, the better the matching quality of the query. ANMRR is the evaluation criterion used in all of the MPEG-7 color core experiments.

[Table: ANMRR results on the three image databases]
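For reference, ANMRR can be computed as in the following sketch. It follows the usual MPEG-7 definition, with the rank window $K(q) = \min(4 \cdot NG(q),\ 2 \cdot GTM)$ and a $1.25 \cdot K(q)$ penalty for relevant items retrieved outside the window; the function and variable names are illustrative.

```python
def anmrr(rankings, ground_truths):
    """Averaged Normalized Modified Retrieval Rank (sketch of the MPEG-7 measure).

    rankings[q]      : full ranked list of item ids returned for query q
    ground_truths[q] : set of relevant item ids for query q
    """
    gtm = max(len(g) for g in ground_truths)           # largest ground-truth set
    nmrr_values = []
    for ranked, gt in zip(rankings, ground_truths):
        ng = len(gt)
        k = min(4 * ng, 2 * gtm)                       # relevant-rank window K(q)
        ranks = []
        for item in gt:
            r = ranked.index(item) + 1                 # 1-based retrieval rank
            ranks.append(r if r <= k else 1.25 * k)    # penalize items outside window
        avr = sum(ranks) / ng                          # average rank AVR(q)
        mrr = avr - 0.5 * (1 + ng)                     # modified retrieval rank MRR(q)
        nmrr = mrr / (1.25 * k - 0.5 * (1 + ng))       # normalized to [0, 1]
        nmrr_values.append(nmrr)
    return sum(nmrr_values) / len(nmrr_values)         # average over all queries
```

Perfect retrieval (all relevant items at the top) yields 0; retrieving no relevant item inside the window yields 1.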


Monday, September 23, 2013

A Multi-Objective Exploration Strategy for Mobile Robots under Operational Constraints

[Graphical abstract]

IEEE Access

Multi-objective robot exploration constitutes one of the most challenging tasks for autonomous robots performing in various operations and environments. The optimal exploration path, however, depends heavily on the objectives and constraints that both these operations and environments introduce. Typical environment constraints include partially known or completely unknown workspaces, limited-bandwidth communications, and sparse or densely cluttered spaces. In such environments, the exploration robots must also satisfy operational constraints, including time-critical goals, kinematic modeling, and resource limitations. Finding the optimal exploration path under these multiple constraints and objectives constitutes a challenging non-convex optimization problem. In our approach, we model the environment constraints as cost functions and utilize the Cognitive-based Adaptive Optimization (CAO) algorithm in order to meet time-critical objectives. The produced exploration path is optimal in the sense of globally minimizing the required time as well as maximizing the explored area of a partially unknown workspace. Since obstacles are sensed during operation, initially planned paths may become blocked, leading to robot entrapment. A supervisor is triggered to signal a blocked passage and subsequently guide the robot out of the basin of the cost function's local minimum. Extensive simulations and comparisons in typical scenarios are presented to show the efficiency of the proposed approach.

Read More

Wednesday, September 18, 2013

3-Sweep: Extracting Editable Objects from a Single Photo

by Tao Chen · Zhe Zhu · Ariel Shamir · Shi-Min Hu · Daniel Cohen-Or

Abstract

We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts: human assistance implicitly segments a complex object into its components and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.

 

Tuesday, September 17, 2013

PhD Position in Multimodal Person and Social Behaviour Recognition

Application Deadline: Tue, 10/01/2013

Location: Denmark

Employer: Aalborg University

At the Faculty of Engineering and Science, Department of Electronic Systems in Aalborg, a PhD stipend in Multimodal Person and Social Behaviour Recognition is available within the general study programme Electrical and Electronic Engineering. The stipend is open for appointment from November 1, 2013, or as soon as possible thereafter.

Job description: The PhD student will work on the research project “Durable Interaction with Socially Intelligent Robots”, funded by the Danish Council for Independent Research, Technology and Production Sciences. This project aims at developing methods to make service robots socially intelligent and capable of establishing durable relationships with their users. This relies on developing the capabilities to sense and express, which will be achieved by the fusion of sensor signals in an interactive way. The PhD student will research technologies for vision-based social behaviour recognition, person identification, and person tracking in the context of human-robot interaction. Multimodal fusion will be carried out in collaboration with another PhD student working on the same research project with a focus on array-based speech processing.

The successful applicant must have a Master's degree in machine learning, statistical signal processing or computer vision. Further information on the scientific aspects of the position may be obtained from Associate Professor Zheng-Hua Tan, Department of Electronic Systems, phone: +45 9940 8686, email: zt@es.aau.dk.

Job URL: Visit the job's url

GrowMeUp

Luxand, Inc., in collaboration with Goldbar Ventures, has released a new application that helps children produce photos of themselves grown up. The new tool is called GrowMeUp. It is based on Luxand's years of experience developing biometric identification and morphing technologies.
You can license the technology used in this application for embedding in your own entertainment applications. Please contact Luxand if you are interested.
About GrowMeUp


Using GrowMeUp could not be simpler (after all, the target audience is very young). The user uploads a picture containing their face, specifies their gender and ethnicity, and chooses among the many professions available. GrowMeUp then automatically identifies the face and its features, “grows it up” by applying Luxand's proprietary aging technologies, and carefully embeds the resulting “adult” face into a photo showing a working professional.
Kids have a wide range of professions to choose from. GrowMeUp contains photos for the following professions: Astronaut, Chef, Doctor, Firefighter, Lawyer, Policeman, Musician, Teacher, Pilot, Soldier, and Model.

The app is available in the Apple App Store. An online version is also available to users without an iOS device at http://growmeup.com/

Tuesday, September 10, 2013

ACM International Conference on Multimedia Retrieval

ACM International Conference on Multimedia Retrieval (ICMR) Glasgow, UK, 1st - 4th April 2014 http://www.icmr2014.org/

Important dates

* October 15, 2013 – Special Session Proposals

* November 1, 2013 – Special Session Selection

* December 2, 2013 – Paper Submission

* January 15, 2014 – Industrial Exhibits and Multimedia Retrieval

The Annual ACM International Conference on Multimedia Retrieval (ICMR) offers a great opportunity for exchanging leading-edge multimedia retrieval ideas among researchers, practitioners and other potential users of multimedia retrieval systems. ICMR 2014 is seeking original, high-quality submissions addressing innovative research in the broad field of multimedia retrieval. We wish to highlight significant contributions addressing the main problem of search and retrieval, but also the related and equally important issues of multimedia content management, user interaction, and community-based management. The conference will be held in Glasgow, UK.

Topics of interest include, but are not limited to:

* Content- and context-based indexing, search and retrieval of images and video

* Multimedia content search and browsing on the Web

* Advanced descriptors and similarity metrics for audio, image, video and 3D data

* Multimedia content analysis and understanding

* Semantic retrieval of visual content

* Learning and relevance feedback in media retrieval

* Query models, paradigms, and languages for multimedia retrieval

* Multimodal media search

* Human perception based multimedia retrieval

* Studies of information-seeking behavior among image/video users

* Affective/emotional interaction or interfaces for image/video retrieval

* HCI issues in multimedia retrieval

* Evaluation of multimedia retrieval systems

* High performance multimedia indexing algorithms

* Community-based multimedia content management

* Applications of Multimedia Retrieval: Medicine, Multimodal Lifelogs, Satellite Imagery, etc.

* Image/video summarization and visualization

Friday, September 6, 2013

The GRIRE Library - Pack Release: Timers Pack


This pack provides components that measure the execution time of other components, for detailed benchmarking. It contains one component of each type; each takes as argument another component of the same type and measures the time that component needs for each process, along with other information such as the average time per image and the minimum/maximum times.

It only works with GRire v.0.0.3 or later, because it needs General Maps to store the results.
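The wrapping idea behind the pack can be illustrated with a minimal Python analogue. The Timers Pack itself is written against GRire's component interfaces; the class and method names below are hypothetical stand-ins, not GRire's actual API.

```python
import time

class TimedComponent:
    """Wraps another component and records per-call timing statistics
    (a sketch of the Timers Pack idea; names are illustrative)."""

    def __init__(self, component):
        self.component = component
        self.times = []                          # elapsed seconds per call

    def process(self, image):
        # Forward the call to the wrapped component, timing it.
        start = time.perf_counter()
        result = self.component.process(image)
        self.times.append(time.perf_counter() - start)
        return result

    def stats(self):
        # Summary statistics over all recorded calls.
        return {
            "calls": len(self.times),
            "average": sum(self.times) / len(self.times),
            "min": min(self.times),
            "max": max(self.times),
        }
```

Because the wrapper exposes the same interface as the component it wraps, it can be dropped into an existing pipeline without changing the surrounding code, which is the same design choice the pack makes by taking a component of the same type as its argument.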

Download link:
https://sourceforge.net/projects/grire/files/PluginPacks/TimersPack/

Evaluation of Image Browsing Interfaces for Smartphones and Tablets

Abstract

In this work we propose an early prototype of a video browser for mobile devices with touchscreens. We concentrate on utilizing the thumbs because of the natural posture in which the device is held when watching videos in landscape mode. The controls are only displayed when the user touches the screen and automatically rearrange themselves depending on the position of the thumbs. A combination of a radial menu and an extended seeker control with hierarchical browsing and bookmarking features enables the user to navigate quickly through videos.

2013 IEEE International Symposium on Multimedia

Read More

Wednesday, September 4, 2013

Xirong Li receives SIGMM Best Ph.D. Thesis Award 2013

Article from http://www.ceessnoek.info/

Congratulations to Dr. Xirong Li for receiving the SIGMM Award for Outstanding PhD Thesis in Multimedia Computing, Communications and Applications 2013. The committee considered Xirong's dissertation, titled “Content-based visual search learned from social media”, worthy of the award because it substantially extends the boundaries for developing content-based multimedia indexing and retrieval solutions. In particular, it provides fresh insights into the possibilities for realizing image retrieval solutions in the presence of the vast information that can be drawn from social media.

The committee considered the main innovation of Xirong’s work to be in the development of the theory and algorithms providing answers to the following challenging research questions:
(a) what determines the relevance of a social tag with respect to an image,
(b) how to fuse tag relevance estimators,
(c) which social images are the informative negative examples for concept learning,
(d) how to exploit socially tagged images for visual search and
(e) how to personalize automatic image tagging with respect to a user’s preferences.

The significance of the developed theory and algorithms lies in their power to enable effective and efficient use of information collected from social media to enhance the datasets used to learn automatic image-indexing mechanisms (visual concept detection), and to make this learning more personalized for the user.

Xirong’s thesis is available from the UvA digital academic repository.

Tuesday, September 3, 2013

Post-Doc position on breast cancer image analysis (Computer Vision)

Location: Spain

Employer: Rovira i Virgili University

The Intelligent Robotics and Computer Vision group at the University Rovira i Virgili, Tarragona, Catalonia (Spain) is looking for candidates to work in a project related to breast cancer image analysis.

The Post-Doc positions are funded by the Government of Catalonia under the following program:

http://www10.gencat.cat/agaur_web/AppJava/english/a_beca.jsp?categoria=postdoctorals&id_beca=19944

Prospective candidates must have obtained a PhD degree in Computer Science (or Computer Engineering) between 01/01/2007 and 31/12/2011.

Interested candidates should contact Dr. Domenec Puig: domenec.puig@urv.cat