Tuesday, May 28, 2013

Multiple Sensorial (MulSeMedia) Multi-modal Media: Advances and Applications

Call for Papers

ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)

Multiple Sensorial (MulSeMedia) Multi-modal Media: Advances and Applications

Multimedia applications have primarily engaged two of the human senses – sight and hearing. With recent advances in computational technology, however, it is now possible to develop applications that also consider, integrate, and synchronize inputs across all senses, including the tactile, olfactory, and gustatory. This integration of multiple senses marks a paradigm shift towards a new mulsemedia (multiple sensorial media) experience, aligning rich data from multiple human senses. Mulsemedia brings with it new and exciting challenges and opportunities in research, industry, commerce, and academia. This special issue solicits contributions dealing with mulsemedia in all of these areas. Topics of interest include, but are not limited to, the following:

  • Context-aware Mulsemedia
  • Metrics for Mulsemedia
  • Capture and synchronization of Mulsemedia
  • Mulsemedia devices
  • Mulsemedia in distributed environments
  • Mulsemedia integration
  • Mulsemedia user studies
  • Multi-modal mulsemedia interaction
  • Mulsemedia and virtual reality
  • Quality of service and Mulsemedia 
  • Quality of experience and Mulsemedia
  • Tactile/haptic interaction
  • User modelling and Mulsemedia
  • Mulsemedia and e-learning
  • Mulsemedia and e-commerce
  • Mulsemedia Standards 
  • Mulsemedia applications (e.g. e-commerce, e-learning, e-health, etc.)
  • Emotional response (e.g. EEG) of Mulsemedia
  • Mulsemedia sensor research 
  • Mulsemedia databases

Important Dates

  • Paper Submission: 14/10/2013
  • First Decision: 13/01/2014
  • Paper Revision Submission: 03/03/2014
  • Second Decision: 28/04/2014
  • Accepted Papers Due: 12/05/2014

Guest Editors

  • George Ghinea (Brunel University, UK)
  • Stephen Gulliver (University of Reading, UK)
  • Christian Timmerer (Alpen-Adria-Universität, Klagenfurt, Austria)
  • Weisi Lin (Nanyang Technological University, Singapore)
Prospective contributors are welcome to contact the guest editors at guesteditors2014@kom.tu-
Submission Procedure
All submission guidelines of TOMCCAP, such as formatting, page limits, and extensions of previously submitted conference papers, must be adhered to. Please see the Authors Guide section of the TOMCCAP website for more details. To submit, please follow these instructions:
  1. Submit your paper through TOMCCAP’s online system. When submitting, please use the Manuscript Type ‘Special Issue: MulSeMedia’ in the ManuscriptCentral system.
  2. In your cover letter, include the information “Special Issue on Mulsemedia” and, if submitting an extended version of a conference paper, explain how the new submission differs from and extends the previously published work.
  3. After you submit your paper, the system will assign a manuscript number to it. Please email this number, together with the title of your paper, to the guest editors.

Bootstrapping Visual Categorization With Relevant Negatives

The paper “Bootstrapping Visual Categorization With Relevant Negatives” by Xirong Li, Cees Snoek, Marcel Worring, Dennis Koelma, and Arnold Smeulders appears in the current issue of IEEE Transactions on Multimedia. Learning classifiers for many visual concepts is important for image categorization and retrieval. As a classifier tends to misclassify negative examples which are visually similar to positive ones, the inclusion of such misclassified, and thus relevant, negatives should be stressed during learning. User-tagged images are abundant online, but which images are the relevant negatives remains unclear. Sampling negatives at random is the de facto standard in the literature. In this paper, we go beyond random sampling by proposing Negative Bootstrap. Given a visual concept and a few positive examples, the new algorithm iteratively finds relevant negatives. Per iteration, we learn from a small proportion of many user-tagged images, yielding an ensemble of meta classifiers. For efficient classification, we introduce Model Compression such that the classification time is independent of the ensemble size. Compared with the state of the art, we obtain relative gains of 14% and 18% on two present-day benchmarks in terms of mean average precision. For concept search in one million images, model compression reduces the search time from over 20 hours to approximately 6 minutes. The effectiveness and efficiency, without the need of manually labeling any negatives, make Negative Bootstrap appealing for learning better visual concept classifiers.
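The selection loop at the heart of Negative Bootstrap can be illustrated with a toy sketch. This is not the paper's implementation (which trains SVM meta classifiers on real image features); here a simple nearest-centroid classifier on 2-D points stands in, and all data is synthetic. The key step is the same: per iteration, the unlabeled examples that the current ensemble scores highest are the ones it misclassifies, i.e. the relevant negatives, and they train the next classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: positives clustered at (2, 2), plus a large pool of unlabeled
# examples treated as candidate negatives (random sampling would mostly
# pick easy, far-away ones).
positives = rng.normal(loc=2.0, scale=0.5, size=(20, 2))
pool = rng.normal(loc=0.0, scale=2.0, size=(500, 2))

def train(pos, neg):
    """Nearest-centroid stand-in for the paper's SVM meta classifiers."""
    w = pos.mean(axis=0) - neg.mean(axis=0)
    b = -0.5 * (pos.mean(axis=0) + neg.mean(axis=0)) @ w
    return lambda x: x @ w + b  # higher score => more "positive"

# Iteration 0: bootstrap from a small random negative sample.
neg = pool[rng.choice(len(pool), 20, replace=False)]
ensemble = [train(positives, neg)]

for _ in range(5):
    # Score the pool with the current ensemble; the candidates scoring
    # highest are being misclassified -- select them as relevant negatives.
    scores = np.mean([f(pool) for f in ensemble], axis=0)
    relevant = pool[np.argsort(scores)[-20:]]
    ensemble.append(train(positives, relevant))

# Final classifier: average the ensemble's scores.
final = lambda x: np.mean([f(x) for f in ensemble], axis=0)
```

The paper's Model Compression step, which collapses such an ensemble so classification cost does not grow with its size, is omitted here.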

Vuforia™ Featured Apps Video Spring 2013

Vuforia™ is the software platform that enables the best and most creative branded augmented reality (AR) app experiences across the widest range of real-world environments, giving mobile apps the power to see.

The Vuforia platform uses superior, stable, and technically efficient computer vision-based image recognition and offers the widest set of features and capabilities, giving developers the freedom to extend their visions without technical limitations. With support for iOS, Android, and Unity 3D, the Vuforia platform allows you to write a single native app that can reach the most users across the widest range of smartphones and tablets.

Watch this video to see a showcase of apps that use the Vuforia platform to see: LEGO® Connect, 4D Anatomy, Maxim Motion, Littlest PetShop, Swivel Gun! Delux, BAND-AID® Magic Vision, Om Nom: Candy Flick, Toyota 86 AR, Ballard + Catalog Companion, LG Showroom, Nike Hyperdunk

Learn more about augmented reality and Vuforia at
If you want to get started developing on Vuforia today, then visit the Vuforia Developer website at

Saturday, May 25, 2013

Drive Awake

"Drive Awake" is the world's first mobile application that helps wake up drowsy drivers, using advanced eye- and face-tracking technology to analyze driver sleepiness while they are behind the wheel.
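The app's actual algorithm is not published, but a common heuristic in this space is PERCLOS: the fraction of recent frames in which the eyes are judged closed. The sketch below is purely illustrative; the eye-aspect-ratio threshold, window size, and alarm level are assumed values, not anything from Drive Awake.

```python
from collections import deque

EAR_CLOSED = 0.2       # assumed eye-aspect-ratio threshold for "closed"
WINDOW = 30            # frames to consider (~1 s at 30 fps)
ALERT_PERCLOS = 0.5    # alarm if eyes are closed in half the window

recent = deque(maxlen=WINDOW)  # rolling record of closed/open per frame

def update(eye_aspect_ratio):
    """Feed one frame's eye-aspect ratio; return True to trigger the alarm."""
    recent.append(eye_aspect_ratio < EAR_CLOSED)
    # Only alarm once a full window of frames has been observed.
    return len(recent) == WINDOW and sum(recent) / WINDOW >= ALERT_PERCLOS
```

In a real system the eye-aspect ratio would come from facial-landmark tracking on each camera frame; here it is simply a number fed in per call.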

Friday, May 10, 2013

Golden Retriever Image Retrieval Engine (GRIRE)

The GRire library is an open source, light-weight framework for implementing CBIR (Content Based Image Retrieval) methods. It contains various image feature extractors, descriptors, classifiers, databases, and other necessary tools. Currently, the main objective of the project is the implementation of the BOVW (Bag of Visual Words) approach, so, in addition to the image analysis tools, the library offers methods from the field of IR (Information Retrieval), e.g. weighting models such as SMART and Okapi, adjusted to the Image Retrieval perspective.
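The BOVW-plus-IR-weighting pipeline can be sketched generically as follows. This is not GRire's API; the toy descriptors, random codebook, and the smoothed tf-idf variant are illustrative stand-ins for the library's extractors and weighting models. Local descriptors are quantized against a codebook of visual words, the resulting histograms are weighted tf-idf style, and images are ranked by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: each "image" is a set of local descriptors (e.g. SIFT),
# drawn from one of three synthetic classes separated by an offset.
images = [rng.normal(size=(50, 8)) + 3.0 * (i % 3) for i in range(9)]
codebook = rng.normal(size=(16, 8))  # visual-word centroids (normally k-means)

def bovw_histogram(desc, codebook):
    # Vector quantization: assign each descriptor to its nearest visual word.
    d = ((desc[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook)).astype(float)

tf = np.array([bovw_histogram(im, codebook) for im in images])
df = (tf > 0).sum(axis=0)                       # document frequency per word
idf = np.log((1 + len(images)) / (1 + df)) + 1  # smoothed idf, never zero
tfidf = tf * idf

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Retrieval: rank the other images by similarity to image 0.
ranking = sorted(range(1, len(images)), key=lambda j: -cosine(tfidf[0], tfidf[j]))
```

Swapping in a different weighting model (e.g. an Okapi-style scheme) only changes how `tf` and `df` are combined, which is exactly the kind of component the library lets you exchange.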

The purpose of the project is to help developers create and distribute their methods and test the performance of their BOVW systems on actual databases with minimum effort, without having to deal with every aspect of the model. For example, a user who has created their own feature extractor and descriptor can integrate it into the GRire library, create a complete BOVW database, and test it using GRire’s weighting and similarity models without having to implement anything else from scratch. So, in addition to a powerful and fast indexing and retrieval mechanism, GRire offers an extremely easy-to-use plugin system.
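The plugin idea, a user-supplied extractor slotting into a fixed retrieval pipeline, can be sketched as below. This is a hypothetical interface for illustration only, not GRire's actual plugin API (GRire itself is a Java library).

```python
from typing import Protocol
import numpy as np

class FeatureExtractor(Protocol):
    """Hypothetical plugin contract: turn an image into a feature vector."""
    def extract(self, image: np.ndarray) -> np.ndarray: ...

class MeanColorExtractor:
    """Trivial example plugin: per-channel mean color as the feature vector."""
    def extract(self, image):
        return image.reshape(-1, image.shape[-1]).mean(axis=0)

def index(images, extractor: FeatureExtractor):
    # The framework depends only on the interface, not the implementation,
    # so any user-written extractor drops in without further changes.
    return np.stack([extractor.extract(im) for im in images])
```

The point is the separation of concerns: the indexing, weighting, and similarity machinery stay fixed while the feature extractor is exchangeable.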

Read more about GRire:

The source code and the compiled jar files of the core and the plugins are available at the Google Code page of GRire.

A list of the available plugin packs along with instructions is here.

For a quick tutorial on using GRire, see the installation instructions and this example.

Tuesday, May 7, 2013