Tuesday, June 29, 2010

Icon Labeler

Icon Labeler is a system developed for the diploma thesis entitled “Development of a semantic search system for Byzantine icons”, authored by the undergraduate student N. Theofylaktos of DUTH under the supervision of associate professor I. Boutalis.

This system aims at high-level feature extraction from Byzantine icons. These features can be used to support ordering and searching and to enrich the existing ontology [1].

The system builds on the established methodology of Byzantine iconography by Dionysius of Fourna, as well as widely used image processing techniques such as face recognition algorithms, noise reduction filters, edge enhancement, and connected component extraction.
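As a toy illustration of one of these building blocks (and not the thesis code itself), connected component extraction on a binary image can be sketched with a breadth-first flood fill:

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground components in a binary image.

    binary: 2D list of 0/1 values. Returns a label image
    (0 = background) and the number of components found.
    """
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                count += 1
                # Breadth-first flood fill from the seed pixel.
                queue = deque([(y, x)])
                labels[y][x] = count
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two separate blobs -> two components
img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, n = connected_components(img)
print(n)  # 2
```

In a character-extraction setting, each labeled component would correspond to one candidate character region of the icon label.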

To extract the icon label, the MPEG-7 Edge Histogram Descriptor is applied to each extracted character and the results are compared with a pre-constructed character database. Finally, the resulting labels are refined through a spell-checking algorithm that compares each label against an extensive dictionary of saints' names.
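The spell-checking algorithm is not detailed here; one common approach, sketched below under that assumption, is to return the dictionary entry with the smallest edit (Levenshtein) distance to the recognized label. The saint names are hypothetical examples, not the thesis dictionary:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct_label(label, dictionary):
    """Return the dictionary entry closest to the recognized label."""
    return min(dictionary,
               key=lambda name: levenshtein(label.lower(), name.lower()))

saints = ["Nikolaos", "Georgios", "Demetrios", "Panteleimon"]
print(correct_label("Nikolaus", saints))  # Nikolaos
```

A misrecognized character (here "u" for "o") thus snaps to the nearest valid saint name, which is why a domain dictionary can repair OCR-style errors cheaply.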


[1] P. Tzouveli, N. Simou, G. Stamou, and S. Kollias, "Semantic Classification of Byzantine Icons," IEEE Intelligent Systems, pp. 35-43, IEEE Computer Society, 2009.

Friday, June 25, 2010

Workshop on Interactive Multimedia Applications (WIMA)

held in conjunction with USAB 2010: HCI in Work & Learning, Life & Leisure, Nov. 4-5, 2010
Klagenfurt, Austria
Multimedia applications have become ubiquitous. People record and watch videos on mobile and stationary devices, use non-linear video editors, and share and organize their personal multimedia archives. Much research has addressed the technical aspects of multimedia, such as streaming, presentation, transcoding, adaptation, and content-based retrieval and analysis. But handling multimedia is an interactive process, and users have to be taken into account. Even consumption is often accompanied by communication, non-linear browsing, and search. This interactivity in multimedia applications is a challenging yet very promising topic, especially since users of multimedia applications tend to accept innovative ideas and fundamental changes more readily.
This workshop aims to bring together researchers and practitioners from the fields of multimedia and human-computer interaction (HCI). We aim to discuss recent scientific advances as well as first results of early-stage ("greenhouse") work in the field of interactive multimedia applications. We also encourage the submission of papers presenting studies on interactivity in multimedia systems or discussing the development of applications in this field.

Topics of interest

Topics include the aspects of interactivity in the following fields:
•    Multimedia Retrieval, Browsing & Navigation
•    User Intentions in Multimedia Search, Annotation & Production
•    Multimedia Production and Post-Production
•    Multimedia in Specialized Domains (e.g. Medical Scenarios, Sports, Security, …)
•    Image, Audio and Video Analysis
•    Mobile Multimedia Applications
•    Multimedia on the Web
•    Social Multimedia
•    Multimedia Management and Databases
•    Distributed Multimedia Systems

Important Dates

June 30, 2010: Deadline for Paper Submission
July 23, 2010: Notification of Acceptance/Rejection
August 15, 2010: Camera-Ready Deadline
November 4-5, 2010: USAB 2010: HCI in Work & Learning, Life & Leisure


Authors are invited to submit papers in the following categories:
•    Full papers (14-20 pages)
•    Short papers (6-14 pages)
•    Posters (4 pages)
All papers have to be formatted according to the Springer LNCS Style (LaTeX, MS-Word).

Submission is handled via EasyChair.

Program Committee

•    Dalibor Mitrovic, TU Wien, AT
•    Frank Hopfgartner, Glasgow University, UK
•    Harald Kosch, University of Passau, DE
•    Mario Döller, University of Passau, DE
•    Markus Strohmaier, Graz University of Technology, AT
•    Oge Marques, Florida Atlantic University, US
•    Ralf Klamma, RWTH Aachen, DE
•    Vincent Charvillat, ENSEEIHT Toulouse, FR
•    Yiwei Cao, RWTH Aachen, DE
•    Yu Cao, California State University of Fresno, US
•    Tao Mei, Microsoft Research Asia, P.R. China

Workshop Chairs

•    Klaus Schöffmann, Klagenfurt University, AT
•    Laszlo Böszörmenyi, Klagenfurt University, AT
•    Mathias Lux, Klagenfurt University, AT

AIAR Workshop 2010

AIAR is a three-day workshop that will bring together researchers and students with a common interest in image annotation and retrieval, with the goal of exchanging ideas and promoting collaboration. The workshop format includes keynote tutorials in the morning and oral presentations and posters in the afternoon. The workshop focuses on image annotation and on combining textual and visual information in the multimedia image retrieval task. We invite researchers and students to submit extended abstracts of their work. Accepted abstracts will be presented at the workshop and published online.

About TIA

TIA is INAOE's research group on texto + imágenes + aprendizaje (text + images + machine learning). It was established in July 2006 with the goal of doing research that helps bridge the semantic gap between low-level features (extracted from images) and high-level concepts (in which users are interested) for multimedia image retrieval.

Our research interests are in automatic segmentation; automatic image annotation; content-based, text-based, and annotation-based image retrieval; and benchmarking of multimedia image retrieval. Please check the TIA website for further details on past and ongoing projects.

  • Extended abstract submission: August 15, 2010
  • Acceptance notification: September 15, 2010
  • Deadline for scholarship application: September 16, 2010
  • Camera-ready: September 30, 2010
  • Electronic publication: October 4, 2010
  • Workshop: October 6-8, 2010
  • Journal special issue publication: mid-2011

Wednesday, June 23, 2010

MMSP 2012-The Workshop on Multimedia Signal Processing

The Multimedia Signal Processing Technical Committee (MMSP-TC) of the IEEE Signal Processing Society, which is the organizer of the MMSP workshop, invites proposals to host the 2012 workshop edition. The primary aim of the MMSP workshop is to promote the advancement of multimedia signal processing research and technology with special emphasis on the interaction, coordination, synchronization, and joint processing of multimodal signals.

Proposals are open to all regions, but in keeping with the practice of regional rotation, preference will be given to proposals from Regions 1-7 and 9 (the Americas). Proposals should be submitted electronically to the MMSP-TC Chair, Dr. Philip A. Chou, by September 15, 2010.

Proponents are strongly encouraged to present their proposals at the next TC meeting, to be held during MMSP 2010 in Saint-Malo, France. Further details on the submission requirements and schedule are available online.

Proponents are also encouraged to visit the MMSP-TC web site for an overview of TC activities and the history of prior workshops.

Tuesday, June 22, 2010

6th International Summer School on Pattern Recognition ISSPR 2010

5-10 September 2010, Plymouth, UK

Early registration deadline: 30th June, 2010

ISSPR2010 Organising Committee

It is a pleasure to announce the Call for Participation for the 6th International Summer School on Pattern Recognition. I write to invite you, your colleagues, and students within your department to attend this event. In 2007, the 5th ISSPR School held at Plymouth was a major success, with over 100 participants. The major focus of the 2010 summer school includes:

- A broad coverage of pattern recognition areas which will be taught in a tutorial style over five days by leading experts. The areas covered include statistical pattern recognition, Bayesian techniques, non-parametric and neural network approaches including kernel methods, string matching, evolutionary computation, classifiers, decision trees, feature selection and dimensionality reduction, clustering, reinforcement learning, and Markov models.

- A number of prizes sponsored by Microsoft and Springer for best research demonstrated by participants and judged by a panel of experts. The prizes will be presented to the winners by Prof. Chris Bishop from Microsoft Research.

- Providing participants with knowledge and recommendations on how to develop and use pattern recognition tools for a broad range of applications.

The registration fee for the 2010 event has been frozen at its 2007 level, so this is an excellent opportunity to register at an affordable cost. The fee covers registration, accommodation, and meals at the event. Registration is online through the school website, which has further details on the fees. Please note that the number of participants registering each year is high and seats are limited, so early registration is highly recommended.

Monday, June 21, 2010

imageCLEF 2010


Minutes before our submission!!! Good luck to us!!!!

Thursday, June 17, 2010

Pixolution makes finding images fun

Pixolution received a lot of attention at the 2010 CEPIC conference. The organic-looking flow of images on the screen drew a constant stream of people to the company's stand, which means not everyone got to see the live demo. We asked the co-founder and CEO of Pixolution to prepare a demo for those who didn't make it or couldn't attend the conference. In this demo, prepared exclusively for visitors of Fast Media Magazine, Prof. Dr. Kai Barthel walks through all the main services in three minutes (yes, it was a challenge; there is a lot to go through). The video will give you an introduction, and if you want to dig deeper you can do so on the Pixolution website, where a number of videos and demos are available.

Pixolution's mission is "to manage visual content visually". The company sees applications with photo agencies, image search engines, photo-kiosk systems, online shops, online photo ordering, imaging software, and corporations. Pixolution was founded in 2009 in Berlin, Germany, and cooperates closely with the University of Applied Sciences Berlin (Hochschule für Technik und Wirtschaft, HTW Berlin), where the well-known and successful image search engine and earlier versions of ImageSorter were developed.


Wednesday, June 16, 2010

Parametric Reshaping of Human Bodies in Images

Article from ACM Transactions on Graphics (Proceedings of SIGGRAPH 2010)

This new parametric reshaping technique allows users to easily reshape a human body in a single image (leftmost) simply by manipulating a small set of sliders corresponding to semantic attributes such as height, weight, and waist girth.


The authors present an easy-to-use image retouching technique for realistic reshaping of human bodies in a single image. A model-based approach is taken, integrating a 3D whole-body morphable model into the reshaping process to achieve globally consistent editing effects. A novel body-aware image warping approach reliably transfers the reshaping effects from the model to the image, even under moderate fitting errors. Thanks to the parametric nature of the model, the technique parameterizes the degree of reshaping by a small set of semantic attributes, such as weight and height. This allows easy creation of the desired reshaping effects by changing the full-body attributes, while producing visually pleasing results even for loosely dressed humans in casual photographs with a variety of poses and shapes.
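The paper's actual model is a 3D whole-body morphable model fitted to the photograph; purely as a toy illustration of the parametric idea (all names and numbers below are hypothetical), a shape can be expressed as a mean plus attribute-driven weighted basis offsets:

```python
def reshape(mean, bases, slider_map, sliders):
    """Toy morphable-model deformation.

    mean: flattened vertex coordinates of the average shape.
    bases: list of offset vectors, one per shape basis.
    slider_map: rows mapping each basis to the semantic sliders (linear model).
    sliders: semantic attribute deltas, e.g. [d_height, d_weight].
    """
    # Map semantic attributes to basis weights.
    weights = [sum(m * s for m, s in zip(row, sliders)) for row in slider_map]
    # Deform: mean shape plus weighted sum of basis offsets.
    return [v + sum(w * b[i] for w, b in zip(weights, bases))
            for i, v in enumerate(mean)]

# Two vertices (as scalars), two bases, two sliders -- all made-up values.
shape = reshape(mean=[0.0, 0.0],
                bases=[[1.0, 0.0], [0.0, 1.0]],
                slider_map=[[1.0, 0.0], [0.0, 2.0]],
                sliders=[0.5, 1.0])
print(shape)  # [0.5, 2.0]
```

The key property the paper exploits is exactly this linearity: moving one semantic slider produces a globally consistent deformation of the whole body, rather than a local edit.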


Monday, June 14, 2010

Microsoft Kinect gets official, Video Chat announced

Article From

You knew it was coming, right? Hot on the heels of getting leaked a wee bit early, Microsoft has made official the rebadging of a device desperately seeking a new name: "Project Natal" is no more, replaced by Microsoft Kinect. At a circus- and celebrity-filled affair, MS wrapped everyone in high-tech ponchos (pictured after the break courtesy of Joystiq) and then took the wraps off the new title. Quite a few game demos were shown, ranging from Star Wars to tiger petting, the Kinect interface to the Dashboard was shown (said by some to be Minority Report-like), and a video chat app called, wait for it, Video Chat. Through here you can naturally talk to friends (up to four people at once was "shown") and also share photos.
Sadly, no hands-on time was given, nor did MS reveal the two crucial bits of information we're waiting for: price and date. Naturally a holiday release is expected, to give the Xbox 360 a nice sales boost, but we're hearing price rumors as high as $150. These choice bits of intel will surely be unveiled at Microsoft's event tomorrow -- if someone doesn't beat 'em to it. The hardware is still looking exactly like the early picture above, shattering hopes of a slimmer design to match the new Xbox 360 Slim.
Update: We've got official photos now, though solid textual info is still scarce. Stand by!
Update 2: So we're out of the wild, cult-like experience that was Microsoft's Kinect unveiling. Microsoft still has a lot of details to reveal, but there are a few things we gleaned from watching the demos:

  • Almost everything was one person at a time, particularly in the Kinect Sports games. Even a game like beach volleyball or soccer was boiled down to individual "moments" of interaction that get strung together into some sort of competition. Even the running in place games were one at a time, though the river rafting and mining cart games (both with a similar mechanic of jumping and ducking through an obstacle course while picking up tokens) could be played with two people at a time. You can at least play games like volleyball simultaneously with someone else over Xbox Live.
  • An interesting mechanic we saw was a second player "jumping in" to a game. In the mining cart scenario, when the second player jumped in it immediately went split screen, while in soccer different players took turns by just jumping into position. Sure, some of this stuff was edited for our benefit, but it seems Microsoft is working to make the introduction of a second player or the switching between players something less button-heavy.
  • The Star Wars game was pretty badass-looking -- you play a Jedi, rushing down stormtroopers and deflecting laser bolts left and right, wielding a few Force powers, and confronting a certain deep-voiced Sith Lord for a one-on-one duel. Based on the gestures and action we saw, though, it was a pretty heavily scripted experience. Still, there's no scripting a two-handed light saber grip, and that particular action looked like everything we've ever wanted in a Star Wars game.
  • The yoga game is actually a pretty smart use of the infrared and joint detection software we espied previously. Positions were "checked" by points on the joint -- making it certainly harder to fake the moves on Wii Fit -- and it seemed to have a tai chi element to it. Your avatar glowed a more intense red based on your three-dimensional approximation -- bright red for hands stretched forward, for example.
  • Next up: Kinectimals, a baby tiger pet simulator. You can scratch its ears, snuggle, and teach the little guy to jump and play dead. Adorable? Dangerously so. No one can tell us the developer, but based on the lighting effects, art style, and similarities to the previously-shown Milo, we'd wager a guess that it was Lionhead Studios.
  • The Kinect menu interface is about as simple as could be. You wave your hand to control a glowing cursor of sorts, and you push forward to "click" on the element you want. Of course, there's also a very simplified version of the Dashboard to go along with this control mechanism, so it's unclear if you'll be able to do everything via subtle hand waves, but the Twitter, Facebook, Zune and Netflix icons were clearly present.
  • The MTV Games-developed Dance Central has some on staff divided -- only Ross will actually admit to being interested in playing it. A series of dance moves are presented, including elbow jabs, swinging leg, guitar, "rocking out" (with your hand in the air). The art style is akin to Rock Band / Guitar Hero, and to be fair, this is probably one of those games that can't be done as well on any other console.

Monday, June 7, 2010

IbPRIA 2010

5th Iberian Conference on Pattern Recognition and Image Analysis
Las Palmas de Gran Canaria, Spain.
June 8-10, 2011

IbPRIA is an international event co-organised every two years by the Spanish and Portuguese Associations for Pattern Recognition and sponsored by the IAPR (International Association for Pattern Recognition). IbPRIA is a single-track conference of high-quality, previously unpublished papers, presented either orally or as posters. It is intended as a forum for research groups, engineers, and practitioners to present recent results, algorithmic improvements, and promising future directions in pattern recognition and image analysis. The conference is referenced in the CORE Conference Ranking.


The conference is looking for new theoretical results, techniques, and applications in any aspect of pattern recognition and image analysis, including but not restricted to the following topics:

  • Pattern Recognition
  • Image Analysis
  • Computer Vision
  • Multimedia Systems
  • Statistical and Structural Pattern Recognition
  • Machine Learning and Data Mining
  • Bioinformatics
  • Image Coding and Processing
  • Shape and Texture Analysis
  • Information Systems
  • Biometric Technologies
  • Speech Recognition
  • Document Processing
  • Character and Text Recognition
  • Robotics
  • Remote Sensing
  • Industrial Applications of Pattern Recognition
  • Special Hardware Architectures

Friday, June 4, 2010

Monkey Controls Robotic Arm With His Brain

Two monkeys with tiny sensors in their brains have learned to control a mechanical arm with just their thoughts, using it to reach for and grab food and even to adjust for the size and stickiness of morsels when necessary, scientists reported on Wednesday.

The report, released online by the journal Nature, is the most striking demonstration to date of brain-machine interface technology. Scientists expect that technology will eventually allow people with spinal cord injuries and other paralyzing conditions to gain more control over their lives.

The findings suggest that brain-controlled prosthetics, while not practical, are at least technically within reach.

In previous studies, researchers showed that humans who had been paralyzed for years could learn to control a cursor on a computer screen with their brain waves and that nonhuman primates could use their thoughts to move a mechanical arm, a robotic hand or a robot on a treadmill.

The new experiment goes a step further. In it, the monkeys’ brains seem to have adopted the mechanical appendage as their own, refining its movement as it interacted with real objects in real time. The monkeys had their own arms gently restrained while they learned to use the added one.