
Thursday, May 28, 2009

3rd Workshop on Many Faces of Multimedia Semantics

Hyatt Regency Mission Bay Spa and Marina
San Diego, California, USA
December 14-16, 2009
https://www-itec.uni-klu.ac.at/ms09/

The 3rd Workshop on Many Faces of Multimedia Semantics will be a one-day
workshop held during the IEEE International Symposium on Multimedia
(ISM’09, http://ism2009.eecs.uci.edu/). It will take place in December
2009 in San Diego, USA.

Objectives
==========

Information is increasingly becoming ubiquitous and all-pervasive, with
the World-Wide Web as its primary repository. The rapid growth of
information on the Web creates new challenges for information retrieval.
Recently, there has been a growing interest in the investigation and
development of the next generation web – the Semantic Web.

Multimedia information has always been part of the Semantic Web
paradigm, but, in general, has been discussed very simplistically by the
Semantic Web community. We believe that, rather than trying to discover
a media object’s hidden meaning, one should formulate ways of managing
media objects so as to help people make more intelligent use of them.
The relationship between users and media objects should be studied.
Media objects should be interpreted relative to the particular goal or
point-of-view of a particular user at a particular time.

Content-based descriptors are necessary to this process. At the same
time, such descriptions are definitely not sufficient. Context is also
important, and should be managed. The area of emergent multimedia
semantics has been initiated to study the measured interactions between
users and media objects, with the ultimate goal of trying to satisfy the
user community by providing them with the media objects they require,
based on their individual previous media interactions.

The arrival of Web 2.0 has added new paradigms to the media mix. Such
concepts as folksonomies, a form of emergent semantics, introduce a
collaborative, dynamic approach to the generation of ontologies and
media object semantics. That such an approach results in a stable
semantics, though surprising, has been recently demonstrated.

As one can see, the field of multimedia semantics is in great flux at
the present time. Approaches which seek to unify these disparate
disciplines are especially necessary.

This will be a one-day workshop to be held during ISM’09. Besides the
standard research contributions, there will also be a poster session and
a session devoted to the presentation of results from current Ph.D.
students, as well as a keynote talk. Based on last year’s workshop, the
keynote will include discussions of necessary research agendas which
will bring together important subsets of the research communities
working on multimedia semantics, the Semantic Web, and Web 2.0. Best
papers of this workshop will be published in IEEE Multimedia.

List of Topics
==============

We welcome all papers relevant to topics in multimedia semantics,
including those at the confluence of multimedia information management,
the Semantic Web, and Web 2.0, such as:

    * Computational semiotics
    * Conceptual clustering
    * Emergent semantics in the social web
    * Event representation and detection
    * Folksonomies in social media sharing
    * Genre detection
    * Industrial use-cases and applications
    * Intelligent browsing and visualization
    * Media ontology learning
    * Media mining in the social web
    * Modeling and recognition of visual objects and actions
    * Multimedia management and consumption in communities
    * Multimedia extraction and social annotation
    * Multimedia ontologies for the social web
    * Multisensory data integration and fusion for decision making
    * Perception and cognition in the context of Web 2.0
    * Semantic metadata for mobile applications
    * Semantics enabled multimedia applications (including search,
      browsing, retrieval, visualization) for the social web
    * Social networking
    * Spectral methods
    * Standards for the social web
    * User interfaces

Important Dates
===============

    * 11:59 PM EST, July 20, 2009 — Submissions due
    * August 20, 2009 — Acceptance notification
    * September 25, 2009 — Camera-ready papers due

Tuesday, May 26, 2009

Firefox add-on puts Wolfram Alpha in your Google

If you've casually been using Wolfram Alpha, but don't want to give up your Google reliance, there's hope for you yet. A new Firefox extension lets you keep using Google, while showing Wolfram Alpha results on the side of the page.

I've been using it all morning and it's a nice addition if you're a search enthusiast. Your Google results come in just as quickly as they usually do, while the Wolfram ones catch up on the side. This makes it a good way to test some of the limitations of the new search engine, as it only covers so many topics. My favorite use for it is to pull up nutritional information for fast food and cast lists for movies. Both are activities that usually require going off the results page to find the information I was looking for, whereas Wolfram simply grabs and displays it in an orderly fashion.

The only drawback I've run into with this extension is that it can clip off the bottom of the Wolfram Alpha results unless you've got Google set to show 20 or more search results per page. On some of the longer entries this means you're not seeing potentially important information. On the plus side, there's a quick link to redo the search in Wolfram Alpha in a different browser tab.

Note: This extension is experimental, which means you need to be registered with Mozilla's add-ons directory to install it in your browser.


Call for Open Source Software Competition: ACM MM

The ACM Multimedia Open-Source Software Competition celebrates the invaluable contribution of researchers and software developers who advance the field by providing the community with implementations of codecs, middleware, frameworks, toolkits, libraries, applications, and other multimedia software. This year will be the sixth year in running the competition as part of the ACM Multimedia program.

To qualify, software must be provided with source code and licensed in such a manner that it can be used free of charge in academic and research settings. For the competition, the software will be built from the sources. All source code, license, installation instructions and other documentation must be available on a public web page. Dependencies on non-open source third-party software are discouraged (with the exception of operating systems and commonly found commercial packages available free of charge). To encourage more diverse participation, previous years’ non-winning entries are welcome to re-submit for the 2009 competition. Student-led efforts are particularly encouraged.

Authors are highly encouraged to prepare as much documentation as possible, including examples of how the provided software might be used, download statistics or other public usage information, etc. Entries will be peer-reviewed to select entries for inclusion in the conference program as well as an overall winning entry, to be recognized formally at ACM Multimedia 2009.  The criteria for judging all submissions include broad applicability and potential impact, novelty, technical depth, demo suitability, and other miscellaneous factors (e.g., maturity, popularity, student-led, no dependence on closed source, etc.).

Authors of the winning entry, and possibly additional selected entries, will be invited to demonstrate their software as part of the conference program.  In addition, accepted overview papers will be included in the conference proceedings.

Important dates:

  • Extended to June 10, 2009 - Submission Deadline
  • July 10, 2009 - Notification of Acceptance
  • July 24, 2009 - Final Camera-Ready Version
  • Oct 19-24, 2009 - Presentation/demonstration at main conference

Location & Time: ACM International Conference on Multimedia 2009, Beijing Hotel, Beijing, China, October 19 - 24, 2009.

see also http://www.acmmm09.org/

Mammogram Enhancer: Enhancing underexposed mammograms for low-dose mammographic applications

In order to minimize the dose a patient receives, many digital mammograms are purposely underexposed. Mammogram Enhancer is a software tool that enhances the underexposed radiographic images produced by low-dose mammographic applications. Its core algorithm employs characteristics of the center-surround cells of the Human Visual System in order to achieve high-quality local contrast and dynamic range compression, maximizing the visual information available for a correct diagnosis. Additionally, it can be embedded in low-dose digital mammographic units, since its low computational complexity allows fast execution times on conventional microprocessors.
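To illustrate the general center-surround principle (this is a toy sketch of the idea, not the Mammogram Enhancer algorithm itself), each pixel can be compared against the average of its local "surround": the log-ratio boosts faint local contrast while compressing the overall dynamic range. A minimal Python version on a 1-D signal:

```python
import math

def center_surround_enhance(pixels, radius=1):
    """Toy 1-D center-surround enhancement (Retinex-style log-ratio).

    Each output value is log(center) - log(surround mean), so a pixel
    that differs from its neighborhood stands out even when the
    absolute intensities are low (underexposed)."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        surround = sum(pixels[lo:hi]) / (hi - lo)
        # +1 avoids log(0) on fully dark pixels
        out.append(math.log(pixels[i] + 1.0) - math.log(surround + 1.0))
    return out

# A dim edge in an underexposed signal produces a clear response:
print(center_surround_enhance([10, 10, 12, 30, 30, 30]))
```

Flat regions map to zero regardless of their absolute brightness, which is exactly the dynamic-range-compression property the post describes.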

http://sites.google.com/site/vonikakis/software

2009 International Conference on Information Theory and Engineering (ICITE 2009), Malaysia

13 to 15 November 2009, Kota Kinabalu
Website: http://www.iacsit.org/icite/index.htm
Contact name: Conference Secretary
Sponsored by: IACSIT, Putra University Malaysia
Enquiries: icite@vip.163.com

The idea of the conference is for scientists, scholars, engineers and students from universities around the world and from industry to present ongoing research activities, and hence to foster research relations between universities and industry.

Papers are invited on research in Information Technology, including but not limited to the following topics:

Artificial Intelligence
Algorithms and Techniques
3G & 4G Mobile Communication Services
Agents and Multi-Agent systems for ICT
Integrated Circuits for communications
Antennas & Propagation
Automation, Control and Robotics
Bioinformatics and Bioengineering
Biosignal Processing
Business Information Systems
Broadband & Intelligent networks
Computational Intelligence
Communication Systems
Data Base Management
Data Mining and Data fusion
E-Commerce & E-government
E-Health & Biomedical applications
E-Learning & E-Business
Emerging technologies & Applications
Fuzzy, ANN & Expert Approaches
Grid and Cluster Computing
ICT & Banking
ICT & Education
ICT & Intelligent Transportation
ICT in Environmental Sciences
Image Analysis and Processing
Image & Multimedia applications
Information & data security
Information indexing & retrieval
Information Processing
Information systems & Applications
Internet applications & performances
Knowledge Based Systems
Knowledge Management
Knowledge Management & Decision Making
Machine Learning
Machine Vision & Remote sensing
Management Information Systems
Mobile networks & services
Network Management and services
Networking theory & technologies
Next generation network
Optical Communications
Pattern Recognition
QoS management
Satellite & Space Communications
Signal & Image Processing
Speech and Audio Processing
Software Engineering and Formal Methods
Systems & Software Engineering
Web Engineering

Important Dates
Paper submission: June 15, 2009
Notification of acceptance: July 5, 2009
Final paper submission: July 30, 2009
Authors’ registration: July 30, 2009
Submission Methods
1. Electronic Submission System (.pdf)

Thursday, May 21, 2009

Research positions in Video Analysis at AIIA lab

A number of research positions have become available in the Artificial Intelligence and Information Analysis (AIIA) Laboratory at the Department of Informatics of the Aristotle University of Thessaloniki, Greece:

  • Postdoctoral researchers
  • PhD students holding an MSc degree or Diploma in Electrical Engineering / Computer Science / Computer Engineering or equivalent.
  • System administrator/programmer

The AIIA Lab profile and related information can be found at http://www.aiia.csd.auth.gr. The positions are funded by several competitive FP6 R&D projects (Networks of Excellence and Integrated Projects funded by the European Union). The general research topic is digital signal / image / video processing and analysis, computer vision, and graphics. An indicative list of possible research topics is the following:

  • Digital Image/Video Analysis

The exact research topic of each new researcher will be chosen to match his/her experience, aiming at maximum productivity. The positions are financed by EU research projects, and appointments can be extended for 3 years or more. Proven research experience in one of the following fields is highly desired: digital image processing, computer vision, graphics, interfaces, or signal processing, together with very good knowledge of English, C/C++ programming, and a strong interest in academic research.

The deadline for the above positions is 31 May 2009.

Only prospective applicants who are EU citizens may apply.

Wednesday, May 13, 2009

New version of img(Rummager)

New Features

Demos

1. Binary, nearest-neighbor, one-dimensional automaton demo

The simplest type of cellular automaton is a binary, nearest-neighbor, one-dimensional automaton. Such automata were called "elementary cellular automata" by S. Wolfram, who has extensively studied their amazing properties. There are 256 such automata, each of which can be indexed by a unique binary number whose decimal representation is known as the "rule" for the particular automaton.
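The rule-number indexing is easy to make concrete: each of the eight possible three-cell neighborhoods is read as a binary number 0-7, and bit v of the rule number gives the new state for neighborhood v. A short Python sketch (an illustration of the rule scheme, not the img(Rummager) demo code):

```python
def eca_step(cells, rule):
    """One step of an elementary cellular automaton with periodic boundary.

    `rule` is the Wolfram rule number (0-255); `cells` is a list of 0/1.
    Each neighborhood (left, center, right) is read as a 3-bit number,
    and that bit of `rule` selects the cell's next state."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 110 applied once to a single seeded cell:
cells = [0, 0, 0, 1, 0, 0, 0]
print(eca_step(cells, 110))  # → [0, 0, 1, 1, 0, 0, 0]
```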

CA_Demo1

2. A cellular automaton demo for the propagation of circular fronts. Read More 

CA_Demo2

Extras
3. img(Encryption)
A new method for visual multimedia content encryption using cellular automata. The encryption scheme is based on the application of an attribute of the CLF XOR filter, according to which the original content of a cellular neighborhood can be reconstructed following a predetermined number of repeated applications of the filter. The encryption is achieved using a key image of the same dimensions as the image being encrypted. This technique is accompanied by the one-time pad (OTP) encryption method, rendering the proposed method reasonably powerful, given the very large number of resultant potential security keys. The proposed method is further strengthened by the fact that the resulting encrypted image for a given key image is different each time, since its result depends on a random number generator.
A semi-blind source separation algorithm is used to decrypt the encrypted image. The result of the decryption is a loss-less representation of the encrypted image. Simulation results for grayscale and color images demonstrate the effectiveness of the proposed encryption method.
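The CLF XOR filter itself is not reproduced here, but the property that makes XOR-with-key schemes decryptable at all can be sketched in a few lines of Python, with the key image played by a random byte string (a one-time-pad-style key, as in the method above):

```python
import os

def xor_encrypt(image, key):
    """XOR each pixel byte with the corresponding key-image byte.

    XOR is an involution: applying the same key twice restores the
    original, which is why the same routine both encrypts and decrypts."""
    return bytes(p ^ k for p, k in zip(image, key))

image = bytes([10, 200, 33, 7])        # toy 4-pixel grayscale "image"
key = os.urandom(len(image))           # random key image, same size
cipher = xor_encrypt(image, key)
assert xor_encrypt(cipher, key) == image  # lossless decryption
```

With a fresh random key per image, the same plaintext encrypts differently each time, matching the behavior described for the proposed method.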

img(encryption)

Improvements
4. Check for updates menu

Laboratory
5. New tab "Color Spaces"
6. New tab "Shape Features"
7. Get the image projections
8. Get the TSRD Descriptor
9. Find the connected components

img Retrieval
10. Reset Ground Truth bug (X23226) fixed
11. Preview the first 10 results as a slide show

Read More and Download

Monday, May 11, 2009

Developing a Document Image Retrieval System Part 2

The DLL file which extracts the descriptor from a word image is ready. The descriptor, called the Texture and Shape Representation Descriptor (TSRD), is based on the following two publications (a conference paper and a journal article):

A demo/showcase application of the above descriptor is located at: http://orpheus.ee.duth.gr/irs2_5

The descriptor can also be used (in addition to word spotting) for the retrieval of signatures, gestures and other pattern recognition applications. All you need is a black-and-white 24-bit image, where black represents the object and white the background.

The TSRD DLL file can be downloaded from here (mirror). It provides the ability to enable/disable the features that construct the descriptor. For example, the Down and Upper Grid Features do not work well for signature recognition. The structure of the descriptor is depicted in the following image:

In addition to the TSRD DLL file, an example solution for Visual Studio 2008 is provided. This solution uses the TSRD in two scenarios.

The first scenario is to calculate the descriptor of a word image using the full feature set. The code is:

// Creating the TSRD object
TSRD.TSRD myTSRD = new TSRD.TSRD();

// Load the image from the pictureBox_word object. Alternatively, you can load it from the hard disk.
// In the end you need a Bitmap object that represents a black-and-white 24-bit image.
Bitmap myImage = (Bitmap)pictureBox_word.Image.Clone();

// Get the descriptor through the TSRD object
double[] myTSRDescriptor = myTSRD.GetTSRDescriptor(myImage);

The second scenario is to calculate the descriptor of a signature. In this scenario the Upper and Down Grid Features (UGF and DGF) are disabled:

// Creating the TSRD object
TSRD.TSRD myTSRD = new TSRD.TSRD();

// Disable the Down Grid Features (DGF), Up Grid Features (UGF) and the trimming operation
myTSRD.UseDGF = false; // disable DGF
myTSRD.UseUGF = false; // disable UGF
myTSRD.TrimImage = false; // disable the trimming operation

// Load the image from the pictureBox_signature object. Alternatively, you can load it from the hard disk.
// In the end you need a Bitmap object that represents a black-and-white 24-bit image.
Bitmap myImage = (Bitmap)pictureBox_signature.Image.Clone();

// Get the descriptor through the TSRD object
double[] myTSRDescriptor = myTSRD.GetTSRDescriptor(myImage);
This is a screenshot of the application in the example solution. The solution can be downloaded from here (mirror).

The evolution of the TSRD is the Compact Shape Portrayal Descriptor. This descriptor is more in line with the compact descriptors of MPEG-7: it is fast to calculate, quantized and small (46 bins/elements, 3 bits per bin/element). A demo/showcase application of the descriptor is located at: http://orpheus.ee.duth.gr/cspd/. It uses the Windows Presentation Foundation (WPF), found in .NET 3.5 SP1, for the interaction with the user. It is still a work in progress (but I am in the final stages). The requirements to run the program are:

  • Firefox 2 (and above), Internet Explorer 6 (and above)
  • Microsoft .NET 3.5 SP1 (you can download it from here)
  • Windows XP/Vista
I will write about this descriptor and the accompanying relevance feedback technique in a future post.
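The "3 bits per bin" compactness is simple to sketch: uniform quantization of each descriptor element to 8 levels gives 46 × 3 = 138 bits, i.e. about 18 bytes per descriptor. A hypothetical Python sketch of this kind of quantizer (the actual CSPD quantizer may differ):

```python
def quantize3(values, levels=8):
    """Uniformly quantize descriptor values in [0, 1] to 3-bit codes.

    This is the style of coarse quantization used by MPEG-7-like
    compact descriptors; 8 levels fit in 3 bits per element."""
    return [min(levels - 1, int(v * levels)) for v in values]

def dequantize3(codes, levels=8):
    """Map 3-bit codes back to bin centers for distance computations."""
    return [(c + 0.5) / levels for c in codes]

print(quantize3([0.0, 0.5, 1.0]))  # → [0, 4, 7]
```

Matching on the dequantized bin centers trades a small loss in precision for a descriptor an order of magnitude smaller than raw doubles.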

The above two descriptors are expected to be merged into img(Anaktisi). At present, I am reorganizing the structure of the img(Anaktisi) interface because it is a mess. After this, the descriptors will be added.

For more information or questions email me at kzagoris@gmail.com.

Dr Konstantinos Zagoris (http://www.zagoris.gr) received the Diploma in Electrical and Computer Engineering in 2003 from Democritus University of Thrace, Greece, and his PhD from the same university in 2010. His research interests include document image retrieval, color image processing and analysis, document analysis, pattern recognition, databases and operating systems. He is a member of the Technical Chamber of Greece.

Saturday, May 9, 2009

Handbook of Medical Image Processing and Analysis (Academic Press Series in Biomedical Engineering)

The Handbook of Medical Image Processing and Analysis is a comprehensive compilation of concepts and techniques used for processing and analyzing medical images after they have been generated or digitized. The Handbook is organized into six sections that relate to the main functions: enhancement, segmentation, quantification, registration, visualization, and compression, storage and communication.

The second edition is extensively revised and updated throughout, reflecting new technology and research, and includes new chapters on: higher order statistics for tissue segmentation; tumor growth modeling in oncological image analysis; analysis of cell nuclear features in fluorescence microscopy images; imaging and communication in medical and public health informatics; and dynamic mammogram retrieval from web-based image libraries.

For those looking to explore advanced concepts and access essential information, this second edition of Handbook of Medical Image Processing and Analysis is an invaluable resource. It remains the most complete single volume reference for biomedical engineers, researchers, professionals and those working in medical imaging and medical image processing.


More Thoughts on Image Retrieval

Article from: http://blog.contentmanagementconnection.com/Home/19295

After my recent posts about Google’s similarity browsing for images, a colleague reached out to me to educate me about some of the recent advances in image retrieval. This colleague is involved with an image retrieval startup and felt uncomfortable posting comments publicly, so we agreed that I would paraphrase them in a post under my own name. I thus accept accountability for the post, but cannot take credit for expertise or originality.

Some of the discussion in the comment threads mentioned scale-invariant feature transform (SIFT), an algorithm to detect and describe local features in images. What I don’t believe anyone mentioned is that this approach is patented–certainly a concern for people with commercial interest in image retrieval.

There’s also the matter of scaling in a different sense–that is, handling large sets of images. People interested in this problem may want to look at “Scalable Recognition with a Vocabulary Tree” by David Nistér and Henrik Stewénius. They map image features to “visual words” using a hierarchical k-means approach. While mapping image retrieval to text retrieval approaches is not new, their large-vocabulary approach was novel and made significant improvement to scalability, as well as being robust to occlusion, viewpoint and lighting change. The paper has been highly cited.
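To make the "visual words" idea concrete: each image feature is pushed down the tree, choosing the nearest centroid at every level, and the leaf it reaches is its word. A toy Python sketch with a hand-built two-level tree over 2-D descriptors (real vocabulary trees are learned with hierarchical k-means over 128-D SIFT vectors):

```python
def dist(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantize(descriptor, node):
    """Walk a vocabulary tree: at each level pick the nearest child centroid.

    The path of choices identifies the leaf, i.e. the descriptor's
    'visual word', without comparing against every leaf directly."""
    path = []
    while node is not None:
        i = min(range(len(node["centroids"])),
                key=lambda k: dist(descriptor, node["centroids"][k]))
        path.append(i)
        node = node["children"][i]
    return tuple(path)

# A tiny hand-built tree with branch factor 2 and depth 2 (leaves are None):
tree = {
    "centroids": [(0.0, 0.0), (10.0, 10.0)],
    "children": [
        {"centroids": [(0.0, 0.0), (0.0, 5.0)], "children": [None, None]},
        {"centroids": [(10.0, 10.0), (15.0, 10.0)], "children": [None, None]},
    ],
}

print(quantize((0.5, 4.0), tree))  # → (0, 1): left branch, then its second leaf
```

The scalability win is that lookup cost is branch-factor × depth comparisons rather than one per leaf, which is what lets the vocabulary grow very large.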

But there are problems with this approach in practice. For example, images from cell phone cameras are low-quality and blurry, and Nistér and Stewénius’s approach is unfortunately not resilient to blur. Accuracy and latency are also challenges.

In general, some of the vision literature on which features are best doesn't seem to hold up so well outside the lab, and the reason may be that the images used for such experiments in the literature are of much higher quality than those in the field–particularly cell phone images.

An alternative to SIFT is “gist”, an approach based on global descriptors. This approach is not resilient to occlusion or rotation, but it does scale much better than SIFT, and may serve well for some duplicate detection–a problem that, in my view, is a deal-breaker for applications like similarity browsing–and which certainly is a problem for Google’s current approach.

In short, image retrieval is still a highly active area, and different approaches are optimized for different problems. I was delighted to have a recent guest post from AJ Shankar of Modista about their approach, and I encourage others to contribute their thoughts.

Sunday, May 3, 2009

ICSIP 2009 : "International Conference on Signal and Image Processing"

The International Conference on Signal and Image Processing aims to bring together researchers, scientists, engineers, and students to exchange and share their experiences, new ideas, and research results about all aspects of Signal and Image Processing, and to discuss the practical challenges encountered and the solutions adopted.

PAPER SUBMISSION

All full paper submissions will be peer reviewed and evaluated based on originality, technical and/or research content/depth, correctness, relevance to the conference, contributions, and readability. The full paper submissions will be chosen based on technical merit, interest, applicability, and how well they fit a coherent and balanced technical program. The accepted full papers will be published in the refereed conference proceedings. Prospective authors are kindly invited to submit full-text papers including results, tables, figures and references. Full-text papers (.doc, .rtf, .ps, .pdf) will be accepted only by electronic submission.

WORKSHOPS

Researchers are cordially invited to submit a paper and/or a proposal to organize a workshop and actively participate in this conference. Proposals are invited for workshops affiliated with the conference scope and topics. The conference workshops provide a challenging forum and a vibrant opportunity for researchers and industry practitioners to share their research positions, original research results and practical development experiences on specific new challenges and emerging issues. The workshop topics should be focused so that participants can benefit from interaction with each other and the cohesiveness of the topics.

PROCEEDINGS

The refereed conference proceedings will be published prior to the conference in both Hard Copy Book and CD-ROM, and distributed to all registered participants at the conference. The refereed conference proceedings are reviewed and indexed by Google Scholar, Directory of Open Access Journals (DOAJ), EBSCO, Ulrich’s Periodicals Directory, German National Library of Science and Technology and University Library Hannover (TIB/UB), Electronic Journals Library (Elektronische Zeitschriftenbibliothek, EZB), Genamics, GALE and INTUTE.

SPECIAL JOURNAL ISSUE

ICSIP 2009 has teamed up with the International Journal of Signal Processing for publishing a Special Journal Issue on Advances in Signal and Image Processing. All submitted papers will have opportunities for consideration for this Special Journal Issue. The selection will be carried out during the review process as well as at the conference presentation stage. Submitted papers must not be under consideration by any other journal or publication. The final decision will be made based on peer review reports by the guest editors and the Editor-in-Chief jointly.