Friday, June 14, 2013

CFP - MTAP Special Issue on Content Based Multimedia Indexing

============================================================
Multimedia Tools and Applications, Journal, Springer
Special Issue on "Content Based Multimedia Indexing"
CALL FOR PAPERS
http://cbmi2013.mik.uni-pannon.hu/index.php/cfp
============================================================
Multimedia indexing systems aim at providing easy, fast and accurate access to large multimedia repositories. Research in Content-Based Multimedia Indexing covers a wide spectrum of topics in content analysis, content description, content adaptation and content retrieval. Various tools and techniques from different fields such as Data Indexing, Machine Learning, Pattern Recognition, and Human Computer Interaction have contributed to the success of multimedia systems.
Although there has been significant progress in the field, systems still show limits in accuracy, generality, and scalability. Hence, the goal of this special issue is to bring forward recent advances in content-based multimedia indexing.

Topics of Interest
==================
Topics of interest for the Special Issue include, but are not limited to:
- Audio content extraction
- Audio indexing (audio, speech, music)
- Content-based search
- Identification and tracking of semantic regions
- Identification of semantic events
- Large scale multimedia database management
- Matching and similarity search
- Metadata generation, coding and transformation, multi-modal fusion
- Multimedia data mining
- Multimedia interfaces, presentation and visualization tools
- Multimedia recommendation
- Multimedia retrieval (image, audio, video, ...)
- Multi-modal and cross-modal indexing
- Personalization and content adaptation
- Summarization, browsing and organization of multimedia content
- User interaction and relevance feedback
- Visual content extraction
- Visual indexing (image, video, graphics)

Submission Details
==================
All papers should be full journal-length versions and follow the guidelines set out by Multimedia Tools and Applications: http://www.springer.com/computer/information+systems/journal/11042. Manuscripts should be submitted online at https://www.editorialmanager.com/mtap/, choosing "Content Based Multimedia Indexing" as the article type, no later than September 1st, 2013. When uploading your paper, please ensure that your manuscript is marked as being for this special issue. Information about the manuscript (title, full list of authors, corresponding author’s contact, abstract, and keywords) should also be sent to the corresponding editor, Klaus Schoeffmann (ks@itec.uni-klu.ac.at). All papers will be peer-reviewed following the MTAP reviewing procedures.

Important Dates
===============
Manuscript due: September 1st, 2013
Notification: October 22nd, 2013
Publication date: First quarter 2014

Guest Editors
=============
Klaus Schoeffmann, Klagenfurt University, Klagenfurt, Austria
ks@itec.uni-klu.ac.at
Tamás Szirányi, MTA SZTAKI, Budapest, Hungary
sziranyi@sztaki.hu
Jenny Benois-Pineau, University of Bordeaux 1, LABRI UMR 5800 Universities-Bordeaux-CNRS, France
Jenny.benois@labri.fr
Bernard Merialdo, EURECOM, Nice – Sophia Antipolis, France
Bernard.Merialdo@eurecom.fr

The Video Explorer – A Tool for Navigation and Searching within a Single Video based on Fast Content Analysis

Abstract:

We propose a video browsing tool supporting new efficient navigation means and content-based search within a single video, allowing for interactive exploration and playback of video content. The user interface provides flexible navigation indices by visualizing low-level features and frame surrogates along one or more timelines, called interactive navigation summaries. By applying simple and fast content analysis, navigation summary computation becomes feasible during browsing, enabling addition, removal, and update of navigation summaries at runtime. Semantically similar video segments will be visualized by similar patterns in certain navigation summaries, which enables users to quickly recognize and navigate to potential similar segments. Moreover, free-shape regions of video frames or video segments within navigation summaries can be selected by the user to launch a fast content-based search to find frames with similar regions or segments with similar navigation summaries.
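
For a rough sense of the kind of simple, fast content analysis such navigation summaries can be built from, here is a minimal sketch (not the authors' code, which is a .NET application) that computes a mean-brightness value per sampled frame using OpenCV; plotted along a timeline, it would serve as a crude navigation summary:

```python
# Minimal sketch of a per-frame "navigation summary" feature.
# Assumes opencv-python and numpy; an illustration only, not the
# Video Explorer implementation.
import cv2
import numpy as np

def brightness_summary(video_path, step=5):
    """Return one mean-brightness value per sampled frame."""
    cap = cv2.VideoCapture(video_path)
    values = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:  # sample every step-th frame for speed
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            values.append(float(gray.mean()))
        index += 1
    cap.release()
    return np.asarray(values)
```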

A .NET binary (only tested on Windows) of the Video Explorer tool, described in the publication below, is now available for download here.

Klaus Schoeffmann, Mario Taschwer, and Laszlo Boeszoermenyi. 2010. The video explorer: a tool for navigation and searching within a single video based on fast content analysis. In Proceedings of the first annual ACM SIGMM conference on Multimedia systems (MMSys ’10). ACM, New York, NY, USA, 247-258. DOI=10.1145/1730836.1730867 http://doi.acm.org/10.1145/1730836.1730867
PDF here

Monday, June 10, 2013

VLBenchmarks

VLBenchmarks is a MATLAB framework for testing image feature detectors and descriptors. The latest version can be downloaded here.

If you use this project in your work, please cite it as:

K. Lenc, V. Gulshan, and A. Vedaldi, VLBenchmarks,
http://www.vlfeat.org/benchmarks/, 2012. BibTeX

This project is sponsored by the PASCAL Harvest programme as part of this project. VLBenchmarks is a sibling of the VLFeat library, which it uses as support, but is otherwise independent of it.

The authors would like to thank Andrew Zisserman, Jiri Matas, Krystian Mikolajczyk, Tinne Tuytelaars, and Cordelia Schmid for helpful discussions and support.

Overview

VLBenchmarks is a MATLAB framework for evaluating feature detectors and descriptors automatically. Benchmarking your own features is as simple as writing a single wrapper class; VLBenchmarks then takes care of downloading the required benchmarking data from the Internet and running the evaluation(s). The framework ships with wrappers for a number of publicly available features so that comparing against them is easy. VLBenchmarks also has a number of functionalities, such as caching of intermediate results, that allow benchmarks to be run efficiently.
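
To give a flavour of the wrapper idea, here is a hypothetical Python sketch; the actual framework is MATLAB and its class and method names differ, so treat this purely as an illustration of the pattern a wrapper follows (expose detected regions and their descriptors behind a uniform interface):

```python
# Hypothetical wrapper sketch; NOT the VLBenchmarks API. A benchmark
# needs a uniform way to obtain regions ("frames") and descriptors.
import numpy as np

class DenseGridWrapper:
    name = "dense-grid"  # identifier a framework could use for caching

    def extract_features(self, image):
        """Given a grayscale image (2-D array), return (frames, descriptors):
        frames is (N, 3) with columns x, y, radius; descriptors is (N, D)."""
        h, w = image.shape
        ys, xs = np.mgrid[8:h - 8:16, 8:w - 8:16]
        frames = np.column_stack([xs.ravel(), ys.ravel(),
                                  np.full(xs.size, 8.0)])
        # Toy descriptor: mean intensity of an 8x8 patch at each point.
        descriptors = np.array(
            [[image[int(y) - 4:int(y) + 4, int(x) - 4:int(x) + 4].mean()]
             for x, y, _ in frames])
        return frames, descriptors
```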

The current version of VLBenchmarks implements:

  • The feature extractor repeatability of [1].
  • The descriptor matching score of [1].
  • A new retrieval-based test based on the retrieval method of [2].
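
To make the first of these measures concrete, here is a heavily simplified sketch of the repeatability idea from [1]: two detections correspond if their regions overlap sufficiently once both are expressed in a common frame. The sketch substitutes axis-aligned boxes for the affine ellipses of the original protocol and assumes the homography mapping has already been applied:

```python
# Simplified repeatability sketch; the real protocol of [1] uses
# affine ellipse overlap under a ground-truth homography.
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def repeatability(boxes_a, boxes_b, max_overlap_error=0.4):
    """Fraction of regions with a one-to-one correspondence whose
    overlap error (1 - IoU) is below the threshold."""
    matched, correspondences = set(), 0
    for a in boxes_a:
        for j, b in enumerate(boxes_b):
            if j not in matched and 1.0 - iou(a, b) < max_overlap_error:
                matched.add(j)
                correspondences += 1
                break
    return correspondences / min(len(boxes_a), len(boxes_b))
```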

The code is distributed under the permissive MIT license.

Changes
1.0-beta
(7/10/2012) Initial release.

Download and install

http://www.vlfeat.org/benchmarks/index.html

A 3D Pottery Content Based Retrieval Benchmark Dataset

About the 3D Pottery Benchmark

Benchmark data sets are essential for evaluating and comparing content-based retrieval methods. Although no standard benchmark is available for 3D shape matching, a number of 3D data sets have been proposed by several research teams and are freely offered for use. In order to evaluate our pottery-specific 3D shape descriptors, we had to create a dataset of polygonal 3D vessels.

The current pottery dataset is composed of a total of 1012 digitised, manually modelled, and semi-automatically generated 3D models (the latter produced by a 3D vessel random generator). The content of the dataset is classified into several generic vessel shape categories such as Ancient Greek (Alabastron, Amphora, Hydria, Kantharos, Lekythos, Psykter, etc.), Native American (Jar, Effigy, Bowl, Bottle, etc.), modern pottery, and others. The current classification has been performed with the help of the Department of Cultural Heritage of the ILSP/Athena Research Centre at an archaeologist-oriented semantic level. Please note that the current classification isn't the final one; we will soon be presenting a shape-oriented one.

Furthermore, the 3D models are stored in the Wavefront OBJ file format and are currently accompanied by a single-view thumbnail in JPEG file format; no texture information is included. A text file and an Excel file are also provided in which the filename of each model is assigned to a class. This information can be used for the computation of performance metrics, as sketched below.
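
As a sketch of how that class mapping can feed a performance metric, the snippet below loads a filename-to-class file and scores a ranked retrieval list with precision at k. The whitespace-separated layout assumed here is a guess; adapt the parsing to the actual text and Excel files:

```python
# Sketch only: assumes each line of the class file reads
# "<model-filename> <class-label>"; check the real file layout first.
def load_classes(path):
    classes = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                classes[parts[0]] = parts[1]
    return classes

def precision_at_k(query, ranked_results, classes, k=10):
    """Fraction of the top-k retrieved models sharing the query's class."""
    hits = sum(1 for r in ranked_results[:k]
               if classes.get(r) == classes[query])
    return hits / k
```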

Read More: http://www.ipet.gr/~akoutsou/benchmark/

Sunday, June 9, 2013

Peer-review debate should include software

Article from http://www.researchinformation.info/news/news_story.php?news_id=1268

It’s not often that coding errors make the news. But back in April one particular slip-up with formulae on an Excel spreadsheet caused worldwide repercussions. It emerged that Harvard economists Carmen Reinhart and Kenneth Rogoff had made mistakes in the data underlying their influential 2010 paper, ‘Growth in a Time of Debt’, mistakes that appeared to undermine the paper’s main contention: that countries with debt-to-GDP ratios above 90 per cent see markedly slower growth.

The data was not published with the paper, and only showed up when 28-year-old PhD student Thomas Herndon requested the spreadsheets directly from the authors. After a thorough debugging, Herndon rubbished their results as lead author of his own paper, ‘Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff’.

This might merely have been embarrassing for Reinhart and Rogoff, were it not for the fact that their paper had been widely cited as underpinning the need for austerity measures by politicians across the developed world – including US congressman Paul Ryan, UK chancellor George Osborne, and European Commission vice-president Olli Rehn. If the researchers had got their sums wrong, then this was very big news indeed.

At a time when scholarly publishing is debating the issue of data being published alongside papers, this makes an interesting test case. Reinhart and Rogoff’s errors could not have been detected by reading the journal article alone, so proper scrutiny in this case ought to have included the dataset.

But I would argue that the terms of the debate should go beyond data: we ought also to be thinking about software. In my view, the Reinhart and Rogoff story makes this clear.

Reproducibility is one of the main principles of the scientific method. Initially, Herndon and his Amherst colleagues found that they were unable to replicate Reinhart and Rogoff’s results. This was what caused them to request the underlying data, resulting in their subsequent discovery of errors.

Three areas gave concern about the methodology employed in arriving at the paper’s conclusions, but the most highly-publicised flaw was the Excel coding error – a major part of the lack of reproducibility here. As Mike Konczal showed in the blog post that broke the story, the selection box for one of the formulas in the spreadsheet doesn’t go to the bottom of the column, and this misses some crucial data, skewing the final result.
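
The mechanics of that kind of error are easy to reproduce outside Excel. The snippet below uses made-up numbers (not the Reinhart-Rogoff data) to show how an averaging range that silently stops short of the last rows shifts the result:

```python
# Illustrative numbers only, not the actual Reinhart-Rogoff data.
growth_rates = [-0.1, 0.3, 1.0, 2.4, 2.9, -7.6, 1.9, 2.6]

full_mean = sum(growth_rates) / len(growth_rates)   # 0.425, all rows
truncated = growth_rates[:5]    # "selection box" stops 3 rows short
truncated_mean = sum(truncated) / len(truncated)    # 1.3, rows dropped

print(full_mean, truncated_mean)
```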

As the FT was at pains to point out, it might be over-egging things slightly to claim that thousands of people across Europe were thrown out of work simply because two Harvard professors can’t work an Excel spreadsheet. Reinhart and Rogoff have since responded, although critics also found errors in their response.

Interestingly, from the reproducibility point of view, what was flawed in Reinhart and Rogoff’s methodology was not the base data they drew on, but their use of the software tool Excel to explore and analyse that data. Methodological flaws were uncovered by Herndon in examining their use of the tool. Without the use of the same software that Reinhart and Rogoff had employed in reaching their conclusions, Herndon, Ash and Pollin would likely never have got to the bottom of why those results were not reproducible using the same data set.

It follows that, in an instance like this, not only the data but also the software ought to be open to the scrutiny of peer review. Since everything that involves data nowadays involves software to some degree, the software becomes a central artifact in the presentation of scholarly results.

In our increasingly computer-centric working environment, without the software used to analyse, explore, model, visualise and in other ways draw inferences from base data, we are missing an important part of the picture.

Excel is a universally familiar piece of software (albeit a relatively unsophisticated one) but other more specialised tools such as MATLAB are routinely used within the scientific community and by researchers in disciplines as diverse as engineering, economics and social science to perform operations on data that result in published science.

MATLAB goes beyond Excel in that its output might not be just a set of numbers (such as Reinhart and Rogoff’s 90 per cent), but an algorithm. Frequently the output, the conclusion, the result of a given piece of research in certain academic disciplines is an algorithm.

If you want to look under the hood of that algorithm, for the purposes of peer-review scrutiny or reproducibility, you might well need to access the software that produced it. And you might also want to see the algorithm in action.

There are many ways to represent algorithms – including as formulae, flowcharts and in natural language. However, arguably the best way is by using the programming languages that were written specifically for the purpose and, of course, programming languages were created not just to represent algorithms, but to actualise them. It is logical therefore that MATLAB produces not only algorithms but also executables – i.e. software.

For a good example of an academic field where the output of research is software, just look at IPOL, the research journal of image processing and image analysis. Each of the articles in this online journal has a text description of an algorithm and source code, but also an ‘online demonstration facility’ that allows you to play with the executable in real time. Both text and source code are peer-reviewed.

In launching the journal GigaScience last year, its editors spoke of ‘overseeing the transition from papers to executable research objects’. This represents a view of academic publishing that embraces the reality of an increasingly ‘born digital’ research process.

Clearly, not every item of published research needs to include a piece of software. But if we restrict our vision of scholarly publishing to just articles and data we risk ignoring the other digital bits and pieces that now rightfully belong in the scholarly record – and without which it cannot properly be understood and scrutinised.

It’s clear that the classic functions of scholarly publication, namely registration, certification, awareness, archiving and reward, will all have to apply to data and software as much as they currently do to textual works.

There is much for publishers to think about here, and I’m aware that I’m raising questions that do not all presently have answers. Indeed, it was encouraging to see some of these themes addressed at the recent Beyond The PDF 2 conference, where projects such as Reprozip were presented as ways of addressing the reproducibility issue. I believe, however, that our view of the scholarly record must be broadened to include software as well as data, as this forms a significant part of research practice today. In this context the practice of publishing plain PDF files as a print analogue will become increasingly antiquated.
