This paper presents two fast schemes to speed up the retrieval process in conventional content-based image retrieval systems. Traditional features such as colour and invariant histograms are extracted offline from each image to compose a feature vector; all these feature vectors form the feature database, against which the system then performs online retrieval. When a small number of images is to be returned, an equal-average equal-variance K nearest neighbour search (EEKNNS) method is used to speed up the retrieval process; when a large number of images is to be returned, an iterative EEKNNS (IEEKNNS) method is given. Experimental results show that the proposed retrieval methods greatly accelerate the retrieval process while guaranteeing the same recall and precision.
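The abstract does not spell out the search details, but the core of equal-average equal-variance pruning can be sketched. For the Euclidean distance, projecting onto the all-ones direction gives the lower bound ||x − q||² ≥ n(μx − μq)² + n(σx − σq)², where μ and σ are the mean and standard deviation of a vector's components. A minimal Python sketch of a K-NN search using this bound (the function names are mine, not the paper's):

```python
import math

def mean_std(v):
    """Per-vector mean and (population) standard deviation."""
    n = len(v)
    m = sum(v) / n
    s = math.sqrt(sum((x - m) ** 2 for x in v) / n)
    return m, s

def eeknns(query, database, k):
    """K-NN search with equal-average/equal-variance pruning.

    Candidates whose lower bound n*(dmean)^2 + n*(dstd)^2 already
    exceeds the current k-th best squared distance are rejected
    without a full distance computation."""
    n = len(query)
    qm, qs = mean_std(query)
    best = []  # (squared_distance, index) pairs, at most k entries
    for i, v in enumerate(database):
        vm, vs = mean_std(v)
        lower = n * (vm - qm) ** 2 + n * (vs - qs) ** 2
        if len(best) == k and lower >= best[-1][0]:
            continue  # pruned: cannot beat the current k-th neighbour
        d2 = sum((a - b) ** 2 for a, b in zip(v, query))
        best.append((d2, i))
        best.sort()
        best = best[:k]
    return [i for _, i in best]
```

In a real system the per-image means and deviations would be computed offline along with the feature database, so each pruned candidate costs two subtractions instead of a full vector scan.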
Saturday, January 31, 2009
Friday, January 30, 2009
An open-source C-based library for metric space indexing
This library contains source code implementing several indexing algorithms, as well as metric spaces, conforming to an API that permits adding new indexes and spaces. In addition, there are example instances and automatic generators for different metric spaces.
Download all the database instances provided from here (gzipped tar). Beware of its size! This will expand into a directory dbs/ that should go inside directory metricSpaces/ of the main structure (where the directory already exists).
If you only want some databases, then browse them and download those you really want. Expand them in the correct subdirectory of metricSpaces/dbs/.
The manual is inside the main structure, but you can also download it directly.
Database of faces by Karina Figueroa. August 2008.
Counting Distance Permutations by Matthew Skala.
Visual similarity in sign language by Jan Ulrych.
August 29 - 30, 2009
Prague, Czech Republic
The International Workshop on Similarity Search and Applications (SISAP) is a conference devoted to similarity searching, with emphasis on metric space searching. It aims to fill the gap left by the various scientific venues devoted to similarity searching in spaces with coordinates, by providing a common forum for theoreticians and practitioners around the problem of similarity searching in general spaces (metric and non-metric) or using distance-based (as opposed to coordinate-based) techniques in general.
SISAP aims to become an ideal forum to exchange real-world, challenging and exciting examples of applications, new indexing techniques, common testbeds and benchmarks, source code, and up-to-date literature through a Web page serving the similarity searching community. Authors are expected to use the testbeds and code from the SISAP Web site for comparing new applications, databases, indexes and algorithms.
After the very successful first event in Cancun, Mexico in 2008, the second SISAP will be held in Prague, Czech Republic, on August 29-30, 2009.
General page: http://www.iaria.org/conferences2009/MMEDIA09.html
Call for Papers: http://www.iaria.org/conferences2009/CfPMMEDIA09.html
Submission deadline: February 20, 2009
Technically Co-sponsored by IEEE France Section
Sponsored by IARIA, www.iaria.org
Submissions will be peer-reviewed, published by IEEE CPS, posted in IEEE Digital Library, and indexed with the major indexes.
Extended versions of selected papers will be published in IARIA Journals: http://www.iariajournals.org
Please note the Poster Forum and the Work in Progress track.
MMEDIA 2009 Special Areas (details in the CfP on site):
Fundamentals in multimedia
Multimedia systems, architecture, and applications; New multimedia platforms; Multimedia architectural specification languages; Theoretical aspects and algorithms for multimedia; Multimedia content delivery networks; Network support for multimedia data; Multimedia data storage; Multimedia meta-modeling techniques and operating systems; Multimedia signal coding and processing (audio, video, image); Multimedia applications (telepresence, triple-play, quadruple-play, …); Multimedia tools (authoring, analyzing, editing, browsing, …); Computational multimedia intelligence (fuzzy logic, neural networks, genetic algorithms, …); Intelligent agents for multimedia content creation, distribution, and analysis; Multimedia networking; Wired and wireless multimedia systems; Distributed multimedia systems; Multisensor data integration and fusion; Multimedia and P2P; Multimedia standards
Tuesday, January 27, 2009
IDEAL is an annual conference dedicated to emerging and challenging topics in intelligent data analysis and engineering and their associated learning paradigms. Previous successful IDEAL conferences include Daejeon (2008), Birmingham (2007), Burgos (2006), Brisbane (2005), Exeter (2004), Hong Kong (2003), Manchester (2002) and Hong Kong (2000, 1998). The 10th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2009) will be held in the historical city of Burgos (northern Spain), one of the most visited cities in Spain.
The 10th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 09) provides an interesting opportunity to present and discuss the latest theoretical advances and real-world applications in Intelligent Data Engineering and Automated Learning.
Session topics would include ensemble learning methods, kernel methods, feature extraction, classification, data mining, agents, evolutionary computation, knowledge and rule-based systems, gene expression, protein data, image analysis, finance, fuzzy sets, time series, spam detection and rule engineering.
Learning and Information Processing
* Machine Learning
* Neural Networks
* Emergent Systems
* Probabilistic Learning
* Clustering and Classification
* Feature Selection and Extraction
* Information Fusion
* Evolutionary Computation
* Fuzzy Logic
Monday, January 26, 2009
Content-based image retrieval (CBIR) means that images can be searched by their visual content. For example, you can pick a landscape image of mountains and try to find similar scenes with similar colours and/or shapes. The traditional way to find images is to first assign keywords to them and then use a textual query to find the images needed. Writing keywords for hundreds or thousands of images is a tedious and error-prone task. With CBIR, images can instead be analyzed by different methods that represent different aspects of their visual information. Image searching and image archival can be greatly sped up using automatic image analysis tools.
Content-based image retrieval (CBIR) is a two-phase process: first images are analyzed and inserted into the image database, and after that they can be queried. A query is issued by giving an example image or by starting from random images already in the database. The query then continues: images can be marked as positive or negative samples to refine the search and obtain better results.
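The two phases can be sketched with a toy colour-histogram feature; this is a generic illustration, not the system described above, and the helper names are mine:

```python
from collections import Counter

def colour_histogram(pixels, bins=4):
    """Offline phase: quantize each (r, g, b) pixel into bins^3 buckets
    and return a normalized histogram as the image's feature vector."""
    step = 256 // bins
    counts = Counter(
        (r // step, g // step, b // step) for r, g, b in pixels
    )
    total = len(pixels)
    return {bucket: c / total for bucket, c in counts.items()}

def histogram_distance(h1, h2):
    """L1 distance between two sparse histograms."""
    buckets = set(h1) | set(h2)
    return sum(abs(h1.get(b, 0.0) - h2.get(b, 0.0)) for b in buckets)

def query(example, feature_db):
    """Online phase: rank database images by similarity to the example."""
    qh = colour_histogram(example)
    return sorted(feature_db, key=lambda name: histogram_distance(qh, feature_db[name]))
```

Relevance feedback then amounts to re-querying with the marked positives folded into the example's feature vector.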
To see what to expect from Octagon, check out the screenshots. Query efficiency and accuracy depend on the images used, so try it with your own images before passing judgment.
- Free Java software that runs for example on Windows, Linux and Macintosh.
- Searches images by their visual appearance.
- Search images by keywords (beta).
- Keywords are automatically extracted from image files (IPTC headers).
- You can find images by color.
- You can query images by structure.
- Combined color and structure searching.
- Very easy and quick usage.
- Jpeg-support (others will be added)
- Supports many raw image formats for Canon, Nikon, Minolta, Pentax, Olympus and Kodak (check full list of camera models)
The CoPhIR (Content-based Photo Image Retrieval) Test-Collection has been developed to make significant tests on the scalability of the SAPIR project infrastructure (SAPIR: Search In Audio Visual Content Using Peer-to-peer IR) for similarity search.
We are extracting metadata from the Flickr archive, using the EGEE European GRID, through the DILIGENT project.
For each image, the standard MPEG-7 image features have been extracted. Each entry of the test-bed contains:
- The link to the corresponding entry into Flickr Web site
- The photo image thumbnail
- An XML structure with the Flickr user information in the corresponding Flickr entry: title, location, GPS, tags, comments, etc.
- An XML structure with 5 extracted standard MPEG7 image features:
- Scalable Colour
- Colour Structure
- Colour Layout
- Edge Histogram
- Homogeneous Texture
The data collected so far represents the world's largest multimedia metadata collection available for research on scalable similarity search techniques.
Zezula, P., Amato, G., Dohnal, V., Batko, M.
The proliferation of information housed in computerized domains makes it vital to find tools to search these resources efficiently and effectively. Ordinary retrieval techniques are inadequate because sorting is simply impossible. Consequently, proximity searching has become a fundamental computation task in a variety of application areas.
Similarity Search focuses on the state of the art in developing index structures for searching the metric space. Part I of the text describes major theoretical principles, and provides an extensive survey of specific techniques for a large range of applications. Part II concentrates on approaches particularly designed for searching in large collections of data. After describing the most popular centralized disk-based metric indexes, approximation techniques are presented as a way to significantly speed up search time at the cost of some imprecision in query results. Finally, the scalable and distributed metric structures are discussed.
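The common principle behind the metric indexes the book surveys is triangle-inequality pruning: for any pivot p, |d(q,p) − d(x,p)| ≤ d(q,x), so precomputed pivot distances give a free lower bound on the query distance. A minimal LAESA-style sketch (a generic illustration, not any specific structure from the book):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PivotTable:
    """Pivot-based filter: precompute distances from every object to a
    few pivots; at query time use the triangle inequality
    |d(q,p) - d(x,p)| <= d(q,x) to discard objects cheaply."""

    def __init__(self, objects, pivots, dist=euclidean):
        self.objects = objects
        self.pivots = pivots
        self.dist = dist
        self.table = [[dist(o, p) for p in pivots] for o in objects]

    def range_query(self, q, radius):
        q_to_p = [self.dist(q, p) for p in self.pivots]
        hits = []
        for obj, row in zip(self.objects, self.table):
            # best lower bound over all pivots
            if max(abs(qp - op) for qp, op in zip(q_to_p, row)) > radius:
                continue  # provably outside the radius, no distance needed
            if self.dist(q, obj) <= radius:
                hits.append(obj)
        return hits
```

The same bound drives nearest-neighbour search as well; only the stopping rule changes.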
Capture stunning multi-gigapixel images with most point and shoot digital cameras.
The GigaPan System is the first completely integrated solution for creating multi-gigapixel images. With three easy-to-use components, the GigaPan System provides a complete, low-cost and powerful solution for creating incredibly high-resolution photos.
The GigaPan System
- GigaPan Imager The revolutionary robotic camera mount for capturing gigapixel images.
- GigaPan Stitcher Automatically combines thousands of photos into a single panorama.
- GigaPan.org View, share, explore and discover using the amazing GigaPan Viewer.
Join the GigaPan Beta Program!
You are invited to join our beta program and provide us with valuable feedback about the GigaPan System. For your participation you may purchase the GigaPan Imager at the exclusive beta price of $279.00. This program is only available for a limited time.
Sunday, January 25, 2009
Fast and accurate 3D object reconstruction and partial 3D component retrieval from 2D image slices represent a difficult and challenging problem. To group related objects on different layers in an image stack, image segmentation and sequential matching of adjacent 2D objects have to be performed. Object matching involves heavy computing and is time-consuming. In this paper, we propose a new approach for parallel implementation of object contour matching and partial 3D component retrieval based on image contour structure. The method has been implemented in MPI on an SGI Origin 2000 machine. The experimental results show a good speedup for sequential object matching and partial 3D component retrieval.
Friday, January 23, 2009
Article From timbourne
The IET hosted a stimulating session from Dr. Daniel Heesch, the man behind Pixsta, on Tuesday evening (20th Jan).
Although Google is pre-eminent in the field of search, there are other ways we might consider initiating searches, and with the number of images on the web growing at an extraordinary rate there is potentially a whole new field of image search to monetize.
Having undertaken his doctorate on the issues of image retrieval, Daniel has since started a company to exploit the knowledge he's gained; it's called, rather neatly, PIXSTA.
So far the technology of image search is in its infancy and can be seen demo'd in the field of shopping - mainly shoes, handbags and dresses (remember how important shoes are to Jen from The IT Crowd?).
It was pointed out by one of the bright young things in attendance at the session that there's also great potential in 'porn image searches' too!
Currently image searches tend to rely on the tags (metadata) attributed to images rather than an actual analysis of the images themselves. The use of images is intuitive for humans, and if someone can get a search engine to do the job well it could be of great use (and commercial value).
It will be interesting to see whether Google acquires this company, or whether it has already created a tool to achieve the same goal itself.
The issues for me are how the images index outward from an 'optimum' handbag (or whatever the image class is) to meet other criteria (colour, size, material etc.). I imagine the real 'holy grail' would be faces, as human beings are so incredibly good at differentiating here (it would be a great way of doing online dating too).
Having stayed awake for most of the session, I can now at least drop the Turing Test into conversation to impress my peers.
The Asia Information Retrieval Symposium (AIRS) aims to bring together researchers and developers to exchange new ideas and latest achievements in the field of information retrieval (IR). The scope of the symposium covers applications, systems, technologies and theory aspects of information retrieval in text, audio, image, video and multimedia data. The Fifth AIRS (AIRS 2009) welcomes submissions of original papers in the broad field of information retrieval.
Technical issues covered include, but are not limited to:
IR Theory and Formal Models;
IR Evaluation, Test collections, Evaluation methods and metrics, Experimental design, Data collection and analysis;
Interactive IR, User interfaces and visualization, User studies, User models, Task-based IR, User/Task-based IR theory;
Web IR, Intranet/enterprise search, Citation and link analysis, Distributed IR, Fusion/Combination, Digital libraries;
Cross-language retrieval, Multilingual retrieval, Machine translation for IR;
Video and image retrieval, Audio and speech retrieval, Topic detection and tracking, Routing, Content-based filtering, Collaborative filtering, Agents, Spam filtering;
Question answering, Information extraction, Summarization, Lexical acquisition, NLP for IR;
Text Data Mining and Machine Learning for IR.
Accepted papers will be published as part of the LNCS series from Springer, and will be EI-indexed. The AIRS 2009 organizers are also planning a post-conference special issue in a renowned international journal, and the authors of the best papers at AIRS 2009 will be encouraged to contribute to this issue.
Thursday, January 22, 2009
The progress in digitalization techniques gave the impetus for the development of Content-Based Image Retrieval Systems (CBIRS) that use automatically extracted features to find images in large repositories. Up to now, the benefit from the incorporation of knowledge about the human visual system in the CBIRS design and implementation process has been mostly overlooked. In this context, the author developed a new eye-tracking based retrieval technique, called Vision-Based Image Retrieval (VBIR), where users' eye movements are used online to dynamically adjust the weights for locally calculated image features. Thus, the search can be directed towards information of increasing relevance, leading not only to better retrieval performances, but also to higher correlation of the systems' retrieval results with human measures of similarity. Furthermore, this book is about other central aspects for image retrieval: the estimation of the optimal feature weights, the evaluation of the chosen image features and the design of computer models. This book addresses researchers and developers who are interested in the design of more natural and intuitive image retrieval techniques.
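As a rough illustration of the idea (hypothetical formulas, not the book's actual VBIR method), dynamically reweighting local feature regions from eye-movement data might look like this:

```python
def reweight(weights, fixation_counts, rate=0.5):
    """Hypothetical VBIR-style update: image regions that received
    more fixations get proportionally larger weights; the result is
    renormalized so the weights remain a convex combination."""
    total_fix = sum(fixation_counts) or 1
    raw = [
        w * (1.0 + rate * f / total_fix)
        for w, f in zip(weights, fixation_counts)
    ]
    s = sum(raw)
    return [r / s for r in raw]

def weighted_distance(f1, f2, weights):
    """Distance between two images as a weighted sum of per-region
    feature distances (here: absolute difference of scalar features)."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, f1, f2))
```

The point of the scheme is that regions the user actually looked at dominate the ranking on the next retrieval round.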
Article from Codeproject
Use OpenGL in your C# applications with SharpGL, it's a breeze! Just drag an OpenGLControl onto your Windows Form and handle the 'OpenGLDraw' function - now just call ordinary OpenGL functions!
SharpGL provides you with two controls for designing forms: the OpenGLControl, which lets you do standard OpenGL drawing in a C# application, and the SceneControl, which does the same with added support for polygons, persistence, picking and more. The screenshot above shows the SceneControl in action, with the supplied 'SceneBuilder' application. The screenshot below shows some 'old-fashioned' OpenGL drawing, with calls to 'glBegin' and 'glEnd' etc.
If you want to get OpenGL in your application quickly, there's no easier way. There are five example applications in the download that show you how to use some common features. The SharpGL Website also has a set of tutorials that is regularly updated - as well as support information.
Wednesday, January 21, 2009
Article From GeNeura Team
Last Friday I presented the paper "PicSOM - content-based image retrieval with self-organizing maps" by Laaksonen et al. The authors use TS-SOMs (Tree Structured SOMs) to classify images based on their content: using distinct measure types, like colour, sFFT (shape Fast Fourier Transform) and others, several SOM maps are created (one per measure). The solution to the problem of how to weight those measures is simple: user feedback. A set of images is presented to the user in a web-based application, so he can select the interesting ones (example: Winona Ryder's face) to obtain the next set of images (an example can be seen in Figure 1). The algorithm learns from the selected images and gives weight to the maps accordingly (in our example, shape measures become more important than colour measures).
To test the performance of each map and the global performance of the whole system, the authors use summary statistics to establish whether the selected images belong to a given class (e.g. planes, dinosaurs or faces). The conclusion is that it is necessary to use the whole set of measures at the same time to achieve the best performance. But the most interesting part is that you can read the paper and test the algorithm yourself, a not so common practice.
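A minimal sketch of this style of relevance-feedback weighting (hypothetical scoring, not PicSOM's actual formulas): each feature map is scored by how well it ranks the user's positive and negative examples, and the final ranking is a weighted combination of per-map ranks.

```python
def map_score(map_ranks, positives, negatives):
    """Hypothetical per-map relevance score: a map that ranks the
    user's positive examples high and negative examples low is
    considered more informative. map_ranks maps image id -> rank
    (0 = most similar) on that SOM."""
    n = len(map_ranks)
    score = 0.0
    for img in positives:
        score += 1.0 - map_ranks[img] / n
    for img in negatives:
        score -= 1.0 - map_ranks[img] / n
    return score

def combine(maps_ranks, positives, negatives):
    """Weight each feature map by its (clamped) score and rank images
    by the weighted sum of their per-map ranks (lower is better)."""
    scores = [max(map_score(r, positives, negatives), 0.0) for r in maps_ranks]
    total = sum(scores) or 1.0
    weights = [s / total for s in scores]
    images = maps_ranks[0].keys()
    combined = {
        img: sum(w * r[img] for w, r in zip(weights, maps_ranks))
        for img in images
    }
    return sorted(images, key=lambda img: combined[img])
```

With feedback like "this face is relevant", a shape map that agrees with the user outweighs a colour map that does not, which matches the behaviour described above.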
Tuesday, January 20, 2009
In most biomedical disciplines, digital image data is rapidly expanding in quantity and heterogeneity, and there is an increasing trend towards the formation of archives adequate to support diagnostics and preventive medicine.
Exploration, exploitation, and consolidation of the immense image collections require tools to access structurally different data for research, diagnostics and teaching. Currently, image data is linked to textual descriptions, and data access is provided only via these textual additives. There are virtually no tools available to access medical images directly by their content or to cope with their structural differences. Therefore, visual-based (i.e. content-based) indexing and retrieval based on information contained in the pixel data of biomedical images is expected to have a great impact on biomedical image databases. However, existing systems for content-based image retrieval (CBIR) are not applicable to the special needs of biomedical imagery, and novel methodologies are urgently needed.
This special issue grew from the workshop Content-Based Image Retrieval: Major Challenges for Medical Applications at SPIE's International Symposium on Medical Imaging 2008 (Content-Based, 2008), which was convened to assess the status of CBIR within the biomedical clinical and research worlds, and to collect opinion from leading CBIR researchers about the most productive way forward. The workshop was structured around the concept of "gaps" (Deserno, Antani, & Long, 2008) between desired capabilities and use for medical CBIR, and what has actually been realized.
Thomas M. Deserno, RWTH Aachen University, Germany
Sunday, January 18, 2009
From a student co-authoring his first research paper to a busy post-doc or a strategy-thinking professional group leader, every scientist knows that research success is heavily based on interaction with coworkers and fellow scientists. This interaction happens visibly or invisibly in our daily lives as researchers: the chat in the library, the call to an author of a new paper or the emotional discussion at a convention reception. More formal interactions include searching for the latest research papers, talks or patents. Everybody develops his or her own habits in interacting with other researchers.
We have also experienced that research collaboration, the exchange of promising ideas or a cooperative grant application work best if the co-researcher is a trusted and known person. In the best case, he's a good friend.
"Social scientists have long recognized the importance of boundary-spanning individuals in diffusing knowledge (Allen 1977; Tushman 1977), and recently, several papers have rigorously demonstrated that technological knowledge diffuses primarily through social relations, not through publications."Sorenson, Olav and Singh, Jasjit, "Science, Social Networks and Spillovers" (December 26, 2006)
We'd like to give you the possibility to support this interaction with other researchers in form of a dedicated Web 2.0 application.
ResearchGATE offers tools tailored to researchers' need. Whether you are working with a co-researcher in a different country or even continent, or would like to find a forum to discuss your research ideas and results, ResearchGATE keeps you in touch with scientists all over the world. You can find new research contacts in people performing in the same field or in different fields using the same techniques as you do.
ResearchGATE connects researchers and information.
This article is about an epiphany I had a while back when I read an article by Alex Hildyard called Build a Reusable Graphical Charting Engine with C#. I realized that the timeline in a video editing program like Adobe After Effects is just an ordinary horizontal bar graph like the one described in Alex Hildyard's article, with a movable thumb added. WOW! In this article I created a basic video TimeLine control using the code from Alex Hildyard's article as a starting point, and I added a thumb slider to finish it off. Essentially it is an ordinary horizontal bar graph with zoom that displays video frames as horizontal bars on a graph. I decided to use DirectShow to play video to illustrate how the TimeLine control works. However, this is NOT intended as an article on DirectShow and doesn't belong in the section for DirectShow. This article and the enclosed sample project are designed to illustrate how to create a horizontal bar graph with zoom as a TimeLine control. I used the DirectShowLib-2005.dll from the DirectShow.NET library to play the video in this sample, but this project is NOT intended as a DirectShow project. This control and its code are completely independent of any code to play video, and you can use this control with VideoLAN or any other means of playing video.
TimeControl, Horizontal Bar Graph with Thumb Slider & Zoom
I added a Thumb Slider using part of the code from an article by Michal Brylka here on CodeProject, namely:Owner-drawn trackbar(slider) By Michal Brylka
I also used the following control for sliders on the TimeLine control, namely: Advanced TrackBar Control with MAC Style by NicolNghia. In order to select a video track I added buttons on the left side of the graph. I could have used ordinary buttons, but I decided to jazz up the look of the buttons and use the skinned buttons by ZapSolution found in another project on CodeProject, namely: Skinned Form Playing Audio and OpenGL.
Download TimeLine_src - 1.44 MB
Sunday, January 11, 2009
The PCI is an event established by the Greek Computer Society. The 1st Conference took place at Athens (1984), the 2nd at Thessaloniki (1988), the 3rd at Athens (1991), the 4th at Patras (1993), the 5th at Athens (1995), the 6th at Athens (1997), the 7th at Ioannina (1999), the 8th at Nicosia Cyprus (2001), the 9th at Thessaloniki (2003), the 10th at Volos (2005), the 11th at Patras (2007) and the 12th at Samos (2008). This year PCI 2009 will take place at Corfu Island.
Saturday, January 10, 2009
The IRMA database consists of 10,000 annotated radiographs taken randomly from medical routine at the RWTH Aachen University Hospital, Germany. The images are separated into 9,000 training images and 1,000 test images, subdivided into 57 classes. For CBMIR, the relevances are defined by the classes: given a query image from a certain class, all database images from the same class are considered relevant. The IRMA database was used in the ImageCLEF 2005 image retrieval evaluation for the automatic annotation task.
Thursday, January 8, 2009
By Mohammad Reza Khosravi
Would you like some fun? If you've had a busy day and want a distraction from your work, this is for you!
The logic behind Stereoscopy is very simple, but the results are amazing and amusing! Especially when you realize that you can make your own 3D environments with only 2D objects without any difficulties.
When you work with professional 3D software like 3D Studio or Maya, you work on a 3D environment, and see the results on your monitor in two dimensions (usually), but here it's different. It means that you work on a two dimensional environment, but the results are totally in 3D.
What is Stereoscopy?
Stereoscopy is a technique for viewing pictures in three dimensions; when you are looking at a stereogram, you can imagine that you are viewing the real scene from a window. Size, depth, and distance are perceptible as when viewing the original.
How Is It Possible?
Our eyes are separated by a distance of about 6-7 cm. It makes a difference in the point of view of each eye, and therefore the aspect of every scene is slightly different in the eyes. When these two different pictures fuse in the brain, it makes a 3D scene. Read More
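The disparity cue is easy to reproduce with a random-dot stereo pair: two identical noise images, except that a patch is shifted horizontally in one of them. When the two views fuse, the patch appears to float at a different depth. A quick sketch (the naive shift leaves a thin seam at the patch edge, which is fine for a demo):

```python
import random

def random_dot_pair(width=60, height=40, shift=2):
    """Build a random-dot stereo pair: identical binary noise images
    except that a central rectangle is shifted horizontally in the
    right view - the same disparity cue as the ~6-7 cm eye
    separation."""
    left = [[random.randint(0, 1) for _ in range(width)] for _ in range(height)]
    right = [row[:] for row in left]
    top, bottom = height // 4, 3 * height // 4
    x0, x1 = width // 4, 3 * width // 4
    for y in range(top, bottom):
        for x in range(x0, x1):
            right[y][x - shift] = left[y][x]  # shift the patch leftwards
    return left, right
```

Viewed cross-eyed (or printed side by side and fused), the central rectangle pops out of the noise even though neither image alone contains any visible shape.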
Simple Image Editor with Crop and Resize while Maintaining Aspect Ratio
By Member 3647417
I was developing a project that required adding images to a database. Most of the images were acquired from an 8 megapixel digital camera, so the sizes were quite large. I originally just re-sized the images proportionally to 1024 x 768 and called it good. But it bothered me that some images contained busy backgrounds or distractions that could be cropped out. This project is the result of my efforts.
Note that I don't claim to have written most of the code for this project. I have stood on the shoulders of giants, and learned and reused code from many different CodeProject articles. Hopefully, the whole is greater than the sum of its parts. This article is based upon code from ImageResizer.aspx and various CodeProject articles. Read More
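The proportional-resize arithmetic the article relies on is just a matter of taking the smaller of the two scale factors; a small sketch (the function name is mine):

```python
def fit_within(width, height, max_w, max_h):
    """Compute the largest size that fits inside (max_w, max_h)
    while preserving the source aspect ratio; never upscales."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)
```

An 8-megapixel 3264x2448 frame, for example, scales cleanly to 1024x768, while a portrait-orientation shot is constrained by height instead of width.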
Rendering Shapefile in OpenGL
By Durga Prasad Dhulipudi
ESRI shapefiles are a well-known vector data format used for mapping, a.k.a. GIS, applications. There are many open-source software packages, like JUMP and SharpMap, that let users view shapefiles. This article focuses on rendering them in an OpenGL console. I assume that the intended reader is familiar with OpenGL and understands concepts like linking with static or dynamic libraries in the Microsoft Visual Studio environment.
Shapefiles hold geographic information in the form of points, lines and polygons. For instance, political boundaries such as countries and states are treated as polygon shapefiles; linear features such as roads and rivers are line shapefiles; and airports or bus bays are sometimes treated as point shapefiles. Read More
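To make the format concrete, here is a sketch of parsing the 100-byte main file header, assuming the layout from the published ESRI shapefile specification (big-endian file code and length, then little-endian version, shape type and bounding box):

```python
import struct

# Shape type codes from the ESRI shapefile specification
SHAPE_TYPES = {0: "Null", 1: "Point", 3: "PolyLine", 5: "Polygon"}

def parse_main_header(data):
    """Parse the 100-byte shapefile main header: a big-endian file
    code (always 9994) and file length, then little-endian version
    (1000), shape type and X/Y bounding box."""
    file_code, = struct.unpack(">i", data[0:4])
    if file_code != 9994:
        raise ValueError("not a shapefile")
    length_words, = struct.unpack(">i", data[24:28])  # in 16-bit words
    version, shape_type = struct.unpack("<ii", data[28:36])
    xmin, ymin, xmax, ymax = struct.unpack("<4d", data[36:68])
    return {
        "length_bytes": length_words * 2,
        "version": version,
        "shape_type": SHAPE_TYPES.get(shape_type, shape_type),
        "bbox": (xmin, ymin, xmax, ymax),
    }
```

The bounding box read here is exactly what the renderer needs to set up its OpenGL orthographic projection before drawing the records.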
Wednesday, January 7, 2009
BRISC is a recursive acronym for BRISC Really IS Cool, and is (conveniently enough) also an anagram of CBIRS (Content-Based Image Retrieval System).
BRISC provides a framework for texture feature extraction and similarity comparison of computed tomography (CT) lung nodule images. It was written in C# .NET 2.0 using Visual Studio .NET 2005 and is designed to be functional and extensible. To browse this website and/or obtain BRISC, use the links on the left.
This project is funded by the National Science Foundation (NSF).
Here is a description of the project from the SPIE abstract:
In this paper we will present a content-based image retrieval (CBIR) system for a database of pulmonary nodule images, with a comparison of the effectiveness of various texture features and similarity measures in retrieving similar images from a medical database. We are particularly interested in how well texture feature analysis performs with lung nodules obtained from the Lung Image Database Consortium (LIDC). The LIDC provided a set of lung CT images along with information about nodules shown in these images. In our paper we will compare three different types of texture features: (1) Co-occurrence matrices, (2) Gabor filters, and (3) Markov random fields. These methods are used to extract a “feature vector” (a series of numbers) from images that represent the image’s signature. This vector is then compared with the vectors of other images by various similarity measures.
We have decided to base our evaluation on the idea that the first results returned by the system for a particular nodule should be other instances of that same nodule, perhaps on a different CT slice or marked and rated by a different radiologist. Thus, ground truth is determined by objective, a priori knowledge about the nodules. In this way, precision is defined as the number of retrieved instances of the query nodule divided by the number of retrieved images and recall is defined as the number of retrieved instances of the query nodule divided by the number of total instances of the query nodule. We have determined that Gabor-based image features generally perform better than global co-occurrence measures for the images in the LIDC database, with a maximum average precision of 68%.
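The precision and recall definitions above translate directly into code; `retrieved` and `relevant` below are illustrative names:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall as defined above: retrieved is the ranked
    list returned by the system, relevant the set of all instances
    of the query nodule (the a priori ground truth)."""
    hits = sum(1 for img in retrieved if img in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Averaging the precision over all query nodules at a fixed retrieval depth gives figures comparable to the 68% maximum average precision quoted for the Gabor features.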
Author: Shashank J, Kowshik P, Kannan Srinathan and C.V. Jawahar
Download the Paper (TR)
Author: Gartheeban Ganeshapillai
Download the paper
Saturday, January 3, 2009
56th Annual Meeting
location: Toronto, Ontario, Canada
date: 13 June 2009 until 17 June 2009
deadline: 13 January 2009 for abstract
9th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery
location: Boston, Massachusetts, USA
date: 17 June 2009 until 20 June 2009
deadline: 30 January 2009 for abstract
15th Annual Meeting of the Organization for Human Brain Mapping
location: San Francisco, California, USA
date: 18 June 2009 until 22 June 2009
deadline: 11 January 2009 for abstract
IEEE Computer Society Conference on Computer Vision and Pattern Recognition
location: Miami Beach, Florida, USA
date: 20 June 2009 until 26 June 2009
deadline: 13 November 2008 for abstract
Computer Assisted Radiology and Surgery
location: Berlin, Germany
date: 23 June 2009 until 27 June 2009
deadline: 10 January 2009 for paper
6th IEEE International Symposium on Biomedical Imaging: From Nano to Macro
location: Boston, Massachusetts, USA
date: 28 June 2009 until 01 July 2009
deadline: 19 January 2009 for paper
24th International Symposium on Cerebral Blood Flow, Metabolism and Function
location: Chicago, Illinois, USA
date: 29 June 2009 until 03 July 2009
deadline: 14 January 2009 for abstract
21st Biennial International Conference on Information Processing in Medical Imaging
location: Williamsburg, Virginia, USA
date: 05 July 2009 until 10 July 2009
deadline: 12 January 2009 for paper
International Conference on Image Analysis and Recognition
location: Halifax, Canada
date: 06 July 2009 until 08 July 2009
deadline: 23 January 2009 for paper
Demo sessions of video retrieval systems are ideal venues to disseminate scientific results. Existing demo sessions, however, fail to engage the audience fully. Real-time evaluation of several video retrieval systems in a single showcase increases impact. Encouraged by the success of previous editions, we will again organize a VideOlympics showcase at the ACM International Conference on Image and Video Retrieval.
The major aim of the VideOlympics is to promote video retrieval research. An additional goal is to give the audience a good perspective on the possibilities and limitations of current state-of-the-art systems. Where traditional evaluation campaigns like TRECVID focus primarily on the effectiveness of collected retrieval results, the VideOlympics also takes into account the influence of interaction mechanisms and advanced visualizations in the interface. Specifically, we aim for a showcase that goes beyond the regular demo session: it should be fun to do for the participants and fun to watch for the conference audience. For all these reasons, the VideOlympics should only have winners. Similar to previous years, a number of TRECVID participants will simultaneously perform an interactive search task during the VideOlympics showcase event.
New in 2009
For the first time, the 2009 edition of the VideOlympics will include a round with novice users, in addition to the round with expert users. The novice users will be selected from a group of high-school teenagers from the island of Santorini, for whom a decent level of English can be assumed. Moreover, each team is allowed to provide each novice user with a short training session with its video search engine (amount to be defined).
The details of all the databases for ImageCLEF 2009 will still need to be defined. Registration will open in early February and registration forms will be available from the CLEF web pages.
The following tasks are planned in 2009:
* a photographic retrieval task,
* a medical retrieval task,
* a robot vision task,
* a medical automatic image annotation task, and
* an image retrieval task from a collection of wikipedia images.
More information on the details of these tasks will follow shortly.
The exact schedule will depend on the individual tasks and is still subject to changes as we have not yet received the copyright for all databases we would like to use.
A tentative global schedule can be found here (please look at the page of each task for more details):
* 15.1.2009 : registration opens for all CLEF tasks
* 15.3.2009 : data release
* 15.4.2009 : topic release
* 15.5.2009 : submission of runs
* 15.7.2009 : release of results
* 15.8.2009 : submission of working notes papers
* 30.9-2.10.2009 : CLEF workshop in Corfu, Greece
Adding a simple wireless camera to your Lego Mindstorms NXT can turn your robot into a machine vision solution. We added a camera to our TriBot and now it can find balls and cones. In this video you will see the robot performing its task of picking up a blue ball and moving it towards an orange cone. You can find out how we did all of this using the RoboRealm robot vision software at http://www.roborealm.com/ - just check out the tutorials section. Read more
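RoboRealm's actual pipeline is not shown here, but the basic idea of finding a coloured ball can be sketched as a naive colour-blob detector: threshold the blue channel against the others and return the centroid of the matching pixels (the threshold values below are arbitrary):

```python
def find_blue_ball(pixels, width, height, threshold=100):
    """Naive colour-blob detector (an illustration, not RoboRealm's
    method): mark pixels whose blue channel clearly dominates, then
    return the centroid of the marked region, or None if nothing
    matches."""
    xs, ys = [], []
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[y * width + x]
            if b > threshold and b > r + 50 and b > g + 50:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Steering then reduces to comparing the blob centroid against the image centre and turning the robot toward it, which is essentially what the tutorial's TriBot does.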