Monday, August 31, 2009

CCCT 2010

Call for Papers/Abstracts and Invited Sessions Proposals for The 8th International Conference on Computing, Communications and Control Technologies: CCCT 2010 (April 6-9, 2010 - Orlando, Florida, USA). http://www.2010iiisconferences.org/CCCT

Deadlines:

Papers/Abstracts Submissions and Invited Sessions Proposals: September 30th, 2009
Authors Notifications: November 16th, 2009
Camera-ready, full papers: December 16th, 2009

All submitted papers/abstracts will go through three reviewing processes: (1) double-blind (at least three reviewers), (2) non-blind, and (3) participative peer review. These three kinds of review will support the selection of the papers/abstracts to be accepted for presentation at the conference, as well as those to be selected for publication in the JSCI Journal.

Pre-conference and post-conference virtual sessions (via electronic forums) will be held for each session included in the conference program, so that session papers can be read before the conference and authors presenting in the same session can interact during the week before and after the conference. Authors can also participate in peer-to-peer reviewing in virtual sessions.

Submissions for Face-to-Face or for Virtual Participation are both accepted. Both kinds of submissions will have the same reviewing process and the accepted papers will be included in the same proceedings.

Authors of accepted papers who have registered for the conference can access the evaluations and any feedback provided by the reviewers who recommended acceptance of their papers/abstracts, so they can improve the final versions accordingly. Non-registered authors will not have access to the reviews of their submissions.

The registration fee of an effective invited session organizer will be waived according to the policy described on the web page (click on 'Invited Session', then on 'Benefits for the Organizers of Invited Sessions'), where you can find information about the ten benefits for an invited session organizer. For Invited Sessions Proposals, please visit the conference web site, or go directly to http://www.2010iiisconferences.org/ccct/organizer.asp

Authors of the best 10%-20% of the papers presented at the conference (including those virtually presented) will be invited to adapt their papers for publication in the Journal of Systemics, Cybernetics and Informatics.

Dictionary of Computer Vision and Image Processing

John Wiley has agreed to extend the set of online definitions to cover the sections 0 and A-G from the book

Dictionary of Computer Vision and Image Processing

Robert Fisher, Ken Dawson-Howe, Andrew Fitzgibbon,
Craig Robertson, Emanuele Trucco
John Wiley and Sons, June 2005

It's on the web at:

http://homepages.inf.ed.ac.uk/rbf/CVDICT/

Should you feel so inclined, you can purchase the full book at:

http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470015268.html

Saturday, August 29, 2009

The Seventh IASTED International Conference on Signal Processing, Pattern Recognition and Applications ~SPPRA 2010~

February 17 – 19, 2010
Innsbruck, Austria

Purpose

The Seventh IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2010) will be an international forum for researchers and practitioners interested in the advances in, and applications of, signal processing and pattern recognition. It is an opportunity to present and observe the latest research, results, and ideas in these areas.

Concurrent Conferences

SPPRA 2010 will be held in conjunction with the IASTED International Conferences on:

Innsbruck is nestled in the valley of the Inn River and tucked between the Austrian Alps and the Tuxer mountain range. It has twice hosted the Winter Olympics and is surrounded by the eight ski regions of the Olympic Ski World, including the Stubai Glacier, which offers skiing year round. Climbing the 14th century Stadtturm on Herzog Friedrich Strasse provides a stunning view of the town and the breathtaking scenery that surrounds it. Concerts at Ambras Castle provide listening pleasure in a beautiful renaissance setting. The sturdy medieval houses and sidewalk cafés of Old Town Innsbruck beckon you to sit for a while and watch people stroll by.

With its unique blend of historical, intellectual, and recreational pursuits, Innsbruck offers something for every visitor. SPPRA 2010 will be held at the world-famous Congress Innsbruck, located in the heart of the city, near the historical quarter.


Scope

The topics of interest to be covered by SPPRA 2010 include, but are not limited to:

APPLICATIONS
  • Economics
  • Engineering
  • Manufacturing
  • Medicine
  • Ocean Engineering
  • Others
  • Radar
  • Remote Sensing
  • Robotics
  • Seismic
  • Telecommunications
PATTERN RECOGNITION

Computer Vision

  • 3D and Range Data Analysis
  • Computational Geometry
  • Content-based Retrieval
  • Geometric and Morphologic Analysis
  • Neural Network Applications
  • Stereo Vision
  • Visualization

Image Analysis

  • Image Database Indexing
  • Image Processing
  • Image Sequence Processing
  • Image Synthesis
  • Medical Image Analysis
  • Pattern Recognition

Friday, August 28, 2009

Elevator Pitch: Imprezzeo aims to be the Google of image search

Article from www.guardian.co.uk

Imprezzeo is initially targeting the business-to-business market with its image search but sees even bigger potential with retail, social media and dating sites

There's huge, untapped potential in the image search sector, according to the business-to-business service Imprezzeo. Backed by Independent News & Media, Imprezzeo is initially targeting news agencies, photo-sharing sites and commercial photo libraries, but thinks the bigger potential could include retail, social media and even dating sites - all of which would benefit from searching by image, rather than text, says chief executive Dermot Corrigan.

Set up in October 2007 and launched in beta one year later, Imprezzeo employs seven staff in London and at its development base in Sydney, Australia.

Imprezzeo chief executive Dermot Corrigan

• Explain your business to my Mum

"Imprezzeo allows users to click on images to find other similar images. Think of it as a 'more like this' feature for photos and pictures. It does not rely on the text associated with an image to find similar stuff but the actual content of the image itself. So by selecting or uploading a relevant example, your mum can find the image she wants on a photo-sharing site, a search engine or even a retail site, much more accurately and much faster.

"Most image or picture searches use text tags to produce their results which means you have to sift through pages of irrelevant results to get what you want. Imprezzeo uses a combination of content-based image retrieval and facial recognition technology that identifies images that closely match a sample. So you pick an image that is close to what you want from the initial search results, or you can upload your own, and the technology will find other similar images."
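Imprezzeo's actual engine is proprietary, but the "pick an image, find similar images" idea described above can be illustrated with a minimal content-based retrieval sketch: rank a small image library by colour-histogram intersection with a query image. All function names here are hypothetical, and a real system would add texture and face features as the article describes.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalised joint colour histogram of an RGB image
    (H x W x 3 uint8 array) with bins**3 cells."""
    q = (image.astype(int) // (256 // bins)).reshape(-1, 3)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def more_like_this(query_image, library, k=3):
    """Rank library images by histogram intersection with the query
    (higher intersection = more similar); return the top-k indices."""
    q = color_histogram(query_image)
    scores = [np.minimum(q, color_histogram(img)).sum() for img in library]
    return list(np.argsort(scores)[::-1][:k])
```

Selecting a result and re-querying with it, as the interview describes, is just another call to `more_like_this` with the chosen image.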

• How do you make money?

"We sell our search technology to companies that have large image libraries - newspapers, stock libraries and so on - but we're talking to all sorts of companies to develop tools for a whole range of markets beyond that: retailers, for example, can use it to recommend products (if you liked this red bag, you might also like these similar products) and search engines can use it to improve the search experience. We're even looking at rolling out an application to let consumers better search and organise their personal photo collections, online or on the desktop."

Imprezzeo image search

• How many users do you have now, and what's your target within 12 months?

"We launched our beta product in October 2008 and have a number of trials going on in our initial target market segments. When these go live, they will expose us to many millions of users. 2009, though, will see us move beyond these segments into those suggested above, and so we are optimistic that 2009 should see Imprezzeo become the major power behind image search on the web."

• What's your background?

"Mainly in large media businesses - information, news and communications. I started out at Frost & Sullivan, the technology market analyst firm and then moved into the news business with PR Newswire. At LexisNexis I ran the news aggregation business and led a number of its initiatives in technology-led markets. Before Imprezzeo I worked with a number of digital media businesses, which I still have interests in, and did a stint doing some strategic consultancy for Wolters Kluwer, a large publishing, software and services group."

• How do you plan to survive the downturn?

"We're keeping the business lean and focusing on clear sales targets. We're in a strong position as we can prove value and return-on-investment to prospects.

"I'd argue that web businesses in the main will fare better than many others I could mention. There will be casualties but we have some very talented people and three other very important assets: a sound revenue model, a compelling value proposition and technology with a definite 'wow' factor."

• What's your biggest challenge?

"Not taking on too much too quickly. The potential applications for this are huge, and we're always thinking about the next stage of development."

• Name your closest competitors

"Idee do something similar (though we see their focus as on image recognition rather than proximity search) and I have no doubt this is a development area for the big web search players. It may in the end come down to who has the best mousetrap and right now I think that's us."

• Which tech businesses or web thinkers are the ones to watch?

"While I have to declare an interest as one of the backers of strategyeye.com, I do think it is essential intelligence if you want to know what's what in the digital media world. I tend to appreciate sites for their utility rather than fun which explains why LinkedIn continues to impress (as much as a business development tool as anything else) and Videojug is essential. Like.com is pioneering visual search for online shopping in the US which is all to the good for a company like us and it looks like 'social investing' (in the sense of observing the investment decisions of others as opposed to ethical investment) has arrived with covestor.com - one for the long haul though."

• Who's your mentor?

"I've had a number who have been positive influences in my career. Arsene Wenger inspires me as much as any of them."

• How's your work/life balance?

"Having three children means that you have to keep a balance. My wife understands what we are trying to achieve here so she takes the trips to Australia in her stride (less so when she finds out I also get to spend time with a friend who lives in Bondi). While I work long hours, working at weekends tends to be a no-no."

• What's the most important piece of software or web tool that you use each day?

"Google desktop search."

• Where do you want the company to be in five years?

"Providing the benchmark for image search. Once people realise what they can do to find images, they won't accept the old way of doing things any more."

imprezzeo.com

http://www.guardian.co.uk/media/pda/2009/aug/26/image-searchengines

Wednesday, August 26, 2009

AIAI 2010

The abundance of information and the increase in computing power currently enable researchers to tackle highly complicated and challenging computational problems. Solutions to such problems are now feasible using advances and innovations from the area of Artificial Intelligence. The general focus of the AIAI 2010 conference is to provide insights on how Artificial Intelligence may be applied in real-world situations and serve the study, analysis and modelling of theoretical and practical issues. Research papers describing advanced prototypes, innovative systems, tools and techniques are also encouraged. General survey papers indicating future directions and professional work-in-progress reports are of equal interest. Acceptance will be based on quality, originality and the practical merit of the work.

Authors are invited to electronically submit original, English-language research contributions or experience reports. Submitted papers must present unpublished work that is not under consideration by other journals or conferences.



Topics

Suggested topics include, but are not limited to, the following:

Theoretical Advances

  • Machine Learning
  • Adaptive Control
  • Data Fusion
  • Reasoning Methods
  • Knowledge Acquisition and Representation
  • Planning and Scheduling
  • Artificial Neural Networks
  • Expert Systems
  • Fuzzy Logic and Systems
  • Genetic Algorithms and Programming
  • Particle Swarm Optimisation
  • Bayesian Models

Knowledge Engineering

  • Data Mining and Information Retrieval
  • Decision Support Systems
  • Knowledge Management for e-Learning and Enterprise Portals
  • Intelligent Information Systems
  • Web- and Knowledge-Based Information Systems
  • Ontologies

Signal Processing Techniques and Knowledge Extraction

  • Computer Vision
  • Human-Machine Interaction / Presence
  • Learning and Adaptive Systems
  • Pattern Recognition
  • Signal and Image Processing
  • Speech and Natural Language Processing

Multimedia, Graphics and Artificial Intelligence

  • Multimedia Computing
  • Multimedia Ontologies
  • Smart Graphics
  • Colour/Image Analysis
  • Speech Synthesis

Trends in Computing

  • Accessibility and Computers
  • Affective Computing
  • Agent and Multi-Agent Systems
  • Autonomous and Ubiquitous Computing
  • Distributed AI Systems and Architectures
  • Grid-Based Computing
  • Intelligent Profiling and Personalisation
  • Robotics and Virtual Reality

Artificial Intelligence Applications

  • eBusiness, eCommerce, eHealth, eLearning
  • Engineering and Industry
  • Environmental Modelling
  • Finance
  • Telecommunications - Transportation
  • Crisis and Risk Management
  • Medical Informatics and Biomedical Engineering
  • Political Decision Making
  • Natural Language Processing
  • Planning and Resource Management
  • Project Management
  • Emerging Applications
  • Forensic Science

Other

  • AI and Ethical Issues
  • Evaluation of AI Systems
  • Social Impact of AI

http://www.cs.ucy.ac.cy/aiai2010/index.html

Tuesday, August 25, 2009

The problem of search engines and keyword searches

Article from Jason Slater

Introduction

For my ongoing search engine research project I need to understand much more about search mechanics and to gain a deeper understanding of where it is heading. Search engines have come a long way since their humble beginnings in directory listings, and approachable, accessible keyword search techniques have driven their popularity for finding information on the Internet (Li et al., 2008). However, perform a few searches and you may discover that keyword searching for non-trivial information is as much a problem today as it was in the early days of search (Finkelstein et al., 2002).

Searching for non-trivial information can be broadly split into three areas (Torrey et al., 2009):

  • Locating and navigating to sources of information
  • Making sense of the content presented
  • Engaging in the process of social seeking of information

When considering the future impact of search engine mechanics in the context of information retrieval, there are three factors that may be useful in measuring success: coverage of information, unbiased content, and user focus - the information should be presented fairly, be accurate, and be accessible and relevant to the searcher's needs (Datta et al., 2008).

When looking into search using keyword techniques, it may be useful to consider what we would do if search were not an option – this may offer some insight into how we look for information and into how we decide which information is useful to us.

The starting point for the analysis is the question: if the Internet did not exist – what would be the process for finding new information?

For example, if I wanted to know more about black holes – what steps might I take?

I will probably break this down into a few steps and consider what I am looking for, where I might find more information, and why I need it. Answering these three questions appropriately will offer some useful insight that we may be able to apply later.

Step 1: What…?

So we start with the question What…? – What are we looking for? We already know that – we want to know more about black holes. Where next? The next thing might be to decide the format of the information I need. For example:

  • Do I have just a passing interest? If so, I could simply ask someone.
  • Am I writing an academic paper?  If so, I need researched, peer reviewed material.
  • Do I need an image of a black hole? Could I use an image library?
  • Is it for a competition? What level of detail do I need?
  • Have I seen a black hole and wanted to find out if it was dangerous?
  • Do I have concerns about black holes in my immediate vicinity?
  • Is my interest similar to black holes but not exactly black holes?

The last three points start to clarify our requirement for information and indicate some new areas that might lead to further information.

Step 2: Where…?

There are many sources of information, ranging from local gossip to academic research papers. The next step in our process would be to decide where to start looking for this information:

  • Ask someone close to me for more information
  • Buy a book or magazine related to my interest
  • Contact a professional who might have detailed knowledge
  • Telephone someone – for example the local observatory
  • Do a college course – this may take longer but could give a good grounding into what we are looking for
  • Call someone out – a builder or pest control perhaps?
  • Borrow a book from someone
  • Visit the library
  • Watch television

Hopefully, you noticed the “call a builder or pest control” point – what sort of black holes do I really need more information about? Now we are starting to explore the context of the question. Context and Clarification are becoming important factors in finding a solution to our problem.

Read More

Sunday, August 23, 2009

Numenta Vision Toolkit

The Numenta Vision Toolkit allows you to easily create, train and optimize an HTM Network for categorizing images. No development skills required.

Beta 2 is now available. The Toolkit can now search the web for images and download them into your project. It greatly accelerates the process of collecting data. This release also includes many bug fixes, and all current Toolkit users should upgrade. Download the new version below.


The Numenta Vision Toolkit is a graphical application that lets you train an HTM on your own images. You can use the trained system to recognize new images. The Toolkit does not require any programming.

The bulk of the effort is in collecting and preparing your training images. To accelerate this process, the Toolkit can now search the web for images and download them automatically. You will still need to spend time selecting and cleaning up your training images. Your training images should be uncluttered, so you may need to use the mask tool in the Toolkit to remove distracting objects. Refer to the [tutorial] for more information.

Once you have trained an HTM, you can upload it to Numenta Web Services. Then you can access it online and even use it in a web or mobile application.

http://www.numenta.com/vision/vision-toolkit.php

Numenta

Numenta is creating a new type of computing technology modeled on the structure and operation of the neocortex. The technology is called Hierarchical Temporal Memory, or HTM, and is applicable to a broad class of problems from machine vision, to fraud detection, to semantic analysis of text. HTM is based on a theory of neocortex first described in the book On Intelligence by Numenta co-founder Jeff Hawkins, and subsequently turned into a mathematical form by Numenta co-founder Dileep George.
Numenta is a technology tools and platform provider rather than an application developer. We work with developers and partners to configure and adapt HTM systems to solve a wide range of problems.
HTM technology has the potential to solve many difficult problems in machine learning, inference, and prediction. Some of the application areas we are exploring with our customers include recognizing objects in images, recognizing behaviors in videos, identifying the gender of a speaker, predicting traffic patterns, doing optical character recognition on messy text, evaluating medical images, and predicting click through patterns on the web. The world is becoming awash with data of all types, whether numeric, video, text, images or audio, making it challenging for humans to sort through it and find what’s important. HTM technology offers the promise of making sense of all that data.
An HTM system is not programmed in the traditional sense; instead it is trained. Sensory data is applied to the bottom of the hierarchy of an HTM system and the HTM automatically discovers the underlying patterns in the sensory input. HTMs learn what objects or movements are in the world and how to recognize them, just as a child learns to identify new objects.
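Numenta's NuPIC API is not shown here, but the "trained, not programmed" idea above can be illustrated with a deliberately tiny piece of competitive learning: prototype vectors are repeatedly exposed to input data and drift toward the patterns they win, so the clusters in the data are discovered rather than coded by hand. This is a loose analogy only, not HTM itself, and every name below is hypothetical.

```python
import numpy as np

def learn_prototypes(data, n_prototypes=2, lr=0.2, epochs=20):
    """Each training vector is shown to the network; the nearest
    prototype 'wins' and moves toward it, so the prototypes end up
    discovering the clusters present in the input."""
    protos = data[:n_prototypes].astype(float).copy()
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))
            protos[winner] += lr * (x - protos[winner])
    return protos
```

Nothing in the loop says where the clusters are; like the child in the analogy above, the system infers them from exposure alone.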
Numenta's first implementation of HTM technology is a software platform called NuPIC, the Numenta Platform for Intelligent Computing. Numenta has also released a Vision Toolkit (beta) and is developing a Prediction Toolkit. These toolkits simplify the task of creating HTM networks for specific problems. We invite you to download the Vision Toolkit or NuPIC to start experimenting with HTM technology. These programs are available for free under a research license. Also be sure to register for the Numenta Newsletter to learn about future releases of the Toolkits as well as other developments in the HTM world.

http://www.numenta.com/

Maxthon Tests Tear-off Video For New Browser Feature

Beijing – Maxthon International is testing a feature for its award-winning browser that would allow users to watch two Internet videos at the same time, or detach videos that continue to play while the user works on a different Web page.

When the Float button is enabled in Maxthon's configuration panel, a small button labeled “Detach Video” appears in the upper right corner of any video when it's played. Clicking the button makes Maxthon detach the video from the rest of the page. The user can drag it anywhere on the screen – or to a second screen – and it will continue to play while the user surfs other pages.

video button

The detached video automatically is set to stay on top of other windows, but it can be hidden by clicking a push-pin button.  When the video window is closed Maxthon reattaches it to the original Web page.

Maxthon developers said that they did not know when the feature would be added to the browser, but they said it is already very stable.

Detached video
Floating Windows Let Users Watch Video While Surfing at a Different Web Site

http://blog.maxthon.com/

Monday, August 17, 2009

Definition of an automated Content-Based Image Retrieval (CBIR) system for the comparison of dermoscopic images of pigmented skin lesions

New generations of image-based diagnostic machines are based on digital technologies for data acquisition; consequently, the diffusion of digital archiving systems for the preservation and cataloguing of diagnostic exams is increasing rapidly. To overcome the limits of the current state-of-the-art text-based access methods, we have developed a novel content-based search engine for dermoscopic images to support clinical decision making.
Methods: To this end, from 2004 to 2008 we enrolled 3415 Caucasian patients and collected 24804 dermoscopic images corresponding to 20491 pigmented lesions with known pathology.
The images were acquired with a well-defined dermoscopy system and stored to disk in 24-bit-per-pixel TIFF format using interactive software developed in C++, in order to create a digital archive.
Results: The image analysis system consists of extracting low-level representative features, which permits the retrieval of images similar in colour and texture from the archive, using a hierarchical multi-scale computation of the Bhattacharyya distance between the representations of all database images and the representation of the user-submitted query.
Conclusions: The system is able to locate, retrieve and display dermoscopic images similar in appearance to one given as a query, using a set of primitive features, not tied to any specific diagnostic method, that visually characterize the image. Similar search engines could find use in all sectors of diagnostic imaging, or of digital signals, that are supported by the information available in medical archives.
Authors: Alfonso Baldi, Raffaele Murace, Emanuele Dragonetti, Mario Manganaro, Oscar Guerra, Stefano Bizzi, Luca Guerra
Credits/Source: BioMedical Engineering OnLine 2009, 8:18
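The paper's hierarchical multi-scale representation is not reproduced here, but the core ranking step can be sketched with plain global colour histograms and one common form of the Bhattacharyya distance, d = sqrt(1 - BC) with BC = sum(sqrt(p*q)); the function names are hypothetical.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between two normalised histograms p and q via the
    Bhattacharyya coefficient BC = sum(sqrt(p * q))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    bc = np.sum(np.sqrt(p * q))
    return np.sqrt(max(0.0, 1.0 - bc))  # 0 = identical, 1 = disjoint

def rank_by_similarity(query_hist, db_hists):
    """Indices of database histograms, most similar first."""
    d = [bhattacharyya_distance(query_hist, h) for h in db_hists]
    return list(np.argsort(d))
```

In the paper this distance is computed hierarchically at multiple scales over colour and texture features rather than over a single global histogram.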

Read More

Friday, August 14, 2009

Visual Information MAnagement

VIMA (Visual Information MAnagement) licenses image filtering software and image search software to solution providers who require accurate and real-time image content filtering to block pornographic images or who are looking to add the most advanced image search refinement methodology to their large image collections.   VIMA is a leader in visual search technology and image content filtering.

VIMA's Pornographic image blocking modules are the most accurate, the quickest in execution, and have the smallest software footprint among all alternatives.   We have achieved this by incorporating our proprietary image feature extraction, the latest machine learning technology, our proprietary indexing schema designed specifically for visual features, and the most advanced adaptive learning methods.   VIMA's technology is ideal for those solution providers who want to emphasize multi-modal image filtering and image search solutions that incorporate each image's visual character together with other existing metadata to create the perceptually most accurate image categorization, image filtering, and image matching solutions.

VIMA's family of products offers two functionalities: image search and image filtering (image categorization). The search products are uniquely effective because they incorporate VIMA's patented adaptive learning and dynamic partial matching functions. The categorization products use the most advanced machine-learning techniques, so users can tune filters to their particular culture, sensitivities, and categories.

http://www.vimatech.com/

Image Nudity Filter

Image Nudity Filter can be used to determine whether an image may contain nudity.

It analyses the colors used in different sections of an image to determine whether they match human skin tones.
As a result of the analysis, it returns a score that reflects the probability that the image contains nudity.
Additionally, it can output the analysed image with the skin-tone pixels marked in a given color.
Currently it can analyse images in PNG, GIF, and JPEG formats.
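The class's own thresholds are not published here, but the skin-tone scoring idea can be sketched with a widely cited RGB skin heuristic and a skin-pixel fraction as the score; treat both the rule and the names below as illustrative assumptions, not this class's implementation.

```python
import numpy as np

def skin_mask(image):
    """Boolean mask of pixels matching a common textbook RGB
    skin-tone rule (not this class's actual thresholds)."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) &
            (r - np.minimum(g, b) > 15) &
            (np.abs(r - g) > 15))

def nudity_score(image):
    """Fraction of skin-tone pixels: a crude proxy for the score
    the class returns."""
    return float(skin_mask(image).mean())
```

Marking the detected pixels, as the class does, is just a matter of assigning a colour to `image[skin_mask(image)]`.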

Download

Thursday, August 13, 2009

When did the Cranfield tests become the “Cranfield paradigm”?

Article From http://blog.codalism.com/?p=817

It is common these days to see the traditional method of evaluating an information retrieval system against a test collection referred to as the “Cranfield paradigm”. For instance, Emine Yilmaz and Javed Aslam in their 2006 CIKM paper, Estimating Average Precision with Incomplete and Imperfect Judgments, denote “the test collection methodology adopted by TREC” as “the Cranfield paradigm”, and similar uses can be found in recent papers by Sakai, Scholer et al., Harman and Hiemstra, and many others besides. It is such a distinctive usage that I came to wonder when it was introduced.

The phrase “Cranfield paradigm” does not, of course, appear in any of the Cranfield reports themselves, nor in the early literature describing the experiments at Cranfield. Contributors to Sparck Jones’s 1981 book Information Retrieval Experiment speak of work done in the same “tradition” as Cranfield (Sparck Jones, page 2), of “the ‘normal’ or archetypal retrieval test” of which Cranfield is an (but not the only) example (Robertson, page 19), or a “body of practice” based on Cranfield and later investigations (Tague, p 59), but nowhere are paradigms mentioned, nor is Cranfield even treated in a particularly paradigmatic way (despite a chapter being devoted to the Cranfield tests, and the book being dedicated to Cleverdon, the director of those tests). By the time of the 1992 Information Processing and Management special issue on information retrieval evaluation, the word “paradigm” had entered the lexicon, with Donna Harman observing in the introduction that “the test collection paradigm has … caused some major problems”, Tague-Sutcliffe declaring that “a paradigmatic shift has occurred in the research front, to user-centered from system-centered models” (page 467), and Michael Keen noting that “there is no perfect paradigm for the laboratory test” (page 491). Robertson and Hancock-Beaulieu even talk about the lack of “any kind of paradigm or consensus” regarding the concept of relevance (page 458). However, while a reference to Cranfield often lurks nearby, none of these authors actually use the phrase “Cranfield paradigm” directly. It seems that it had not yet entered mainstream usage.

The first usage of the phrase “Cranfield paradigm” appears (judging in part from Google Scholar) to be in an early, little-cited paper by B. C. Brookes, presented at SIGIR in 1980. Brookes sets out to “question the continued usefulness of what I call the `Cranfield paradigm’”, a formulation that suggests that Brookes is introducing what he believes to be a novel usage. Brookes’ paper is a discursively theoretical one, reflecting on the theory of science, Shannon’s definition of information, whether “information retrieval” should actually be called “document retrieval”, whether it should be measured on a linear or a logarithmic scale, as well as philosophical monism and dualism, the nineteenth century debate between the vitalist and physicalist schools of organic chemistry, and other such matters. He cites Bishop Berkeley, Socrates, and Einstein, describes Karl Popper’s World 3, and quotes Thomas Kuhn at length (of whom more later). Brookes ends in a manner not frequently repeated in later SIGIR papers by stating that “we need a firmer metaphysic for our studies”.

Brookes’s paper did not cause a revolution in the science of information retrieval, nor does it seem to have popularised the phrase “Cranfield paradigm” (which he repeats in a 1983 paper in the Journal of Information Science). The next usage appears to be in Towards an information logic, a paper presented by Keith van Rijsbergen at SIGIR 1989. This has a section entitled “The Cranfield Paradigm”, which van Rijsbergen defines as one in which relevance is treated as a hidden variable, only indirectly accessible by collecting data from the user, a method which “represents an extreme descriptivist approach to the science of IR” (page 78). Van Rijsbergen also goes on to claim (rather boldly, given subsequent history) that the use of IR techniques in multimedia-rich environments means that “one might say that we have come to the end of the empirical era in IR” (page 79).

Van Rijsbergen does not cite Brookes, so whether his use of the phrase derives from Brookes, or is an independent coinage, or is derived from somewhere else, is unclear. An author (one of the few) who does cite Brookes’ work is David Ellis, in his 1984 article Theory and explanation in information retrieval research, but without reference to the “Cranfield paradigm”. However, paradigms later become a recurrent theme in Ellis’s work, beginning in 1992 with The Physical and Cognitive Paradigms in Information Retrieval Research. Ellis traces the concept of a paradigm back to Kuhn, and then attempts to describe what makes Cranfield a paradigm in the Kuhnian sense. For Ellis, Cranfield is a “physical paradigm”: a model of the information retrieval system as a physical machine, and of the retrieval experimentation as a physical experiment. Ellis quotes Cleverdon’s description of the Cranfield approach as being like testing in a wind-tunnel to underline this point.

Tuesday, August 11, 2009

A new book on the history of scientific imagery explores the promises and pitfalls of the easily-manipulated medium.

Faith and the Scientific Image

Seedmagazine/ Review / by Veronique Greenwood / May 30, 2009

When I snapped my first picture under the electron microscope, I was breathless at the detail of the image: I could see the long, lovely arch of the interior of a seminiferous tubule and a great mass of flagella whipping out into the lumen. I turned to the grad student who was teaching me the technique, agape at what I’d been able to capture, and he smiled. “I have my first micrograph framed and hanging on my wall,” he said. To this day I keep my micrographs on my desk at the Seed offices, where they stand ready to deliver inspiration.

Imaging is one of the foundations of modern science. It can also be one of its most exciting elements for young scientists—nowhere is the pursuit of truth and the revelation of the invisible as well embodied as in the scientific image. Whether it’s the fluorescence of a protein, the X-ray shadow of a crystal, or the tracks of a radioactive nucleus, seeing raw data can be as thrilling as making a discovery. In the midst of these riches, it’s easy to forget that science underwent a dramatic metamorphosis when photography became possible.

Andrew Davidhazy, Tape-dispenser as seen in colour when placed between polarizers, 2005

The Exposures Series’ Photography and Science (Reaktion Books, May 2009) is a meaty, detailed treatise on the history of scientific photography, as well as the science of photography. The book, written by Kelley Wilder, a senior research fellow in the Department of Imaging and Communication Design at the UK’s De Montfort University, is split into four sections—observation, experimentation, building archives, and art and the scientific photograph. Each explores how the development of light-sensitive emulsions and their descendants, including micrographs and radiograms, reinvented the way science was done. When cameras and emulsions first became more widely available in the mid-1800s, photography seemed to promise true scientific objectivity for the first time, helping to catalyze the shift away from theory towards observation. But how truly reliable was it? “Within the little-told tale of sensitivity data and characteristic curves,” writes Wilder, “exists a struggle over faith in the photographic image as an experimental instrument and, eventually, as evidence.” This is a tale about our reliance on imaging technology for the truth, and how much has stood between its reality and its promise.

Every section of Photography and Science is loaded with curious revelations on the tribulations of scientific photography. When Venus crossed in front of the sun in 1874, Wilder recounts, massive expeditions were dispatched around the world to observe and photograph the planet. Every effort was made to standardize the teams’ emulsions, but the results were still so varied in sensitivity that Venus sometimes appeared to be square with round corners, and its edges in most cases were too soft for good measurements of diameter. As a replacement for naturalists’ guidebooks or anatomical drawings, photography also yielded mixed results, but for different reasons: In its ability to record everything, photography has no way of emphasizing what information is important or characteristic in a specimen or malady. A photograph of a person with elephantiasis, for example, cannot capture the “undifferentiated tissue abnormalities that occurred between one specimen and another,” Wilder writes. Thus for doctors and naturalists, a photograph has often proven much less useful than a drawing.

http://seedmagazine.com/content/article/photography_and_science/

Monday, August 10, 2009

A ROI image retrieval method based on CVAAO

Image and Vision Computing,
Volume 26, Issue 11, 1 November 2008, Pages 1540-1549
Yung-Kuan Chan, Yu-An Ho, Yi-Tung Liu, Rung-Ching Chen

Abstract
A novel image feature called color variances among adjacent objects (CVAAO) is proposed in this study. Characterizing the color variances between contiguous objects in an image, CVAAO can effectively describe the principal colors and texture distribution of the image and is insensitive to distortion and scale variations. Based on CVAAO, a CVAAO-based image retrieval method is constructed: given a full image, it delivers to the user the database images most similar to that image. This paper also presents a CVAAO-based ROI image retrieval method: given a clip, it returns to the user a database image containing a target region most similar to the clip. The experimental results show that the CVAAO-based ROI image retrieval method offers impressive results in finding the database images that meet user requirements.
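The paper's exact CVAAO definition is not reproduced in the abstract, but the core idea, measuring color variation between neighbouring segmented regions, can be illustrated with a toy sketch. Everything here (the segmentation input, the per-pair Euclidean statistic) is a simplifying assumption for illustration, not the authors' actual feature:

```python
import numpy as np

def adjacent_color_variances(image, labels):
    """Toy sketch: one scalar per pair of adjacent regions, the Euclidean
    distance between their mean colors (not the paper's exact statistic).
    image: H x W x 3 float array; labels: H x W integer region map."""
    # mean color of each segmented region
    means = {r: image[labels == r].mean(axis=0) for r in np.unique(labels)}
    # collect pairs of horizontally/vertically adjacent, distinct regions
    pairs = set()
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            pairs.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            pairs.add((min(a, b), max(a, b)))
    return sorted(float(np.linalg.norm(means[a] - means[b])) for a, b in pairs)
```

A descriptor built this way depends only on which regions touch and how their colors differ, which is one intuition for why such a feature could be robust to scaling.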
Article Outline
1. Introduction
2. Related works
2.1. Review of ROI image retrieval methods
2.2. The genetic algorithm
2.3. ANMRR
3. CVAAO and CVAAO-based image retrieval method
4. The CVAAO-based ROI image retrieval method
4.1. Database creating
4.2. Image querying aspect
4.2.1. The candidate region image segmenting stage
4.2.2. The region image matching stage
4.3. Suitable parameters decision
5. Experiments
5.1. Performances of CVAAO-based and CVAAO-based ROI image retrieval methods
5.2. The robustness in resisting the variations of images
6. Conclusions
References

Read More

Sunday, August 9, 2009

Mario AI Competition

This competition is about learning, or otherwise developing, the best controller (agent) for a version of Super Mario Bros. 
The controller's job is to win as many levels (of increasing difficulty) as possible. Each time step (24 per second in simulated time) the controller has to decide what action to take (left, right, jump, etc.) in response to the environment around Mario.
We are basing the competition on a heavily modified version of the Infinite Mario Bros game by Markus Persson. That game is an all-Java tribute to Nintendo's seminal Super Mario Bros game, with the added benefit of endless random level generation. We believe that playing this game well is a challenge worthy of the best players, the best programmers and the best learning algorithms alike.
One of the main purposes of this competition is to be able to compare different controller development methodologies against each other, both those based on learning techniques such as artificial evolution and those that are completely hand-coded. So we hope to get submissions based on evolutionary neural networks, genetic programming, fuzzy logic, temporal difference learning, human ingenuity, hybrids of the above, etc. The more the merrier! (And better for science.)
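To make the per-timestep decision loop concrete, here is a minimal sketch of a hand-coded reactive controller. The real competition interface is a Java package; the observation fields and function names below are invented for illustration only:

```python
def simple_controller(observation):
    """Hypothetical reactive policy: always run right, jump when something
    blocks the way (field names are invented, not the competition's API)."""
    action = {"left": False, "right": True, "jump": False, "speed": True}
    if observation.get("obstacle_ahead") or observation.get("gap_ahead"):
        action["jump"] = True
    return action

def run_episode(observations):
    """The environment queries the controller once per tick (24 ticks per
    simulated second) with Mario's current surroundings."""
    return [simple_controller(obs) for obs in observations]
```

A learning-based entry would replace the hand-coded rule with, say, an evolved neural network mapping the observation to the action dictionary; the per-tick query loop stays the same.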
There are cash prizes associated with each phase of the competition: USD 500 for the winner of the CIG phase, and USD 200, 100 and 50 respectively to the winners of the ICE-GIC phase. At least one member of the winning team needs to be registered and present at the relevant conference to receive the prize money; however, it is possible to win the competition and receive the certificate without attending the conference.
We welcome feedback on the web page, the organization, and the software.

How to participate (it's easy!)

If you plan to participate, you should join the Mario Competition Google Group. All technical and organizational questions should be posted to this group, where they will be answered by the organizers and stored in a searchable archive.
You participate in the competition by submitting a controller. Your submission could consist of a piece of Java code and/or a WOX file; see the submission instructions for details.
But first you will have to develop your controller, using your method of choice and the Java software package. First of all, look at the getting started page; more technical information coming soon.
As people submit their controllers, we will publish a league table for the controllers submitted so far. At the end of the competition, source code for all controllers will be posted on the final league table.

In association with the IEEE Consumer Electronics Society Games Innovation Conference 2009 and with the IEEE Symposium on Computational Intelligence and Games

http://julian.togelius.com/mariocompetition2009/

Friday, August 7, 2009

Touchable Holography

Recently, mid-air displays have been attracting a lot of attention in the fields of digital signage and home TV, and many types of holographic displays have been proposed and developed. Although we can "see" holographic images as if they were really floating in front of us, we cannot "touch" them, because they are nothing but light. This project adds tactile feedback to the hovering image in 3D free space. Tactile sensation requires contact with objects, but including a stimulator in the work space dilutes the appearance of the holographic images. The Airborne Ultrasound Tactile Display solves this problem by producing tactile sensation on a user's hand without any direct contact and without diluting the quality of the holographic projection.

Read More

Thursday, August 6, 2009

New Robust OCR dataset

Article From Machine Learning, etc

I've collected this dataset for a project that involves automatically reading bibs in pictures of marathons and other races. This dataset is larger than the robust-reading dataset of the ICDAR 2003 competition, with about 20k digits, and more uniform because it's digits-only. I believe it is more challenging than the MNIST digit recognition dataset.
I'm now making it publicly available in hopes of stimulating progress on the task of robust OCR. Use it freely, with the only requirement that if you are able to exceed 80% accuracy, you have to let me know ;)
The dataset file contains raw data (images), as well as a Weka-format ARFF file for a simple set of features.
For completeness I include the MATLAB script used for initial pre-processing and feature extraction, and a Python script to convert the space-separated output into ARFF format. Check "readme.txt" for more details. <Yaroslav Bulatov>
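The dataset's own conversion script is not reproduced here, but the space-separated-to-ARFF step it describes is straightforward. The sketch below shows the general shape of such a conversion; the relation name, attribute names, and assumption that the label is the last column are all invented for illustration and may not match the included script:

```python
def to_arff(rows, relation="digits", classes="0123456789"):
    """Sketch: turn 'f1 f2 ... label' lines into ARFF text (relation and
    attribute names are invented; the dataset's real script may differ)."""
    parsed = [r.split() for r in rows if r.strip()]
    n_features = len(parsed[0]) - 1          # assume last column is the class
    lines = ["@RELATION " + relation, ""]
    lines += ["@ATTRIBUTE f%d NUMERIC" % i for i in range(n_features)]
    lines.append("@ATTRIBUTE class {%s}" % ",".join(classes))
    lines += ["", "@DATA"]
    lines += [",".join(p) for p in parsed]   # ARFF data rows are comma-separated
    return "\n".join(lines)
```

The resulting text declares each feature as NUMERIC and the class as a nominal attribute, which is the standard ARFF header layout Weka expects.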
Dataset

Monday, August 3, 2009

CFP ICAART 2010 - Int'l Conf. on Agents and Artificial Intelligence: extended deadline

Let me kindly inform you that the paper submission deadline for ICAART 2010 (International Conference on Agents and Artificial Intelligence) has been extended to September 3rd, which is rapidly approaching in case you're interested in submitting a paper. ICAART will be held in Valencia (Spain) next year, on January 22 - 24.
In cooperation with the Association for the Advancement of Artificial Intelligence (AAAI), the Portuguese Association for Artificial Intelligence (Associação Portuguesa Para a Inteligência Artificial - APPIA), the Spanish Association for Artificial Intelligence (Asociación Española de Inteligencia Artificial - AEPIA), the Workflow Management Coalition (WfMC) and the Association for Computing Machinery (ACM SIGART), ICAART brings together top researchers and practitioners in several areas of Artificial Intelligence: on one hand, Agents, Multi-Agent Systems and Software Platforms, Distributed Problem Solving and Distributed AI in general, including web applications; on the other hand, non-distributed AI, including traditional areas such as Knowledge Representation, Planning, Learning, Scheduling and Perception, as well as less traditional areas such as Reactive AI Systems, Evolutionary Computing and other aspects of Computational Intelligence, and many other areas related to intelligent systems.
Submitted papers will be subject to a double-blind review process. All accepted papers will be published in the conference proceedings, under an ISBN reference, on paper and on CD-ROM. The proceedings will be indexed by DBLP and INSPEC, and we are awaiting confirmation of indexing by the Thomson Conference Proceedings Citation Index and EI.
Additionally, a selection of the best papers of the conference will be published in a book, by Springer-Verlag. Best paper awards will be given during the conference.
Please check further details at the ICAART conference web site (http://www.icaart.org/). There you will find detailed information about the conference structure and its main topic areas.
Workshops and special sessions are also invited. If you wish to propose a workshop or a special session, for example based on the results of a specific research project, please send a proposal to the ICAART secretariat. Workshop chairs and Special Session chairs will benefit from logistic support and other types of support, including secretariat and financial support, to facilitate the development of a valid idea.

CFP: 3DPVT'10

5th International Symposium 3D Data Processing, Visualization and Transmission
Espace Saint Martin, Paris, France, May 17-20, 2010
http://www.3dpvt2010.org
This meeting presents new research ideas and results related to the capture, representation, compact storage, transmission, processing, editing, optimization and visualization of 3D data. These topics span a number of research fields from applied mathematics, computer science, and engineering: computer vision, computer graphics, geometric modeling, signal and image processing, bioinformatics, and statistics. This symposium follows previous highly successful events in Padova 2002, Thessaloniki 2004, Chapel Hill 2006 and Atlanta 2008.

Scope of the Conference

Topics of interest include those listed below.
ACQUISITION & RECOGNITION   
    - 3D scanning components, software, and systems
    - 3D view registration 
    - 3D photography algorithms
    - Multi-view geometry and calibration   
    - 3D shape retrieval and recognition
    - Shape and reflectance reconstruction
    - Surface reflectance recovery and modeling
PROCESSING & TRANSMISSION   
    - Shape registration and similarity measures
    - Interpolation, smoothing, and feature extraction 
    - Shape analysis and morphology
    - Medial axis and segmentation 
    - Statistical analysis of families of shapes
    - Simplification and resampling of shapes with photometric data
    - Compression and transmission of still and dynamic 3D scenes   
    - Streaming and progressive refinements
    - 3D Video 
INTERACTION & VISUALIZATION
    - Image-based rendering and modeling
    - Man/Machine interaction with 3D data
    - Interactive visualization of complex scenes   
    - Multi-resolution rendering
    - Haptic sensors and new human-shape interaction modalities
    - Psychophysics of 3D sensing and haptics   
    - 3D printing and rapid prototyping
    - Augmented reality and virtual environments
    - 3D tele-immersion and remote collaboration
APPLICATION AREAS INCLUDING: 
    - Architecture and urban modeling   
    - Medical and biomedical
    - Cultural heritage and forensic
    - Terrain modeling, archaeology, and GIS
    - 3D television and free-viewpoint video
    - Games and digital animations 
    - Design and reverse engineering
    - Manufacturing and inspection
    - Tourism and real estate   
    - Security and training   

Important dates

5pm GMT December 16, 2009: Submission of full paper
May 17-20, 2010: Conference