Tuesday, March 31, 2009

Special Session on Low-Level Color Image Processing (APSIPA ASC 2009)

Color perception plays an important role in object recognition and scene
understanding both for humans and intelligent vision systems. Recent advances
in digital color imaging and computer hardware technology have led to an
explosion in the use of color images in a variety of applications including
medical imaging, content-based image retrieval, biometrics, watermarking,
digital inpainting, remote sensing, digital multimedia, visual quality
inspection, and many others. As a result, automated processing and analysis
of color images has become an active area of research, as witnessed by the
large number of publications during the past two decades. The multivariate
nature of color image data presents new challenges for researchers and
practitioners as the numerous methods developed for single channel images are
often not directly applicable to multichannel images.
The goal of this special session is to bring together researchers and
practitioners working in the area of Color Image Processing. We are soliciting
original contributions, which address a wide range of theoretical and practical
issues related to the early stages of the Color Image Processing pipeline
including, but not limited to:

* Color Image Coding and Compression
* Color Image Quantization and Halftoning
* Color Image Filtering and Enhancement
* Color Morphology
* Color Edge Detection
* Color Image Segmentation
* Digital Camera Image Processing (Demosaicking, Zooming, Postprocessing, etc.)
* Multispectral Image Processing
* Applications and Future Trends

Prospective authors are invited to submit either long papers up to 10 pages in
length, or short papers up to 4 pages in length. The long papers will be for
the single-track oral presentation, whereas the short ones will be mostly for
poster presentation. In submitting a manuscript to APSIPA ASC, the authors
acknowledge that no paper substantially similar in content has been or will be
submitted to another conference during the review period. Paper submissions can
be made at

Submission deadline: April 30, 2009
Notification of acceptance: July 1, 2009
Camera-ready deadline: August 1, 2009



In this paper, an image clustering method based on color, texture, and shape content, essential for efficient content-based image retrieval in large image databases, is proposed. The dominant HSV (Hue, Saturation, and Value) triples, extracted from a quantized HSV joint histogram of the image region, are used to represent the color information in the image. Entropy and the maximum entry of co-occurrence matrices are used for texture information, and an edge-angle histogram is used to represent shape information. Owing to its algorithmic simplicity and several merits that facilitate the implementation of the neural network, Fuzzy ART is exploited for image clustering. The original Fuzzy ART suffers an unnecessary increase in the number of output neurons when noisy input is presented. Therefore, an improved Fuzzy ART algorithm is proposed to resolve this problem by updating committed and uncommitted nodes differently and re-checking the vigilance test. To show the validity of the proposed algorithm, experimental results on image clustering performance and a comparison with the original Fuzzy ART are presented in terms of recall rates.
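As a rough illustration of the color feature described in the abstract, the sketch below quantizes pixels into an HSV joint histogram and returns the dominant bucket. The bin counts and the flat pixel-list input are assumptions for illustration, not the paper's actual settings:

```python
import colorsys
import numpy as np

def dominant_hsv(rgb_pixels, h_bins=8, s_bins=4, v_bins=4):
    """Quantize pixels into an HSV joint histogram and return the most
    populated (H, S, V) bucket. Bin counts here are illustrative only."""
    hist = np.zeros((h_bins, s_bins, v_bins), dtype=np.int64)
    for r, g, b in rgb_pixels:
        # colorsys returns each HSV component in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * h_bins), h_bins - 1),
             min(int(s * s_bins), s_bins - 1),
             min(int(v * v_bins), v_bins - 1)] += 1
    return tuple(int(i) for i in np.unravel_index(np.argmax(hist), hist.shape))
```

For example, a region that is mostly saturated red maps to the bucket with hue index 0 and maximum saturation and value indices.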

International Journal of Information Technology and Decision Making, 2007, 6(2).

The MIR Flickr Retrieval Evaluation Database is now supported in img(anaktisi)

The MIR Flickr Retrieval Evaluation Database is now supported in img(anaktisi). The new MIRFLICKR-25000 collection consists of 25000 images downloaded from the social photography site Flickr through its public API.

M. J. Huiskes, M. S. Lew (2008). The MIR Flickr Retrieval Evaluation. ACM International Conference on Multimedia Information Retrieval (MIR'08), Vancouver, Canada – Read More about MIR Flickr

Visit img(Anaktisi) or Read more

Saturday, March 28, 2009

Color indexing

Article From

Citation: Swain, M. J. and Ballard, D. H. Color indexing. International Journal of Computer Vision 7, 1 (Nov. 1991), 11-32. (PDF)

Abstract: Computer vision is embracing a new research focus in which the aim is to develop vision skills for robots that allow them to interact with a dynamic, realistic environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot’s goals. Two fundamental goals are [identifying an object at a known location and] determining the location of a known object. Color can be successfully used for both tasks.

This article demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms and a fast incremental version of Histogram Intersection, which allows real-time indexing into a large database of stored models. For solving the location problem it introduces an algorithm called Histogram Backprojection, which performs this task efficiently in crowded scenes.

Discussion: This 1991 computer vision paper introduced the concept of identifying and finding objects in images using their color histograms.

In 1991, as computer vision systems were moving away from offline processing of static photographs and into real-time use by mobile robots with inexpensive cameras attached, there was a pressing need for efficient algorithms for doing simple vision tasks like identifying, finding, and tracking objects. Until the publication of this paper, most of these techniques were based on the most obvious attribute, shape recognition, but this was both computationally expensive and fragile: the slightest rotation or occlusion of an object (stuff going in front of it) could radically alter its perceived shape. Color-based recognition is more robust: the colors of an object don’t change very much as it moves, rotates, or becomes occluded by other objects. It can work with extremely low-resolution images (in one experiment, the authors got acceptable performance on 8 × 5 pixel images!). On the other hand, it’s very sensitive to the color and intensity of lighting, and the image has to be normalized to account for this. It also, obviously, has difficulty distinguishing objects that have similar colors.

The details of the system are simple: divide the space of all colors up into fairly large "buckets" - typically 16 buckets along each color axis (e.g. red, green, blue). For example, the RGB color (170, 24, 255) would land in the bucket (10, 1, 15). Then, for each object you want to recognize or find, take a model photo of it and count the number of pixels falling into each bucket; this is called the color histogram. To compare two color histograms, they use the Histogram Intersection operator: this takes the minimum of the two counts (from each image) for each bucket, and then adds them up. This value is normalized by dividing it by the number of pixels in the model image. Images with very similar histograms will have intersection values close to 1, whereas ones with different histograms will have much smaller values. By comparing an image to be identified against each image in a database of models, each with a precomputed color histogram, this can rapidly locate the best-matching model.
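The bucketing and Histogram Intersection steps just described fit in a few lines of NumPy. This is a sketch, not the authors' code; it uses the 16-buckets-per-axis setup from the example above:

```python
import numpy as np

def color_histogram(image, bins=16):
    """Bucket counts for an (H, W, 3) uint8 RGB image. With 16 bins per
    axis, the RGB color (170, 24, 255) falls in bucket (10, 1, 15)."""
    b = image.astype(np.uint16) * bins // 256  # per-pixel bucket indices
    hist = np.zeros((bins, bins, bins), dtype=np.int64)
    np.add.at(hist, (b[..., 0].ravel(), b[..., 1].ravel(), b[..., 2].ravel()), 1)
    return hist

def histogram_intersection(image_hist, model_hist):
    """Sum of per-bucket minima, normalized by the model's pixel count.
    Identical histograms score 1.0; disjoint ones score 0.0."""
    return np.minimum(image_hist, model_hist).sum() / model_hist.sum()
```

Identification then reduces to scoring the query image against every precomputed model histogram and taking the best match.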

Locating objects is very similar: each pixel is assigned a value based on its bucket and how common that bucket is in the model image of the object being sought. Then it looks for a region containing many large values. This is facilitated by using a convolution with a circle - effectively “blurring” the image and mixing nearby values - followed by location of the pixel with the largest value.
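Histogram Backprojection can be sketched in the same style. Two assumptions in this sketch: a square box blur stands in for the paper's convolution with a circular mask, and the common ratio-histogram formulation (model count over scene count, capped at 1) is used for the per-pixel values:

```python
import numpy as np

def box_blur(a, r):
    """Mean over a (2r+1)x(2r+1) window, computed via an integral image."""
    n = 2 * r + 1
    ap = np.pad(a, r, mode='edge')
    c = np.zeros((ap.shape[0] + 1, ap.shape[1] + 1))
    c[1:, 1:] = ap.cumsum(0).cumsum(1)
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def backproject_peak(image, model_hist, bins=16, radius=7):
    """Score each pixel by how strongly its color bucket appears in the
    model histogram relative to the scene, blur the scores, and return
    the (row, col) of the strongest response."""
    b = image.astype(np.uint32) * bins // 256  # per-pixel bucket indices
    scene_hist = np.zeros((bins, bins, bins))
    np.add.at(scene_hist, (b[..., 0].ravel(), b[..., 1].ravel(), b[..., 2].ravel()), 1)
    ratio = np.minimum(model_hist / np.maximum(scene_hist, 1), 1.0)
    scores = ratio[b[..., 0], b[..., 1], b[..., 2]]
    return np.unravel_index(np.argmax(box_blur(scores, radius)), image.shape[:2])
```

The blur radius should roughly match the expected size of the sought object in the image, so that the peak marks a region full of model colors rather than a single stray pixel.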

The approach of the paper is actually more general than it sounds: virtually any image feature that you can construct a histogram of can be applied to the same tasks using the same approach. This includes features such as local geometry, local texture, rough estimated size, and so on. The sensitivity of the algorithm to any particular feature can be tuned by adjusting the number of bucket divisions along that dimension. For example, Niblack et al.'s QBIC system (1993), still in use today by IBM in DB2, uses color, texture, and shape simultaneously. In 1999 a combination of features using this approach was applied effectively to content-based image retrieval in Tao and Grosky's "Spatial Color Indexing: A Novel Approach for Content-Based Image Retrieval" (at Citeseer).

Unsurprisingly, the paper demonstrated that the performance of color indexing is severely degraded by changes in lighting; this is one reason that the work did not appear until 1991, after work on color correction and normalization had appeared that can be used in a preprocessing step to effectively cope with these issues. The original paper only examined differences in brightness, and performed a trivial normalization of brightness values to demonstrate its advantage. Dealing with illumination-invariant color matching - particularly when the light changes in color - would be the subject of several later publications such as Matas et al.'s "Color-Based Object Recognition under Spectrally Nonuniform Illumination" and "On Representation and Matching of Multi-Coloured Objects" (1995).

The color indexing paper shows its age when it comes to scale: concerned only with robotics applications, the authors considered 66 objects to be a "large database." Today, object identification, classification, and location is studied primarily in the context of image retrieval and image filtering, where there are frequently millions of images to consider with thousands of objects and with no control over image conditions. The "incremental" histogram intersection optimization presented in this paper - essentially, only looking at a few of the buckets with the largest counts - enables it to scale to moderate-sized databases, but not to anything as large as modern applications require. Since then, more scalable approaches to color indexing databases have been developed, such as Albuz et al.'s "Scalable Color Image Indexing and Retrieval Using Vector Wavelets" (2001) and hierarchical clustering based approaches such as those of Abdel-Mottaleb et al. ("Performance Evaluation of Clustering Algorithms for Scalable Image Retrieval", 1998).

The paper employs combinatorial logic, and a bit of guesswork, to argue that the number of distinct color profiles is sufficiently large to allow a very large number of potential objects to be distinguished. In Stricker and Swain’s “The Capacity of Color Histogram Indexing” (1994, Citeseer) an interesting connection between color indexing and coding theory provides a bound on the actual number of distinguishable images in a color indexing database, which they call its capacity.

However, objects are not necessarily well-distributed among these profiles in practice; in the original experiments the test objects were relatively easy to identify and distinguish, with clear color histograms - household items with illustrated packaging, for example. Humans regularly distinguish objects with nearly identical color histograms, such as different types of trees or different brands of cars of the same color; and also regularly identify objects with very different color histograms as being the same, such as a person wearing two different sets of clothes. Color indexing has not, as far as I am aware, been extended to such difficult identification problems.

A peculiar property of color indexing is that, unlike many other object identification and location methods, it has no clear analogy in human visual processing - indeed, the paper cites work by cognitive psychology researcher Anne Treisman showing that humans have demonstrated poor performance at locating objects based on their colors. I’m not aware of any new psychology research investigating the role of color histograms in object location and attention in humans.

Keith Price maintains a bibliography of papers related to recognition by color indexing.


Europe poised for exponential growth in digitized medical imaging storage space

Written by Sam Collins, Contributing Writer

Article From

LONDON -- Medical images are increasingly becoming digitized. However, the exponential growth of digitized medical images poses an immense challenge in terms of management, compression and retrieval.

It is essential that image archive storage solution providers, picture archiving and communication system vendors and image modality manufacturers become aware of the growing requirements of storage space.

New analysis from Frost & Sullivan, Strategic Outlook Into Archive Requirements For Image Management In Medical Imaging, finds that the total European storage requirement in 2007 was 106,044 terabytes (TB). In this research, Frost & Sullivan's expert analysts thoroughly examine medical image storage solutions markets in the UK, France, Spain, Germany, Scandinavia, Benelux and Italy.

"There is an increasing demand for digitizing medical images as opposed to the traditional film-based images," said Frost & Sullivan Research Analyst Shriram Shanmugham.

"Unlike film-based images, digital images do not decay over time and can easily be stored for longer periods of time. Digitized images require less inventory space and the same image can be accessed by multiple physicians simultaneously."

Moreover, the turn-around time from the initial meeting with the physician to receiving a complete diagnosis is reduced. As a result, patients can expect quicker appointments with physicians, and they can have permanent access to the images from remote sites.

However, certain images are not DICOM compatible and require a service-oriented approach to be archived. This is primarily because evolving healthcare standards such as DICOM and HL7 are being updated at a much slower pace than image archiving and image modality technology.

Other challenges include ensuring interoperability with hospital-based information systems. Another issue is that diagnostic procedures such as echocardiography and angiography generate high-resolution, large-file-size images, whose long retrieval times pose a concern for hospitals.

"Some PACS vendors provide their own unique solution to archiving images that are not DICOM compatible, while others think it is wise to work around the evolving healthcare standards so that, in the future, systems interoperability is streamlined," said Shanmugham. "This trend of providing solutions to images that are not DICOM compatible will be prevalent over the next five to seven years."

The digitized medical imaging archives market requires complete cooperation among the following three major industry participants: PACS vendors, image modality manufacturers and storage solution providers. Some PACS vendors have indicated that it would be convenient for them if image modality manufacturers provided them with test data before an image modality is released onto the market. By having the test data beforehand, PACS vendors affirmed that they could easily establish connectivity (interoperability) of their modules with the image modality.

"Hospitals cannot afford to experience an image server downtime," said Shanmugham. "It is therefore essential that storage solution providers devise innovative technology that obviates the possibility of such server downtime."

Two new improvements to Google results pages

Today we're rolling out two new improvements to Google search. The first offers an expanded list of useful related searches and the second is the addition of longer search result descriptions -- both of which help guide users more effectively to the information they need.
More and better search refinements
Starting today, we're deploying a new technology that can better understand associations and concepts related to your search, and one of its first applications lets us offer you even more useful related searches (the terms found at the bottom, and sometimes at the top, of the search results page).
For example, if you search for [principles of physics], our algorithms understand that "angular momentum," "special relativity," "big bang" and "quantum mechanics" are related terms that could help you find what you need.

Let's look at a couple of examples in other languages. In Russian, for the query [гадание на картах] (fortune-telling with cards), the algorithms find the related terms "таро" (tarot), "ленорман" (lenormand) and "тибетское гадание мо" (tibetan divination mo). In Italian, if you search for [surf alle canarie] (surf at the canary islands), we now offer suggestions based on the three most famous Canary Islands: "lanzarote," "gran canaria," and "fuerteventura":

We are now able to target more queries, more languages, and make our suggestions more relevant to what you actually need to know. Additionally, we're now offering refinements for longer queries — something that's usually a challenging task. You'll be able to see our new related searches starting today in 37 languages all around the world.
And speaking of long queries, that leads us to our next improvement...

Read More

Thursday, March 26, 2009

Call for Papers "Recent Patents on Computer Science"

Bentham Science Publishers has launched a series of innovative journals publishing review articles on recent patents in major therapeutic areas of drug discovery as well as the biotechnology, nanotechnology, engineering, computer science and material science disciplines. Please refer to Bentham Science's website for further details.

An exciting journal entitled Recent Patents on Computer Science (CSENG) was launched in January 2008. This journal publishes review articles written by experts on recent patents in the field of Computer Science. Please visit the journal's website for the Editorial Board, the first journal issue, abstracts of recent issues and other details.

Recent Patents on Computer Science (CSENG) is indexed in Genamics JournalSeek and Compendex.

Artificial Intelligence and Soft Computing (ASC) 2009

The International Conference on Artificial Intelligence and Soft Computing (ASC 2009) will be a major forum for international researchers and professionals to present their latest research, results, and ideas in all areas of artificial intelligence and soft computing. ASC 2009 aims to strengthen relations between industry, research laboratories, and universities. All submissions will be double blind reviewed by at least two reviewers. Acceptance will be based primarily on originality and contribution.

ASC 2009 will be held in conjunction with the IASTED International Conferences on:


Topics covered by ASC 2009 include, but are not limited to:

  • Knowledge Acquisition
  • Knowledge Representation
  • Logic Programming
  • Probabilistic Reasoning
  • Natural Language Processing

and many more...

Sunday, March 22, 2009


Product Description
The growing use of multimedia data is likely to accelerate, creating an urgent need for effective means of capturing, storing, indexing, retrieving, analyzing, and summarizing this data through image data.

Artificial Intelligence for Maximizing Content Based Image Retrieval discusses major aspects of content-based image retrieval (CBIR) using current technologies and applications within the artificial intelligence (AI) field. Providing state-of-the-art research from leading international experts, this book offers a theoretical perspective and practical solutions for academicians, researchers, and industry practitioners.


Saturday, March 21, 2009

Pixelmator 1.4

Pixelmator, the beautifully designed, easy-to-use, fast and powerful image editor for Mac OS X, has everything you need to create, edit and enhance your images.

Someone who is editing images must be able to select the right shapes, portions or objects in images. With Pixelmator's powerful, pixel-accurate collection of selection tools you can quickly and easily select any part of your images. That means you can edit and apply special effects to portions of your pictures, remove unwanted objects or even cut out objects from one picture to put on another. Thanks to the masks palette in Pixelmator, you can even save your selections for later. Now, that's handy.

Thanks to Pixelmator's graphics drawing tablet support, you can now freely hand-draw or paint with the Pencil, Brush, and Clone Stamp tools, or erase with the tablet's eraser. What's more, you can take advantage of the tablet's pressure sensitivity to play with incredibly fast Blur and Sharpen tools.

Pixelmator is based on Core Image technology that uses your Mac's video card for image processing. Core Image utilizes the graphics card for image processing operations, freeing the CPU for other tasks. And if you have a high-performance card with increased video memory (VRAM), you'll find real-time responsiveness across a wide variety of Pixelmator operations. Pixelmator is blistering-fast on the latest PowerPC and all Intel-based Macs.

What if you just love having fun with filters, but think that Pixelmator doesn't have enough of them? Well, think again: Pixelmator significantly outshines other applications with its powerful plug-in architecture that takes advantage not only of Core Image units, but also of Quartz Composer compositions. This means you can simply download or create your own Core Image unit or even Quartz Composer composition and play with it right away in Pixelmator.

Thursday, March 19, 2009

Sixth Sense

Tuesday, March 17, 2009

The MIRFLICKR-25000 Image Collection

The new MIRFLICKR-25000 collection consists of 25000 images downloaded from the social photography site Flickr through its public API.

We are doing our best to make sure the image collection is going to be:

  • OPEN
    Access to the collection is simple and reliable, with image copyright clearly established. This is realized by selecting only images offered under the Creative Commons license. See the copyright section below.
  • INTERESTING
    Images are also selected based on their high interestingness rating. As a result the image collection is representative for the domain of original and high-quality photography.
  • USEFUL
    In particular for the research community dedicated to improving image retrieval. We have collected the user-supplied image Flickr tags as well as the EXIF metadata and make it available in easy-to-access text files. Additionally we provide manual image annotations on the entire collection suitable for a variety of benchmarks.

MIRFLICKR-25000 is an evolving effort with many ideas for extension. So far the image collection, metadata and annotations can be downloaded below. If you enter your email address before downloading, we will keep you posted of the latest updates.


Numenta is creating a new type of computing technology modeled on the structure and operation of the neocortex. The technology is called Hierarchical Temporal Memory, or HTM, and is applicable to a broad class of problems from machine vision, to fraud detection, to semantic analysis of text. HTM is based on a theory of neocortex first described in the book On Intelligence by Numenta co-founder Jeff Hawkins, and subsequently turned into a mathematical form by Numenta co-founder Dileep George.
Numenta is a technology platform provider rather than an application developer. We work with developers and partners to configure and adapt HTM systems to solve a wide range of problems.
HTM technology has the potential to solve many difficult problems in machine learning, inference, and prediction. Some of the application areas we are exploring with our customers include recognizing objects in photos, recognizing behaviors in videos, identifying the gender of a speaker, predicting traffic patterns, doing optical character recognition on messy text, evaluating medical images, and predicting click through patterns on the web. The world is becoming awash with data of all types, whether numeric, video, text, images or audio, making it challenging for humans to sort through it and find what’s important. HTM technology offers the promise of making sense of all that data.
An HTM system is not programmed in the traditional sense; instead it is trained. Sensory data is applied to the bottom of the hierarchy of an HTM system and the HTM automatically discovers the underlying patterns in the sensory input. HTMs learn what objects or movements are in the world and how to recognize them, just as a child learns to identify new objects.
Numenta's first implementation of HTM technology is a software platform called NuPIC, the Numenta Platform for Intelligent Computing, which is available to developers under a free research license. Numenta also is developing a Vision Toolkit and a Prediction Toolkit that will simplify the task of creating HTM networks for specific problems. Interested partners and developers should download NuPIC for experimentation and register for the Numenta Newsletter to learn about future releases of the Toolkits as well as other developments in the HTM world.

Vision4 Demo Application


This demonstration application, called Vision4, gives a sense of how Hierarchical Temporal Memory (HTM) performs in recognizing objects in static images. 
At the Numenta HTM workshop in June 2008, we released an example of a NuPIC vision application that was similar to Vision4. We are releasing the Vision4 demo for two reasons. First, we have structured Vision4 as a self-contained application, allowing non-technical people to install and use it, unlike the prior release, which required programming skills. Second, we have made improvements to the accuracy of the included HTM network. As a result, Vision4 performs much better than the workshop release.

Vision4 contains an HTM network trained on four image categories. It also includes a set of 50 novel test images and allows you to experiment with your own test images. Although Vision4 recognizes only four categories of objects, the number of images and possible variations of images within those categories is huge. The Vision4 application isn't perfect but it performs well on what is widely acknowledged to be a difficult pattern recognition task.

Monday, March 16, 2009

Google SketchUp 7.0.10247

Google SketchUp (free) is an easy-to-learn 3D modeling program that enables you to explore the world in 3D. With just a few simple tools, you can create 3D models of houses, sheds, decks, home additions, woodworking projects - even space ships. And once you've built your models, you can place them in Google Earth, post them to the 3D Warehouse, or print hard copies.

  • Click on a shape and push or pull it to create your desired 3D geometry.
  • Experiment with color and texture directly on your model.
  • Real-time shadow casting lets you see exactly where the sun falls as you model.
  • Select from thousands of pre-drawn components to save time drawing.

FastStone Image Viewer 3.7

FastStone Image Viewer is a fast, stable, user-friendly image browser, converter and editor. It has a nice array of features that include image viewing, management, comparison, red-eye removal, emailing, resizing, cropping and color adjustments.

Its innovative but intuitive full-screen mode provides quick access to EXIF information, the thumbnail browser and major functionalities via hidden toolbars that pop up when your mouse touches one of the four edges of the screen. Other features include a high-quality magnifier and a musical slideshow with 150+ transitional effects, as well as lossless JPEG transitions, drop shadow effects, image frames, scanner support, histogram and much more.

  • Common image formats support, including loading of JPEG, JPEG2000, GIF, BMP, PNG, PCX, TIFF, WMF, ICO, CUR, TGA and saving to JPEG, JPEG2000, TIFF, GIF, PCX, BMP, PNG, TGA
  • Digital camera RAW formats support, including CRW, CR2, NEF, PEF, RAF, MRW, ORF and DNG
  • Full screen viewer with Select - Zoom support
  • Crystal clear and customizable magnifier
  • Resizing, flipping, rotating, cropping, emailing and color adjusting tools
  • Powerful crop-board that crops images into pre-defined and customized print sizes
  • Image EXIF metadata support
  • Batch image converter/resizer
  • Slideshow with 150+ transitional effects and MP3/WAV/MIDI/WMA background music support
  • Compare images side by side
  • Undo, Redo and Mouse Wheel support
  • Simple and effective red-eye removal

Saturday, March 14, 2009

Multimedia Systems and Content-Based Image Retrieval

Article From

Product Description

Multimedia systems and content-based image retrieval are intensely important areas of research in computer technology. But several important issues in these areas remain unresolved, and further research is needed toward better techniques and applications. Numerous research works are currently being done in these fields. These two areas are changing our lifestyles because together they enable the creation, preservation, access and retrieval of video, audio, image, textual and descriptive data. Multimedia Systems and Content-Based Image Retrieval addresses these unresolved issues and highlights current research.

About the Author

Sagarmay Deb received his Master of Business Administration from Long Island University, New York, USA. His research interests are multimedia databases, content-based image retrieval, a variety of indexing techniques and electronic commerce, and he has contributed to several books and journals on multimedia databases. He is currently with the University of Southern Queensland, Australia, as a faculty member and researcher in Information Technology. Marquis Who's Who, a leading publisher of biographies of people of notable achievement, has selected his biography for publication.


Digital Document Processing: Major Directions and Recent Advances

With the advent of the Digital Library initiative, web document processing and biometric aspects of digital document processing, together with new techniques of printed and handwritten Optical Character Recognition (OCR), a good overview of this fast-developing field is invaluable. In this book, all the major and frontier topics in the field of document analysis are brought together into a single volume creating a unique reference source.

Highlights include:

• Document structure analysis followed by OCR of Japanese, Tibetan, and Indian printed scripts;

• Online and offline handwritten text recognition approaches;

• Japanese postal and Arabic check processing;

• Document image quality modelling, mathematical expression recognition, graphics recognition, document information retrieval, super-resolution text, and metadata extraction in digital libraries;

• Biometric and forensic aspects: individuality of handwriting detection;

• Web document analysis, text and hypertext mining, and bank check data mining.

Containing chapters written by some of the most eminent researchers active in this field, this book can serve as a handbook for the research scholar as well as a supporting book for advanced graduate students interested in document processing or image analysis.

Radioengineering Journal

For the last 17 years, the Radioengineering journal quarterly has been publishing original scientific and engineering papers from the area of radio engineering and science.

The nature of the Radioengineering journal is interdisciplinary. The journal covers a wide area of radio electronics, starting from wave propagation and antennas, continuing through high-frequency circuits and optoelectronics, and finishing with signal processing and multimedia. It can therefore present a broad view of all aspects of today's radio engineering and science, initiate mutual inspiration between disciplines, and support their cohesion within complex radio electronic systems.

The Radioengineering journal makes an effort to encourage a younger generation of scientists and engineers. The journal offers them their first publication opportunity and gives them their first experience in writing a scientific paper. Independent reviewers carefully review each submitted paper, and attempt to explain its stronger and weaker aspects to the authors in detail.

Each December and June, the Radioengineering journal prepares special issues focused on selected topics of importance and current interest. In the past, special issues published in December were devoted to mobile communications, multimedia, utilization of MATLAB, advances in antennas and microwaves, electromagnetic compatibility, etc. In 2008, the December special issue was focused on advanced electronic circuits, both analog and digital, from low frequencies to millimeter waves. The special issues published in June contain papers focused on a very narrow area of the radio engineering field.

In June 2009, the special issue will be focused on artificial electromagnetic materials and meta-materials. We invite prospective authors to submit high-quality tutorials and state-of-the-art papers. This special issue is being prepared by the guest editors Dr. Milan Polívka and Dr. Jan Machac.

The Radioengineering journal is a member of the Sister Societies' Publications of the IEEE Communications Society. Since volume 16 (2007), the journal has been selected for coverage in Thomson Reuters products and custom information services; it is indexed and abstracted in the Science Citation Index Expanded (with an Impact Factor coming in 2010) and Journal Citation Reports. The journal is covered by the Directory of Open Access Journals, is listed in INSPEC, and aspires to be covered by SCOPUS. This ensures good accessibility of the published material.

Thursday, March 12, 2009

Nstein Technologies to Roll Out Imprezzeo Visual Search Tool

Image-Based Search Software Will Help Nstein's Digital Publishing Networks Clients Achieve Faster, More Accurate Image Searches

LONDON -- (Marketwire) -- 03/11/09 -- Imprezzeo, an image search software company, today announced a partnership with Nstein Technologies, Inc., a leading supplier of digital publishing solutions, including Text Mining, Web Content Management, and Digital Asset Management. Nstein will integrate Imprezzeo's image-based search engine with its Text Mining and search products to allow its customers to conduct faster, more accurate image searches. With this partnership, Imprezzeo offers a software development kit (SDK), access to Imprezzeo developer resources and dedicated technical support. This allows Nstein Technologies to rapidly and seamlessly integrate and deploy Imprezzeo's Image Search technology within digital publishing networks and into broader business processes and workflows.

Imprezzeo Image Search is the first image recognition and search product to use both content-based image retrieval (CBIR) and facial recognition (FR), allowing customers to use images to search for images, rather than textual search terms. The technology generates image search results that closely match a sample image either chosen by the user from an initial set of search results that can then be refined, or from an image uploaded by the user. Imprezzeo is capable of searching millions of images in seconds. By focusing on the centralization, management and automated indexing of digital assets, Nstein enables its content-driven customer base to reduce operational costs and identify new revenue streams.

"Nstein has a trusted reputation for innovative digital publishing solutions and a continued effort to meet the highest levels of customer satisfaction," said Dermot Corrigan, CEO at Imprezzeo. "We're honored to partner with Nstein not only to complement their existing technology, but also as a strategic component to meet the needs and demands of its customers."

"At Nstein, we are committed to connecting people to relevant and valuable content," said Luc Filiatreault, President and CEO of Nstein Technologies. "Imprezzeo's Image Search product will offer our customers new tools to efficiently manage and publish their images."

Imprezzeo Image Search helps companies manage image content much more effectively while increasing sales and improving internal operations and processes. To view a demonstration, visit To learn more about the image search market, visit

Tuesday, March 10, 2009

ISKO UK Event - Human-Machine Symbiosis for Data Interpretation

The next ISKO UK open meeting will be held on 23 April 2009, at University College London.
David Snowden will talk about "Human-Machine Symbiosis for Data Interpretation" followed by a panel discussion. The seminar will be preceded by the ISKO UK Annual General Meeting to which all potential members are also invited.
Tea will be provided, and there will be drinks and an opportunity for networking. We hope that it will be an enjoyable afternoon and are looking forward to seeing you at this ISKO UK event.
Cost: 10 GBP (students and ISKO members free)

Read More

Monday, March 9, 2009

Int Workshop on 3D Digital Imaging and Modeling (3DIM 2009)

The field of 3D imaging, modeling and visualization has undergone rapid growth over the last decade. While some research issues have received significant attention and matured into stable solutions, new problems and goals are emerging as the focus of the 3-D research community. Improved methods for the acquisition of 3-D information by optical means are driven by new algorithmic approaches in computer vision and image processing. Advanced methods for the processing and transformation of geometric information open new types of applications.

As part of ICCV 2009, the 3DIM 2009 Workshop will bring together researchers interested in all aspects of the acquisition, processing and modeling of 3-D information and their applications.

The 3DIM 2009 Committee invites you to submit high quality original full papers by *May 01, 2009*.

The papers will be reviewed by an international Program Committee.

Accepted papers will be presented in single-track oral sessions as well as a poster session (all papers are allocated the same number of pages). The Workshop Proceedings will be published by the IEEE Computer Society Press as part of the ICCV 2009 Proceedings, and archived in their Digital Library. Full details on the paper format, electronic submission procedure and conference venue are available on the 3DIM 2009 Workshop web site. Double submissions with ICCV 2009 are accepted, but only in accordance with the policies of the conference.

Friday, March 6, 2009

10 related workshops of CVPR'09

1. Workshop on Visual and Contextual Learning from Annotated Images and Videos
2. Second IEEE Workshop on CVPR for Human Communicative Behavior Analysis
3. MMBIA 2009: IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis
4. IEEE Computer Society Workshop on Biometrics
5. SIG-09: First International Workshop on Stochastic Image Grammars
6. First Workshop on Egocentric Vision
7. Eleventh IEEE International Workshop on Performance Evaluation of Tracking and Surveillance
8. 4th International Workshop on Semantic Learning and Applications in Multimedia
9. The First IEEE Workshop on Visual Place Categorization
10. 1st International Workshop on Visual Scene Understanding

Tuesday, March 3, 2009


Article From Codeproject

This article is about an HttpModule designed to create one image out of many, for faster loading and fewer HTTP requests to the web server. The module creates auto-generated CSS image maps of positions for display on a webpage using background positioning. The module also handles creating mouse-over image effects. It was designed to run with ASP.NET websites and uses web.config to control the image and CSS directories. It can also append to existing CSS files instead of creating new ones, to keep old webpage CSS intact.

The general purpose of this code is to improve web server response time by removing multiple image HTTP requests. When clients access your website through their browser, they request all of the images for your webpages one by one. If you have many images, as I do, this can create a large workload for your web server. To resolve this issue, why not create one image out of many, with CSS mappings to each position within the image file? By doing this, your server receives only one request for all your images, which it can handle efficiently. The other benefits of oneimage are listed below:

  1. Less chance of corrupt file transmission
  2. Faster image downloads because it's all in one file
  3. Enable caching for one image to keep it in memory
  4. Less HTTP Requests and Responses
  5. Keeps people from using your images directly on their sites
  6. Makes image rollovers simple
  7. Converts GIF formats into PNG, to avoid being bound by GIF licensing restrictions
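As a rough sketch of the sprite technique described above (not the article's actual ASP.NET module), the following Python snippet shows how per-image CSS background-position rules can be derived when images are stacked vertically into one sheet. The image names and sizes are made-up examples.

```python
def sprite_css(images):
    """Given (name, width, height) tuples, stack the images vertically into
    one sprite sheet and emit a CSS rule per image using background-position.
    Returns (sheet_width, sheet_height, list_of_css_rules)."""
    sheet_w = max(w for _, w, _ in images)
    rules, y = [], 0
    for name, w, h in images:
        # The y-offset is negative: the sheet is shifted up so that the
        # desired sub-image shows through the element's viewport.
        rules.append(
            f".{name} {{ background: url('sprite.png') 0 -{y}px; "
            f"width: {w}px; height: {h}px; }}"
        )
        y += h
    return sheet_w, y, rules

# Example with hypothetical icon sizes
w, h, css = sprite_css([("home", 32, 32), ("mail", 32, 16), ("logo", 64, 32)])
```

Each element styled with one of these classes then shows only its own slice of the combined sheet, so the browser fetches a single image file.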

Read More

Polar View of an Image

Article From Codeproject

Image warping gives interesting results. Many special effects can be achieved through image warping. Warping is a process where an image is transformed in some way, to arrive at another image. This is as though the image is printed on a rubber sheet and that sheet is stretched in a non-uniform way. Polar mapping is an example of image warping. This is similar to the standard Log Polar mapping available in image processing. The receptor distribution in the retina of our eye resembles a log polar array. The difference between a log polar map and a polar map is that the concentric circles are non-uniformly spaced in a log polar map, whereas they are uniformly spaced in a polar map. In this article, we illustrate code to do polar mapping of an image.

Polar Mapping

The basic geometry of polar mapping is shown in the figure below. Equally spaced concentric circles are drawn centered at the image centre, and a number of equally spaced sectors are drawn. Pixels at the points of intersection of these circles and radial lines are plotted on a rectangular grid, and the resulting image is a polar view. In a log polar mapping, the radii of the concentric circles vary on a logarithmic scale.
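The mapping described above can be sketched in a few lines of Python (an illustration, not the article's actual code); nearest-neighbour sampling and the clamping at the image border are implementation choices of this sketch.

```python
import math

def polar_map(img, n_radii, n_angles):
    """Map a square image (2D list of pixel values) to polar coordinates.
    Output row r is the concentric circle of radius r * (max_r / n_radii);
    column a corresponds to the angle 2*pi*a / n_angles."""
    h, w = len(img), len(img[0])
    cx, cy = w / 2.0, h / 2.0
    max_r = min(cx, cy)
    out = []
    for r in range(n_radii):
        radius = r * max_r / n_radii
        row = []
        for a in range(n_angles):
            theta = 2 * math.pi * a / n_angles
            # Nearest-neighbour sample at the intersection of circle and ray
            x = int(round(cx + radius * math.cos(theta)))
            y = int(round(cy + radius * math.sin(theta)))
            x = min(max(x, 0), w - 1)   # clamp to image bounds
            y = min(max(y, 0), h - 1)
            row.append(img[y][x])
        out.append(row)
    return out
```

For a log polar map, one would instead space the radii logarithmically, e.g. radius = max_r ** (r / (n_radii - 1)) scaled appropriately.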



Read More

Monday, March 2, 2009

HMMD color space

The HMMD color space is supported in MPEG-7. The hue has the same meaning as in the HSV space, and max and min are the maximum and minimum among the R, G, and B values, respectively. The diff component is defined as the difference between max and min. Only three of the four components are sufficient to describe the HMMD space. This color space can be depicted using the double-cone structure shown in the figure. In the MPEG-7 core experiments for image retrieval, it was observed that the HMMD color space is very effective and compares favorably with the HSV color space. Note that the HMMD color space is a slight twist on the HSI color space, with the diff component scaled by the intensity value.
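A minimal sketch of the conversion, assuming 8-bit RGB input and the HSV-style hue formula mentioned above (an illustration, not the img(Rummager) implementation):

```python
def rgb_to_hmmd(r, g, b):
    """Convert 8-bit RGB to HMMD: (hue in degrees, max, min, diff).
    Hue is computed exactly as in the HSV model; diff = max - min.
    Only three of the four components are independent."""
    mx, mn = max(r, g, b), min(r, g, b)
    diff = mx - mn
    if diff == 0:
        hue = 0.0          # achromatic: hue undefined, use 0 by convention
    elif mx == r:
        hue = (60.0 * (g - b) / diff) % 360
    elif mx == g:
        hue = 60.0 * (b - r) / diff + 120
    else:
        hue = 60.0 * (r - g) / diff + 240
    return hue, mx, mn, diff
```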

The HMMD color space is now implemented in img(Rummager).

Sunday, March 1, 2009

Biometric System and Data Analysis

The first book that focuses on the aspects common to all biometric recognition systems, including the input data (biometric images and person metadata) and output (scores) of these systems. This leads to a focus on user- and group-level performance.

  • Evaluation techniques are accompanied by intuitive explanations
  • Case studies and examples from several major biometric modalities are included

Biometric System and Data Analysis: Design, Evaluation, and Data Mining brings together aspects of statistics and machine learning to provide a comprehensive guide to evaluating, interpreting, and understanding biometric data. This professional book naturally leads to topics including data mining and prediction, widely applied in other fields but not yet rigorously applied to biometrics.

This volume places an emphasis on the various performance measures available for biometric systems, what they mean, and when they should and should not be applied. The evaluation techniques are presented rigorously, however are always accompanied by intuitive explanations that convey the essence of the statistical concepts in a general manner.

Designed for a professional audience composed of practitioners and researchers in industry, Biometric System and Data Analysis: Design, Evaluation, and Data Mining is also suitable as a reference for advanced-level students in computer science and engineering.

Read More

MPEG7 Video Pattern Recognition Control Monitor

Server-based OCR and PDF Conversion Solution

ABBYY Recognition Server 2.0 is a robust and powerful server-based OCR solution for automating document conversion processes across corporate departments and enterprises. It is designed for mid- to high-volume OCR processing. ABBYY Recognition Server can be used as both a turnkey solution and an integral part of document capture, document management, and back-end systems.

ABBYY Recognition Server is the ideal solution for:

Highly Accurate Recognition in 191 Languages
ABBYY’s award-winning OCR technology delivers unprecedented recognition accuracy for any kind of document.

Unattended Server-based Processing
Document conversion tasks are performed automatically on a server, during scheduled hours or round-the-clock.

Unmatched Scalability
With its ability to use resources of additional computers and CPUs during the processing, Recognition Server can convert virtually any volume of documents within the required timeframe. In addition, there is no need for complex system configuration – it takes just a few minutes to extend the processing power by plugging additional stations into the system.

Centralized Management
Recognition Server provides a remote management console as a central administration point for defining processing parameters, creating specific “workflows” for particular projects and managing recognition stations across the enterprise.

Learn more about ABBYY Recognition Server functionality...

Interactive Visualization of Color Spaces

This ImageJ plugin shows the color distribution within a 3D-color-space. The viewing angle can be adjusted with the mouse.

Eleven different color spaces and five display modes are supported. By switching between color spaces, the relationships between different color spaces can be made visible.
In addition, the effect of image manipulations on an image and its corresponding color distribution can be studied.


This plugin can be used with all image types. In addition, it contains several typical example images.

If a ROI or a freehand mask is used, then only the colors within this region will be displayed.


X360 Image Processing ActiveX OCX 4.37

X360 Tiff Image Processing ActiveX OCX helps you create and maintain multiple Tiff files. The ActiveX works on most Windows operating systems, including Vista, and can be accessed from most programming languages, such as ASP, C++, Visual Basic, Visual FoxPro, Delphi, MS Access, VB.NET web pages, and C#. The main features are appending, deleting, inserting, moving, and swapping pages within an existing Tiff. You can also view and save images to different formats, including Bmp, Emf, Gif, Jpeg, Pdf, multi-paged Pdf, Png, Tiff, multi-paged Tiff, and Wmf. Major functions include flipping, rotating, resizing, and zooming the image; fully controlling the scroll action; drawing text and images; converting color to grayscale and black-and-white; getting Tiff tags and Exif information; providing a hand tool to move the image using the mouse; providing selection tools to crop or copy a partial image to the clipboard; and printing the image. Supported Tiff compressions include CCITT Group 3, Group 4, LZW, and Packbits RLE.

Read More

Near-infrared spectroscopy decodes thought processes

Article From

A brain-computer interface (BCI) that can decode thought processes could enable people with severe or multiple disabilities to communicate and control external devices via thought alone. Bringing such a system one step closer, Canadian researchers have developed a way to use optical imaging to decode preference by measuring the intensity of near-infrared light absorbed in brain tissue. The system is based on the use of near-infrared spectroscopy (NIRS) to study cerebral haemodynamics during the decision-making process. NIRS has been investigated before as a non-invasive tool for reading thoughts, but previous NIRS-BCI set-ups required user training. For example, in order to indicate "yes" to a question, a subject would need to perform a specific unrelated task, such as a mental calculation. The key difference in this latest system - developed by researchers at the Bloorview Research Institute and the University of Toronto - is that the BCI is trained to directly decode neural signatures corresponding to specific decisions. As no secondary task is required to indicate preference, the design should be more intuitive to use - decreasing the cognitive load required to operate the interface and removing the need to train the user. For more information, go to:

Head Tracking for Desktop VR Displays using the Wii Remote

Article From

Using the infrared camera in the Wii remote and a head mounted sensor bar (two IR LEDs), you can accurately track the location of your head and render view dependent images on the screen. This effectively transforms your display into a portal to a virtual environment. The display properly reacts to head and body movement as if it were a real window creating a realistic illusion of depth and space.
The program only needs to know your display size and the size of your sensor bar. The software is a custom C# DirectX program and is primarily provided as sample code for developers without support or additional documentation. You may need the most recent version of DirectX installed for this to work.
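The head-tracking geometry can be sketched as follows (an illustration, not the actual WiiDesktopVR code): given the two IR dot positions reported on the camera's 1024x768 grid, the head's distance follows from the angle the sensor bar subtends. The 45-degree horizontal field of view and 200 mm LED separation used here are assumed example values.

```python
import math

def head_position(dot1, dot2, bar_width_mm=200.0,
                  cam_fov_deg=45.0, cam_res_x=1024):
    """Estimate head distance and lateral offset from two IR dot positions
    (pixel coordinates on the Wiimote's 1024x768 sensor grid).
    bar_width_mm is the LED separation on the sensor bar; the field of
    view is an assumed approximation, not a measured value."""
    radians_per_px = math.radians(cam_fov_deg) / cam_res_x
    sep_px = math.hypot(dot1[0] - dot2[0], dot1[1] - dot2[1])
    # Angle subtended by the sensor bar, then simple pinhole geometry
    angle = sep_px * radians_per_px
    distance_mm = (bar_width_mm / 2.0) / math.tan(angle / 2.0)
    # Midpoint of the two dots, re-centered, gives the lateral offset
    mid_x = (dot1[0] + dot2[0]) / 2.0 - cam_res_x / 2.0
    offset_mm = distance_mm * math.tan(mid_x * radians_per_px)
    return distance_mm, offset_mm
```

The renderer then uses the estimated head position to set up an off-axis projection, which is what makes the display behave like a window.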
To run the DesktopVR program you see in the video:
1. Connect your wiimote to your PC via Bluetooth. If you don't know how to do this, you can follow this tutorial. I've been told it works with other Bluetooth drivers, but I have not tested them myself.
2. Download the WiiDesktopVR (v02) sample program. Read the README file on program usage and configuration. Launch the "WiiDesktopVR.exe" in the main folder. A potentially more stable/Vista/64-bit compatible version has been created by Andrea Leganza. There also may be more variants on the web.
NOTE: If you are having trouble running the program, you can check my project blog post about it or check the forum for assistance. I am unable to replicate these problems, so it is hard for me to debug them. But other people have figured it out. Things that have been identified to help: delete the "config.dat" file and re-run the program, install a new version of DirectX, or install .NET 2.0.
Developers Notes: The code is built on top of this Wiimote library. To compile the program, you will need a C# IDE and the DirectX SDK. More notes are in the README.
A visit to this project's FAQ and Advanced Discussion post may be very enlightening. You may also find the official discussion forums for my wiimote projects helpful:

Tracking Your Fingers with the Wiimote

Article From

Using an LED array and some reflective tape, you can use the infrared camera in the Wii remote to track objects, like your fingers, in 2D space. This lets you interact with your computer simply by waving your hands in the air, similar to the interaction seen in the movie "Minority Report". The Wiimote can track up to 4 points simultaneously. The multipoint grid software is a custom C# DirectX program.
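As an illustration of the basic coordinate mapping involved (not the actual grid program's code), a single tracked IR point can be mapped from the camera's 1024x768 grid to screen pixels. The screen resolution here is an assumed example, and the x-flip reflects the camera facing the user.

```python
def ir_to_screen(ir_x, ir_y, screen_w=1920, screen_h=1080):
    """Map a Wiimote IR dot (reported on a 1024x768 grid) to screen pixels.
    The camera's x-axis is mirrored relative to the user's view, so x is
    flipped; the screen resolution is an assumed example value."""
    sx = (1.0 - ir_x / 1023.0) * (screen_w - 1)
    sy = (ir_y / 767.0) * (screen_h - 1)
    return int(round(sx)), int(round(sy))
```

With up to four dots tracked per frame, applying this mapping to each dot gives the cursor positions used for the multipoint grid interaction.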
To run the grid program you see in the video:
1. First, follow this walkthrough on using the wiimote with C#. You may need to download a copy of Visual C# Express to compile/run this sample if you don't have it yet.
2. Download a copy of the DirectX SDK. You may not need this to simply run the sample grid program, but you will need it if you want to make any changes to it.
3. Download the Wiimote Multipoint Grid sample program. Make sure your wiimote is connected via bluetooth, and then run the ".exe" shortcut in the main folder.
A visit to this project's FAQ and Advanced Discussion post may be very enlightening. You may also find the official discussion forums for my wiimote projects helpful: