Friday, December 14, 2007

50th International Symposium ELMAR-2008

The 50th International Symposium ELMAR-2008, the oldest conference in Europe, will traditionally be held in the beautiful old town of Zadar on the Croatian Adriatic coast. While the scientific program is expected to create stimulating professional interaction, the crystal-clear Adriatic Sea, warm summer atmosphere and wealth of historic monuments promise a pleasant and memorable stay.

During its 50 years of activity, the ELMAR symposium has become a significant scientific conference in the fields of multimedia communications, image and video processing, navigation systems, speech and audio processing, telecommunications, wireless communications, marine electronics, naval architecture, sea ecology, and other advanced research areas. In addition, every year the ELMAR symposium gathers specialists of various kinds (government representatives, the navy, industry, universities and business people from the region) to discuss the most recent issues and contribute to appropriate market development in Croatia.

The scientific program includes keynote talks by eminent international experts and contributed papers. Papers accepted by two independent reviewers will be published in the symposium proceedings, available at the symposium and abstracted in the INSPEC and IEEE Xplore databases. The ELMAR-2008 symposium is sponsored by the Croatian Society Electronics in Marine (ELMAR), technically co-sponsored by IEEE Region 8, the IEEE Croatia Section, the IEEE Croatia Section Chapter of the Signal Processing Society, and the IEEE Croatia Section Joint Chapter of the Antennas and Propagation / Microwave Theory and Techniques Societies, and organized in cooperation with EURASIP (European Association for Signal, Speech and Image Processing).


Image and Video Processing
Multimedia Communications
Speech and Audio Processing
Wireless Communications
Antennas and Propagation
e-Learning and m-Learning
Navigation Systems
Ship Electronic Systems
Power Electronics and Automation
Naval Architecture
Sea Ecology
Special Session Proposals - A special session consists of 5-6 papers which should present a unifying theme from a diversity of viewpoints.

Windows Vista SP1 Release Candidate

Install Windows Vista SP1 Release Candidate through Windows Update
Windows Vista Service Pack 1 (SP1) Release Candidate (RC) is available through Windows Update.

Windows Vista SP1 RC requires the installation of either two or three prerequisite updates prior to installing the service pack itself. These prerequisite updates will be delivered to most users over Windows Update as part of regularly scheduled monthly updates prior to the release of the service pack. This will help ensure that reboots required by the prerequisite updates occur with other updates that require a reboot. However, because these prerequisite updates have not been released, installing Windows Vista SP1 RC will require 3 to 4 separate installations over Windows Update. Please note that the instructions below are primarily required for the RC installation and will not be required for most users using Windows Update to install the final Service Pack.

The prerequisite updates consist of two updates which service specific Windows components prior to the installation of the service pack and a third update which services the installation software built into Windows Vista. The following are the prerequisite updates.
KB935509 This update is only required on Windows Vista Enterprise and Windows Vista Ultimate editions (which have BitLocker capabilities). It must be installed prior to KB938371, the second prerequisite update, and prevents potential loss of data on BitLocker-encrypted systems during updating.
KB938371 This update consists of fixes for several components (including the TrustedInstaller), increases the success rate for installing the service pack and enables the service pack to be uninstalled successfully.
KB937287 This is an update to the “Servicing Stack” or the Windows Vista component installer technologies built into Windows Vista. This update enables the built-in installer to properly and successfully install the service pack.

Tuesday, December 11, 2007

32nd OAGM/AAPR workshop, May 26-27 2008

The 32nd annual workshop of the Austrian Association for Pattern Recognition (OAGM/AAPR) provides a platform bringing together researchers from the fields of image analysis, image processing, and pattern recognition to discuss relevant and important topics of the computer vision discipline.
It is organized in workshop form, presenting the latest work of Austrian and international institutes in the domain of computer vision and pattern recognition.
Special attention will be paid to Aspects of Image Analysis and Pattern Recognition that focus on Challenges in the Biosciences. People working in this field are especially encouraged to participate by submitting their work and attending the workshop.

16. February 2008 Submission of full papers
05. March 2008 Notification of acceptance
30. March 2008 Submission of final papers
26. May 2008 Start of Workshop
27. May 2008 End of Workshop

Monday, December 10, 2007


The International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS) is one of the main international fora for the presentation and discussion of the latest technological advances in interactive multimedia services. The objective of the workshop is to bring together researchers and developers from academia and industry working in all areas of image, video and audio applications, with a special focus on analysis. After Louvain (1997), Berlin (1999), Tampere (2001), London (2003), Lisboa (2004), Montreux (2005), Incheon (2006), and Santorini (2007), WIAMIS 2008 will be held at Klagenfurt University, Austria.

Topics of interest include, but are not limited to

Multimedia content analysis and understanding
Content-based browsing, indexing and retrieval of images, video and audio
2D/3D feature extraction
Advanced descriptors and similarity metrics for audio and video
Relevance feedback and learning systems
Segmentation of objects in 2D/3D image sequences
Identification and tracking of regions in scenes
Voice/audio assisted video segmentation
Analysis for coding efficiency and increased error resilience
Analysis and understanding tools for content adaptation
Multimedia content adaptation tools, transcoding and transmoding
Content summarization and personalization strategies
End-to-end quality of service support for Universal Multimedia Access
Semantic mapping and ontologies
Multimedia analysis for advanced applications
Multimedia analysis hardware and middleware
Multimedia standards: MPEG-7 (incl. Query Format), MPEG-21

The proceedings of the workshop will be published by the IEEE Computer Society. Accepted papers will be available through IEEE Xplore™ and through the IEEE Computer Society Digital Library. Papers will have to be prepared according to IEEE Computer Society standards. Only IEEE Xplore™-compliant PDF files can be accepted for final paper submissions.

Important Note:
Paper length must not exceed 4 IEEE double-column pages, including all figures, tables, and references.

I have contacted Mr. Hermann Hellwagner to ask about the extra page charge. A fifth page can be submitted at a cost of USD 50!

Monday, December 3, 2007

Visual Studio 2008 Overview

Microsoft® Visual Studio® 2008 delivers on the Microsoft vision of smart client applications by enabling developers to rapidly create connected applications that deliver the highest quality, rich user experiences. With Visual Studio 2008, organizations will find it easier than ever before to capture and analyze information to help them make effective business decisions. Visual Studio 2008 enables organizations of every size to rapidly create more secure, manageable, and reliable applications that take advantage of Windows Vista™ and the 2007 Office system.

Visual Studio 2008 delivers key advances for developers in three primary pillars:

Rapid application development
Effective team collaboration
Breakthrough user experiences
Visual Studio 2008 provides advanced development tools, debugging features, database functionality, and innovative features for quickly creating tomorrow's cutting-edge applications across a variety of platforms.

Visual Studio 2008 includes enhancements such as visual designers for faster development with the .NET Framework 3.5, substantial improvements to Web development tools and language enhancements that speed development with all types of data. Visual Studio 2008 provides developers with all the tools and framework support required to create compelling, expressive, AJAX-enabled Web applications.

Developers will be able to take advantage of these rich client-side and server-side frameworks to easily build client-centric Web applications that integrate with any back-end data provider, run within any modern browser, and have complete access to ASP.NET application services and the Microsoft platform.

Rapid Application Development
To help developers rapidly create modern software, Visual Studio 2008 delivers improved language and data features, such as Language Integrated Query (LINQ), that make it easier for individual programmers to build solutions that analyze and act on information.

Visual Studio 2008 also provides developers with the ability to target multiple versions of the .NET Framework from within the same development environment. Developers will be able to build applications that target the .NET Framework 2.0, 3.0 or 3.5, meaning that they can support a wide variety of projects in the same environment.

Breakthrough User Experience
Visual Studio 2008 offers developers new tools that speed creation of connected applications on the latest platforms including the Web, Windows Vista, Office 2007, SQL Server 2008, and Windows Server 2008. For the Web, ASP.NET AJAX and other new technologies will enable developers to quickly create a new generation of more efficient, interactive, and personalized Web experiences.

Effective Team Collaboration
Visual Studio 2008 delivers expanded and improved offerings that help improve collaboration in development teams, including tools that help integrate database professionals and graphic designers into the development process.

Use the Microsoft .NET Framework 3.5
The .NET Framework enables the rapid construction of connected applications that provide outstanding end-user experiences by providing the building blocks (pre-fabricated software) for solving common programming tasks. Connected applications built on the .NET Framework model business processes effectively and facilitate the integration of systems in heterogeneous environments.

Together Visual Studio and the .NET Framework reduce the need for common plumbing code, reducing development time and enabling developers to concentrate on solving business problems.

The .NET Framework 3.5 builds incrementally on the .NET Framework 3.0. Enhancements have been made to feature areas including the base class library, Windows Workflow Foundation, Windows Communication Foundation, Windows Presentation Foundation, and Windows CardSpace.

Friday, November 16, 2007

Real Time Face Detection

A new descriptor for real-time face detection is now available in img(Rummager) 1.8 Beta. This descriptor uses 3 fuzzy systems in order to detect skin color, eye position and face shape. More details about the method will be added soon.

Thursday, November 15, 2007

Moments From Mallorca

Moments from the «IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2007)», August 29 to August 31, 2007, Palma De Mallorca, Spain.

Sunday, November 4, 2007

Particle Swarm Optimization

Particle swarm optimization (PSO) is a population based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by social behavior of bird flocking or fish schooling.
PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles.
Each particle keeps track of its coordinates in the problem space which are associated with the best solution (fitness) it has achieved so far. (The fitness value is also stored.) This value is called pbest. Another "best" value that is tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle. This location is called lbest. When a particle takes the whole population as its topological neighbors, the best value is a global best and is called gbest.
The particle swarm optimization concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its pbest and lbest locations (local version of PSO). Acceleration is weighted by a random term, with separate random numbers being generated for acceleration toward pbest and lbest locations.
In the past several years, PSO has been successfully applied in many research and application areas. It has been demonstrated that PSO can get better results in a faster, cheaper way compared with other methods.
Another reason that PSO is attractive is that there are few parameters to adjust. One version, with slight variations, works well in a wide variety of applications. Particle swarm optimization has been used for approaches that can be used across a wide range of applications, as well as for specific applications focused on a specific requirement.
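The velocity update described above can be illustrated with a minimal Python sketch of the gbest variant. The parameter values (w = 0.7, c1 = c2 = 1.5), the search bounds, and the sphere test function are illustrative choices, not taken from any particular reference implementation:

```python
import random

def pso(fitness, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `fitness` with a global-best (gbest) particle swarm."""
    low, high = bounds
    pos = [[random.uniform(low, high) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # best position seen by each particle
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity: inertia + pull toward pbest + pull toward gbest,
                # each pull weighted by a separate random number
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# minimize the sphere function in 3 dimensions
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

On the sphere function the swarm contracts quickly toward the origin; swapping in lbest merely means replacing `gbest` in the velocity update with the best position among each particle's neighbors.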

A Hybrid Scheme for Fast and Accurate Image Retrieval based on Color Descriptors

This paper proposes a new image retrieval system that uses only color features and is based on a hybrid scheme combining crisp and fuzzy techniques in order to retrieve color-based similar images. The system comprises 2 units. The first unit uses the Binary Haar Wavelet Descriptor in a histogram that has been proposed in MPEG-7. A new fuzzy-linking method of color histogram creation is also proposed, based on the HSV color space. The second unit is provided with this histogram and decides about the similarity. The system is suitable for accurately retrieving images even in distortion cases such as deformations, noise and smoothing. It is tested on a large number of images selected from proprietary image databases or randomly retrieved from popular search engines. The retrieval rate approximates 45 images (size of 250 x 250 pixels) per second, assuming no prior stored feature information in the searched image databases. To evaluate the performance of the proposed system, the objective measure called ANMRR is used.

From Proceeding (584) Artificial Intelligence and Soft Computing - 2007
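Since the abstract evaluates retrieval quality with ANMRR, the metric itself can be sketched as a plain restatement of the MPEG-7 definition (average rank per query, modified and normalized so that 0 means perfect retrieval and 1 means nothing relevant was found). The query data structure below is a hypothetical example, not the paper's test set:

```python
def anmrr(queries):
    """ANMRR (MPEG-7): 0 = perfect retrieval, 1 = no relevant item retrieved.

    `queries` maps each query id to (ground_truth_set, ranked_result_list).
    """
    gtm = max(len(gt) for gt, _ in queries.values())   # largest ground-truth set
    total = 0.0
    for gt, ranking in queries.values():
        ng = len(gt)
        k = min(4 * ng, 2 * gtm)                       # rank window K(q)
        ranks = []
        for item in gt:
            r = ranking.index(item) + 1 if item in ranking else k + 1
            ranks.append(r if r <= k else 1.25 * k)    # penalize items outside the window
        avr = sum(ranks) / ng                          # average rank
        mrr = avr - 0.5 - ng / 2                       # modified retrieval rank
        total += mrr / (1.25 * k - 0.5 - ng / 2)       # normalized per query
    return total / len(queries)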

Wednesday, October 31, 2007

C# Tutorials

C# 3.0 Specification

The authoritative C# 3.0 Specification was written by the people who created and implemented the C# language. This 500-plus-page document is now available for download.
Unified C# 3.0 Specification Now Available

“How Do I” Videos — Visual C#

On this page you will find dozens of videos designed for all Visual C# developers, from the novice to the professional. New videos are added regularly, so check back often.

Tuesday, October 30, 2007

Neural Networks on C#

It is a known fact that there are many problems for which it is difficult to find formal algorithms. Some problems cannot be solved easily with traditional methods; some do not even have a solution yet. Neural networks can be applied to many such problems and demonstrate rather good results across a great range of them. The history of neural networks starts in the 1950s, when the simplest neural network architectures were presented. After the initial work in the area, the idea of neural networks became rather popular. But then the field suffered a crash, when it was discovered that the neural networks of that time were very limited in the range of tasks they could be applied to. Later, the field got another boom, when the idea of multi-layer neural networks trained with the back-propagation learning algorithm was presented. Since that time, many researchers have studied the area of neural networks, which has led to a vast range of different neural architectures applied to a great range of different problems. Today, neural networks can be applied to tasks like classification, recognition, approximation, prediction, clustering, memory simulation, and many others, and the list keeps growing.
In this article, a C# library for neural network computations is described. The library implements several popular neural network architectures and their training algorithms, like Back Propagation, Kohonen Self-Organizing Map, Elastic Network, Delta Rule Learning, and Perceptron Learning. The usage of the library is demonstrated on several samples:
Classification (one-layer neural network trained with perceptron learning algorithms);
Approximation (multi-layer neural network trained with back propagation learning algorithm);
Time Series Prediction (multi-layer neural network trained with back propagation learning algorithm);
Color Clusterization (Kohonen Self-Organizing Map);
Traveling Salesman Problem (Elastic Network).
The attached archives contain source codes for the entire library, all the above listed samples, and some additional samples which are not listed and discussed in the article.
The article is not intended to provide the entire theory of neural networks, which can easily be found in a great range of resources all over the Internet, and on CodeProject as well. Instead, the article assumes that the reader has a general knowledge of neural networks, and its aim is to discuss a C# library for neural network computations and its application to different problems.
By Andrew Kirillov.
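The classification sample above relies on the classic perceptron learning rule. The library itself is C#, but the rule is language-neutral; a minimal sketch in Python, with a logical-AND training set as an illustrative example, might look like this:

```python
def predict(w, b, x):
    """Threshold activation: fire when the weighted sum reaches the bias."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Perceptron learning rule: w += lr * (target - output) * x."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            err = t - predict(w, b, x)   # 0 when the sample is already correct
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b
```

On linearly separable data such as AND, the rule provably converges after a finite number of weight updates; that limitation (it cannot learn XOR) is exactly the historical "crash" mentioned in the post.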

Friday, October 26, 2007

JIU - The Java Imaging Utilities - An image processing library

JIU, the Java Imaging Utilities, is a library which offers functionality to load, analyze, process and save pixel images.
It is written in Java and comes with full source code under the GNU General Public License (GPL) version 2.
JIU requires Java version 1.2 or higher.
Get the latest JIU version from the download page.
The canonical address of the JIU website is

Advanced Image Coding

Advanced Image Coding (AIC) is an experimental still image compression system that combines algorithms from the H.264 and JPEG standards. More specifically, it combines intra-frame block prediction from H.264 with a JPEG-style discrete cosine transform, followed by context-adaptive binary arithmetic coding as used in H.264. The result is a compression scheme that performs much better than JPEG and close to JPEG-2000.
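The JPEG-style transform mentioned above is the 2-D DCT-II applied to pixel blocks. A direct, unoptimized Python sketch of the block transform is shown below; AIC's actual pipeline adds intra prediction before the transform and CABAC after it, neither of which is reproduced here:

```python
import math

def dct2(block):
    """2-D DCT-II of an N x N block (direct O(N^4) form, for clarity only).

    Production codecs use a separable / fast factorization instead.
    """
    n = len(block)
    def c(k):  # orthonormal scaling factors
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out
```

For a flat block, all the signal ends up in the DC coefficient `out[0][0]`; the better the intra prediction, the flatter the residual block, which is precisely why combining prediction with the DCT compresses well.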

Published in Non-Commercial Programs, Source Code, Image Compression, Tutorials, Reference, Presentations

Wednesday, October 24, 2007

Image Retrieval Commercial Software

Image Comparer 3
Have you tried showing a set or a large collection of digital snapshots to a friend or relative? Weren't they underwhelmed and a little bored by the number of all too similar shots of the same subject? Get rid of the duplicates automatically! Image Comparer™ scans your entire collection of images, analyzes their contents and locates files that look alike.
Manually locating similar images may be fine if you have just a dozen images. But what if you have a hundred? If you do it by hand, it'll take you quite a while. If you are like most digital shooters, you probably have several hundred or even a few thousand digital pictures stored in various folders. Locating and removing duplicates can easily become a time-consuming nightmare, and may eventually even take away the fun of taking pictures.
Difficult lighting and exposure problems, camera shake and digital noise can pollute your images. If you encounter difficult shooting conditions, you are probably taking a few duplicates with somewhat different settings. Selecting the best shot out of a few duplicates is relatively easy, but what if you have hundreds of duplicate shots? Your viewers won't be overly impressed to see a dark shot, a blurry shot, and then just the perfect one followed by an overexposed view.
Image Comparer™ analyzes your digital images and automatically selects the best shot out of the many duplicates on your system, allowing you to move or delete duplicate images in a couple of mouse clicks. Image Comparer™ uses content-based image search, also known as content-based image retrieval (CBIR). This allows the program to search images by visual similarity. You can search for rotated and flipped images as well.
Unlike similar products, Image Comparer™ does not just look for exact duplicates. Instead, it analyzes and recognizes an image's content (this technology is known as content based image search), and groups pictures that look alike. You can specify the level of visual similarity that is sufficient to consider pictures to be duplicates. View them in pairs or see the top ten similar images and keep the best one!
Image Comparer™ is extremely useful to professional photographers, designers, and webmasters who have "image-heavy" sites to maintain. The program is incredibly fast; after a minute or two one can see how many duplicate images are stored and how much disk space will be saved by removing the duplicates. The "dupes" can then be removed all at once with one click. Alternatively, a user can specify which images need to be deleted, moved or copied.
The list of supported image file formats includes RAW, JPEG, J2K, BMP, GIF, PNG, TIFF, TGA and others.
Image Comparer™ is ready for immediate download; a free 30-day evaluation version is available. This trial version identifies duplicates, but does not allow moving, deleting or copying them.

Monday, October 22, 2007

Document Retrieval Web Site

The document retrieval on this web site addresses the problem using a word-matching procedure through a web-oriented approach. This technique performs the word matching directly on the document images, bypassing OCR and using word images as queries.
©2005 created by Konstantinos Zagoris, Ph.D. student
Professor Nikos Papamarkos, Image Processing and Multimedia Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, Xanthi, Greece

Saturday, October 13, 2007

AForge.NET 1.5.0

AForge.NET is a C# framework designed for developers and researchers in the fields of Computer Vision and Artificial Intelligence - image processing, neural networks, genetic algorithms, machine learning, etc.
At this point the framework is comprised of 5 main and some additional libraries:
AForge.Imaging – a library for image processing routines and filters;
AForge.Neuro – neural networks computation library;
AForge.Genetic – evolution programming library;
AForge.Vision – computer vision library;
AForge.Machine Learning – machine learning library.
Work on the framework's improvement is in constant progress, which means that new features and namespaces are added continually. To follow its progress, you may track the source repository's log or visit the project discussion group for the latest information.
The framework is provided not only with the different libraries and their sources, but also with many sample applications which demonstrate the use of the framework, and with documentation help files in HTML Help format.

HSV Fuzzy Linking

In 2005, Konstantinidis et al. proposed the extraction of a fuzzy-linking histogram based on the CIE Lab color space. The necessity, though, of converting the image from RGB to CIE XYZ and finally to CIE Lab made this method noticeably time-consuming. The HSV color space demands less computational power than CIE Lab, because it results from a direct transformation of the RGB color space. In [1], a fuzzy system is proposed to produce a fuzzy-linking histogram, which takes the three channels of HSV as inputs and forms a 10-bin histogram as output. Each bin represents a preset color as follows: (0) Black, (1) White, (2) Grey, (3) Red, (4) Orange, (5) Yellow, (6) Green, (7) Cyan, (8) Blue and (9) Magenta. The inputs of the system are analyzed as follows: Hue is divided into 8 fuzzy areas: (0) Red to Orange, (1) Orange, (2) Yellow, (3) Green, (4) Cyan, (5) Blue, (6) Magenta, (7) Blue to Red.

S is divided into only 2 fuzzy areas. This channel defines the shade of a color relative to white. The first area, in combination with the position of the pixel in channel V, is used to decide whether the color is clear enough to be ranked in one of the categories described by the hue histogram, or whether it is a shade of white or gray.

The third input, channel V, is divided into 3 areas. The first one defines when the pixel is black, independently of the values of the other inputs. The second fuzzy area, in combination with the value of channel S, gives the gray color.

For the evaluation of the consequent variables, two methods have been used. Initially the LOM (Largest of Maximum) algorithm was used. This method assigns the input to the output bin defined by the rule with the greatest activation value. Next, a Multi-Participate algorithm was used, which assigns the input to the output bins defined by all the rules that are activated. The experimental results show that the second algorithm performs better.
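A rough sketch of the fuzzy-linking idea for the hue channel alone is given below. The triangular membership breakpoints are illustrative guesses, not the paper's actual values, and the Multi-Participate behavior is modeled simply by letting each pixel contribute its membership degree to every activated bin rather than only the strongest one:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative hue areas in degrees; red wraps around 0/360.
HUE_AREAS = {
    "red":     lambda h: tri(h, -30, 0, 30) + tri(h, 330, 360, 390),
    "orange":  lambda h: tri(h, 10, 30, 55),
    "yellow":  lambda h: tri(h, 40, 60, 85),
    "green":   lambda h: tri(h, 70, 120, 170),
    "cyan":    lambda h: tri(h, 150, 180, 210),
    "blue":    lambda h: tri(h, 200, 240, 280),
    "magenta": lambda h: tri(h, 270, 300, 340),
}

def fuzzy_hue_histogram(hues):
    """Multi-Participate style binning: each pixel adds its membership
    degree to every bin it activates, not just the LOM winner."""
    hist = {name: 0.0 for name in HUE_AREAS}
    for h in hues:
        for name, mu in HUE_AREAS.items():
            hist[name] += mu(h % 360)
    return hist
```

A hue lying between two peaks (say 20°, between red and orange) contributes partially to both bins; with LOM it would instead be assigned entirely to the single bin whose rule fires strongest.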


Friday, October 12, 2007

The coordinate logic filters (CLF)

The coordinate logic filters are very efficient in various 1D, 2D, or higher-dimensional digital signal processing applications, such as noise removal, magnification, opening, closing, skeletonization, and coding, as well as in edge detection, feature extraction, and fractal modeling. In this paper we present some typical image processing applications using coordinate logic filters. The key issue in the coordinate logic analysis of images is the method of fast successive filtering and managing of the residues. The desired processing is achieved by executing only direct logic operations among the pixels of the given image. Coordinate logic filters can be easily and quickly implemented using logic circuits or cellular automata; this is their primary advantage.

Edge extraction in an image G can be achieved with CL filters using the difference of the original image G and the eroded image Geb, so that the edge detector is G - Geb.
A simple implementation can be found in img(Rummager) application.
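A minimal sketch of the G - Geb detector is given below, under the assumption that coordinate-logic erosion means a bitwise AND over a square neighborhood of each pixel; the replicated border handling is an implementation choice, not part of the original formulation:

```python
def cl_erode(img, k=1):
    """Coordinate-logic erosion: bitwise AND over a (2k+1)^2 neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = 0xFF  # identity element for AND on 8-bit pixels
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    ny = min(max(y + dy, 0), h - 1)  # replicate the border
                    nx = min(max(x + dx, 0), w - 1)
                    v &= img[ny][nx]
            out[y][x] = v
    return out

def cl_edges(img):
    """Edge detector G - Geb: original minus its coordinate-logic erosion."""
    eroded = cl_erode(img)
    return [[g - e for g, e in zip(g_row, e_row)]
            for g_row, e_row in zip(img, eroded)]
```

On a flat region the AND leaves every pixel unchanged and the difference is zero; only pixels whose neighborhood mixes values survive, which is exactly the edge response.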


FCTH - Fuzzy Color and Texture Histogram - results from the combination of 3 fuzzy systems. FCTH is intended for use in image retrieval systems. This new feature is suitable for accurately retrieving images even in distortion cases such as deformations, noise and smoothing. It is tested on a large number of images selected from proprietary image databases or randomly retrieved from popular search engines. The retrieval rate of a system that implements FCTH approximates 40 images per second, assuming no prior stored feature information in the searched image databases. To evaluate the performance of the proposed feature, the objective measure called ANMRR is used.

An extension of this feature so as to incorporate spatial information is also proposed. This new feature is called Spatial FCTM (Fuzzy Color and Texture Matrix).

Submitted for Publication

Software and demonstrators for MPEG-7

Img(Rummager): Image Retrieval Software.
Caliph & Emir: Creation and Retrieval of images based on MPEG-7 (GPL).
Frameline 47 Video Notation: Frameline 47 from Versatile Delivery Systems. The first commercial MPEG-7 application, Frameline 47 uses an advanced content schema based on MPEG-7 so as to be able to notate entire video files, or segments and groups of segments from within that video file according to the MPEG-7 convention (commercial tool)
Eptascape ADS100 uses a real-time MPEG 7 encoder on an analog camera video signal to identify interesting events, especially in surveillance applications, check the demos to see MPEG-7 in action (commercial tool)
IBM VideoAnnEx Annotation Tool: Creating MPEG-7 documents for video streams describing structure and giving keywords from a controlled vocabulary (binary release, restrictive license)
iFinder Medienanalyse- und Retrievalsystem: Metadata extraction and search engine based on MPEG-7 (commercial tool)
MPEG-7 Audio Encoder: Creating MPEG-7 documents for audio documents describing low level audio characteristics (binary & source release, Java, GPL)
XM Feature Extraction Web Service: The functionalities of the eXperimentation Model (XM) are made available via a web service interface to enable automatic MPEG-7 low-level visual description characterization of images.
TU Berlin MPEG-7 Audio Analyzer (Web-Demo): Creating MPEG-7 documents (XML) for audio documents (WAV, MP3). All 17 MPEG-7 low level audio descriptors are implemented (commercial)
TU Berlin MPEG-7 Spoken Content Demonstrator (Web-Demo): Creating MPEG-7 documents (XML) with SpokenContent description from an input speech signal (WAV, MP3) (commercial)
MP7JRS C++ Library: Complete MPEG-7 implementation of parts 3, 4 and 5 (visual, audio and MDS) by the Institute of Information Systems and Information Management, JOANNEUM RESEARCH.

Content Based Image Resizing

This post is actually only a copy of the one on the

"I’ve just uploaded a maintenance release. The biggest change is that the seam table is now re-used, therefore computation is somewhat faster. I’ve also cleaned out the code and made it more ‘readable’. Feel free to download and comment:
Java Webstart: ImageSeams
Download binaries & source v4 (Java Swing GUI App): (66K) or SeamCarving-v4.tar.bz2 (57k)
Download Windows binary (Java Swing GUI App with Windows launcher, Java 1.6 needed, 243k)
Other tools (stand-alone, plugins, etc.) are reviewed for instance here. It seems there is a “war of seam carving tools” going on. Many of those are closed source; perhaps some people are trying to make money selling old shoes. The roadmap for this implementation is clear: if the following two constraints are met, development goes on:
Someone (including me) needs some feature / performance upgrade or finds some bug
Someone (possibly me) finds some time to implement the feature / performance upgrade" - Mathias Lux
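The "seam table" mentioned in the release note is the standard dynamic-programming formulation of seam carving. A minimal Python sketch (with grayscale energy values supplied directly, rather than computed from an image as a real implementation would) might look like this:

```python
def min_vertical_seam(energy):
    """Seam table M[y][x] = energy[y][x] + min of the three cells above;
    then backtrack the cheapest 8-connected path from the bottom row."""
    h, w = len(energy), len(energy[0])
    m = [row[:] for row in energy]
    for y in range(1, h):
        for x in range(w):
            m[y][x] += min(m[y - 1][max(x - 1, 0):min(x + 2, w)])
    x = min(range(w), key=lambda i: m[h - 1][i])  # cheapest bottom cell
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo = max(x - 1, 0)
        x = min(range(lo, min(x + 2, w)), key=lambda i: m[y - 1][i])
        seam.append(x)
    seam.reverse()
    return seam  # one x-coordinate per row, top to bottom

def remove_seam(img, seam):
    """Drop one pixel per row, narrowing the image by one column."""
    return [row[:x] + row[x + 1:] for row, x in zip(img, seam)]
```

Re-using the seam table between removals, as the release note describes, avoids recomputing `m` from scratch for columns far from the last removed seam.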

New journal at Springer

Signal, Image and Video Processing, a new quarterly journal from Springer, incorporates all theoretical and practical aspects of Signal, Image and Video Processing. It features original research work, review and tutorial papers and accounts of practical developments.

- Disseminates high level research results and engineering

- Presents practical solutions for the current Signal, Image and Video Processing problems in Engineering and the

Subject areas covered by the journal include but are not limited to:
Adaptive processing, biomedical signal processing, multimedia signal processing, communication signal processing, non-linear signal processing, array processing, statistical signal processing, modeling, filtering, multi-resolution, segmentation, coding, restoration, enhancement, storage and retrieval, colour and multi-spectral processing, scanning, displaying, printing, interpolation, motion detection and estimation, stereoscopic processing.

Vision for Cognitive Systems Conference

ICVS 2008 is the 6th International Conference dedicated to advanced research on Computer Vision Systems. In past years, through advances in microelectronics and digital technology, cameras have become a widespread medium. This has boosted the development of new and fast computer vision systems. To further encourage research in this area, this conference aims to gather researchers and developers from academia and industry worldwide to explore the state of the art. The program committee invites you to attend the conference, which will be held on the sunlit island of Santorini, and to submit papers on all aspects of computer vision systems including, but not limited to:
Computer vision from a system perspective: paradigms, applications, architectures, integration and control.
Cognitive vision techniques for recognition and categorization, knowledge representation, learning, reasoning, goal specification and context awareness.
Methods and metrics for performance evaluation and benchmarking
Besides the main conference program, workshops and tutorials will allow practitioners building computer vision systems to exchange knowledge and ideas. The proceedings of the ICVS 2008 conference will be published in the Lecture Notes in Computer Science (LNCS) series.

Signal and Image Processing Conference

The 10th IASTED International Conference on Signal and Image Processing ~SIP 2008~
August 18 – 20, 2008 Kailua-Kona, Hawaii, USA.
This conference is an international forum for researchers and practitioners interested in the advances in and applications of signal and image processing. It is an opportunity to present and observe the latest research, results, and ideas in these areas. SIP 2008 aims to strengthen relationships between companies, research laboratories, and universities. All papers submitted to this conference will be evaluated double-blind by at least two reviewers. Acceptance will be based primarily on originality and contribution.

Thursday, October 11, 2007

Mpeg-7 Descriptors for C#

Please Update your bookmarks: New URL: