Saturday, November 29, 2008

Yahoo's New VideoTagGame Lets You Tag Within Videos

The internet makes it easy to transfer human intelligence to the machine. With reCAPTCHA, we keep spammers at bay while helping digitize old books; Amazon's Mechanical Turk lets us crowdsource small tasks to an on-demand human workforce; and Google Image Labeler makes the tedious task of tagging fun. Now Yahoo is trying to tap into that human machine with its new VideoTagGame, a game that encourages participants to tag sections within a video for better retrieval.
The first VideoTagGame ran back in the summer of 2007 during a Yahoo! party in Amsterdam. Now they're ready to take the experiment public through the Yahoo! Sandbox so they can collect more usage statistics.


The objective of the VideoTagGame is to collect time-based annotations of a video, which could then enable a search to return the relevant parts of a video rather than the entire video itself. These annotations are collected in the context of a multi-player game.


How To Play
To play the VideoTagGame, participants must sign in with their Yahoo! ID and join a new game. There are always at least three players in each game. After a 3-second countdown, the video begins to play. As it plays, participants enter tags that correspond to the various parts of the video. When two players agree on a tag (that is, they enter the same tag), they each get points; the closer together in time the tags were entered, the more points are awarded. After the video ends, participants can watch the video play again, this time with the tags overlaid on top of it.
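Yahoo has not published the scoring formula; as a rough illustration, a time-proximity scoring rule matching that description might look like the following sketch (the point values and matching window are assumptions, not Yahoo's actual numbers):

```python
# Hypothetical scoring sketch for a tag-agreement game like the VideoTagGame.
# MAX_POINTS and TIME_WINDOW are assumptions; Yahoo's real values are not public.

MAX_POINTS = 100    # assumed points for a perfectly simultaneous match
TIME_WINDOW = 10.0  # assumed window (seconds) within which a match still counts

def match_score(t1: float, t2: float) -> int:
    """Score two players who entered the same tag at video times t1 and t2."""
    gap = abs(t1 - t2)
    if gap > TIME_WINDOW:
        return 0  # too far apart to count as agreement on the same moment
    # Linear falloff: full points at zero gap, nothing at the window edge.
    return round(MAX_POINTS * (1.0 - gap / TIME_WINDOW))

# Example: both players tag "goal", 2 seconds apart -> 80 points each.
print(match_score(31.0, 33.0))
```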

Read More

Friday, November 28, 2008

Yottalook


Yottalook™ is a free radiology-centric web search engine that provides decision support at the point of care, using proprietary relevance and ranking algorithms by iVirtuoso. Yottalook™ is designed to give practicing radiologists the most important and most relevant information they need at the time of patient care.
Core Technologies

Yottalook™ is based on core technologies developed by iVirtuoso to achieve optimized search results. The first is automated analysis of the search term to understand what the radiologist is looking for; this core technology is called "natural query analysis".

Yottalook™ has also developed a thesaurus of medical terminologies that not only identifies synonyms of terms but also defines relationships between them. This second core technology is called "semantic ontology" and is based on existing medical ontologies enhanced by iVirtuoso, such as RadLex, a lexicon for uniform indexing and retrieval of radiology information resources developed by the Radiological Society of North America.
The third core technology is a "relevance algorithm" for image search that differentiates medical terms from other words in the text associated with medical images and uses them to rank results in Yottalook image search.
The fourth core technology is a specialized content delivery system called "Yottalinks" that provides high-yield content based on the search term. This content may also be provided by a third-party vendor licensing Yottalook search. Yottalook™ can be integrated with any web-based medical application so that context-relevant information is provided to the physician at the point of care. http://www.yottalook.com/

Thursday, November 27, 2008

Pen with digital capabilities is a truly innovative and fun way to take notes and record audio

Livescribe's Pulse "smartpen" is part pen, part voice recorder, and part nothing you've ever seen before. Remember Picture Pages? If not, watch this YouTube clip, and then imagine Livescribe's Pulse as the Picture Pages pen on a combination of steroids, hallucinogens, and time-travel pills. It's fun to use, and it could prove to be a groundbreaking, useful tool for students, meeting-hoppers, and journalists.

Tuesday, November 25, 2008

New version of PhotoEnhancer

Version 2.2 of PhotoEnhancer (image enhancement software) is now available here:

http://savvash.blogspot.com/2008/11/photoenhancer.html

The new version features better visualization techniques for all kinds of screens, plus a new feature that better preserves correctly exposed image regions. PhotoEnhancer 2.2 is the most complete version of the project to date.

Monday, November 24, 2008

Kitware

Kitware, a software company with offices in New York and North Carolina, won an initial $6.7 million contract for what is technically called the Video and Image Retrieval and Analysis Tool, or VIRAT. In a statement about the contract award, Kitware projected that through its proposed system, "the most high-value intelligence content will be clearly and intuitively presented to the video analyst, resulting in substantial reductions in analyst workload per mission as well as increasing the quality and accuracy of intelligence yield." Anthony Hoogs, Kitware's project leader, said, "This project will really make a difference to the war fighter." To carry out the project, Kitware said it was teaming up with two leading military technology companies, Honeywell and General Dynamics, as well as a number of academic researchers.

Jena – A Semantic Web Framework for Java

Jena is a Java framework for building Semantic Web applications. It provides a programmatic environment for RDF, RDFS, OWL, and SPARQL, and includes a rule-based inference engine.
Jena is open source and grew out of work with the HP Labs Semantic Web Programme.

The Jena Framework includes:
An RDF API
Reading and writing RDF in RDF/XML, N3 and N-Triples
An OWL API
In-memory and persistent storage
SPARQL query engine
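Jena itself is Java, but the core ideas above (an in-memory RDF model, standard serializations, and SPARQL querying) translate directly to other RDF toolkits. Here is a minimal sketch using Python's rdflib, as an analogous illustration rather than Jena's own API:

```python
# RDF model + SPARQL query sketch using Python's rdflib library.
# This mirrors what Jena provides in Java; it is not Jena's API.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")
g = Graph()

# Build a tiny in-memory model from three triples.
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.name, Literal("Bob")))

# Query it with SPARQL, as Jena's query engine would.
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE {
        <http://example.org/alice> foaf:knows ?friend .
        ?friend foaf:name ?name .
    }
""")
for row in results:
    print(row.name)  # Bob

# Serialize the model (Jena similarly reads/writes RDF/XML, N3, N-Triples).
print(g.serialize(format="turtle"))
```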

Read More

Sunday, November 23, 2008

CVPR 2009

CVPR 2009 will be held at the Fontainebleau Hotel in Miami, Florida.
Papers in the main technical program must describe high-quality, original research.

Topics of interest include all aspects of computer vision and pattern recognition (applied to images and video) including, but not limited to, the following areas:
Sensors
Early and Biologically-inspired Vision
Color and Texture
Segmentation and Grouping
Computational Photography and Video
Motion and Tracking
Shape-from-X
Stereo and Structure from Motion
Image-Based Modeling
Illumination and Reflectance Modeling
Shape Representation and Matching
Object Detection, Recognition, and Categorization
Video Analysis and Event Recognition
Face and Gesture Analysis
Statistical Methods and Learning
Performance Evaluation
Medical Image Analysis
Image and Video Retrieval
Vision for Graphics
Vision for Robotics
Vision for Internet
Applications of Computer Vision

OpenCV: How to keystone an image
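Keystone distortion is corrected with a perspective warp: pick the four corners of the distorted quadrilateral and map them to a rectangle. A minimal sketch with OpenCV's Python bindings (the filenames and corner coordinates are placeholders to be measured in your own image):

```python
# Keystone (perspective) correction sketch using OpenCV.
import cv2
import numpy as np

img = cv2.imread("keystoned.jpg")  # placeholder filename

# Four corners of the distorted quadrilateral in the source image,
# ordered top-left, top-right, bottom-right, bottom-left (placeholder values).
src = np.float32([[120, 80], [530, 60], [560, 420], [90, 400]])

# Where those corners should land in the corrected output.
w, h = 500, 350
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Compute the 3x3 perspective transform and apply it.
M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, M, (w, h))

cv2.imwrite("corrected.jpg", corrected)
```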

Saturday, November 22, 2008

CBMI 2009

Following six successful previous events (Toulouse 1999, Brescia 2001, Rennes 2003, Riga 2005, Bordeaux 2007, London 2008), CBMI 2009 will be held on June 3-5, 2009 in the picturesque city of Chania, on the island of Crete, Greece. It will be organized by the Image, Video and Multimedia Laboratory of the National Technical University of Athens. CBMI 2009 aims to bring together the various communities involved in the different aspects of content-based multimedia indexing, such as image processing and information retrieval, along with current industrial trends and developments. CBMI 2009 is supported by IEEE and EURASIP. The technical program will include invited plenary talks, special sessions, and regular sessions with contributed research papers.

Topics of interest include, but are not limited to:

Multimedia indexing and retrieval (image, audio, video, text)
Matching and similarity search
Construction of high level indices
Multimedia content extraction
Identification and tracking of semantic regions in scenes
Multi-modal and cross-modal indexing
Content-based search
Multimedia data mining
Metadata generation, coding and transformation
Large scale multimedia database management
Summarisation, browsing and organization of multimedia content
Presentation and visualization tools
User interaction and relevance feedback
Personalization and content adaptation
Evaluation and metrics

Thursday, November 20, 2008

ACM CIVR 2009

ACM International Conference on Image and Video Retrieval
July 8-10, 2009, Santorini Island, Greece - http://www.civr2009.org/ -
Image and video retrieval has now reached a state where successful techniques and applications are beginning to flourish. The ACM International Conference on Image and Video Retrieval (ACM-CIVR) series is the ideal opportunity to present and encounter such developments. Originally set up to illuminate the state of the art in image and video retrieval throughout the world, it is now a reference event in the field where researchers and practitioners exchange knowledge and ideas. CIVR2009 is seeking original, high-quality special sessions addressing innovative research in the broad field of image and video retrieval. We wish to highlight significant and emerging areas, covering not only the main problem of search and retrieval but also the equally important related issues of multimedia content management, user interaction, and community-based management.
Example topics of interest include but are not limited to: social network information mining, unsupervised methods for data exploration, large scale issues for algorithms and data set generation.
Each special session will consist of 5 invited papers. The organizers' role is to attract the speakers and chair the session itself. Proposals will be evaluated based on the timeliness of the topic, relevance to CIVR, the degree to which they bring together key researchers in the area, introduce the area to the larger research community, and further develop the area, and their potential to establish a larger community around the area. Please note that all papers in the proposed session will undergo the same review process as regular papers. If, after the reviewing process, fewer than the necessary number of papers solicited for a special session are selected, the special session will be cancelled, and the solicited papers that passed the review process will be presented within regular sessions of the conference.

Photo Tourism: Exploring Photo Collections in 3D

Photo Tourism is a system for browsing large collections of photographs in 3D. Our approach takes as input large collections of images from either personal photo collections or Internet photo sharing sites, and automatically computes each photo's viewpoint and a sparse 3D model of the scene. Our photo explorer interface enables the viewer to interactively move about the 3D space by seamlessly transitioning between photographs, based on user control.
Microsoft Live Labs has turned these research ideas into a streaming multi-resolution Web-based service called Photosynth.
You can also read about newer research we have been doing in this area at the University of Washington Photo Tourism project page.



Paper and video

Noah Snavely, Steven M. Seitz, and Richard Szeliski, "Photo Tourism: Exploring photo collections in 3D," ACM Transactions on Graphics, 25(3), August 2006. (Video (WMV), Video (MOV))
Abstract

We have developed a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end, which automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo navigation tool uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, as well as to annotate image details, which are automatically transferred to other relevant images in the collection. We demonstrate our system on several large personal photo collections as well as images gathered from photo sharing Web sites on the Internet.

Wednesday, November 19, 2008

EngLab - Open Source Mathematical Platform

EngLab is a cross-platform mathematical platform with a C-like syntax, intended to be used both by engineers and by users with little programming knowledge. The initiative was started by a group of students a year ago.



"Our goal is to develop an easy-to-use computaion and simulation platform with a C++ like syntax. We have adopted Matlab's structure philoshophy and C++ 's structured language syntax. There are various toolboxes (packages of functions relative to a certain scientific field), which depend on open-source libraries." EngLab Team

The EngLab distribution comes in two basic releases: EngLab Console and EngLab GUI. EngLab Console allows EngLab to be run from the console (Linux or Windows). EngLab GUI offers the option of using EngLab through a graphical user interface; it is implemented with the open-source library wxWidgets 2.8 and provides additional usability compared to the Console edition. EngLab GUI is standalone, so EngLab Console does not need to be installed in order to install and run it.

Toolboxes are distributed as separate packages, and they can be installed through either EngLab Console or EngLab GUI. This is because the toolboxes depend on open-source libraries that must be installed first; rather than forcing users to install those libraries directly, EngLab lets them install packages and toolboxes at will.

For the time being, EngLab Console edition is available for Windows and Linux and Englab GUI is available for Linux only.

So far, EngLab has the following features:

- 16 types of variable declaration (int, float, ...)
- Variable declaration with an unlimited number of dimensions
- Loop structures (for, while, ...)
- Arithmetic, logical and binary operations
- Constant number declaration (pi, phi, ...)
- Graphical manipulation of variable values of any dimension (Englab GUI)
- Adjustable graphical environment (Englab GUI)
- Editor for writing *.eng functions (Englab GUI)
- Command history for the last 5 sessions
- Immediate access to variables, constants and functions (EngLab GUI)
- Recent files opened through EngLab (EngLab GUI)

Toolboxes that have been fully or partially implemented:

- a package containing fundamental C functions (trigonometric, hyperbolic trigonometric, ...)
- a package containing some statistical functions
- a package containing functions for converting between variable types

All these toolboxes accompany the two basic EngLab editions, since they do not depend on any other open-source library. Moreover, some other toolboxes have been partially implemented:

- a package that contains functions for the manipulation of 2-D matrices (determinant, inverse, ...). This package depends on the open-source library NewMat10.
- a package that contains functions for image processing. This package depends on the open-source library CImg.
- a package that contains functions for image processing. This package depends on the open-source library OpenCV.
- a toolbox for visual data representation (plots, etc.)
- a toolbox that contains functions for manipulating polynomials, root detection, computation of integrals and derivatives, special functions, and more.

Tuesday, November 18, 2008

Smile And Robot Smiles With You

Humanoid 'Jules' is a disembodied androgynous robotic head that automatically copies the movement and expressions of a human face.
The technology works using 10 stock human emotions - for instance happiness, sadness, concern - that have been programmed into the robot.
The software then maps what it sees to Jules' face to combine expressions instantly to mimic those being shown by a human subject.
Controlled only by its own software, Jules can grin and grimace, furrow its brow, and 'speak' as the software translates real expressions observed through video camera 'eyes'.
"If you want people to be able to interact with machines, then you've got to be able to do it naturally... When it moves it has to look natural, in the same way that human expressions are, to make interaction useful." - Chris Melhuish, head of the Bristol Robotics Laboratory
The robot - made by US roboticist David Hanson - then copies the facial expressions of the human by converting the video image into digital commands that make the robot's inner workings produce mirrored movements.
And it all happens in real time as Jules is bright enough to interpret the commands at 25 frames per second.
The project was developed over more than three years at the Bristol Robotics Laboratory, a lab run by the University of the West of England and the University of Bristol under the leadership of Chris Melhuish, Neill Campbell and Peter Jaeckel.
The aim of the developers was to make it easier for humans to interact with 'artificial intelligence', in other words to create a 'feelgood' factor.
The BRL's Peter Jaeckel said: "Realistic, life-like robot appearance is crucial for sophisticated face-to-face robot-human interaction.
"Researchers predict that one day robotic companions will work with, or assist, humans in space, care and education. Robot appearance and behaviour need to be well matched to meet expectations formed by our social experience."
But a warning note has been sounded.
Kerstin Dautenhahn, a robotics researcher at the University of Hertfordshire, believes that people may be disconcerted by humanoid automatons that simply look 'too human'.
"People might easily be fooled into thinking that this robot not only looks like a human and behaves like a human, but that it can also feel like a human. And that's not true," she pointed out.

Monday, November 17, 2008

Zunavision: Novel software by Stanford University students and professors to embed videos/images inside video

Stanford artificial intelligence researchers have developed software that makes it easy to reach inside an existing video and place a photo on the wall so realistically that it looks like it was there from the beginning. The photo is not pasted on top of the existing video, but embedded in it. It works for videos as well: you can play a video on a wall inside your video. The technology can cheaply do some of the tricks normally performed by expensive commercial editing systems.
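Zunavision's tracking and lighting-matching pipeline is proprietary, but the basic single-frame compositing step (warping a photo onto a quadrilateral such as a wall) can be sketched with OpenCV. The filenames and corner coordinates below are placeholders, and a real system would track the surface across frames and blend lighting:

```python
# Single-frame sketch of embedding a photo on a planar surface in a video frame.
# Zunavision additionally tracks the surface across frames and matches lighting;
# this shows only the warp-and-composite step, with placeholder coordinates.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")  # one extracted video frame (placeholder)
photo = cv2.imread("photo.jpg")  # the image to embed (placeholder)

ph, pw = photo.shape[:2]
src = np.float32([[0, 0], [pw, 0], [pw, ph], [0, ph]])
# Corners of the target surface (e.g. a wall) in the frame, placeholder values.
quad = np.float32([[200, 100], [400, 120], [390, 300], [210, 320]])

# Warp the photo onto the quadrilateral.
M = cv2.getPerspectiveTransform(src, quad)
warped = cv2.warpPerspective(photo, M, (frame.shape[1], frame.shape[0]))

# Composite: warped pixels inside the quad, original frame pixels elsewhere.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, quad.astype(np.int32), 255)
composited = np.where(mask[:, :, None] == 255, warped, frame)

cv2.imwrite("composited.jpg", composited)
```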

Friday, November 14, 2008

New Experimental Features on img(Anaktisi)

Two new experimental features have been implemented in the online image retrieval system img(Anaktisi):
1. Draw a sketch to retrieve similar images from our database. The method is based on a new spatial compact color descriptor.


2. Automatic keyword annotation. Select a combination of words and retrieve images. The method is based on a fuzzy support vector machine system. The network was trained using a combination of CEDD and FCTH descriptors.


Note that both techniques are still under study.

Thursday, November 13, 2008

TouchGraph

TouchGraph lets you see how your friends are connected, and who has the most photos together.

The TouchGraph Facebook Browser shows connections between users based on friendships and common photo appearances.
Friendships are shown as dark gray lines, and common photo appearances are shown as a lighter gray line with a number in the center. The number indicates how many photos the two people appeared in together.
Personal vs. Friends social networks
* When launched from one's own profile, information about all of one's friends and their friendships is loaded.
* When launched from another user's profile (using the "TouchGraph Friends + Photos" link below their profile picture), only people tagged in their photos will appear in the graph.
One cannot see another person's whole social network, because Facebook only allows applications to get a list of one's own friends. For other users, it is only possible to get a list of people they appear in photos with. Perhaps Facebook's policy will change in the future.
Clusters
The TouchGraph Facebook Browser determines the clusters/cliques to which one's Friends belong and uses different colors to show each clique. Cliques are characterized by having lots of friendships within a group of friends and few connections to members outside the group.
Rank
Friends are assigned a rank so that one can reduce clutter by showing only a set of 'top' friends. TouchGraph gives the highest rank to friends who are connectors between different cliques. Finding connectors relies on a metric called betweenness centrality, an established measure of a person's importance within a social network.
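Betweenness centrality counts how often a node lies on the shortest paths between other pairs of nodes, so friends who bridge cliques score highest. A minimal sketch with the networkx library (the friendship graph is made up for illustration):

```python
# Betweenness centrality on a toy friendship graph, using networkx.
# Bridge nodes between cliques score highest, which is the property
# TouchGraph uses to rank "connector" friends.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),  # clique 1
    ("dave", "erin"), ("dave", "frank"), ("erin", "frank"),  # clique 2
    ("carol", "dave"),                                       # the bridge
])

centrality = nx.betweenness_centrality(g)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.3f}")
# "carol" and "dave" rank highest: every path between the two cliques
# passes through them.
```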

http://apps.facebook.com/touchgraph/

Wednesday, November 12, 2008

A New Era for Image Annotation

Searching for images on the Web has traditionally been more complicated than text search – for instance, a Google image search for "tiger" not only yields images of tigers, but also returns images of Tiger Woods, tiger sharks, and many others that are 'related' to the text in the query string. This is because contemporary search engines look for images using any 'text' linked to images rather than the 'content' of the picture itself. In an effort to improve the recall of image searches, folks from UC San Diego are working on a search engine that works differently – one that analyzes the image itself. "You might finally find all those unlabeled pictures of your kids playing soccer that are on your computer somewhere," says Nuno Vasconcelos, a professor of electrical engineering at the UCSD Jacobs School of Engineering. They claim that their Supervised Multiclass Labeling System "may be folded into next-generation image search engines for the Internet; and in the shorter term, could be used to annotate and search commercial and private image collections."
Read More

Sunday, November 9, 2008

PhotoEnhancer

PhotoEnhancer is an experimental image enhancement application that employs the characteristics of the ganglion cells of the human visual system. Often, the image captured by a camera and the image in our eyes are dramatically different, especially when there are shadows or highlights in the same scene. In these cases our eyes can distinguish many details in the shadows or highlights, while the image captured by the camera suffers from loss of visual information in those regions.







PhotoEnhancer attempts to bridge the gap between "what you see" and "what the camera sees". It enhances the shadow or highlight regions of an image while keeping all the correctly exposed regions intact. The final result is much closer to the human perception of the scene than the originally captured image, revealing visual information that would otherwise be unavailable to the human observer.

PhotoEnhancer 2.4

The latest version of PhotoEnhancer (2.4) has been released. Version 2.4 features a 'Batch Processing' mode for quickly enhancing many image files with just a few clicks, as well as an improved user interface.

Version History:

PhotoEnhancer 2.3

The new version of PhotoEnhancer features a "Multi-Scale Image Contrast Enhancement" method. This additional algorithm locally enhances the contrast of images, maximizing the available visual information. It can be applied to foggy scenes, aerial or satellite images, images with smoke, or medical images.
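PhotoEnhancer's ganglion-cell-based algorithm is the author's own and not reproduced here; as a rough analogue, OpenCV's CLAHE (a different, standard technique) illustrates the general idea of boosting local contrast in dark and bright regions while limiting over-amplification:

```python
# Local contrast enhancement sketch using CLAHE (Contrast Limited Adaptive
# Histogram Equalization). This is NOT PhotoEnhancer's algorithm; it is a
# standard OpenCV technique shown only as a rough analogue.
import cv2

img = cv2.imread("photo.jpg")  # placeholder filename

# Operate on the lightness channel only, so colors are not distorted.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_enhanced = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_enhanced, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("enhanced.jpg", enhanced)
```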

Download



Bug reports and suggestions: bbonik@ee.duth.gr
More Details:

http://sites.google.com/site/vonikakis/Home

Saturday, November 8, 2008

GazoPa

GazoPa is a similar-image search service on the web, in private beta by Hitachi. Users can search for images from the web based on their own photos, drawings, images found on the web, and keywords. GazoPa enables users to search for similar images using characteristics such as color or shape extracted from an image itself. There are abundant quantities of images on the web, but many of them simply cannot be described by keywords. Since GazoPa uses image features to find other similar images, a vast range of images can be retrieved from the web. GazoPa is a new visual search service that can navigate users to new territories on the web. http://www.gazopa.com/sign_in
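GazoPa's feature extraction is proprietary, but the general mechanics of content-based ranking can be sketched with simple color histograms in OpenCV, a deliberately crude stand-in for the color and shape features GazoPa extracts (filenames are placeholders):

```python
# Content-based similarity sketch: rank images by color-histogram distance.
# A deliberately simple stand-in for the color/shape features GazoPa uses.
import cv2

def color_histogram(path):
    """Normalized 3D HSV histogram as a compact color signature."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

query = color_histogram("query.jpg")        # placeholder filenames
candidates = ["a.jpg", "b.jpg", "c.jpg"]

# Smaller Bhattacharyya distance = more similar color distribution.
ranked = sorted(
    candidates,
    key=lambda p: cv2.compareHist(query, color_histogram(p),
                                  cv2.HISTCMP_BHATTACHARYYA),
)
print(ranked)  # most similar first
```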

Wednesday, November 5, 2008

Multimodal and Mobile Personal Image Retrieval: A User Study

Over the last few months, I have been collaborating on a research project on multimodal information retrieval of digital pictures captured with camera phones. Recently, one of the papers summarizing the results of the research was presented at the International Workshop on Mobile Information Retrieval, held in conjunction with SIGIR in Singapore. Here are the abstract and the URL for downloading the paper.
X. Anguera, N. Oliver, and M. Cherubini. Multimodal and mobile personal image retrieval: A user study. In K. L. Chan, editor, Proceedings of the International Workshop on Mobile Information Retrieval (MobIR'08), pages 17-23, Singapore, 20-24 July 2008. [PDF]
Mobile phones have become multimedia devices, so it is not uncommon to observe users capturing photos and videos on their mobile phones. As the amount of digital multimedia content expands, it becomes increasingly difficult to find specific images in the device. In this paper, we present our experience with MAMI, a mobile phone prototype that allows users to annotate and search for digital photos on their camera phone via speech input. MAMI is implemented as a mobile application that runs in real-time on the phone. Users can add speech annotations at the time of capturing photos or at a later time. Additional metadata is also stored with the photos, such as location, user identification, date and time of capture, and image-based features. Users can search for photos in their personal repository by means of speech, without needing connectivity to a server. In this paper, we focus on our findings from a user study aimed at comparing the efficacy of the search and the ease-of-use and desirability of the MAMI prototype against the standard image browser available on mobile phones today.
Source

Adobe Photoshop Lightroom 2

Adobe Photoshop Lightroom 2 is best categorized as a digital processor: from bringing images to your computer, cataloging them for later retrieval (and, if you want, backing them up to protect against accidental loss), enhancing and fine-tuning your images, all the way to printing and/or digital distribution, one can do it all from Lightroom. One of the strongest reasons to use Lightroom, however, is that you can play with images and create as many versions and variations as you like without screwing up your original image. Any alteration you make on an image in Lightroom only changes how Lightroom lets you "see" the image; nothing is changed in the image itself unless you save a copy with those changes. The biggest negative about Lightroom is that the interface constantly changes depending on what you've clicked. This makes "hacking" the program a challenge, and working with Lightroom isn't helped by a manual that doesn't properly explain the conditions under which you will see what is being explained. Despite the complex learning curve, there is much to like in Lightroom.

Read More

Neural Networks on C#

It is a known fact that there are many problems for which it is difficult to find formal algorithms. Some problems cannot be solved easily with traditional methods; some do not even have a solution yet. For many such problems, neural networks can be applied, and they demonstrate rather good results across a great range of them. The history of neural networks starts in the 1950s, when the simplest neural network architecture was presented. After the initial work in the area, the idea of neural networks became rather popular. But then the field suffered a crash, when it was discovered that the neural networks of the time were very limited in the range of tasks they could be applied to. In the 1970s, the area got another boom, when the idea of multi-layer neural networks with the back propagation learning algorithm was presented. Since then, many researchers have studied the area, which has led to a vast range of different neural architectures applied to a great range of different problems. Today, neural networks can be applied to tasks like classification, recognition, approximation, prediction, clustering, memory simulation, and many others, and the list keeps growing.
In this article, a C# library for neural network computations is described. The library implements several popular neural network architectures and their training algorithms, like Back Propagation, Kohonen Self-Organizing Map, Elastic Network, Delta Rule Learning, and Perceptron Learning. The usage of the library is demonstrated on several samples:
Classification (one-layer neural network trained with the perceptron learning algorithm; a sketch of this rule appears after the article link below);
Approximation (multi-layer neural network trained with back propagation learning algorithm);
Time Series Prediction (multi-layer neural network trained with back propagation learning algorithm);
Color Clusterization (Kohonen Self-Organizing Map);
Traveling Salesman Problem (Elastic Network).
The attached archives contain source code for the entire library, all the samples listed above, and some additional samples that are not listed or discussed in the article.
The article is not intended to cover the entire theory of neural networks, which can easily be found in a great range of resources all over the Internet, including on CodeProject. Instead, the article assumes that the reader has a general knowledge of neural networks, and its aim is to discuss a C# library for neural network computations and its application to different problems.
http://www.codeproject.com/KB/recipes/aforge_neuro.aspx
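Since the article's library is C#, here is only a language-neutral sketch of the perceptron learning rule used by its classification sample, written in Python rather than against the library's API:

```python
# Perceptron learning rule sketch (language-neutral illustration in Python;
# the article's library is C#, and this is not its API).
import random

def train_perceptron(samples, labels, lr=0.1, epochs=100):
    """Train a single-layer perceptron for binary (0/1) classification."""
    n = len(samples[0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Threshold activation on the weighted sum.
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            output = 1 if activation >= 0 else 0
            error = target - output
            # Perceptron rule: nudge weights toward the correct answer.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Example: learn the logical AND function.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print(w, b)  # weights/bias of a separating line for AND
```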

Special Issue: Advances in Medical Intelligent Decision Support Systems

Intelligent Decision Technologies (IDT) journal seeks original manuscripts
for a Special Issue on Advances in Medical Decision Support Systems, scheduled to appear in Vol. 3, No. 2, 2009.

The last few decades have witnessed significant advances in intelligent computation techniques. Driven by the need to solve complex real-world problems, powerful and sophisticated intelligent data analysis technologies have emerged or been exploited, such as neural networks, support vector machines, evolutionary algorithms, clustering methods, fuzzy logic, particle swarm optimization, data mining, etc. In recent years, the volume of biological data has been increasing exponentially, allowing significant learning and experimentation to be carried out using a multidisciplinary approach, which gives rise to many challenging problems. The foundation for any medical decision support is the medical knowledge base, which contains the necessary rules and facts. This knowledge needs to be acquired from information and data in the fields of interest, such as medicine. Clinical decision-making is a challenging, multifaceted process; its goals are precision in diagnosis and institution of efficacious treatment. Achieving these objectives involves access to pertinent data and application of previous knowledge to the analysis of new data in order to recognise patterns and relations. As the volume and complexity of data have increased, the use of digital computers to support data analysis has become a necessity. In addition to the computerisation of standard statistical analysis, several other techniques for computer-aided data classification and reduction, generally referred to as intelligent systems, have evolved.

This special issue will focus on illustrative and detailed information about medical intelligent decision support systems and feature extraction/selection for automated diagnostic systems, including determination of optimum classification schemes for the problems under study and inference of clues about the extracted features. Topics include, but are not limited to, the following:
* Bioinformatics and Computational Biology
* Neural Networks, Fuzzy Logic Systems and Support Vector Machines in Biological Signal Processing
* Decision Support Systems and Computer Aided Diagnosis
* Biomedical Signal Processing
* Biomedical Imaging and Image Processing
* Modelling, Simulation, Systems, and Control
Paper submission: Submitted articles must not have been previously published or currently be under submission for journal publication elsewhere. As an author, you are responsible for understanding and adhering to our submission guidelines, which you can access at http://www.iospress.nl. Please read these thoroughly before submitting your manuscript. Each paper will go through a rigorous review process.

Please note the following important dates:

Paper submission for review: November 30, 2008 (Final deadline)
Review results: January 15, 2009
Revised Paper submission: February 20, 2009
Final acceptance: March 1, 2009
Manuscript delivery to the publisher: April 15, 2009

Interested authors should submit digital copies (PDF preferred) of their papers (suggested paper length: 15 pages), including all tables, diagrams, and illustrations, to the Guest Editor, Dr. Vassilis S. Kodogiannis, by e-mail.

Tuesday, November 4, 2008

WordsEye

Create 3D scenes using language. Share them with others.

Here is an example:

Text: "a tiny grey manatee is in the aquarium. it is facing right. the fishdead of the aquarium is invisible. the manatee is two inches above the tank_sand of the aquarium. the ground is tile. there is a large brick wall behind the aquarium."

Result:

http://www.wordseye.com/frontpage