Wednesday, September 30, 2009

The 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE 2010)

The 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE 2010) will be held from June 18th to 20th, 2010 in Chengdu, China. iCBBE 2010 will bring together top researchers from the Asia-Pacific region, North America, Europe and the rest of the world to exchange research results and address open issues in all aspects of bioinformatics and biomedical engineering. All papers accepted for iCBBE 2010 will be published by IEEE and indexed by Ei Compendex and ISTP.

Topics: The conference is soliciting state-of-the-art research papers in the following areas of interest:

Bioinformatics and Computational Biology
Protein structure, function and sequence analysis
Protein interactions, docking and function
Computational proteomics
DNA and RNA structure, function and sequence analysis
Gene regulation, expression, identification and network
Structural, functional and comparative genomics
Computational evolutionary biology
Data acquisition, normalization, analysis and visualization
Algorithms, models, software, and tools in Bioinformatics
Any novel approaches to bioinformatics problems

Biomedical Engineering
Biomedical imaging, image processing & visualization
Bioelectrical and neural engineering
Biomaterials and biomedical optics
Methods and biological effects of NMR/CT/ECG technology
Biomedical devices, sensors, and artificial organs
Biochemical, cellular, molecular and tissue engineering
Biomedical robotics and mechanics
Rehabilitation engineering and clinical engineering
Health monitoring systems and wearable systems
Bio-signal processing and analysis
Biometrics and bio-measurement
Other topics related to biomedical engineering

http://www.icbbe.org/2010/CallForPapers.aspx

5th International Conference on Mass Data Analysis of Images and Signals in Medicine, Biotechnology, Chemistry and Food Industry

July 12-13, 2010, Berlin, Germany

The International Conference on Mass Data Analysis of Images and Signals in Medicine, Biotechnology, Chemistry and Food Industry (MDA) is held on a yearly basis.
The aim of the conference is to bring together researchers from all over the world who deal with the automatic analysis of images and signals in medicine, biotechnology, and chemistry, in order to discuss the current status of the research and to direct further developments. Both basic research papers and application papers are welcome.

MDA Topics

The Scope of the Conference
The Goals of the Conference
Topics of the Conference

The Scope of the Conference

The automatic analysis of images and signals in medicine, biotechnology, and chemistry is a challenging and demanding field. Signal-producing procedures using microscopes, spectrometers and other sensors have found their way into wide fields of medicine, biotechnology, economy and environmental analysis. With this arises the problem of the automatic mass analysis of signal information. Signal-interpreting systems which automatically generate the desired target statements from the signals are therefore a compelling necessity. Continuing mass analyses on the basis of the classical procedures would lead to investments of infeasible proportions. New procedures and system architectures are therefore required. The scope of the conference is to bring together researchers, practitioners and industry people who are dealing with mass analysis of images and signals, to present and discuss recent research in these fields.

Tuesday, September 29, 2009

Semantic Classification of Byzantine Icons

The painters of Byzantine and post-Byzantine artworks use specific rules and iconographic patterns for the creation of sacred figures. Based on these rules, the sacred figure depicted in an artwork is recognizable. In this work, we propose an automatic knowledge-based image analysis system for the classification of Byzantine icons on the basis of sacred figure recognition.

Figure 1. The Byzantine icon classification system’s architecture. The analysis module extracts information about the icon, and the knowledge representation and reasoning module uses this information to infer implicit knowledge that categorizes the icon.

Firstly, the system detects and analyzes the most important facial characteristics, providing rich yet imprecise information about the Byzantine icon. Then the extracted information is expressed in terms of an expressive terminology formalized using Description Logics (DLs), which form the basis of Semantic Web ontology languages.

Figure 2. Image analysis. First, the algorithm detects the sacred figure’s face region, eyes, and nose. Then, it extracts the hair, forehead, cheek, mustache, and beard parts together with the face’s base color layers. Further analysis of the extracted parts provides information about characteristic features. Finally, the algorithm produces a semantic interpretation for each of these features, together with formal assertions.

In order to effectively handle the imprecision involved, fuzzy extensions of DLs are used for the assertional part of the ontology. In this way, the information extracted by image analysis comprises the assertional component, while the expressive terminology, formalizing the rules and the iconographic patterns, permits the categorization of Byzantine artworks.

Figure 3. A reasoning example. The extracted information from image analysis constitutes the assertional component (ABox) of the knowledge base; the terminological component (TBox) is defined on the basis of Fourna’s specification. These components form the input to the fuzzy reasoning engine, which infers information about the icon.
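For illustration only (the concept and role names below are invented for this post, not taken from the paper), a fuzzy assertion produced by image analysis and a terminological definition it can be matched against might read:

$\langle \mathit{figure}_1 : \exists \mathit{hasPart}.(\mathit{Beard} \sqcap \mathit{Long}) \rangle \geq 0.8$

$\mathit{SaintA} \equiv \mathit{Figure} \sqcap \exists \mathit{hasPart}.(\mathit{Beard} \sqcap \mathit{Long}) \sqcap \exists \mathit{hasPart}.(\mathit{Hair} \sqcap \mathit{Gray})$

From such input the fuzzy reasoner can infer the degree to which $\mathit{figure}_1$ is an instance of $\mathit{SaintA}$, and the icon is classified according to the best-matching definition.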

P. Tzouveli, N. Simou, G. Stamou, and S. Kollias, "Semantic Classification of Byzantine Icons," IEEE Intelligent Systems, vol. 24, no. 2, pp. 35-43, March/April 2009.

Sunday, September 27, 2009

New CCD descriptor

The fuzzy rule-based scalable composite descriptor (BTDH) is a new descriptor that can be used for the indexing and retrieval of radiology medical images. The descriptor combines brightness and texture characteristics, as well as the spatial distribution of these characteristics, in one compact 1D vector. Its most important property is that its size adapts to the storage capabilities of the application using it, which makes the descriptor appropriate for large medical (or grayscale) image databases.
To extract the proposed descriptor, a two-unit fuzzy system is used. To extract the brightness information, a fuzzy unit classifies the brightness values of the image's pixels into L_{Bright} clusters. The cluster centers are calculated using the Gustafson-Kessel fuzzy classifier.
The texture information embodied in the proposed descriptor comes from the directionality histogram. This feature is part of the well-known Tamura texture features.
A fractal scanning method, using either the Hilbert curve or the Z-grid, is used to capture the spatial distribution of the brightness and texture information.
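As a rough illustration of the fractal scanning step (a minimal C# sketch, not the authors' implementation), the standard conversion from a distance d along the Hilbert curve to pixel coordinates can be used to visit an n x n image (n a power of two) in a locality-preserving order; accumulating the fuzzy brightness/texture histograms along this order yields the spatial part of the descriptor:

    // Convert distance d along the Hilbert curve to (x, y) on an n x n grid,
    // where n is a power of two. Visiting d = 0 .. n*n-1 scans the image so
    // that pixels close in the image stay close in the 1D order.
    static void HilbertD2XY(int n, int d, out int x, out int y)
    {
        x = 0; y = 0;
        for (int s = 1, t = d; s < n; s *= 2)
        {
            int rx = 1 & (t / 2);
            int ry = 1 & (t ^ rx);
            if (ry == 0)
            {
                if (rx == 1) { x = s - 1 - x; y = s - 1 - y; }
                int tmp = x; x = y; y = tmp;   // rotate the quadrant
            }
            x += s * rx;
            y += s * ry;
            t /= 4;
        }
    }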

Seven approaches were used in our experiments to evaluate the performance of this descriptor:

The approach displaying the best balance between descriptor size and retrieval performance is approach E5. Download the DLL file and use the following source code to extract the descriptor.

// C# sketch: BTDH and its members come from the downloadable DLL.
// Load the image to describe.
Bitmap ImageData = new Bitmap("c:/1.jpg");

// Create the extractor; the constructor arguments configure the descriptor
// (see the papers below for their exact meaning).
BTDH GetBTDH = new BTDH(16, 8, true);

// Run the extraction and read back the resulting descriptor vector
// (2,048 doubles in the original snippet).
GetBTDH.extract(ImageData);
double[] BTDHTable = GetBTDH.SFBTDD;

1. S. A. Chatzichristofis and Y. S. Boutalis, "Content Based Radiology Image Retrieval Using a Fuzzy Rule Based Scalable Composite Descriptor," Multimedia Tools and Applications, Special Issue on Data Semantics for Multimedia Systems, Springer, to appear, 2009, DOI 10.1007/s11042-009-0349-x. [Download]
2. S. A. Chatzichristofis and Y. S. Boutalis, "Content Based Medical Image Indexing and Retrieval Using a Fuzzy Compact Composite Descriptor," The Sixth IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2009), ACTA Press, pp. 1-6, February 17-19, 2009, Innsbruck, Austria. [Download]
MAP results on the IRMA 2005 medical image database


Thursday, September 24, 2009

2010 IEEE International Symposium on Biomedical Imaging (ISBI)

14-17 April 2010, Rotterdam, The Netherlands

Four-page paper submission deadline: 2 November 2009!

See http://www.biomedicalimaging.org/ for details

The IEEE International Symposium on Biomedical Imaging (ISBI) is the premier forum for the presentation of technological advances in theoretical and applied biomedical imaging. ISBI 2010 will be the seventh meeting in this series. The previous meetings have played a leading role in facilitating interaction between researchers in medical and biological imaging. The 2010 meeting will continue this tradition of fostering cross-fertilization among different imaging communities and contributing to an integrative approach to biomedical imaging across all scales of observation. ISBI is a joint initiative of the IEEE Engineering in Medicine and Biology Society (EMBS) and the IEEE Signal Processing Society (SPS). The 2010 meeting will feature an opening morning of tutorials, followed by a scientific program of plenary talks, special sessions, and oral and poster presentations of peer-reviewed contributed papers.

Confirmed Plenary Talks:

  • Richard Ehman (Mayo Clinic, USA). Topic: New clinical imaging technologies
  • Jason Swedlow (University of Dundee, UK). Topic: Challenges in bioimage informatics
  • Clemens Lowik (Leiden University Medical Center, Netherlands). Topic: Molecular imaging and applications
  • Milan Sonka (University of Iowa, USA). Topic: Challenges in biomedical image analysis

Confirmed Special Sessions:

  • Functional magnetic resonance and diffusion tensor imaging. Organizer: Carl-Fredrik Westin (Harvard Medical School, USA)
  • High-field clinical magnetic resonance imaging. Organizer: Andrew Webb (Leiden University Medical Center, Netherlands)
  • Fluorescence guided surgery. Organizer: Vasilis Ntziachristos (Technical University of Munich, Germany)
  • Whole-body image acquisition and analysis. Organizer: Faiza Admiraal-Behloul (Toshiba Medical Systems Europe)
  • Histological and intravital microscopy. Organizer: Tom Vercauteren (Mauna Kea Technologies, France)
  • Ultrasound imaging and analysis. Organizer: Hans Bosch (Erasmus Medical Center, Netherlands)
  • Multi-parameter biomedical optical imaging and analysis. Organizer: Atam Dhawan (New Jersey Institute of Technology, USA)
  • Computer aided diagnosis. Organizer: Nico Karssemeijer (University Medical Centre Nijmegen, Netherlands)

Contributed Program: High-quality papers are solicited describing original contributions to the mathematical, algorithmic, and computational aspects of biomedical imaging, from nano- to macro-scale. Topics of interest include image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Papers on molecular, cellular, anatomical, and functional imaging modalities and applications are welcomed. All accepted papers will be published in the proceedings of the symposium and will be available online through the IEEE Xplore database.

Important Dates: Deadline for submission of 4-page paper: November 2, 2009

Notification of acceptance/rejection: January 15, 2010

Submission of final accepted 4-page paper: February 15, 2010

Deadline for author registration: February 15, 2010

Deadline for early registration: March 1, 2010

Monday, September 21, 2009

IJCSIS Research Series

(IJCSIS) International Journal of Computer Science and Information Security
Kindly circulate call for papers at your institution:
Call for Papers – IJCSIS Vol. 4, August 2009
http://sites.google.com/site/ijcsis/ijcsis-cfp-august2009
Call for Papers – IJCSIS Vol. 5, September 2009
http://sites.google.com/site/ijcsis/call-for-paper-september-2009
Be part of IJCSIS Technical Review Committee
http://sites.google.com/site/ijcsis/ijcsis-reviewers

Friday, September 18, 2009

An improved Huffman coding method for archiving text, images, and music characters in DNA

Menachem Ailenberg, Ori D. Rotstein

Departments of Surgery, University of Toronto, and St. Michael's Hospital, Li Ka Shing Knowledge Institute, Keenan Research Centre, Toronto, Ontario, Canada

BioTechniques, Vol. 47, No. 3, September 2009, pp. 747–754


Abstract

An improved Huffman coding method for information storage in DNA is described. The method entails the utilization of modified unambiguous base assignment that enables efficient coding of characters. A plasmid-based library with efficient and reliable information retrieval and assembly with uniquely designed primers is described. We illustrate our approach by synthesis of DNA that encodes text, images, and music, which could easily be retrieved by DNA sequencing using the specific primers. The method is simple and lends itself to automated information retrieval.

Introduction

The increasing use of digital technology presents a challenge for existing storage capabilities. The need for a reliable and long-term solution for information storage is further heightened by the prediction that current magnetic and optical storage will become unrecoverable within a century or less (1). DNA is a compact, long-term, and proven medium for information storage. Indeed, over the last few decades, a good case has been made for crucial information storage in DNA (2). Desirable properties of DNA include its capacity for long-term information storage and recovery, which are mostly independent of technological changes, its ability to conceal data in a miniaturized fashion, and its ability to be transferred, when required, via self-propagation (1,2,3,4,5,6). Various approaches for information coding in DNA have been reported, including the Huffman code, the comma code, and the alternating code (4), a straight coding based on 3 bases per letter (1,2,6), or sequential conversion of text to keyboard scan codes followed by conversion to hexadecimal code and then to binary code with a designed nucleotide encryption key (5). Each approach offers advantages and inherent difficulties, and differs in how economically it uses nucleotides. We sought to develop an alternative approach for information archiving in DNA. We used the principles of the Huffman code (4,7) to define DNA codons for the entire keyboard, for unambiguous information coding. The approach described in this manuscript is based on the construction of a plasmid library for information archiving, with specially designed primers embedded in the message segment with an exon/intron structure for rapid, reliable, and efficient information retrieval.
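As a toy sketch of the underlying idea (frequent characters receive shorter base strings), a 4-ary Huffman code over the DNA alphabet {A, C, G, T} can be built as below. All names are illustrative; the paper's actual codon assignments are given in its supplementary material.

    using System.Collections.Generic;
    using System.Linq;

    class Node
    {
        public char? Symbol;                       // null for internal (and dummy) nodes
        public double Freq;
        public List<Node> Children = new List<Node>();
    }

    static class DnaHuffman
    {
        const string Bases = "ACGT";

        // Build a prefix-free mapping from characters to DNA strings,
        // given the character frequencies of the text to encode.
        public static Dictionary<char, string> Build(Dictionary<char, double> freqs)
        {
            var nodes = freqs.Select(kv => new Node { Symbol = kv.Key, Freq = kv.Value }).ToList();
            // Pad with zero-frequency dummies so every merge takes exactly 4 nodes.
            while (nodes.Count > 1 && (nodes.Count - 1) % 3 != 0)
                nodes.Add(new Node());
            while (nodes.Count > 1)
            {
                nodes.Sort((a, b) => a.Freq.CompareTo(b.Freq));   // 4 rarest first
                var parent = new Node();
                foreach (var child in nodes.Take(4)) { parent.Children.Add(child); parent.Freq += child.Freq; }
                nodes.RemoveRange(0, 4);
                nodes.Add(parent);
            }
            var codes = new Dictionary<char, string>();
            Assign(nodes[0], "", codes);
            return codes;
        }

        // Walk the tree: the i-th child edge appends the i-th base.
        static void Assign(Node node, string prefix, Dictionary<char, string> codes)
        {
            if (node.Symbol.HasValue) { codes[node.Symbol.Value] = prefix; return; }
            for (int i = 0; i < node.Children.Count; i++)
                Assign(node.Children[i], prefix + Bases[i], codes);
        }
    }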

Materials and methods

The DNA coding was based on a modification of the Huffman code (2,4,7,8). We also adopted the nomenclature suggested by Cox (2), defining the DNA segment representing a single character as a 'codon'. DNA (844 bp; Figure 1A) was synthesized and inserted as a SacI/KpnI fragment in a pBluescript-based plasmid (Mr. Gene GmbH, Regensburg, Germany). Sequence confirmation of the supplied plasmid was provided by the manufacturer using a plasmid universal primer. For information retrieval, plasmid (300 ng/7 µL) was mixed with sequencing primer (5 pmol/0.7 µL; Sigma, Oakville, Ontario) (Figure 1B) and subjected to sequencing (the service was performed by The Centre for Applied Genomics, The Hospital for Sick Children, Toronto, ON, Canada). The chromatogram was created using the FinchTV 1.4 application (Geospiza Inc., Seattle, WA, USA). Sequences of the designed and sequenced DNA were aligned using bl2seq (NCBI, Bethesda, MD, USA). PCR amplification was performed in an iQ5 cycler (Bio-Rad Laboratories, Mississauga, ON, Canada). The reaction mixture contained 2 units Taq polymerase with 1× reaction buffer (New England BioLabs, Pickering, ON, Canada), 0.2 mM each dNTP (Fermentas, Burlington, ON, Canada), 0.3 mM each primer, 200 ng plasmid DNA, and UltraPure distilled water (Invitrogen, Burlington, ON, Canada) to a volume of 20 µL. PCR conditions were 94°C for 3 min; 30 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 60 s; a final extension at 72°C for 7 min; and hold at 4°C. Ten microliters of the PCR reaction were mixed with 2 µL 6× loading buffer (Fermentas). DNA fragment size was determined by loading in parallel 5 µL of a 100-bp DNA ladder (Fermentas) and resolving on a 1% agarose gel (Bioshop, Burlington, ON, Canada). The gel was visualized with UV transillumination, and the image was captured with a Biospectrum AC Imaging System (UVP, Upland, CA, USA).


Wednesday, September 16, 2009

FPGA-embedded algorithms compensate for atmospheric distortion

Project: Enhanced Long-Range Imaging

Problem: Long-range imaging is a critical component of many NASA applications, including range surveillance, launch tracking, and astronomical observation. However, significant degradation occurs when imaging through the Earth's atmosphere. The resulting effects of poor image quality range from inconvenient to dangerous, depending on the application.

Solution: EM Photonics is developing an embedded system based on field-programmable gate array (FPGA) technology capable of enhancing long-range images and videos by compensating for atmosphere-induced distortions. The solver processes incoming video streams in real time in a variety of formats, including the high-definition format used by NASA. The resulting device is lightweight and low-power and can be integrated with current video collection, viewing, and recording equipment. It can be used to process data either as it is collected (in real time) or from previously recorded imagery, and it can be deployed with camera systems or in data centers, depending on the application. Additionally, since this processing unit is built on FPGA technology, it can easily be extended to perform a variety of other tasks such as compression, encryption, or further processing.

http://www.emphotonics.com/services/case-studies

Monday, September 14, 2009

Glamourizer 1.0

Luxand is proud to announce yet another product using face identification technologies. Released today, Glamourizer 1.0 beta allows anyone with a digital point-and-shoot, ultra-zoom or SLR camera to turn ordinary snapshots into stunning glamour portraits. Powered by Luxand's face identification and facial feature recognition algorithms, the new product features fully automated operation that makes it possible to process hundreds of photos at once, with no user input required!

Glamour Portraits Made Easy

Turn ordinary snapshots into stunning glamour portraits! With automatic face recognition and skin enhancement, Glamourizer offers complete automation for hundreds of pictures – while treating every single shot with all the attention it deserves.

Removes Skin Imperfections

Glamourizer makes people's skin look healthier in pictures by applying several enhancements. The product automatically removes small skin defects such as pimples, wrinkles, and freckles without making the skin look unnatural. Applying just the right amount of texture, Glamourizer makes people's faces look healthy and completely natural, without the 'plastic' feel.

Enhances Skin Tones

To compensate for varying lighting conditions, Glamourizer will detect color cast and improve skin tones on all pictures, resulting in healthy-looking people with natural skin tones no matter how difficult the light was.

Stand-Alone Operation

Glamourizer is completely stand-alone and does not require any third-party tools. This makes Glamourizer easy for any photographer to learn and use.

Point-and-Shoot and SLR Support

Glamourizer works with all pictures, no matter which camera you use to take them. Pro SLRs, mega-zooms or point-and-shoot cameras can produce beautiful glamour portraits if you use Glamourizer to enhance your pictures!

Automated Operation

Thanks to the advanced face recognition technologies used, all operations in Glamourizer are completely automated. Just a few clicks will start processing batches of hundreds of images with stunning results!

Batch Mode

No need to open and close pictures one by one! The available batch mode automates processing of hundreds of photos, applying the same healing effect to every picture without requiring human interaction.

http://luxand.com/glamourizer/

Sunday, September 13, 2009

IBM Researchers and E.U. Consortium Pioneer Analysis Engine for Multimedia

New tool can help identify people, places and things by scouring the Web to analyze digital music, movies and photos.

HAIFA, Israel, Sept. 10 /PRNewswire-FirstCall/ -- What if finding spare parts or identifying a landmark was as easy as uploading and analyzing a digital photo? This will soon be possible, thanks to technology produced by a European Union project led by IBM (NYSE: IBM) researchers.

Today, IBM announced that in collaboration with a European Union consortium, researchers have developed an analytics engine that allows people to find even untagged video, pictures, and music that match multimedia they've submitted as a query. The consortium has engineered Web technology called SAPIR (Search in Audio-Visual Content Using Peer-to-Peer Information Retrieval) that can analyze and identify the pixels in large-scale collections of audio-visual content. For example, it can analyze a digitized photograph or the bitstreams in electronic sound files, even if they haven't been tagged or indexed with descriptive information. The multimedia identified is automatically indexed and ranked for easy retrieval.

"SAPIR is a potential 'game-changer' when it comes to scalability in search and analyses," said Yosi Mass, a research scientist at IBM Research - Haifa and project leader for SAPIR. "It approaches the problem from a fundamentally different perspective, and opens up a universe of new possibilities for using multimedia to analyze the vast visual and auditory world in which we now live."

SAPIR (www.sapir.eu) can index and sift through collections of millions of multimedia items by extracting "low-level descriptors" from the photographs or videos. These descriptors include features such as color, layout, shapes, or sounds. For example, if a tourist uses her mobile phone to photograph a statue, SAPIR identifies the image's low-level descriptors, compares them to existing photographs, and helps identify the statue. With further research, more specific features of a given item could be analyzed, so that, say, someone could photograph a fashionable wallet seen on the street and find out which stores carry the item. In the future, scientists might also be able to extend the power of SAPIR's scalability to aid in patient healthcare and diagnosis: it might analyze medical images and rich-media patient records, then compare that information to historical data from distributed medical repositories.

Multimedia comprises the biggest proportion of information stored on the Internet. In fact, according to a May 2009 IDC study, 95% of electronic information on the Internet, such as digital photos, is unstructured and isn't neatly categorized or tagged. Images, captured by more than 1 billion devices in the world, are the biggest part of the digital universe. The number of cell phone pictures reached nearly 100 billion in 2006 and is expected to reach 500 billion images by 2010.

SAPIR taps into this vast and rapidly growing electronic repository of multimedia with exceptional reliability and nearly unlimited capacity. It uses the same type of self-organizing peer-to-peer technology currently used for swapping audio and video over the Internet. With this approach, there is no central point of potential failure, and server hardware can be added for additional capacity when the collection grows. The "freshness" of the categorized index is ensured by an approach where providers of content automatically push their material into a searchable repository.

A demo for testing by the general public is now available at http://sapir.isti.cnr.it/index. A video clip showing SAPIR in use can be found at www.youtube.com/watch?v=n43fIpOGbd4

The SAPIR project consortium (http://www.sapir.eu/) includes: IBM Research - Haifa, Israel; Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche (CNR), Italy; Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG), Germany; Eurix S.R.L. (EURIX), Italy; Xerox SAS (XRCE), France; Masarykova Univerzita (MU Brno), Czech Republic; Telefonica Investigacion y Desarrollo SA Unipersonal (TID), Spain; Telenor ASA (TELENOR), Norway; Universita' degli Studi di Padova (UPD), Italy.

For more information about IBM Research, please visit www.research.ibm.com
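As a toy illustration of what a "low-level descriptor" can be (a minimal sketch; SAPIR's actual feature set is richer, covering color, layout, shapes, and sounds), here is a coarse global color histogram and the L1 distance that could be used to compare two of them:

    using System;
    using System.Drawing;

    static class ColorDescriptor
    {
        // A coarse, normalized RGB histogram: 4 x 4 x 4 = 64 bins.
        // (Bitmap.GetPixel is slow; fine for a sketch, not for indexing millions of images.)
        public static double[] Histogram(Bitmap img, int bins = 4)
        {
            var hist = new double[bins * bins * bins];
            for (int y = 0; y < img.Height; y++)
                for (int x = 0; x < img.Width; x++)
                {
                    Color c = img.GetPixel(x, y);
                    int r = c.R * bins / 256, g = c.G * bins / 256, b = c.B * bins / 256;
                    hist[(r * bins + g) * bins + b]++;
                }
            double total = (double)img.Width * img.Height;
            for (int i = 0; i < hist.Length; i++) hist[i] /= total;   // normalize
            return hist;
        }

        // L1 distance between two descriptors: smaller means more similar.
        public static double L1(double[] a, double[] b)
        {
            double d = 0;
            for (int i = 0; i < a.Length; i++) d += Math.Abs(a[i] - b[i]);
            return d;
        }
    }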

Friday, September 11, 2009

Obscura Digital projects multi-touch "hologram," blows all sorts of minds

The creative cats and kittens at Obscura Digital have put together a stunning piece of performance art / data manipulation demo which combines their proprietary multi-touch software with Musion's Eyeliner 3D holographic projection system. Like that BMW installation we saw recently, this is one of those odd combinations of technology and art which is best seen in action rather than described -- so check out the video after the break and see the work in all its mind-bending glory.

 


IBM’s New Image Search Engine Claims to Be Better than Google & Yahoo

Article from http://techxav.com/2009/09/10/sapir/ by Brad Thompson

Before the recent global recession, traveling and touring around exotic places like Paris and Tokyo was a 'necessity' for many of us during the summer holidays. However, a few years later, we often cannot quite remember the locations as we skim through photos of ourselves. This is where a new technology, SAPIR, or Search in Audio-Visual Content Using Peer-to-Peer Information Retrieval, comes in handy.

The US tech giant has collaborated with a European Union consortium to develop a new image- and video-recognition-based search that claims to be better than the current search technologies used by search engine leaders Google and Yahoo.

SAPIR, which focuses on advanced, content-based similarity search using distributed and P2P technologies, is able to analyze and identify the pixels in large-scale collections of audiovisual content. For example, it can analyze a digitized photograph or the bitstreams in electronic sound files, even if they have not been tagged or indexed with descriptive information. The multimedia identified is automatically indexed and ranked for easy retrieval. The search system can even index and sift through collections of millions of multimedia items by extracting "low-level descriptors" from the photographs or videos. These descriptors include features such as color, layout, shapes, or sounds.


This new search technology has not yet been implemented or tested by the major search engines on the Web. Unlike SAPIR, most search engines such as Google and Yahoo merely sift through images based on the text tags assigned to the photos.

With some tweaks and further improvements, the technology could be used to develop applications that increase efficiency in the healthcare industry. For example, it could aid patient healthcare and assist with diagnoses by analyzing medical images and rich-media patient records, then comparing that information to historical data from distributed medical repositories.

Here's a video demonstration of a man testing the SAPIR mobile search interface in the Plaza de España, Madrid. He walks into a square at the tourist attraction and snaps a photograph of a statue using his N95 8GB. He then searches for similar images using SAPIR and finds some matches with similar colors and shot angles. However, the technology does not seem to work perfectly, so he tries a combined search, adding 'Madrid' as a keyword. And it works!

SIA: Semantic Image Annotation using Ontologies and Image Content Analysis

Pyrros Koletsis

Image annotation is the task of assigning a class name or description to an unknown image. In this work, we propose SIA, a framework capable of automatically annotating images using information from ontologies in combination with low-level image features (color and texture) extracted from raw image data. The method works for images of a particular domain. First, an ontology is constructed denoting characteristics of the various image classes in this domain. A set of low-level image characteristics is also assigned to each class. Image annotation is then implemented as a retrieval process by comparing vectors of such low-level characteristics extracted from the input image and from representative images of each class in the ontology. A combined similarity measure is used between images; the relative importance of the low-level features in this measure is determined by machine learning with decision trees. The resulting list of images is ranked in decreasing visual similarity. AVR (Average Retrieval Rank) is used as a metric to estimate the semantic category to which the image is likely to belong (i.e., the unknown image is assigned a class computed by voting among the top-ranked images retrieved from the ontology). The experimental results demonstrate that approximately 70% of the input images are correctly annotated (i.e., the method identified their class correctly). Experiments and evaluations were carried out on a dataset of images belonging to 30 dog breeds (semantic categories), collected from the World Wide Web (WWW).
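A minimal sketch of the ranking-and-voting step described above (the names, the fixed weights, and the plain L1 distance are assumptions for illustration; in SIA the relative importance of the features is learned with decision trees):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class SiaVoting
    {
        // Rank the ontology's example images by a weighted combination of color
        // and texture distances, then vote among the k visually closest ones.
        public static string Classify(double[] qColor, double[] qTexture,
            List<(string Label, double[] Color, double[] Texture)> examples,
            double wColor, double wTexture, int k)
        {
            return examples
                .OrderBy(e => wColor * L1(qColor, e.Color) + wTexture * L1(qTexture, e.Texture))
                .Take(k)
                .GroupBy(e => e.Label)
                .OrderByDescending(g => g.Count())
                .First().Key;
        }

        static double L1(double[] a, double[] b)
        {
            double d = 0;
            for (int i = 0; i < a.Length; i++) d += Math.Abs(a[i] - b[i]);
            return d;
        }
    }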

http://www.intelligence.tuc.gr/lib/downloadfile.php?id=332

SeMuDaTe2009 || Workshop on Semantic Multimedia Database Technologies

10th International Workshop of the Multimedia Metadata Community

Important dates:

September 22, 2009 (extended from September 7): Deadline for Workshop Papers

October 16, 2009 (extended from September 28): Notification of Acceptance

October 30, 2009 (extended from October 19): Camera-ready Workshop Papers due

General Information:
Ontology-based systems have been developed to structure content and support knowledge retrieval and management. Semantic multimedia data processing and indexing in ontology-based systems is usually done in several steps: one starts by enriching multimedia metadata with additional semantic information (possibly obtained by methods for bridging the semantic gap). Then, in order to structure the data, a localized and domain-specific ontology becomes necessary, since the data has to be interpreted domain-specifically. The annotations are stored in an ontology management system where they are kept for further processing. In this scope, semantic database technologies are now applied to ensure reliable and secure access, efficient search, and effective storage and distribution for both multimedia metadata and data. Their services can be used to adapt multimedia to a given context based on multimedia metadata or even ontology information. Services automate cumbersome multimedia processing steps and enable ubiquitous intelligent adaptation. Both database and automation support facilitate the ubiquitous use of multimedia in advanced applications.
We are looking for research contributions on the mapping and integration of multimedia metadata and ontologies into databases, on multimedia query languages, and on the optimization and processing of semantic queries. Moreover, we are interested in how multimedia data services are conceived to ensure interoperability, and in how to improve the security and reliability of access to, and storage of, multimedia data and metadata.
In addition, application papers showing concrete semantic multimedia database services (such as adaptation of multimedia, semantic enrichment of multimedia, and bridging of media breaks), as well as demonstrations of database technologies (such as mobile online image analysis and retrieval), are expected.
Topics of interest:

  • Multimedia metadata models and mappings to databases
  • Multimedia ontology and interoperability
  • Multimedia ontology to database mapping and processing
  • Multimedia query optimization and processing
  • Ontology query languages and multimedia
  • Semantic retrieval in multimedia databases
  • Database management: security, indexing, reliability, distribution, transactions
  • Indexing strategies for multimedia databases
  • Semantic enrichment and annotation of multimedia
  • Semantic metadata management
  • Uncertainty in multimedia databases
  • Human-computer interfaces for multimedia database access
  • Mobile multimedia database services
  • Context-aware multimedia
  • Semantic adaptation of multimedia
  • Proactive semantic multimedia delivery & distribution services
  • Self-organization in service oriented multimedia architectures
  • Semantic multimedia demonstrations and applications

Thursday, September 10, 2009

Multimodality?

Juan C. Caicedo

My research topic is about combining visual features and text data to improve the response of an image retrieval system. During the writing of my research proposal, I was using the term "multimodal information retrieval" to indicate that the system takes advantage of the information in texts and images simultaneously to solve queries. I found two different surveys in which this approach is mentioned as a promising and underexplored research direction (see Lew2006 and Datta2008 for more details, especially the latter, page 37, section 3.5: Multimodal Fusion and Retrieval).
Searching academic databases and digital libraries for scholarly articles on "multimodal information retrieval" leads to a considerable number of papers. For instance, in Google Scholar we can find about 200 papers, and the top papers are related to image retrieval; supposedly I have to read all 200 of them. In general, the literature indicates that multimodal is a good term for expressing our intention of combining text and image data.
I sent my research proposal to a doctoral symposium, where it was accepted for presentation and publication. Two out of three referees pointed out that multimodal is a confusing term for our intention of combining text and visual features. Later, in the defense of my research proposal, one of the two committee members also recommended changing the term. Then I got confused about the right use of this word. I guess I have enough evidence that multimodal has been used to mean what I intend, but the comments of other experts contradict it.
As far as I understand, multimodal may be used to indicate the interaction between a user and a system using different devices, as one of the referees indicated in his review (multimodal interaction). On the other hand, when someone talks about multimodal data, it means that several sensors measure different aspects of the same phenomenon (such as this). From the multimodal data perspective, then, images and text would be measures of the same phenomenon: a meaning or a semantic unit. However, it seems complicated and unnatural to explain and understand it that way.
The discussion about multimodal data in the context of our research is still open. Maybe we can publish a review paper to discuss it with many other people, at an information retrieval conference for instance. Meanwhile, I think I'll avoid the term unless we can be sure that it will be correctly understood.
[Lew2006] M. S. Lew, N. Sebe, C. Djeraba, and R. Jain, “Content-based multimedia information retrieval: State of the art and challenges,” ACM Trans. Multimedia Comput. Commun. Appl., vol. 2, no. 1, pp. 1–19, February 2006.
[Datta2008] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Image retrieval: Ideas, influences, and trends of the new age,” ACM Comput. Surv., vol. 40, no. 2, pp. 1–60, April 2008.

http://jccaicedo.blogspot.com/2009/09/multimodality.html

IARIA conferences 2010

DigitalWorld 2010
February 10-15, 2010 - St. Maarten, Netherlands Antilles

  • ICDS 2010, The Fourth International Conference on Digital Society
  • ACHI 2010, The Third International Conference on Advances in Computer-Human Interactions
  • ICQNM 2010, The Fourth International Conference on Quantum, Nano and Micro Technologies
  • GEOProcessing 2010, The Second International Conference on Advanced Geographic Information Systems, Applications, and Services
  • eTELEMED 2010, The Second International Conference on eHealth, Telemedicine, and Social Medicine
    • MLMB 2010: The First International Workshop on Applications of Machine Learning Techniques in Medicine and Biology
    • BUSMMed 2010: The International Workshop on Business Modeling for the Next Generation of Telemedicine Systems and Services
  • eL&mL 2010, The Second International Conference on Mobile, Hybrid, and On-line Learning
  • eKNOW 2010, The Second International Conference on Information, Process, and Knowledge Management
    • WEBONT 2010: The First International Workshop on Ontologies on the Web
  • CYBERLAWS 2010, The First International Conference on Technical and Legal Aspects of the e-Society

InfoSys 2010
March 7-13, 2010 - Cancun, Mexico

  • ICNS 2010, The Sixth International Conference on Networking and Services
    • LMPCNA 2010: The Second International Workshop on Learning Methodologies and Platforms used in the Cisco Networking Academy
  • ICAS 2010, The Sixth International Conference on Autonomic and Autonomous Systems
  • INTENSIVE 2010, The Second International Conference on Resource Intensive Applications and Services

BioSciencesWorld 2010
March 7-13, 2010 - Cancun, Mexico

  • BIOTECHNO 2010, The Second International Conference on Advances in Biotechnologies
  • BIOINFO 2010, The First International Conference on Advances in Bioinformatics and Applications
  • BIOSYSCOM 2010, The First International Conference on Computational and Systems Biology and Microbiology
  • BIOGREEN 2010, The First International Conference on Advances in Renewable and Sustainable Energies
  • BIODIV 2010, The First International Conference on Biodiversity and Invasion Control
  • BIOENVIRONMENT 2010, The First International Conference on Environmental Change Awareness

GlobeNet 2010
April 11-16, 2010 -
Menuires, The Three Valleys, French Alps, France

  • ICN 2010, The Ninth International Conference on Networks
  • ICONS 2010, The Fifth International Conference on Systems
  • DBKDA 2010, The Second International Conference on Advances in Databases, Knowledge, and Data Applications

WebTel 2010
May 9-15, 2010 - Barcelona, Spain

  • AICT 2010, The Sixth Advanced International Conference on Telecommunications
  • ICIW 2010, The Fifth International Conference on Internet and Web Applications and Services
  • ICIMP 2010, The Fifth International Conference on Internet Monitoring and Protection

NexComm 2010
June 13-19, 2010 - Athens, Greece

  • CTRQ 2010, The Third International Conference on Communication Theory, Reliability, and Quality of Service
  • ICDT 2010, The Fifth International Conference on Digital Telecommunications
  • SPACOMM 2010, The Second International Conference on Advances in Satellite and Space Communications
  • MMEDIA 2010, The Second International Conference on Advances in Multimedia

NetWare 2010
July 18-25, 2010 - Venice, Italy

  • SENSORCOMM 2010, The Fourth International Conference on Sensor Technologies and Applications
  • SECURWARE 2010, The Fourth International Conference on Emerging Security Information, Systems and Technologies
  • MESH 2010, The Third International Conference on Advances in Mesh Networks
  • AFIN 2010, The Second International Conference on Advances in Future Internet
  • DEPEND 2010, The Third International Conference on Dependability

InfoWare 2010
September 20-25, 2010 - Valencia, Spain

  • ICCGI 2010, The Fifth International Multi-Conference on Computing in the Global Information Technology
  • ICWMC 2010, The Sixth International Conference on Wireless and Mobile Communications
  • INTERNET 2010, The Second International Conference on Evolving Internet
  • ACCESS 2010, The First International Conference on Access Networks, Services and Technologies

The Second International Conference on Advances in Multimedia, MMEDIA 2010

The rapid growth of information on the Web, together with its ubiquity and pervasiveness, makes the WWW the biggest data repository. While this volume of information may be useful, it creates new challenges for information retrieval, identification, understanding, selection, etc. Investigating the new forms of platforms, tools, and principles offered by the Semantic Web opens another door to enabling humans, programs, or agents to understand what records are about, and allows integration between domain-dependent and media-dependent knowledge. Multimedia information has always been part of the Semantic Web paradigm, but integrating the two requires substantial effort.

New technological achievements in terms of speed and quality are expanding and creating a vast variety of multimedia services, from voice, email, short messages, Internet access and m-commerce to mobile video conferencing and streaming video and audio.

Large and specialized databases, together with these technological achievements, have brought true mobile multimedia experiences to mobile customers. Multimedia implies the adoption of new technologies and poses challenges to operators and infrastructure builders in terms of ensuring fast and reliable services for improving the quality of web information retrieval.

Huge amounts of multimedia data are increasingly available. The knowledge of spatial and/or temporal phenomena becomes critical for many applications, which requires techniques for the processing, analysis, search, mining, and management of multimedia data.

MMEDIA 2010 aims to provide an international forum for researchers, students, and professionals to present recent research results on advances in multimedia and in mobile and ubiquitous multimedia, and to bring together experts from both academia and industry for the exchange of ideas and discussion of future challenges in multimedia fundamentals, mobile and ubiquitous multimedia, multimedia ontology, multimedia user-centered perception, multimedia services and applications, and mobile multimedia.

The topics suggested by the conference can be discussed in terms of concepts, state of the art, research, standards, implementations, running experiments, applications, and industrial case studies. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following (but not limited to) topic areas. All tracks are open to both research and industry contributions.

http://www.iaria.org/conferences2010/CfPMMEDIA10.html

Experimental content-based retrieval engine operating within a real-time 3D virtual reality environment

This system presents the idea of content-based retrieval of 3D models within a VR environment. A 3D reconstruction of a part of the old city of Xanthi has been used as a test-bed 3D scene. Within this environment the user can perform a virtual walkthrough and, by clicking on objects, the system performs a query-by-example against a database and retrieves the coordinates of similar objects that might exist within the 3D scene. Animated arrows are then presented over the similar objects, while the user can monitor on the top map the similarity ranking of the retrieved objects.
At the moment the system uses our 3D descriptors, which are designed for vessels and generally surfaces of revolution, and thus the available objects are limited to bottles, flower-pots, etc. With the use of other 3D shape descriptors the system can be expanded to other types of objects, such as architectural structures, facade features, etc.
It should be mentioned that the 3D scene is manually segmented into separate 3D model entities, whose properties, such as their digital shape signatures (descriptors), their coordinates within the 3D space, and other information, are encoded as metadata within a native XML database; when the 3D scene is loaded, the models are loaded progressively and placed at their positions according to their metadata. More specifically, a left click on an object initiates a query-by-example call which forwards the object's id to the database. The database compares the digital signature of the query object against those of all other scene objects using a similarity metric and returns a list of the ids of the similar objects, sorted by shape similarity. The system is based on PHP technology and on the native XML database eXist. The 3D reconstruction of a part of the old city of Xanthi was modelled using Blender, and the web-based virtual walkthrough is based on Quest 3D technology, which works in the IE web browser.
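The ranking step described above can be sketched in a few lines (in C# for consistency with the other snippets on this blog; the actual system is implemented in PHP on top of eXist, and Euclidean distance here merely stands in for the system's similarity metric):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class SceneRetrieval
    {
        // Query by example: given the id of the clicked object, compare its shape
        // signature against all other scene objects and return ids sorted by similarity.
        public static List<(string Id, double Distance)> QueryByExample(
            string queryId, Dictionary<string, double[]> signatures)
        {
            double[] q = signatures[queryId];
            return signatures
                .Where(kv => kv.Key != queryId)
                .Select(kv => (Id: kv.Key, Distance: Euclidean(q, kv.Value)))
                .OrderBy(r => r.Distance)            // most similar shapes first
                .ToList();
        }

        static double Euclidean(double[] a, double[] b)
        {
            double s = 0;
            for (int i = 0; i < a.Length; i++) { double d = a[i] - b[i]; s += d * d; }
            return Math.Sqrt(s);
        }
    }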
To enter the 3D scene using an IE browser, click on the following link: Take me to the old city of Xanthi (pop-up window), or watch a small video demo on YouTube.

Additional material related to this project can be found at the following links:

3D Reconstruction of the old city of Xanthi
On 3D Reconstruction of the old city of Xanthi (Journal of Cultural Heritage)
Process Evaluation of 3D Reconstruction methodologies targeted to web based VR (CIPA 2007)
Experimental 3D Pottery Content Based Retrieval Engine

Monday, September 7, 2009

Hobby-built car-like robot

Article from http://www.aforgenet.com/articles/qwerk_robot_car/

Introduction

A while has passed since I got interested in robotics and started building some stuff. Initially, as many other novice hobbyists do, I started with something simple and tried different Lego Mindstorms robotics kits, like RCX and NXT. These kits are really easy to start with and allow building many different things, like car-bots, pan-tilt cameras, etc. But eventually you may want to try something more sophisticated, which could result in a robot with a wider range of features. To get the most flexibility, and also to enjoy building robots on your own, it is much preferable to switch from kits like Lego to more specialized hardware: different motors, servos, sensors, controllers, robotics boards, etc. Of course it will not be as easy as with plug-and-play Lego kits, but it allows us to design our own robot, putting in the hardware we want for the tasks we need. Going down this road we need to be a bit prepared and get some simple tools like screwdrivers, soldering tools, wire-cutters, maybe a small saw, files, etc. It may end up in real building, which is real fun!

One of the easy things to use for building your custom robots is the range of different controllers and interface boards provided by Phidgets. These are very nice things which can be plugged into a computer's USB port and programmed very easily with the SDK provided by the company. Last time I tried Phidgets for building a pan-tilt module for a simple stereo vision setup. This worked really nicely, and I definitely plan to build more on their base. But this time I was interested in building a mobile robot, and I did not want to make a PC (even a tiny notebook like the EeePC) part of my robot. Instead I wanted something that is not so big in size and could be controlled over Wi-Fi. To achieve this goal, I chose the Qwerk.

I have already written about the Qwerk board in the past and about the way to control it remotely. It is a really nice piece of engineering, which allows building quite sophisticated robots carrying a bunch of sensors, servos, motors, etc. This time it will be used for building a mobile robot controlled remotely over Wi-Fi.

What are we going to build?

This time we are going to build a remotely controlled car robot with the following list of features:

  • 3 wheels: two in the front and one in the rear. The two front wheels will be motorized and will direct the robot's movement: setting equal speed on both motors results in straight forward/backward movement, while setting different speeds makes the robot go left or right (see the small mixing sketch below).
  • An on-board camera, which will give a view to the person controlling the robot. In the future the camera may also be used for automated control of the robot with the help of computer vision algorithms.
  • To mimic real cars, the car robot will have some lights: stop lights, turn lights, and marker lights.
  • It will carry some sensors on board, which may be used both for fun and for future tasks targeted at autonomous movement.
  • And finally, to make it wireless, it will have its own battery on board and a Wi-Fi communication module.

It should be really nice when all of the above is done and working!
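Before any wiring, the drive mixing from the first feature above fits in a few lines (a sketch; the real motor commands go through the Qwerk API, which is not shown here):

    using System;

    static class Drive
    {
        // Mix a forward command and a turn command into left/right motor speeds.
        // Equal speeds drive straight; unequal speeds make the robot turn.
        public static (double Left, double Right) Mix(double forward, double turn)
        {
            // forward and turn are expected in [-1, 1]; clamp the mixed outputs
            // so neither motor is driven beyond full speed.
            double left = Math.Max(-1.0, Math.Min(1.0, forward + turn));
            double right = Math.Max(-1.0, Math.Min(1.0, forward - turn));
            return (left, right);
        }
    }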

Let's start building

The base

One of the fun things about building something as a hobby is making everything on your own, using all sorts of stuff you can find around and make use of. For example, building a robot you may find that different stuff left over after a flat renovation works fine, as well as old broken children's toys, electronics, etc. Sometimes it may be hard to find a specialized part for the robot you are building, or buying dedicated stuff out of a kit may not be that cheap. So, when building a robot, I always take a look around the flat for something which may become useful.

This time it happened with the robot's base. I was looking for something which could be used to attach wheels to and to carry all the robot's stuff. And I found that a small cutting board looked just right as a robot's base... Yes, it sounds crazy: a cutting board used in robotics. But I could not resist when I found it by accident in a regular shop and imagined my robot with it.


The SpringerImages Collection

Introducing Springer’s comprehensive collection of scientific and medical images

Scientific research has become progressively more focused on raw data and visual forms of learning and communication. These visuals come in many different forms, ranging from charts and graphs to high-quality photos. SpringerImages spans science, technology and medicine in 18 subject collections with over 1.5 million photos, graphs, histograms, tables and more.
Now end users can search images faster and easier than ever before.

Searching for specific images? No more screening long texts! Features include a powerful search interface and download privileges.

Features

• Over 1.5 million scientific, technical and medical images online
• Rapidly growing collection as images are added as they are published
• Based on trusted sources, such as SpringerLink or images.MD

SEARCH NOW at SpringerImages.com!

Librarian Benefits

  • Help increase the productivity of their researchers, as they spend less time reviewing the literature
  • Be confident that they always offer the most up-to-date data
  • Be sure that they offer high-quality data from a trusted source
  • Increase exposure to other content and maximize the investment in SpringerImages
  • Support their researchers in finding images more easily and getting answers faster – 24 hours a day
  • Monitor user behavior to see the results of their investment immediately

Researcher Benefits
  • Get a refined and relevant list of results quickly when searching for pertinent pieces of information in a vast quantity
  • Easily jump to the source to confirm the context and retrieve further information
  • Quickly understand the context of an image, dramatically reducing the time needed to review literature
  • Always rely on up-to-date data and research results, because images are automatically updated as content is published
  • Use images to quickly update lectures or presentations
  • Easily customize the search to your individual needs, such as comparing research results or sharing keywords with other users to optimize the search

Covering all STM Subjects

Biomedicine, Chemistry, Computer Science, Economics / Management Sciences, Education, Engineering, Environment, Geography, Geosciences, Humanities / Arts, Life Sciences, Material Science, Mathematics, Medicine & Public Health, Pharmacy, Physics, Psychology, and Social Sciences.

Availability

There are two options for subscribing to SpringerImages. You may subscribe to the entire collection of SpringerImages or to the Medical and Life Sciences collection only.

http://www.springerimages.com/

Sunday, September 6, 2009

High-Speed Robot Hand

Ishikawa Komuro Lab's high-speed robot hand performing impressive acts of dexterity and skillful manipulation
http://www.k2.t.u-tokyo.ac.jp/papers/fusion_movies-e.html