
Friday, September 24, 2010

SDM'11: THE ELEVENTH SIAM INTERNATIONAL CONFERENCE ON DATA MINING

Phoenix, Arizona, USA, April 28 - April 30, 2011
URL: http://www.siam.org/meetings/sdm11

Important Dates:
Paper Submission: 11:59pm, October 15, 2010 (PST)
Author Notification: December 22, 2010
Workshop/Tutorial Proposals: 11:59pm, October 15, 2010
Camera Ready: January 25, 2011

Data mining is an important tool in science, engineering, industrial processes, healthcare, business, and medicine. The datasets in these fields are large, complex, and often noisy. Extracting knowledge requires the use of sophisticated, high-performance and principled analysis techniques and algorithms, based on sound theoretical and statistical foundations. These techniques in turn require powerful visualization technologies; implementations that must be carefully tuned for performance; software systems that are usable by scientists, engineers, and physicians as well as researchers; and infrastructures that support them. This conference provides a venue for researchers who are addressing these problems to present their work in a peer-reviewed forum. It also provides an ideal setting for graduate students and others new to the field to learn about cutting-edge research by hearing outstanding invited speakers and attending tutorials (included with conference registration). A set of focused workshops is also held on the last day of the conference. The proceedings of the conference are published in archival form, and are also made available on the SIAM web site.

Artificial Intelligence and Soft Computing (ASC 2011)

The Fourteenth IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2011) will create an international forum for researchers and practitioners to exchange new ideas and practical experience in the areas of soft computing and artificial intelligence. The conference provides an opportunity to present and observe the latest research, results, and ideas in these areas. ASC 2011 will strengthen relations between industry practitioners, research laboratories, and universities. All papers submitted to this conference will be peer evaluated by at least two reviewers. Acceptance will be based primarily on originality and contribution.

ASC 2011 will be held in conjunction with the IASTED International Conferences on:

Call for Papers

Please submit your papers as well as proposals for tutorials, special sessions, and panel sessions by January 15, 2011. See the Call for Papers here.

Location

Situated in the warm and sunny Mediterranean, Crete is the largest of the Greek islands. It is renowned in myth and rich in a history that spans thousands of years. Today, history and myth blend seamlessly with Crete's natural beauty and lively culture. Take in the sharp mountain ranges, interrupted by steep ravines and divided by fields of olive trees, all below a splendid sky. Spend a day exploring the wild cypress forests and keep an eye out for the kri-kri, the Cretan wild goat. A trip to the stunning White Mountains and the Samariá Gorge is a must, as are the many ruins and historic landmarks throughout the island. Unwind with a sunset stroll on the beach followed by a hearty meal at a local tavern, or an evening of vibrant city nightlife.

Saturday, September 18, 2010

Windows Phone 7 SDK here; YouTube, Netflix demoed; no CDMA yet

Article from arstechnica.com

Two weeks after the operating system itself was finalized, Microsoft has released the Windows Phone 7 SDK to developers. Applications developed with the new SDK will be submittable to the Windows Phone Marketplace when that opens for submissions next month.

The new SDK brings many welcome improvements; it (finally) includes built-in support to allow developers to offer many of the same interface concepts as the built-in phone software uses. Specifically, Panoramas, used in the various hubs such as People and Office, and Pivots, used in the e-mail client, are now available for all to use. The sideways-scrolling panoramas in particular are a striking part of the Windows Phone 7 experience, and their absence led many to attempt to develop their own versions. Having a standard control to use will ease development and provide greater uniformity in third-party applications.

In spite of these additions, the SDK still isn't complete. There are desirable things that the built-in applications include—such as picking dates and times—that aren't available to third parties. In spite of the work that Microsoft has put in to Windows Phone 7 over the last few months of public releases, it's still a new platform that's immature in many ways.

To fill some of these gaps, the company is using its open source Silverlight Toolkit project to provide unsupported alternatives to the missing functionality.

In addition to the main SDK, Microsoft also released a Mobile Advertising SDK. Microsoft already has its own advertising platform, used in Bing (and soon Yahoo!), and unsurprisingly, it's bringing it to its mobile platform—for some months, Redmond has been promoting its phone operating system as an "ad-serving machine." With the SDK, it's trivial for developers to add advertising to their applications as a way of monetizing them.

Initially, the ads will only be available in the US, to US developers. Support in other markets will begin rolling out early next year. With Windows Phone 7 launching first in Europe, this is a little surprising. Microsoft already sells ads outside the US, and can pay non-US developers of phone apps, so on the face of it, it would seem that all the legal hurdles have been jumped, giving little reason for such a restriction. The first ads will be plain text and image banners, with rich media ads—similar to those of Apple's iAd—promised for the future. Microsoft's ads will pay out 70 percent of ad revenue to the application publisher, in contrast to iAd's 60 percent.

Along with the new SDKs, some new applications and capabilities have been demonstrated; among others, there's a good-looking Netflix app with streaming video, and an official Twitter app.

In a Bieber-heavy demo, Microsoft's Brandon Watson also showed off Windows Phone 7's YouTube support. Surprisingly, this used neither a YouTube application nor Flash (which is not available on the platform at the moment). Instead, Windows Phone 7 streamed the videos directly from YouTube using YouTube's APIs for that purpose; the support is built-in. YouTube videos also appear to integrate into the operating system's hubs, putting them on an equal footing with videos stored on the device itself.

Read More

WIAMIS 2011

The International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS) is one of the main international fora for the presentation and discussion of the latest technological advances in interactive multimedia services. The objective of the workshop is to bring together researchers and developers from academia and industry working in the areas of image, video and audio applications, with a special focus on analysis.
After a series of successful meetings starting in 1997 in Louvain, WIAMIS 2011 will be held at Delft University of Technology (TUDelft), Delft, The Netherlands.

Topics of interest include, but are not limited to:

* 2D/3D feature extraction
* Segmentation and reconstruction of objects in 2D/3D image sequences
* Motion analysis and tracking
* Video/Audio special event recognition
* Multimedia coding efficiency and increased error resilience
* Multimedia browsing, indexing and retrieval
* Advanced descriptors and similarity metrics for multimedia
* Multimedia content adaptation tools, transcoding and transmoding
* Advanced interfaces for content analysis and relevance feedback
* End-to-end quality of service support for Universal Multimedia Access
* Semantic mapping and ontologies
* Semantic web and social networks
* Relevance feedback and learning systems
* Multimedia analysis hardware and middleware
* Advanced multimedia applications
* Video/Audio based human behavior analysis systems
* Camera based human computer interaction

 

Paper Submission

All submissions will be handled electronically. Submission instructions will be posted on the workshop website (http://www.wiamis2011.org/) in due course.

Friday, September 17, 2010

Annual Best Doctoral Thesis Award


Starting this year, the Informatics and Telematics Institute (ITI) announces the Annual Best Doctoral Thesis Award for a doctoral thesis awarded by a Greek university in the broader area of informatics and telematics during the previous calendar year.

The terms of the competition are as follows:

  • The doctoral degree must have been awarded by a Greek university within 2010.
  • The thesis may be written in either Greek or English.
  • The thesis must address one of the research areas in which ITI is active:
    • Image processing
    • Computer vision
    • Pattern recognition
    • Signal processing
    • Artificial intelligence
    • Multimedia
    • Virtual and augmented reality
  • The thesis must be submitted electronically, together with a certificate from the school's secretariat confirming that the doctoral degree was awarded within the year stated in this call, as well as the names and e-mail addresses of the three-member advisory committee.

The evaluation committee will consist of ITI researchers.

The award will be presented during ITI's Open Day and will consist of a certificate and a cash prize of 600 euros. ITI will cover the winner's travel expenses within Greece to attend the event and receive the award in person. The winning thesis will be promoted on the ITI website and through a press release.

Deadline for submission of candidacies: January 15, 2011.

Submissions are made electronically at: http://www.iti.gr/itiPHD

Image Processing: The Fundamentals, 2nd Edition

Following the success of the first edition, this thoroughly updated second edition of Image Processing: The Fundamentals will ensure that it remains the ideal text for anyone seeking an introduction to the essential concepts of image processing. New material includes image processing and colour, sine and cosine transforms, Independent Component Analysis (ICA), phase congruency and the monogenic signal, and several other new topics. These updates are combined with coverage of classic topics in image processing, such as orthogonal transforms and image enhancement, making this a truly comprehensive text on the subject.

Key features:

  • Presents material at two levels of difficulty: the main text addresses the fundamental concepts and presents a broad view of image processing, whilst more advanced material is interleaved in boxes throughout the text, providing further reference for those who wish to examine each technique in depth.
  • Contains a large number of fully worked out examples.
  • Focuses on an understanding of how image processing methods work in practice.
  • Illustrates complex algorithms on a step-by-step basis, and lists not only the good practices but also identifies the pitfalls in each case.
  • Uses a clear question and answer structure.
  • Includes a CD containing the MATLAB code of the various examples and algorithms presented in the book. There is also an accompanying website with slides available for download for instructors as a teaching resource.

Image Processing: The Fundamentals, Second Edition is an ideal teaching resource for both undergraduate and postgraduate students. It will also be of value to researchers of various disciplines from medicine to mathematics with a professional interest in image processing.

http://eu.wiley.com/WileyCDA/WileyTitle/productCd-047074586X,descCd-tableOfContents.html

Buy Both and Save 20%!

Buy Image Processing: The Fundamentals, 2nd Edition (List Price: £55.00 / €66.00) with Image Processing: Dealing With Texture (List Price = £45.00 / €54.00) Total List Price: £100.00 / €120.00
Discounted Price: £80.00 / €96.00

Saturday, September 11, 2010

Adaptive Hierarchical Density Histogram for Complex Binary Image Retrieval

This paper proposes a novel binary image descriptor, namely the Adaptive Hierarchical Density Histogram, which can be utilized for complex binary image retrieval. This novel descriptor exploits the distribution of the image points on a two-dimensional area. To reflect this distribution effectively, we propose an adaptive pyramidal decomposition of the image into non-overlapping rectangular regions and the extraction of the density histogram of each region. This hierarchical decomposition algorithm is based on the recursive calculation of geometric centroids. The presented technique is experimentally shown to combine efficient performance, low computational cost and scalability. Comparison with other prevailing approaches demonstrates its high potential.


Two queries by visual example of patent images and the first retrieved results.
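
As a rough, unofficial sketch of the decomposition described in the abstract (not the authors' code; the function name, level count and normalization choice below are ours), the following Python/NumPy fragment recursively splits a binary image at the geometric centroid of the foreground points in each region and records the fraction of points falling into each of the four resulting rectangles:

import numpy as np

def ahdh_sketch(img, levels=3):
    """Toy Adaptive Hierarchical Density Histogram: at every level each region
    is split into four rectangles at the geometric centroid of its foreground
    points, and the density (fraction of the region's points) of each
    rectangle is appended to the feature vector."""
    regions = [(0, img.shape[0], 0, img.shape[1])]   # (row0, row1, col0, col1)
    features = []
    for _ in range(levels):
        next_regions = []
        for r0, r1, c0, c1 in regions:
            block = img[r0:r1, c0:c1]
            points = block.sum()
            if points > 0:
                rows, cols = np.nonzero(block)
                cr = r0 + int(round(rows.mean()))    # centroid row
                cc = c0 + int(round(cols.mean()))    # centroid column
            else:
                cr, cc = (r0 + r1) // 2, (c0 + c1) // 2   # fall back to the midpoint
            quads = [(r0, cr, c0, cc), (r0, cr, cc, c1),
                     (cr, r1, c0, cc), (cr, r1, cc, c1)]
            for q0, q1, q2, q3 in quads:
                density = img[q0:q1, q2:q3].sum() / points if points else 0.0
                features.append(density)
            next_regions.extend(quads)
        regions = next_regions
    return np.array(features)

# Example: a random sparse binary image standing in for a patent drawing
drawing = (np.random.rand(256, 256) > 0.98).astype(int)
print(ahdh_sketch(drawing, levels=2).shape)   # 4 + 16 = 20 densities for 2 levels

The exact feature-vector construction and normalization used in the paper may differ; this only illustrates the centroid-based recursive splitting idea.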

@conference{sidiropoulos2010adaptive,
  title={{Adaptive hierarchical density histogram for complex binary image retrieval}},
  author={Sidiropoulos, P. and Vrochidis, S. and Kompatsiaris, I.},
  booktitle={Content-Based Multimedia Indexing (CBMI), 2010 International Workshop on},
  pages={1--6},
  year={2010},
  organization={IEEE}
}

IEEE SPS: Call for Papers - ICME 2011

IEEE International Conference on Multimedia and Expo (ICME) 2011

July 11-15, 2011 • BARCELONA, Spain

http://www.icme2011.org

IEEE International Conference on Multimedia & Expo (ICME) has been the flagship multimedia conference sponsored by four IEEE societies since 2000. It serves as a forum to promote the exchange of the latest advances in multimedia technologies, systems, and applications from both the research and development perspectives of the circuits and systems, communications, computer, and signal processing communities. An Exposition of multimedia products, animations and industries will be held in conjunction with the conference.

Authors are invited to submit a full paper (two-column format, 6 pages) according to the guidelines available on the conference website at http://www.icme2011.org. Reviewing will be double blind. Only electronic submissions will be accepted. Topics of interest include, but are not limited to:

o Speech, audio, image, video, text processing

o Signal processing for media integration

o 3D visualization, animation and virtual reality

o Multi-modal multimedia computing systems and human-machine interaction

o Multimedia communications and networking

o Multimedia compression

o Multimedia security and privacy

o Multimedia databases and digital libraries

o Multimedia applications and services

o Media content analysis and search

o Hardware and software for multimedia systems

o Multimedia standards and related issues

o Multimedia quality assessment

ICME 2011 showcases high-quality oral and poster presentations and demo sessions. Best paper, poster and demo awards will be selected and recognized at the conference. Extended versions of oral papers will be considered for potential publication in a special section of IEEE Transactions on Multimedia. Accepted papers must be registered and presented; otherwise they will not be included in the IEEE Xplore Library.

ICME 2011 features workshops sponsored by IEEE societies, as well as a call for workshop proposals. We encourage researchers, developers and practitioners to organize workshops on new and emerging topics. Industrial exhibitions are held in conjunction with the main conference. Job fairs, keynote/plenary talks and panel discussions are other conference highlights. Proposals for tutorials and workshops are invited. Please visit the ICME 2011 website for submission details.

Paper Submission (Revised): November 29, 2010
Paper Acceptance Notification: February 15, 2011
Camera-Ready Paper: March 15, 2011
Workshop Proposal Submission: October 15, 2010

Friday, September 10, 2010

Research Position at HP Labs China

HP Labs China is now seeking top global talent to fill multiple open research positions at all levels (senior and junior). HP Labs is the company's central research organization, with a mission to deliver breakthrough technologies and technology advancements that provide a competitive advantage for HP by investing in fundamental science and technology in areas of interest to HP.

Job – Research
Primary Location - Beijing, China
Schedule - Full-time
Language - English

Research areas include (but are not limited to):

(1) Web content analysis technology, including information extraction, information retrieval, data mining, natural language processing, large scale data management, and high performance computing;

(2) data center and enterprise networks, including network architecture, network management, network security, network mobility, and key network applications such as WAN performance optimization and network storage.

Candidates are expected to hold a Ph.D. or Master's degree in Computer Science with a solid background and a strong publication record in a research field related to the aforementioned areas. An ideal candidate has a strong passion for research and extensive independent research experience in the relevant areas.

For consideration, please submit your resume as a PDF attachment to Dr. Fei Chen (fei.chen4@hp.com) and include “HPLC Job Application” in the subject line. Review of applications will begin immediately and will continue until the positions are filled.

HP Labs China (HPLC) was established in 2005 and is strategically situated in the high-tech Zhongguancun area of Beijing, the capital city of China. HPLC is part of the larger HP Labs (HPL), HP's central research organization. HPL is an international research organization, with labs in China, India, Israel, Russia, Singapore, the United Kingdom, and the United States, promising rich career experiences. As the world's largest technology company, HP creates technology solutions that provide new possibilities for consumers and businesses, with a portfolio that spans printing, personal computing, software, services and IT infrastructure.

Thursday, September 9, 2010

Can Global Visual Features Improve Tag Recommendation for Image Annotation?

Mathias Lux, Arthur Pitman and Oge Marques


Abstract: Recent advances in the fields of digital photography, networking and computing have made it easier than ever for users to store and share photographs. However, without sufficient metadata, e.g., in the form of tags, photos are difficult to find and organize. In this paper, we describe a system that recommends tags for image annotation. We postulate that the use of low-level global visual features can improve the quality of the tag recommendation process when compared to a baseline statistical method based on tag co-occurrence. We present results from experiments conducted using photos and metadata sourced from the Flickr photo website that suggest that the use of visual features improves the mean average precision (MAP) of the system and increases the system's ability to suggest different tags, thereby justifying the associated increase in complexity.

http://www.mdpi.com/1999-5903/2/3/341/
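
For context, a minimal sketch of the kind of tag co-occurrence baseline the abstract compares against might look as follows (illustrative only; the function names and toy data are ours, not the authors'):

from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(tagged_photos):
    """Count how often pairs of tags appear on the same photo.
    tagged_photos: iterable of tag sets, one set per photo."""
    co = defaultdict(Counter)
    for tags in tagged_photos:
        for a, b in combinations(sorted(set(tags)), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, seed_tags, k=5):
    """Suggest the k tags that co-occur most often with the user's seed tags."""
    scores = Counter()
    for t in seed_tags:
        scores.update(co.get(t, {}))
    for t in seed_tags:          # never re-suggest a tag the user already typed
        scores.pop(t, None)
    return [tag for tag, _ in scores.most_common(k)]

photos = [{"beach", "sunset", "sea"}, {"sunset", "sky"}, {"beach", "sea", "surf"}]
co = build_cooccurrence(photos)
print(recommend(co, {"beach"}))   # e.g. ['sea', 'sunset', 'surf']

The paper's contribution is then to re-rank or extend such co-occurrence suggestions using global visual features, which this snippet does not attempt to reproduce.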

www.mmretrieval.net - Beta

As digital information is increasingly becoming multimodal, the days of single-language text-only retrieval are numbered. Take as an example Wikipedia, where a single topic may be covered in several languages and include non-textual media such as image, sound, and video. Moreover, non-textual media may be annotated with text in several languages in a variety of metadata fields such as object caption, description, comment, and filename. Current search engines usually focus on a limited number of modalities at a time, e.g., English text queries on English text or perhaps on textual annotations of other media, not making use of all the information available. Final rankings are usually the result of fusing individual modalities, a task which is tricky at best, especially when noisy or incomplete modalities are involved.

Overview

On this web site we present the experimental multimodal search engine http://www.mmretrieval.net, which allows multimedia and multilingual queries in a single search and makes use of all the information available in a multimodal collection. All modalities are indexed separately and searched in parallel, and results can be fused with different methods depending on

  • the noise and completeness characteristics of the modalities in a collection,
  • whether the user needs initial precision or high recall.

Beyond fusion, we also provide 2-stage retrieval by first thresholding the results obtained by secondary modalities, targeting recall, and then re-ranking the results based on the primary modality.

The engine demonstrates the feasibility of the proposed architecture and methods on the ImageCLEF 2010 Wikipedia collection. The primary modality is image: the collection consists of 237,434 images, associated with noisy and incomplete user-supplied textual annotations and the Wikipedia articles containing the images. The associated text modalities are written in any combination of English, German, French, or any other unidentified language.

Searching

The web application is developed in the C#/.NET Framework 4.0 and requires a fairly modern browser, as the underlying technologies employed for the interface are HTML, CSS and JavaScript (AJAX). The user provides image and text queries through the web interface, which are dispatched in parallel to the associated databases. Retrieval results are obtained from each of the databases, fused into a single listing, and presented to the user.

Users can supply no, a single, or multiple query images in a single search, resulting in 2*i active image modalities, where i is the number of query images. Similarly, users can supply no text query or queries in any combination of the 3 languages, resulting in 5*l active text modalities, where l is the number of query languages: each supplied language results in 4 modalities, one per field, plus the name modality, which is matched against any language. The current beta version assumes that the user provides multilingual queries for a single search, while operationally query translation may be done automatically.

The results from each modality are fused by one of the supported methods. Fusion consists of two components: score normalization and combination. We provide two linear normalization methods, MinMax and Z-score, the rank-based Borda Count in linear and non-linear forms, and the non-linear KIACDF.
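
To illustrate the normalization-plus-combination step (and the 2-stage option mentioned above), a minimal sketch in Python could look like the following. The normalization functions mirror the MinMax and Z-score options named above, but the combination shown is a plain unweighted sum and the threshold is arbitrary, so this is only an illustration of the general idea, not the engine's actual code:

import statistics

def minmax(scores):
    """Linearly map a {doc: score} dict to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def zscore(scores):
    """Standardise scores to zero mean and unit variance."""
    mu = statistics.mean(scores.values())
    sd = statistics.pstdev(scores.values()) or 1.0
    return {d: (s - mu) / sd for d, s in scores.items()}

def fuse(result_lists, normalise=minmax):
    """Normalise each modality's scores and sum them per document."""
    fused = {}
    for scores in result_lists:
        for d, s in normalise(scores).items():
            fused[d] = fused.get(d, 0.0) + s
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

def two_stage(primary, secondary, threshold=0.2):
    """Keep documents whose normalised secondary score passes a threshold
    (targeting recall), then re-rank the survivors by the primary modality."""
    pool = {d for d, s in minmax(secondary).items() if s >= threshold}
    return sorted(((d, primary.get(d, 0.0)) for d in pool),
                  key=lambda x: x[1], reverse=True)

image_scores = {"doc1": 0.9, "doc2": 0.4, "doc3": 0.1}
text_scores = {"doc2": 12.0, "doc3": 7.5, "doc4": 3.0}
print(fuse([image_scores, text_scores]))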

We are currently planning controlled experiments in order to obtain a more concrete comparative evaluation of the effectiveness of the implemented methods. For enhancing efficiency, the multiple indices may easily be moved to different hosts.

http://www.mmretrieval.net

Information Retrieval: Implementing and Evaluating Search Engines

Now available...

Information Retrieval: Implementing and Evaluating Search Engines
Stefan Büttcher, Charles L. A. Clarke and Gordon V. Cormack
MIT Press, 2010
8 x 9, 632 pp., 127 illus., $55.00/£40.95 cloth • 978-0-262-02651-2
http://ir.uwaterloo.ca/book

Information retrieval is the foundation for modern search engines.  This text offers an introduction to the core topics underlying modern search technologies, including algorithms, data structures, indexing, retrieval, and evaluation.  The emphasis is on implementation and experimentation; each chapter includes exercises and suggestions for student projects.  Wumpus—a multiuser open-source information retrieval system developed by one of the authors and available online—provides model implementations and a basis for student work.  The modular structure of the book allows instructors to use it in a variety of graduate-level courses, including courses taught from a database systems perspective, traditional information retrieval courses with a focus on IR theory, and courses covering the basics of Web retrieval.
After an introduction to the basics of information retrieval, the text covers three major topic areas—indexing, retrieval, and evaluation—in self-contained parts.  The final part of the book draws on and extends the general material in the earlier parts, treating such specific applications as parallel search engines, Web search, and XML retrieval.  End-of-chapter references point to further reading; exercises range from pencil and paper problems to substantial programming projects.  In addition to its classroom use, Information Retrieval will be a valuable reference for professionals in computer science, computer engineering, and software engineering.

Sample chapters, and other supporting materials, are available at the book Web site: http://ir.uwaterloo.ca/book

Stefan Büttcher is a Site Reliability Engineer at Google.
Charles L. A. Clarke is Professor of Computer Science at the University of Waterloo’s David R. Cheriton School of Computer Science.
Gordon V. Cormack is Professor of Computer Science at the University of Waterloo’s David R. Cheriton School of Computer Science.

Wednesday, September 8, 2010

ICMR 2011

ICMR 2011 is seeking original high quality submissions addressing innovative research in the broad field of multimedia retrieval. We wish to highlight significant contributions addressing the main problem of search and retrieval but also the related and equally important issues of multimedia content management, user interaction, and community-based management. Topics of interest include, but are not limited to:

  • Content- and context-based indexing, search and retrieval of images and video
  • Multimedia content search and browsing on the Web
  • Advanced descriptors and similarity metrics for audio, image, video and 3D data
  • Multimedia content analysis and understanding
  • Semantic retrieval of visual contents
  • Learning and relevance feedback in media retrieval
  • Query models, paradigms, and languages for multimedia retrieval
  • Multimodal media search
  • Human perception based multimedia retrieval
  • Studies of information-seeking behavior among image/video users
  • Affective/emotional interaction or interfaces for image/video retrieval
  • HCI issues in multimedia retrieval
  • Evaluation of multimedia retrieval systems
  • High performance multimedia indexing algorithms
  • Database architectures for multimedia retrieval
  • Novel multimedia data management systems and applications
  • Community-based multimedia content management
  • Retrieval from multimodal lifelogs
  • Interaction with medical image databases
  • Satellite imagery analysis/retrieval
  • Image/video summarization and visualization

IMPORTANT DATES

October 15, 2010 : Special Session and Tutorials Proposals
November 5, 2010 : Special Session and Tutorials Selection
December 3, 2010 : Paper Submission
February 11, 2011 : Notification of acceptance
March 4, 2011 : Submission of camera-ready papers

http://www.icmr2011.org/call.php

Open position in Yahoo! Research Barcelona

Think about impacting 1 out of every 2 people online in innovative and imaginative ways that are uniquely Yahoo!. We do just that each and every day, and you could too. After all, it's big thinkers like you who will create the next generation of Internet experiences for consumers and advertisers across the globe. Now is the time to show the world what you've got. Put your ideas to work for over half a billion people.

Yahoo! Labs is pioneering the new sciences underlying the Web. As the center of scientific excellence for Yahoo!, Yahoo! Labs delivers both fundamental and applied scientific leadership through published research and new technologies powering the company's products.  Yahoo! Labs is looking for a highly motivated Front-End developer in Barcelona. You will be responsible for developing the user interface prototypes and other application level software prototypes for high-end mobile devices such as the iPhone, iPad, and Android handsets.

Front-End Mobile Developer position as part of the GLOCAL European Integrated Project:

Required skills/qualifications:

Minimum Job Qualifications:

- Master's Degree in Computer Science

- 3+ Years relevant work experience developing in PHP, Javascript, DHTML, AJAX, HTML5

- 2+ years mobile front-end development experience

- Fast learner with strong problem solving and analytical skills

- Excellent written and verbal communication skills

- Thorough understanding of mobile web architecture and technologies

- Passion for developing innovative and state-of-the-art mobile applications

Experience working with webkit-based browsers and geographic technologies is a plus.

To apply for this position, please visit our website:

https://yahoo73.myvurv.com/MAIN/CareerPortal/Job_Profile.cfm?szOrderID=32359

Monday, September 6, 2010

2011 IEEE Students' Technology Symposium (TechSym) At Indian Institute of Technology Kharagpur Kharagpur, India

About the Symposium:

2011 IEEE Students' Technology Symposium [IEEE TechSym 2011]
14 - 16 January 2011
Indian Institute of Technology Kharagpur, Kharagpur 721 302 India

IEEE Students' Technology Symposium is an annual event organized by the IEEE Student Branch at IIT Kharagpur and the IEEE Kharagpur Section. The second edition of the event will host oral and poster sessions showcasing original contributions from students and young professionals, with subsequent publication through the IEEE Xplore Digital Library. The programme also includes exciting opportunities from some of the technology giants. Join the symposium to learn about your peers' technological advances and to learn the trade from veterans.

Conference Secretariat

The Secretariat, IEEE TechSym 2011
IEEE Kharagpur Section
Campus of Indian Institute of Technology Kharagpur
Kharagpur, WB 721 302, India

Phone: +91-3222-288034
Email: techsym@ieee.org
Web: http://techsym.in/

Organizing Chair

Hrushikesh Garud (garudht@gmail.com)

IEEE Conference Registration# 18009

Call for Papers:

The organizing committee of the IEEE Students' Technology Symposium 2011 invites scholastic contributions in the form of articles for oral/poster presentations. Submissions within scope of the symposium are invited as full papers for presentations at the following tracks:

• Communication Systems

1. Antennas for Broadband Communication Systems
2. Distributed Network Dynamics, self-organization and evolution
3. Effect of electromagnetic radiation on human body
4. Energy-efficient cellular network design
5. Future Internet design
6. Grid and cloud networks
7. Information Theory and coding
8. Multimedia protocols and networking
9. Network Applications and Services
10. Optical networks and communication
11. Satellite and Space Communications
12. Security, Trust and Privacy
13. WiMAX, Wi-Fi, UWB, MANET, cellular mobile, 3G and LTE systems
14. Wireless Sensor Networks

• Image and Multidimensional Signal Processing

1. Acoustics and Speech Analytics
2. Biomedical and Biological Signal and Image Processing
3. Color and Multispectral Imaging
4. Computational Applications in Imaging
5. Fast Processing for Multidimensional data
6. Hardware and Software co-design for Signal Processing (Algorithms)
7. Interpolation, Super-resolution and Mosaicing
8. Linear and Nonlinear Filtering and Prediction techniques
9. Multidimensional Sampling
10. Multidimensional signal reconstruction from partial or incomplete observations and projections
11. Partial Differential Equation based Processing
12. Signal, Image and Video Sensing and Representation
13. Spectral analysis and Transform techniques
14. Speech Analysis
15. Speech Synthesis
16. Speech Verification

• Micro Electro-Mechanical Systems, Electron Devices and Sensors

1. Bio-MEMS and Bio-Sensors
2. Lightwave Technology
3. MEMS and NEMS
4. Nanoelectronics and Photonics
5. Semi-Conductor Devices and Circuits
6. Sensors and IC Packaging
7. Sensors and Systems
8. Thin and Thick Film sensors

• Pattern Analysis and Machine Intelligence

1. Computer Vision and Image Understanding
2. Content Based Image Retrieval
3. Document and Handwriting Analysis
4. Evolutionary, Fuzzy, Genetic, Hybrid, Neural approaches for Pattern Analysis
5. Feature extraction, feature mapping and feature selection techniques
6. Heuristics and Model based systems
7. Machine Learning and Data Mining
8. Multidimensional Signal Analysis
9. Non-traditional search, optimization and classification systems
10. Search and Multi-objective Optimization techniques
11. Specialized Hardware and Software architectures for Pattern Analysis Systems
12. Syntactic, Structural and Statistical Pattern Recognition
13. Video, face, object, motion and gesture recognition

• Power and Control Systems

• VLSI Design and Automation

1. Architectural and High-Level Synthesis
2. Design Automation
3. Digital circuits for signal processing
4. Low Power Systems Design
5. System Architectures and Applications

• Web, Multimedia, Computers and Embedded Systems

1. Applications, Design Automation Algorithms and Synthesis Methods for System Level Design
2. Architectural, Micro-architectural and Concurrency issues in Embedded Systems: Customizable Processors, Multi-Processor SoC and NoC Architectures
3. Component Based System Design, Hardware-Software Co-Design, Design Space Exploration Tools
4. Intelligent Indexing and Retrieval of Multimedia Web Content
5. Mobile Based Web Applications
6. Mobile Multimedia Application
7. Multimedia Supported User Interfaces and Interaction Models
8. Profiling, Measurement and Analysis Techniques, Security Issues for Embedded Applications
9. Programming Languages and Software Engineering for Embedded and/or Real-Time
10. Semantics and Ontology
11. Testing, Validation and Verification of Embedded Systems
12. Web Based Architectures, Protocols, Services and Applications
13. Web Based User Interface Design (Affective, Adaptive, Assistive, Multi-Modal, Social Relevance)
14. Web Technology for Human Computer Interaction and Education

Manuscript preparation and submission

Manuscripts are to be prepared according to the IEEE two-column conference format on letter-size paper. All final manuscripts should be at most 6 pages long, including figures, tables and references. Submissions are handled electronically via the conference management portal.

Visit CMT for manuscript submission.

Editorial policy

Reviewing for the symposium is double-blind. In submitting a manuscript to IEEE TechSym 2011, authors acknowledge that no paper substantially similar in content has been or will be submitted to another symposium, conference or workshop during the review period (14 September 2010 - 7 November 2010).

Read more about Editorial Policies.

Reviewers' Guidelines:

Invited to Review for IEEE TechSym 2011
If you have been invited to review for IEEE TechSym 2011, an account has been automatically generated for you using your contact email as the account name (regardless of whether you agreed to review or not). You only need to request a new password via "Reset your password". You will receive the new password by email. Please log in using the new password and remember to change it to one of your own choosing once logged in.

Please add "cmt@microsoft.com" to your list of safe senders to prevent important email announcements from being blocked by spam filters.

Conflict of interest domain
When you log in for the first time, you will be asked to enter your conflict domain information. You will not be able to review any paper without entering this information. This is necessary to ensure conflict-free reviewing of all papers. Typically, if your affiliated institution/organization provides an official e-mail address such as xyz@abc.com, your conflict domain is abc.com. Additionally, if your institution uses multiple domain names (xyz@abc.com and xyz@abc.pqr.com) or you are associated with multiple institutions/organizations, please enter all such information. Please avoid entering commercial domain names (e.g., gmail.com, yahoo.com, live.com, ieee.org, etc.) as your conflict domain, since this would add unnecessary complications during reviewer assignment.

Received notification about papers assigned for review
Once you've been notified that papers have been assigned to you, please log in to the site. Click on "Paper Reviews and Discussions". In the "Paper Reviews and Discussions" page, click on "Download Assigned Papers". This allows you to download a zip file containing all the papers plus supplementary files (if available). If you observe any violations of the Editorial Policies or Manuscript Preparation Guidelines (e.g., the manuscript is not a blind submission, the authors have exceeded the prescribed page limit of 6 pages, or the authors have not followed an IEEE conference template for manuscript preparation), please inform the track chair.

Reviewing criteria and review submission
For a paper, under the review column, click "Add" (to the right of the "Review" line) to review it. The reviewing pane has certain questions which are designed to help the Technical Program Committee assess the suitability of a paper for presentation at the symposium. Please substantiate your comments on the originality of the work presented in the manuscript, and mention any suspected sources of plagiarism under "Comments to Program Chair". Motivating and serious feedback on the work presented in the manuscript is highly encouraged, and reviewers are asked to fill in the "Comments to Authors" section.

Blind reviewing requirements
Blind reviewing is an essential part of IEEE TechSym 2011. Authors are asked to make reasonable efforts to hide their identities, including not listing their names or affiliations and omitting acknowledgments. This information will be included during camera-ready submission. Reviewers are also requested to make every effort to keep their identity invisible to the authors. Refrain from saying, "you should have cited my paper from 2006!"

Submit your reviews
If you save your review as a draft, it is visible only to you. You can access your draft review form by clicking on the same "Add" link. To make the review visible to the Technical Program Committee, click on the "Submit" button in the review form.

Ranking of Reviewed Submissions
Once you've reviewed the papers, you can rank them (the first being the best in your batch). In the "Paper Reviews and Discussions" page, click on "Edit Ranks" near the top of the page. In the "Edit Paper Ranks" page, click on the "Start Ranking" link for the papers. Use the "Move Up" and "Move Down" to adjust the ranks. Remember to click on the "Save Changes" button.

Proposal Submission for Tutorials and Workshops:

Scope of Tutorials and Workshops
Proposals for tutorials and workshops are invited on topics related to, but not limited to:
• Communication Systems,
• Image and Multidimensional Signal Processing,
• Micro Electro-Mechanical Systems,
• Electron Devices and Sensors,
• Pattern Analysis and Machine Intelligence,
• Power and Control Systems,
• VLSI Design and Automation,
• Web, Multimedia, Computers and Embedded Systems.

Format for Preparing Proposals
Please submit proposals in pdf format to comprehensively cover the following details:
• Title of the Tutorial/Workshop (max. 256 characters) and short summary (max. 200 words).
• Broad scope (e.g., Communication Systems).
• Name, affiliation and contact details (including email and phone) of the coordinating presenter and all other co-presenters.
• The targeted audience and prerequisites (approx. 50 words).
• Tutorial/Workshop duration. Full-day tutorials will be six hours and half-day tutorials three hours.
• Detailed tutorial program, with a list of topics covered, a short description of each topic and the approximate time devoted to each topic (approx. 2000 words).
• Technical bibliography (with emphasis on the authors' works and other related works being covered).
• List of tutorial material to be provided to the attendees. Also indicate whether these are provided free of cost, or the cost is to be borne by the attendees.
• Short biographical sketch of each presenter, indicating previous experience in delivering lectures and tutorials and expertise on the tutorial topic (approx. 200 words per presenter).

Proposal Submission
Coordinating presenters are encouraged to submit full-day tutorial proposals to Vijaya Sankara Rao P [Tutorials and Workshop Chair] (vijaysankar@ieee.org) by midnight of 14 September 2010 in the format specified above. All tutorial proposals will undergo a peer review process.

Special Requirements
Presenters at the tutorials will be provided with audio-visual equipment, including a public address system, an LCD overhead projector, and a Windows PC preloaded with MS PowerPoint Player 2003 and Adobe PDF Reader. Any other resources are to be arranged by the presenters in consultation with the Tutorials and Workshop Chair.

Note
• It is imperative that all presenters listed in the tutorial, workshop and demo proposal be available for presentation on the day of the tutorial (during 14-16 January 2011). Hence, it is important that consent be obtained from all the presenters and all organizational approvals obtained before making the tutorial proposal.
• All material from individual presenters must be collated and combined into a single tutorial presentation, as per the presentation guidelines (to be communicated at the time of acceptance).

Organizers:

IEEE Kharagpur Section
IEEE Student Branch at IIT Kharagpur

Important Dates:

Symposium: 14-16 Jan 2011
Workshop Proposal Submission: 14 Sept 2010
Workshop Notification: 30 Sept 2010
Regular Paper Submission: 14 Sept 2010
Acceptance Notification: 07 Nov 2010
Early Registration: 21 Nov 2010
Camera Ready Submission: 07 Dec 2010
Presentation Submission: 07 Dec 2010

Contact Us:

Symposium Secretariat: Contact | Email
Public Relations: Details | Email
Corporate Donors: Details | Email
Co-sponsors and Benefactors: Details | Email

For more info, visit:
http://www.enjineer.com

Wednesday, September 1, 2010

Fuzzy Description of Skin Lesions

Authors: Nikolaos Lascaris; Lucia Ballerini; Robert Fisher; Ben Aldridge; Jonathan Rees

Abstract:
We propose a system for describing skin lesion images based on a human perception model. Pigmented skin lesions, including melanoma and other types of skin cancer as well as non-malignant lesions, are used. Work on the classification of skin lesions already exists, but it mainly concentrates on melanoma. The novelty of our work is that our system gives skin lesion images a semantic label in a manner similar to humans. This work consists of two parts: first we capture the way users perceive each lesion, second we train a machine learning system that simulates how people describe images. For the first part, we choose 5 attributes: colour (light to dark), colour uniformity (uniform to non-uniform), symmetry (symmetric to non-symmetric), border (regular to irregular), texture (smooth to rough). Using a web-based form we asked people to pick a value of each attribute for each lesion. In the second part, we extract 93 features from each lesion and train a machine learning algorithm using these features as input and the values of the human attributes as output. Results are quite promising, especially for the colour-related attributes, where our system classifies over 80% of the lesions into the same semantic classes as humans.
Copyright:
2010 by The University of Edinburgh. All Rights Reserved
Bibtex format
@InProceedings{EDI-INF-RR-1358,
  author = {Nikolaos Lascaris and Lucia Ballerini and Robert Fisher and Ben Aldridge and Jonathan Rees},
  title = {Fuzzy Description of Skin Lesions},
  booktitle = {SPIE Medical Imaging 2010},
  publisher = {SPIE},
  year = {2010},
}
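
A minimal sketch (ours, not the authors' implementation) of the second stage described above, i.e. learning to map per-lesion feature vectors to human-assigned semantic attributes, could look like this with scikit-learn. The feature matrix and labels below are random placeholders standing in for the 93 extracted features and the collected web-form responses:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_lesions, n_features = 200, 93
X = rng.normal(size=(n_lesions, n_features))          # placeholder feature vectors

attributes = ["colour", "colour_uniformity", "symmetry", "border", "texture"]
labels = {a: rng.integers(0, 3, size=n_lesions) for a in attributes}  # placeholder votes

# One classifier per perceptual attribute, predicting the semantic label that
# people would assign to the lesion.
for attr in attributes:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X, labels[attr], cv=5).mean()
    print(f"{attr}: cross-validated accuracy {acc:.2f}")

The choice of classifier here is arbitrary; the paper does not specify its learner in this abstract, so treat the above purely as an illustration of the features-to-attributes mapping.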

PhD: open position in image-based object recognition at Edinburgh


University of Edinburgh, School of Informatics

Applications are invited for one fully funded PhD student to work in the School of Informatics on an EC funded project entitled "Fish4Knowledge: Supporting humans in knowledge gathering and question answering w.r.t. marine and  environmental monitoring through analysis of multiple video streams". 

Informatics at Edinburgh is one of the top-ranked departments of Informatics in Europe.

The consortium's project goal is to investigate: information abstraction and storage methods for reducing the massive amount of video data (from 10^15 pixels to 10^12 units of information); machine and human vocabularies for describing fish; recognising fish and retrieving examples from a massive image database; flexible process architectures to process the data and scientific queries; and effective specialised user query interfaces. A combination of computer vision, database storage, workflow and human-computer interaction methods will be used to achieve this.

The main research of the PhD student will be in appearance-based object recognition. The unique aspect of the project is that rather than having hundreds of very different objects, here we will have hundreds of very similar objects (tropical fish).

So, we will be looking for subtle discriminations, hopefully at the level of species, but perhaps at the level of genus or family. Another unique aspect of the PhD project will be access to up to 10^9 instances of the different fish.

See the project website for more details of the project.

Applicants must have a good BS/BSc or MS/MSc degree (A/B, 1st class/2.i) in an appropriate area, such as computer science, mathematics or engineering and should be competent in the MATLAB and C/C++ programming languages and have good mathematical skills.

As the project is about image and video data analysis, we will be looking for applicants who have studied computer vision, image processing and machine learning.

Closing date for applications is September 10, 2010.

The online application form can be found at     http://www.ed.ac.uk/schools-departments/informatics/postgraduate/apply 
When applying, use the term "Fish4Knowledge" as the answer in the Research Area box.

Informal inquiries may be made to Bob Fisher: rbf@inf.ed.ac.uk.

Call for proposals for the action "Support for Postdoctoral Researchers"


It is announced that the Ministry of Education, Lifelong Learning and Religious Affairs / General Secretariat for Research and Technology (GSRT) invites interested beneficiaries (higher education institutions, ASPETE, research organizations) to submit proposals for projects under the action "Support for Postdoctoral Researchers".
The main aim of the action is to support postdoctoral researchers in acquiring new research skills that will improve their professional prospects in any sector and/or help them restart their careers after a break. At the same time, the action is expected to strengthen the research staff of Greek research and education organizations and upgrade their capacity to carry out high-quality research work.

The total public expenditure of this call amounts to €30,000,000 and is co-funded by the ESF (European Social Fund) under the Operational Programme "Education and Lifelong Learning".
Under the action, postdoctoral researchers from Greece or abroad will be funded to carry out a research project, which may concern any area of contemporary research and will last 24 to 36 months. The project will be carried out at a Greek higher education institution (university/TEI), at ASPETE, or at a research organization in Greece (host organizations/beneficiaries), and for each proposal a scientific supervisor must be designated from the scientific staff of the host organization.
Interested postdoctoral researchers should first secure the collaboration of one of the above organizations, and then prepare the proposal together with the corresponding scientific supervisor.

At the date of proposal submission, no more than 10 years must have elapsed since the candidate was awarded the doctoral degree.

In addition, male candidates must have fulfilled their military service obligations or have been legally exempted (where applicable, depending on the candidate's country of origin).
The maximum budget of each research project will be €150,000.
For each host organization a "central action" will also be funded, covering on the one hand the administrative support of the projects carried out by the postdoctoral researchers, and on the other hand activities for publicizing the research results and evaluating how the action is implemented at the organization.
Proposals must be submitted by 30/09/2010, in English, through an electronic submission platform that will become available within the coming days.
Detailed information about the action is provided in the full text of the call and in the accompanying documents available on the GSRT website (www.gsrt.gr).

Contact persons:
a) Πολυτίμη Σακελλαρίου, tel.: 210-7458125, e-mail: psak@gsrt.gr
b) Βασιλική Καραβαγγέλη, tel.: 210-7458181, e-mail: vkar@gsrt.gr
c) Άγγελος Κωστόπουλος, tel.: 210-7458128, e-mail: akos@gsrt.gr
d) Κική Νικοπούλου, tel.: 210-7458129, e-mail: kiki@gsrt.gr

Recent advances in visual information retrieval

Dr. Oge Marques published his presentation on Recent Advances in Visual Information Retrieval, given at Klagenfurt University, Klagenfurt, Austria, in June 2010.




Web Search Interest for the term "CBIR"

Google Insights for Search is a service by Google, similar to Google Trends, providing insights into the search terms people have been entering into the Google search engine. Unlike Google Trends, Google Insights for Search provides a visual representation of regional interest on a map of countries. It displays top searches and rising searches that may help with keyword research. Results can be narrowed down with categories that are displayed for each search term.

According to Google Insights for Search, people are no longer interested in "CBIR" and/or "Image Retrieval" :)

Xyggy


Content-based image retrieval (CBIR) using ~1 million Flickr photos sourced from CoPhIR. Enter a query and Xyggy will find visually similar images based on their content. Improve relevance by dragging images into and out of the interactive Xyggy query box. Toggle images on and off from the search while looking. Go ahead and try it! Xyggy makes finding and searching an engaging and fun experience. Interact, explore and discover. Perfect for content search on media-rich sites and touch devices such as the iPod, iPad and Android. Content can include image, video and audio.

Access the Xyggy Images demo

* How does the UI work?

- Enter keywords/phrases and results will be displayed.

- Drag one or more photos (one at a time) into the query box to find other similar pictures.

- Drag images in AND out of the query box.

- Toggle photos on and off while in the query box; useful when looking for photos of interest.

- Keywords/phrases can also be toggled on and off.


* Does the content-based image search work? Yes, and here are some examples:

a. sunflower

- results for query "sunflower": http://www.xyggy.com/image.php#&sunflower

- drag photo to find other similar ones: http://www.xyggy.com/image.php#128591463,&sunflower

- drag another picture: http://www.xyggy.com/image.php#128591463,194630700,&sunflower


b. sunflower + pendant

- results for query "sunflower": http://www.xyggy.com/image.php#&sunflower

- drag non-sunflower photo to find other similar ones: http://www.xyggy.com/image.php#147387916,&sunflower


* But, why don't many of the queries return meaningful results?

Because there aren't any visually similar photos in the Flickr image data set to match the query.


* Why is the performance slow?

The service runs on a hosted virtual server in the US.