Tuesday, April 27, 2010

Virginia Tech researchers reveal full-sized CHARLI-L humanoid robot

Article From engadget

Dr. Dennis Hong was kind enough to give us a glimpse of the CHARLI robot on The Engadget Show this weekend -- or its leg, anyway -- but he and his students have now finally revealed the full-sized bot that's been described as a "robot teenager."

As we'd heard, CHARLI is actually a series of robots that initially consists of the 5-foot-tall CHARLI-L (or lightweight, pictured above) and the forthcoming CHARLI-H (or heavy), both of which are completely autonomous, with a full range of movements and gestures thanks to a series of pulleys, springs, carbon fiber rods, and actuators (not to mention some slightly more mysterious AI).

What's more, while CHARLI-L is currently restricted to walking on flat surfaces, CHARLI-H promises to be able to walk on the uneven ground around the Virginia Tech campus, and eventually even be able to "run, jump, kick, open doors, pick up objects, and do just about anything a real person can do." Unfortunately, there doesn't seem to be any video of CHARLI-L in action just yet, and it is still somewhat of a work in progress -- the researchers say it will be able to speak soon, and they're also busily working to improve its soccer skills in time for this year's RoboCup.

Monday, April 26, 2010

Special Issue on Content-Based Multimedia Indexing CBMI’2010

This call is related to the CBMI’2010 workshop but is open to all contributions on a relevant topic, whether submitted at CBMI’2010 or not. The special issue of the Multimedia Tools and Applications journal will contain selected papers, after resubmission and review, from the 8th International Workshop on Content-Based Multimedia Indexing (CBMI’2010).

Following seven successful previous events (Toulouse 1999, Brescia 2001, Rennes 2003, Riga 2005, Bordeaux 2007, London 2008, Chania 2009), the 2010 International Workshop on Content-Based Multimedia Indexing (CBMI) will be held on June 23-25, 2010 in Grenoble, France. It will be organized by the Laboratoire d'Informatique de Grenoble. CBMI 2010 aims at bringing together the various communities involved in the different aspects of content-based multimedia indexing, such as image processing and information retrieval, with current industrial trends and developments.

Research in Multimedia Indexing covers a wide spectrum of topics in content analysis, content description, content adaptation and content retrieval. Hence, topics of interest for the Special Issue include, but are not limited to:

•  Multimedia indexing and retrieval (image, audio, video, text) 
•  Matching and similarity search 
•  Construction of high level indices 
•  Multimedia content extraction 
•  Identification and tracking of semantic regions in scenes 
•  Multi-modal and cross-modal indexing 
•  Content-based search 
•  Multimedia data mining 
•  Metadata generation, coding and transformation 
•  Large scale multimedia database management 
•  Summarization, browsing and organization of multimedia content 
•  Presentation and visualization tools 
•  User interaction and relevance feedback 
•  Personalization and content adaptation

Saturday, April 24, 2010

CIVR 2010: Unsupervised Multi-Feature Tag Relevance Learning for Social Image Retrieval


The CIVR 2010 paper entitled Unsupervised Multi-Feature Tag Relevance Learning for Social Image Retrieval by Xirong Li, Cees Snoek, and Marcel Worring is available online. The work extends our tag-relevance approach.

Interpreting the relevance of a user-contributed tag with respect to the visual content of an image is an emerging problem in social image retrieval. In the literature this problem is tackled by analyzing the correlation between tags and images represented by specific visual features. Unfortunately, no single feature represents the visual content completely; e.g., global features are suitable for capturing the gist of scenes, while local features are better for depicting objects. To solve the problem of learning tag relevance given multiple features, we introduce in this paper two simple and effective methods: one is based on the classical Borda Count and the other is a method we name UniformTagger. Both methods combine the output of many tag relevance learners driven by diverse features in an unsupervised, rather than supervised, manner. Experiments on 3.5 million social-tagged images and two test sets verify our proposal. Using learned tag relevance as updated tag frequency for social image retrieval, both Borda Count and UniformTagger outperform retrieval without tag relevance learning and retrieval with single-feature tag relevance learning. Moreover, the two unsupervised methods are comparable to a state-of-the-art supervised alternative, but without the need for any training data.
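To give a feel for the Borda Count half of the approach, here is a minimal sketch of Borda-style rank fusion. The tag rankings below are invented for illustration, and the feature-specific tag-relevance learners from the paper are not reproduced; only the unsupervised combination step is shown.

```python
# Borda Count fusion of tag rankings produced by multiple feature-specific
# tag-relevance learners (rankings here are made-up example data).

def borda_fuse(rankings):
    """Combine ranked tag lists: each tag earns (list length - rank) points
    per ranking; a higher total score means higher fused relevance."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for rank, tag in enumerate(ranking):
            scores[tag] = scores.get(tag, 0) + (n - rank)
    # Sort by descending score, breaking ties alphabetically.
    return sorted(scores, key=lambda t: (-scores[t], t))

# Three hypothetical learners, each driven by a different visual feature,
# rank the same image's candidate tags differently:
by_color   = ["beach", "sunset", "dog", "car"]
by_texture = ["sunset", "beach", "car", "dog"]
by_local   = ["beach", "dog", "sunset", "car"]

fused = borda_fuse([by_color, by_texture, by_local])
print(fused)  # ['beach', 'sunset', 'dog', 'car']
```

Because the method only aggregates ranks, it needs no training data, which is the "unsupervised" property the paper emphasizes.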

Friday, April 23, 2010

Tetris-Bot (Tetris-playing DSP + NXT robot)

Thursday, April 22, 2010

3rd International Conference on Similarity Search and Applications (SISAP 2010)

September 18-19, 2010 - Istanbul, Turkey

The International Conference on Similarity Search and Applications (SISAP) is a conference devoted to similarity searching, with emphasis on metric space searching. It aims to fill in the gap left by the various scientific venues devoted to similarity searching in spaces with coordinates, by providing a common forum for theoreticians and practitioners around the problem of similarity searching in general spaces (metric and non-metric) or using distance-based (as opposed to coordinate-based) techniques in general.
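A minimal sketch of the kind of distance-based technique SISAP focuses on: pivot filtering via the triangle inequality, which avoids distance computations without using coordinates of the space directly. The dataset and the Euclidean distance below are illustrative stand-ins; any metric obeying the triangle inequality works.

```python
import math

def dist(a, b):
    """Euclidean distance, standing in for an arbitrary metric."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def range_search(data, pivot_dists, pivot, query, radius):
    """Return all objects within `radius` of `query`. Objects are pruned by
    the triangle inequality: |d(q,p) - d(o,p)| > radius implies d(q,o) > radius,
    so dist(query, obj) need not be computed for them."""
    dqp = dist(query, pivot)
    results = []
    for obj, dop in zip(data, pivot_dists):
        if abs(dqp - dop) > radius:   # ruled out without a distance computation
            continue
        if dist(query, obj) <= radius:
            results.append(obj)
    return results

data = [(0, 0), (1, 1), (5, 5), (9, 9)]
pivot = (0, 0)
pivot_dists = [dist(o, pivot) for o in data]  # precomputed at index time
print(range_search(data, pivot_dists, pivot, (1, 0), 1.5))  # [(0, 0), (1, 1)]
```

Real metric indexes (e.g., those compared at SISAP) use many pivots or tree structures, but they rest on exactly this pruning rule.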

SISAP aims to become an ideal forum to exchange real-world, challenging and exciting examples of applications, new indexing techniques, common testbeds and benchmarks, source code, and up-to-date literature through a web page serving the similarity searching community. Authors are expected to use the testbeds and code from the SISAP web site for comparing new applications, databases, indexes and algorithms.

After the very successful first events in Cancun, Mexico (2008) and Prague, Czech Republic (2009), this year's SISAP conference will be held in Istanbul, Turkey, on September 18-19, 2010.

The four best papers will be invited to be published in a special issue of an international journal. SISAP 2010 is organized in cooperation with ACM SIGSPATIAL and the papers will be indexed in the ACM Digital Library.

Wednesday, April 21, 2010

The 2nd International Symposium on Peer Reviewing: ISPR 2010

The 4th International Conference on Knowledge Generation, Communication and Management: KGCM 2010

June 29th - July 2nd, 2010 – Orlando, Florida, USA

In a survey of members of the Scientific Research Society, "only 8% agreed that 'peer review works well as it is'." (Chubin and Hackett, 1990; p.192).

"A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research. Far from filtering out junk science, peer review may be blocking the flow of innovation and corrupting public support of science." (Horrobin, 2001)

Empirical studies have shown that assessments made by independent reviewers of papers submitted to journals and abstracts submitted to conferences are not reproducible, i.e., agreement between reviewers is about what is expected by chance alone. Rothwell and Martyn (2000), for example, analyzed the statistical correlations among reviewers' recommendations (made to two journals and two conferences) by analysis of variance and found that for one journal agreement "was not significantly greater than that expected by chance" and, in general, agreement between reviewers "was little greater than would be expected by chance alone."
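Chance-corrected inter-rater agreement of this kind is commonly quantified with statistics such as Cohen's kappa, where a value near zero means agreement no better than chance. A minimal sketch (the reviewer decisions below are invented, not taken from the cited studies):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items:
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    labels = set(r1) | set(r2)
    # Chance agreement: probability both raters pick the same label at random,
    # given each rater's own label frequencies.
    chance = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (observed - chance) / (1 - chance)

# Hypothetical accept/reject decisions from two independent reviewers:
rev1 = ["accept", "accept", "reject", "accept", "reject", "reject", "accept", "reject"]
rev2 = ["accept", "reject", "accept", "accept", "reject", "accept", "reject", "reject"]
print(round(cohens_kappa(rev1, rev2), 3))  # 0.0 -- agreement exactly at chance level
```

Here the reviewers agree on half the papers, but since each accepts half at random, that is precisely what chance predicts, hence kappa of zero.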

These are just three examples of an increasing number of findings indicating that more research and reflection are urgently needed on research quality assurance and, specifically, on Peer Review. "Peer Review is one of the sacred pillars of the scientific edifice" (Goodstein, 2000). "Peer Review is central to the organization of modern science…why not apply scientific [and engineering] methods to the peer review process" (Horrobin, 2001). Why not apply peer review to current peer reviewing methodologies?

Research and reflection on Peer Review have mainly been addressed by biomedical communities, and the results have mostly been shared via five International Congresses on "Peer Review in Biomedical Publication", the first of which was held in 1990. The sixth of these congresses will be held in September 2010 and is being organized by the Journal of the American Medical Association (JAMA) and the BMJ (British Medical Journal) Publishing Group.

We are convinced that reflection and research on Peer Reviewing are also needed in other scientific and engineering disciplines, as well as in multi-, inter- and trans-disciplinary research, technological projects, and Knowledge Management in business and government. Methodologies applied, and problems found, in peer reviewing in diverse academic disciplines can synergistically cross-fertilize each other and can contribute to knowledge quality assurance in the area of Knowledge Management, which would benefit the private and public sectors and, in general, what has been called the "Knowledge Society". This is why we think that the multidisciplinary context of WMSCI 2010, and its collocated conferences, might provide the required cross-disciplinary opportunities.

Conceptual and methodological research and reflection on Peer Review are increasingly desirable, important and even necessary in academic disciplines and interdisciplinary programs and projects. Peer Review is a research evaluation process which, in turn, needs to be researched and peer-reviewed. Peer review of peer review methodologies is urgently required.

"'Peer Review' is a name given to a principle that research should be evaluated by people bound by mutual trust and respect who are socially recognized as expert in a given field of knowledge." (Steve Fuller, 2002, Knowledge Management Foundations, p. 232; emphasis added) But "peer review" is also a name given to the processes and/or methodologies that implement this principle and achieve the implied objective. In any case, "peer review" refers to knowledge quality control (as a principle, an end, or a means). But the fact that only 8% of the members of the Scientific Research Society agreed that 'peer review works well as it is' means that peer review "as it is" needs to be, in turn, peer reviewed and, consequently, researched. Although we all agree on "peer review" as a principle, there is solid disagreement regarding the effectiveness of the methodologies being applied to achieve the objectives implied by the commonly agreed principle. In the survey of the members of the Scientific Research Society, 92% of the members disagreed with the actual implementations and methodologies applied in peer reviewing processes.

The almost unanimous agreement about peer reviewing as a principle, and the huge disagreement about its current methods, are a clear sign that more effort is needed in scientific and engineering research and development in order to identify more effective methodologies and support systems (especially with current Information and Communication Technologies) so that the real purpose of peer review (based on its principle) is better fulfilled.

The Organizing Committee of the International Symposium on Peer Review: ISPR 2010 thinks that the multi-disciplinary approach of The 13th World Multi-Conference on Systemics, Cybernetics and Informatics: WMSCI 2010 (and its collocated conferences and symposia) and the ICT orientation of many of their participants, might be a fertile context for academics and researchers who can help, through their experience and knowledge, their reflections, research, ideas and opinions, to identify solutions, innovations and support systems for more effective peer reviewing approaches, models, and methodologies.

The Organizing Committee of ISPR 2010 invites scholars, researchers, editors, publishers, authors, readers, professionals and, in general, any user or person affected by or affecting scientific and engineering peer review to submit articles related to their research, reflections, ideas, hypotheses, models, etc. on peer review and how to improve it. Among the kinds of submissions accepted are the following:

  • Research articles
  • Reflection articles
  • Literature research papers
  • Experience-based Position papers
  • Research proposals
  • Engineering Design Proposals
  • Decision Support Systems Engineering applied to editorial decisions.
  • New ICT-based peer reviewing models


Chubin, D. R. and Hackett, E. J., 1990, Peerless Science: Peer Review and U.S. Science Policy, New York, State University of New York Press.

Horrobin, D., 2001, "Something Rotten at the Core of Science?", Trends in Pharmacological Sciences, Vol. 22, No. 2, February 2001.

Goodstein, D., 2000, "How Science Works", U.S. Federal Judiciary Reference Manual on Scientific Evidence, pp. 66-72 (referenced in Horrobin, 2001).

Rothwell, P. M. and Martyn, C. N., 2000, "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?", Brain, A Journal of Neurology, Vol. 123, No. 9, pp. 1964-1969, September 2000, Oxford University Press.

Sunday, April 18, 2010

Wikipedia Retrieval

ImageCLEF's Wikipedia Retrieval task provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate retrieval approaches in the context of a large and heterogeneous collection of images (similar to those encountered on the Web) that are searched for by users with diverse information needs.
In 2010, ImageCLEF's Wikipedia Retrieval will use a new collection of over 237,000 Wikipedia images that cover diverse topics of interest. These images are associated with unstructured and noisy textual annotations in English, French, and German.
This is an ad-hoc image retrieval task; the evaluation scenario is therefore similar to the classic TREC ad-hoc retrieval task and the ImageCLEF photo retrieval task: it simulates the situation in which a system knows the set of documents to be searched, but cannot anticipate the particular topic that will be investigated (i.e., topics are not known to the system in advance). The goal of the simulation is: given a textual query (and/or sample images) describing a user's (multimedia) information need, find as many relevant images as possible from the Wikipedia image collection.
Any method can be used to retrieve relevant documents. We encourage the use of both concept-based and content-based retrieval methods and, in particular, multimodal and (new this year) multilingual approaches that investigate the combination of evidence from different modalities and language resources.
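Runs in ad-hoc evaluations of this kind are typically scored with rank-based measures; one common choice is average precision, averaged over all topics to give MAP. A minimal sketch with an invented run and invented relevance judgments (not actual task data):

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision of one ranked result list: the mean of the precision
    values at each rank where a relevant image is retrieved."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

# Hypothetical system output and relevance judgments for one topic:
run = ["img7", "img2", "img9", "img4", "img1"]
qrels = {"img2", "img4", "img8"}
print(round(average_precision(run, qrels), 3))  # 0.333
```

Hits at ranks 2 and 4 give precisions 1/2 and 2/4; dividing their sum by the 3 relevant images yields 1/3, so an unretrieved relevant image (img8) still counts against the score.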

ImageCLEF 2010 Wikipedia Collection

The ImageCLEF 2010 Wikipedia collection consists of 237,434 images and associated user-supplied annotations. The collection was built to cover similar topics in English, German and French. Topical similarity was obtained by selecting only Wikipedia articles which have versions in all three languages and are illustrated with at least one image in each version: 44,664 such articles were extracted from the September 2009 Wikipedia dumps, containing a total number of 265,987 images. Since the collection is intended to be freely distributed, we decided to remove all images with unclear copyright status. After this operation, duplicate elimination and some additional cleaning up, the remaining number of images in the collection is 237,434, with the following language distribution:
-English only: 70,127
-German only: 50,291
-French only: 28,461
-English and German: 26,880
-English and French: 20,747
-German and French: 9,646
-English, German and French: 22,899
-Language undetermined: 8,144
-No textual annotation: 239
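The buckets above can be thought of as a simple mapping from each image's set of annotation languages to a label. A sketch of that mapping (the bucket names follow the list above; the sample inputs are invented, and the actual collection-building pipeline is not reproduced here):

```python
def language_bucket(annotation_langs):
    """Map the set of annotation languages of one image to a bucket name."""
    names = [("en", "English"), ("de", "German"), ("fr", "French")]
    present = [label for code, label in names if code in annotation_langs]
    if not present:
        return "Language undetermined"
    if len(present) == 1:
        return present[0] + " only"
    return ", ".join(present[:-1]) + " and " + present[-1]

print(language_bucket({"en"}))               # English only
print(language_bucket({"de", "fr"}))         # German and French
print(language_bucket({"en", "de", "fr"}))   # English, German and French
```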
The main difference between the ImageCLEF 2010 Wikipedia collection and the INEX MM collection (Westerveld and van Zwol, 2007) used in the previous WikipediaMM tasks is that the multilingual aspect has been reinforced and both mono- and cross-lingual evaluations can be carried out. Another difference is that this year, participants will receive for each image both its user-provided annotation and links to the article(s) that contain the image. Finally, in order to encourage multimodal approaches, three types of low-level image features were extracted using PIRIA, CEA LIST's image indexing tool (Joint et al., 2004), and are provided to all participants.
(Joint et al., 2004) M. Joint, P.-A. Moëllic, P. Hède, P. Adam. PIRIA: a general tool for indexing, search and retrieval of multimedia content. In Proceedings of SPIE, 2004.
(Westerveld and van Zwol, 2007) T. Westerveld and R. van Zwol. The INEX 2006 Multimedia Track. In N. Fuhr, M. Lalmas, and A. Trotman, editors, Advances in XML Information Retrieval: Fifth International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2006, Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (LNCS/LNAI). Springer-Verlag, 2007.

Two examples that illustrate the images in the collection and their metadata are provided below:
example 8120
example 35

Evaluation Objectives

The characteristics of the new Wikipedia collection allow for the investigation of the following objectives:

  • how well do the retrieval approaches cope with larger scale image collections?
  • how well do the retrieval approaches cope with noisy and unstructured textual annotations?
  • how well do the content-based retrieval approaches cope with images that cover diverse topics and are of varying quality?
  • how well can systems exploit and combine different modalities given a user's multimedia information need? Can they outperform mono modal approaches like query-by-text, query-by-concept or query-by-image?
  • how well can systems exploit the multiple language resources? Can they outperform mono-lingual approaches that use for example only the English text annotations?
In the context of INEX MM 2006-2007, mainly text-based retrieval approaches were examined. Here, we hope to attract more visually-oriented approaches and, most importantly, multimodal and multilingual approaches that investigate the combination of evidence from different modalities and languages. The results of WikipediaMM at ImageCLEF 2008/2009 showed that multimedia retrieval approaches outperformed the text-based approaches for certain topics, but overall, retrieval based on text remains unbeaten. The retrieval of multimedia documents will remain a focus for 2010. This year, a second focus will be the effectiveness of multilingual approaches for multimedia document retrieval.


The schedule is as follows:

  • 15.2.2010: registration opens for all ImageCLEF tasks
  • 30.3.2010: data release (images + metadata + article)
  • 26.4.2010: topic release
  • 15.5.2010: registration closes for all ImageCLEF tasks
  • 11.6.2010: submission of runs
  • 16.7.2010: release of results
  • 15.8.2010: submission of working notes papers
  • 20.09.2010-23.09.2010: CLEF 2010 Conference, Padova, Italy

ImageCLEF 2010

ImageCLEF is part of CLEF 2010. Please also consider submitting scientific articles to CLEF2010!


There will be four main tasks in ImageCLEF 2010. ImageCLEF also organizes a challenge at ICPR 2010 in Istanbul: ImageCLEF@ICPR


Registration to all tasks is open here. It is necessary to sign the copyright form that is available here.


Each of the tasks sets its own schedule. A (tentative) global schedule can be found below:
  • 16.2.2010: registration opens for all ImageCLEF tasks
  • 15.3.2010-30.4.2010: data release (depending on the task)
  • 15.4.2010-15.5.2010: topic release (depending on the task)
  • 15.5.2010: registration closes for all ImageCLEF tasks
  • 1.6.2010-30.6.2010: submission of runs (depending on the task)
  • 15.7.2010: release of results
  • 15.8.2010 : submission of working notes papers
  • 20.09.2010-23.09.2010: CLEF 2010 Conference, Padova, Italy

Kaikō Project

Kaikō is a full-featured multimedia search engine library written entirely in Java. Kaikō is also an implementation of the standard ISO/IEC 15938-12:2008 (MPEG Query Format (MPQF)) and the standard ISO/IEC CD 24800-3:2008 (JPSearch).

Kaikō was a deep-sea Japanese research submersible. It sampled bacteria from the ocean floor of the Challenger Deep in the Mariana Trench, the deepest location in the world. Kaikō was lost during Typhoon Chan-Hom in May 2003, when a secondary cable connecting it to the surface broke.

Go to the images search form here.

Go to the MPEG Query Format online demo here.

MPEG Query Format (MPQF)

This web page contains informative material about the initiative of standardization of an MPEG Query Format (MPQF), [ISO/IEC 15938-12:2008] taking place within the MPEG context (ISO/IEC JTC1/SC29/WG11). The definition of a unified language to accept and respond to requests for multimedia content searches would facilitate repository interoperability, allowing users to experience aggregated services from various multimedia databases. Basically, MPQF is an XML-based query language that defines the format of queries and replies to be interchanged between clients and servers in a distributed multimedia information search-and-retrieval context. The two main benefits of standardizing such a language are 1) interoperability between parties (e.g., content providers, aggregators and user agents) and 2) platform independence: developers can write their applications involving multimedia queries independently of the database used, which fosters software reusability and maintainability.
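To illustrate the client side of such an exchange, here is a sketch that builds an XML query document and parses a reply. The element names and the reply layout are illustrative only, not taken verbatim from ISO/IEC 15938-12:2008; a conforming implementation would follow the schema defined in the standard.

```python
import xml.etree.ElementTree as ET

def build_query(keywords):
    """Serialize a free-text multimedia query as XML (illustrative elements)."""
    root = ET.Element("Query")
    input_el = ET.SubElement(root, "Input")
    condition = ET.SubElement(input_el, "QueryByFreeText")
    ET.SubElement(condition, "FreeText").text = " ".join(keywords)
    return ET.tostring(root, encoding="unicode")

def parse_reply(xml_reply):
    """Extract (id, title) pairs from a server reply document."""
    root = ET.fromstring(xml_reply)
    return [(item.get("id"), item.findtext("Title")) for item in root.iter("Item")]

query = build_query(["submarine", "deep", "sea"])
print(query)

# A hypothetical server reply:
reply = '<Output><Item id="42"><Title>Kaiko dive footage</Title></Item></Output>'
print(parse_reply(reply))  # [('42', 'Kaiko dive footage')]
```

The point of the standard is exactly this decoupling: the client constructs one XML query, and any conforming server, whatever database it is backed by, can answer it.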

MPQF became an ISO/IEC standard in December 2008.


The X Project multimedia search engine can be queried using an MPEG Query Format interface. In fact, parts of the software have been contributed to MPEG in the form of MPQF Reference Software. We now offer an MPQF online demo based on the X Project.

Try the MPEG Query Format online demo here.

Sunday, April 11, 2010

2010 International Conference on Computer Application and System Modeling

Taiyuan, China. October 22-24, 2010

The 2010 International Conference on Computer Application and System Modeling (ICCASM 2010) is the premier forum for the presentation of new advances and research results in the fields of theoretical, experimental, and applied computer application and system modeling. The conference proceedings will be published by IEEE CPS, included in IEEE Xplore, and indexed by Ei Compendex and ISI Proceedings. The conference will bring together leading researchers, engineers and scientists in the domain of interest from around the world. Topics of interest for submission include, but are not limited to:

1.  Artificial Intelligence Theory and Applications

Machine Learning

Pattern Recognition

Knowledge Discovery

Intelligent Data Analysis

Neural Networks

Genetic Algorithms

Medical Diagnostics

Data Mining

Support Vector Machines

Machine Vision

Intelligent Systems and Language

2. Computer Science and Applications

Numerical Algorithms and Analysis

Computational Simulation and Analysis

Data Visualization and Virtual Reality

Computational Mathematics

Computational Graphics

Computational Statistics

Scientific and Engineering Computing

Parallel and Distributed Computing

Grid Computing and Cluster Computing

Embedded and Network Computing

Signal and Image Processing


3. Network, Communication Technology and Applications

Attacks and Prevention of Online Fraud

Cryptographic Protocols and Functions

Economics of Security and Privacy

Identity and Trust Management

Information Hiding and Watermarking

Infrastructure Security

Network and Wireless Network Security

Trusted Computing

Adaptive Modulation and Coding

Channel Capacity and Channel Coding

4. System Modeling and Simulation

Simulation Tools and Languages

Discrete Event Simulation

Object-Oriented Implementation

Web-based Simulation

Monte Carlo Simulation

Distributed Simulation

Simulation Optimization

Numerical Methods

Mathematical Modelling

Agent-based Modelling

Dynamic Modelling

5. Automation Control and Applications

Micro-computer Embedded Control

Process Control and Automation

Sensors and Applications

Industrial Process Control

Decision Support Systems

Fuzzy Control and Its Applications

Cybernetics for Informatics

Industrial Bus Control Applications

Measurement and Diagnosis Systems

Digital System and Logic Design

Circuits and Systems

6. Software Engineering and Information System Design

Software Architectures

Software Design and Development

Software Testing

Software Agents

Web-based Software Engineering

Project Management

Software Performance Engineering

Service Engineering

Model-Driven Development

Applications of DB Systems and Information Systems


Important Dates:

Paper Submission (Full Paper) May 31, 2010

Notification of Acceptance July 25, 2010

Final Paper Submission August 10, 2010

Authors' Registration August 10, 2010

ICCASM 2010 Conference Dates Oct 22 - 24, 2010

Friday, April 9, 2010

ICCGI 2010: The Fifth International Multi-Conference on Computing in the Global Information Technology (Second Call For Papers)

September 20-25, 2010 - Valencia, Spain

Submission deadline: April 20, 2010

Sponsored by IARIA.

Extended versions of selected papers will be published in IARIA Journals.

Publisher: CPS

Archived: IEEE CSDL (Computer Science Digital Library) and IEEE Xplore

Submitted for indexing: Elsevier's EI Compendex Database, EI's Engineering Information Index

Other indexes are being considered: INSPEC, DBLP, Thomson Reuters Conference Proceedings Citation Index

Please note the Poster Forum and Work in Progress options.

The topics suggested by the conference can be discussed in terms of concepts, state of the art, research, standards, implementations, running experiments, applications, and industrial case studies. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following (but not limited to) topic areas.

All tracks are open to both research and industry contributions, in terms of Regular papers, Posters, Work in progress, Technical/marketing/business presentations, Demos, Tutorials, and Panels.

Before submission, please check and conform to the Editorial rules.

ICCGI 2010 Tracks (tracks' topics and submission details: see CfP on the site)

Industrial systems

Control theory and systems; Fault-tolerance and reliability; Data engineering; Enterprise computing and evaluation; Electrical and electronics engineering; Economic decisions and information systems; Advanced robotics; Virtual reality systems; Industrial systems and applications; Industrial and financial systems; Industrial control electronics; Industrial IT solutions

Evolutionary computation

Algorithms, procedures, mechanisms and applications; Computer architecture and systems; Computational sciences; Computation in complex systems; Computer and communication systems; Computer networks; Computer science theory; Computation and computer security; Computer simulation; Digital telecommunications; Distributed and parallel computing; Computation in embedded and real-time systems; Soft computing; User-centric computation

Autonomic and autonomous systems

Automation and autonomous systems; Theory of Computing; Autonomic computing; Autonomic networking; Network computing; Protecting computing; Theories of agency and autonomy; Multi-agent evolution, adaptation and learning; Adjustable and self-adjustable autonomy; Pervasive systems and computation; Computing with locality principles; GRID networking and services; Pervasive computing; Cluster computing and performance; Artificial intelligence; Computational linguistics; Cognitive technologies; Decision making; Evolutionary computation; Expert systems; Computational biology


Models and techniques for biometric technologies; Bioinformatics; Biometric security; Computer graphics and visualization; Computer vision and image processing; Computational biochemistry; Finger, facial, iris, voice, and skin biometrics; Signature recognition; Multimodal biometrics; Verification and identification techniques; Accuracy of biometric technologies; Authentication smart cards and biometric metrics; Performance and assurance testing; Limitations of biometric technologies; Biometric card technologies; Biometric wireless technologies; Biometric software and hardware; Biometric standards

Knowledge data systems

Data mining and Web mining; Knowledge databases and systems; Data warehouse and applications; Data warehousing and information systems; Database performance evaluation; Semantic and temporal databases; Database systems; Databases and information retrieval; Digital library design; Meta-data modeling

Mobile and distance education

Human computer interaction; Educational technologies; Computer in education; Distance learning; E-learning; Mobile learning; Cognitive support for learning; Internet-based education; Impact of ICT on education and society; Group decision making and software; Habitual domain and information technology; Computer-mediated communications; Immersing authoring; Contextual and cultural challenges in user mobility

Intelligent techniques, logics, and systems

Intelligent agent technologies; Intelligent and fuzzy information processing; Intelligent computing and knowledge management; Intelligent systems and robotics; Fault-tolerance and reliability; Fuzzy logic & systems; Genetic algorithms; Haptic phenomena; Graphic recognition; Neural networks; Symbolic and algebraic computation; Modeling, simulation and analysis of business processes and systems

Knowledge processing

Knowledge representation models; Knowledge languages; Cognitive science; Knowledge acquisition; Knowledge engineering; Knowledge processing under uncertainty; Machine intelligence; Machine learning; Making decision through Internet; Networking knowledge plan

Information technologies

Information technology and organizational behavior; Agents, data mining and ontologies; Information retrieval systems; Information and network security; Information ethics and legal evaluations; Optimization and information technology; Organizational information systems; Information fusion; Information management systems; Information overload; Information policy making; Information security; Information systems; Information discovery

Internet and web technologies

Internet and WWW-based computing; Web and Grid computing; Internet service and training; IT and society; IT in education and health; Management information systems; Visualization and group decision making; Web based language development; Web search and decision making; Web service ontologies; Scientific web intelligence; Online business and decision making; Business rule language; E-Business; E-Commerce; Online and collaborative work; Social eco-systems and social networking; Social decisions on Internet; Computer ethics

Digital information processing

Mechatronics; Natural language processing; Medical imaging; Image processing; Signal processing; Speech processing; Video processing; Pattern recognition; Pattern recognition models; Graphics & computer vision; Medical systems and computing

Cognitive science and knowledge agent-based systems

Cognitive support for e-learning and mobile learning; Agents and cognitive models; Agents & complex systems; Computational ecosystems; Agent architectures, perception, action & planning in agents; Agent communication: languages, semantics, pragmatics & protocols; Agent-based electronic commerce and trading systems; Multi-agent constraint satisfaction; Agent programming languages, development environments and testbeds; Computational complexity in autonomous agents; Multi-agent planning and cooperation; Logics and formal models for agency verification; Nomadic agents; Negotiation, auctions, persuasion; Privacy and security issues in multi-agent systems

Mobility and multimedia systems

Mobile communications; Multimedia and visual programming; Multimedia and decision making; Multimedia systems; Mobile multimedia systems; User-centered mobile applications; Designing for the mobile devices; Contextual user mobility; Mobile strategies for global market; Interactive television and mobile commerce

Systems performance

Performance evaluation; Performance modeling; Performance of parallel computing; Reasoning under uncertainty; Reliability and fault-tolerance; Performance instrumentation; Performance monitoring and corrections; Performance in entity-dependable systems; Real-time performance and near-real time performance evaluation; Performance in software systems; Performance and hybrid systems; Measuring performance in embedded systems

Networking and telecommunications

Telecommunication and Networking; Telecommunication Systems and Evaluation; Multiple Criteria Decision Making in Information Technology; Network and Decision Making; Networks and Security; Communications protocols (SIP/H.323/MPLS/IP); Specialized networks (GRID/P2P/Overlay/Ad hoc/Sensor); Advanced services (VoIP/IPTV/Video-on-Demand); Network and system monitoring and management; Feature interaction detection and resolution; Policy-based monitoring and management systems; Traffic modeling and monitoring; Traffic engineering and management; Self-monitoring, self-healing and self-management systems; Man-in-the-loop management paradigm

Software development and deployment

Software requirements engineering; Software design, frameworks, and architectures; Software interactive design; Formal methods for software development, verification and validation; Neural networks and performance; Patterns/Anti-patterns/Artifacts/Frameworks; Agile/Generic/Agent-oriented programming; Empirical software evaluation metrics; Software vulnerabilities; Reverse engineering; Software reuse; Software security, reliability and safety; Software economics; Software testing and debugging; Tracking defects in the OO design; Distributed and parallel software; Programming languages; Declarative programming; Real-time and embedded software; Open source software development methodologies; Software tools and deployment environments; Software Intelligence; Software Performance and Evaluation

Knowledge virtualization

Modeling techniques, tools, methodologies, languages; Model-driven architectures (MDA); Service-oriented architectures (SOA); Utility computing frameworks and fundamentals; Enabled applications through virtualization; Small-scale virtualization methodologies and techniques; Resource containers, physical resource multiplexing, and segmentation; Large-scale virtualization methodologies and techniques; Management of virtualized systems; Platforms, tools, environments, and case studies; Making virtualization real; On-demand utilities; Adaptive enterprise; Managing utility-based systems; Development environments, tools, prototypes

Systems and networks on the chip

Microtechnology and nanotechnology; Real-time embedded systems; Programming embedded systems; Controlling embedded systems; High speed embedded systems; Designing methodologies for embedded systems; Performance on embedded systems; Updating embedded systems; Wireless/wired design of systems-on-the-chip; Testing embedded systems; Technologies for systems processors; Migration to single-chip systems

Context-aware systems

Context-aware autonomous entities; Context-aware fundamental concepts, mechanisms, and applications; Modeling context-aware systems; Specification and implementation of awareness behavioral contexts; Development and deployment of large-scale context-aware systems and subsystems; User awareness requirements; Design techniques for interfaces and systems; Methodologies, metrics, tools, and experiments for specifying context-aware systems; Tool evaluations; Experiment evaluations

Networking technologies

Next generation networking; Network, control and service architectures; Network signalling, pricing and billing; Network middleware; Telecommunication networks architectures; On-demand networks, utility computing architectures; Next generation networks [NGN] principles; Storage area networks [SAN]; Access and home networks; High-speed networks; Optical networks; Peer-to-peer and overlay networking; Mobile networking and systems; MPLS-VPN, IPSec-VPN networks; GRID networks; Broadband networks

Security in network, systems, and applications

IT in national and global security; Formal aspects of security; Systems and network security; Security and cryptography; Applied cryptography; Cryptographic protocols; Key management; Access control; Anonymity and pseudonymity management; Security management; Trust management; Protection management; Certification and accreditation; Viruses, worms, attacks, spam; Intrusion prevention and detection; Information hiding; Legal and regulatory issues

Knowledge for global defense

Business continuity and availability; Risk assessment; Aerospace computing technologies; Systems and networks vulnerabilities; Developing trust in Internet commerce; Performance in networks, systems, and applications; Disaster prevention and recovery; IT for anti-terrorist technology innovations (ATTI); Networks and applications emergency services; Privacy and trust in pervasive communications; Digital rights management; User safety and protection

Information Systems [IS]

Management Information Systems; Decision Support Systems; Innovation and IS; Enterprise Application Integration; Enterprise Resource Planning; Business Process Change; Design and Development Methodologies and Frameworks; Iterative and Incremental Methodologies; Agile Methodologies; IS Standards and Compliance Issues; Risk Management in IS Design and Development; Core Theories, Conceptualisations and Paradigms in IS Research; Ontological Assumptions in IS Research; IS Research Constraints, Limitations and Opportunities; IS vs Computer Science Research; IS vs Business Studies

IPv6 Today - Technology and deployment

IP Upgrade - An Engineering Exercise or a Necessity?; Worldwide IPv6 Adoption - Trends and Policies; IPv6 Programs, from Research to Knowledge Dissemination; IPv6 Technology - Practical Information; Advanced Topics and Latest Developments in IPv6; IPv6 Deployment Experiences and Case Studies; IPv6 Enabled Applications and Devices


Modeling

Continuous and Discrete Models; Optimal Models; Complex System Modeling; Individual-Based Models; Modeling Uncertainty; Compact Fuzzy Models; Modeling Languages; Real-time modeling; Performance modeling


Optimization

Multicriteria Optimization; Multilevel Optimization; Goal Programming; Optimization and Efficiency; Optimization-based decisions; Evolutionary Optimization; Self-Optimization; Extreme Optimization; Combinatorial Optimization; Discrete Optimization; Fuzzy Optimization; Lipschitzian Optimization; Non-Convex Optimization; Convexity; Continuous Optimization; Interior point methods; Semidefinite and Conic Programming


Complexity

Complexity Analysis; Computational Complexity; Complexity Reduction; Optimizing Model Complexity; Communication Complexity; Managing Complexity; Modeling Complexity in Social Systems; Low-complexity Global Optimization; Software Development for Modeling and Optimization; Industrial applications

Monday, April 5, 2010

Information Retrieval using a Bayesian Model of Learning and Generalization

Bayesian Sets is a new framework for information retrieval based on how humans learn new concepts and generalize. In this framework, a query consists of a set of items which are examples of some concept. Bayesian Sets automatically infers which other items belong to that concept and retrieves them. For example, given a query containing the two animated movies “Lilo & Stitch” and “Up”, Bayesian Sets would return other similar animated movies, such as “Toy Story”.

How does this work? Human generalization has been intensely studied in cognitive science, and various models have been proposed based on measures of similarity and feature relevance. Recently, Bayesian methods have emerged both as models of human cognition and as the basis of machine learning systems.

Bayesian Sets – a novel framework for information retrieval

Consider a universe of items, where the items could be web pages, documents, images, ads, social and professional profiles, publications, audio, articles, video, investments, patents, resumes, medical records, or any other class of items we may want to query.

An individual item is represented by a vector of features of that item.  For example, for text documents, the features could be counts of word occurrences, while for images the features could be the amounts of different color and texture elements.
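As a tiny illustration of that representation for text (the four-word vocabulary and the helper function are hypothetical, chosen only to keep the example small):

```python
# Hypothetical toy vocabulary; a real system would use thousands of words.
vocab = ["robot", "movie", "soccer", "bayesian"]

def to_feature_vector(text, vocab):
    """Represent a document as word-occurrence counts over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

x = to_feature_vector("The Bayesian model ranked the robot movie", vocab)
# x -> [1, 1, 0, 1]: one occurrence each of "robot", "movie", "bayesian"
```

The same idea carries over to images: replace word counts with counts of color or texture elements.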

Given a query consisting of a small set of items (e.g. a few images of buildings) the task is to retrieve other items (e.g. other images) that belong to the concept exemplified by the query.  To achieve the task, we need a measure, or score, of how well an available item fits in with the query items.

A concept can be characterized by using a statistical model, which defines the generative process for the features of items belonging to the concept.  Parameters control specific statistical properties of the features of items.  For example, a Gaussian distribution has parameters which control the mean and variance of each feature. Generally these parameters are not known, but a prior distribution can represent our beliefs about plausible parameter values.

The score

The score used for ranking the relevance of each item x given the set of query items Q compares the probabilities of two hypotheses. The first hypothesis is that the item x came from the same concept as the query items Q. For this hypothesis, compute the probability that the feature vectors representing all the items in Q and the item x were generated from the same model with the same, though unknown, model parameters. The alternative hypothesis is that the item x does not belong to the same concept as the query examples Q. Under this alternative hypothesis, compute the probability that the features in item x were generated from different model parameters than those that generated the query examples Q. The ratio of the probabilities of these two hypotheses is the Bayesian score at the heart of Bayesian Sets, and can be computed efficiently for any item x to see how well it “fits into” the set Q.
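In symbols (a standard formulation of this ratio; the notation is assumed rather than taken from the post), the score compares the probability that x and the query set Q were generated together, under shared but unknown parameters, against the probability that they arose independently:

```latex
\mathrm{score}(x) \;=\; \frac{p(x, Q)}{p(x)\,p(Q)} \;=\; \frac{p(x \mid Q)}{p(x)},
\qquad
p(x, Q) \;=\; \int p(x \mid \theta) \Big[\prod_{i \in Q} p(x_i \mid \theta)\Big] p(\theta)\, d\theta .
```

A score greater than 1 (log-score greater than 0) means x is more probable under the shared-concept hypothesis than under independence.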

This approach to scoring items can be used with any probabilistic generative model for the data, making it applicable to any problem domain for which a probabilistic model of data can be defined.  In many instances, items can be represented by a vector of features, where each feature can either be present or absent in the item.  For example, in the case of documents the features may be words in some vocabulary, and a document can be represented by a binary vector x where element j of this vector represents the presence or absence of vocabulary word j in the document.  For such binary data, a multivariate Bernoulli distribution can be used to model the feature vectors of items, where the jth parameter in the distribution represents the frequency of feature j.  Using the beta distribution as the natural conjugate prior, the score can be computed extremely efficiently.
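As a concrete sketch of that closed form for binary data, the snippet below scores every item against a query set. The function name, the NumPy dependency, the empirical Beta prior set from feature means, and the epsilon guard are all assumptions for illustration, not details from the post:

```python
import numpy as np

def bayesian_sets_scores(X, query_idx, c=2.0, eps=1e-9):
    """Log Bayesian Sets score of every item in X against the query set.

    X         : (n_items, n_features) binary matrix (presence/absence)
    query_idx : indices of the query items Q
    c         : scale of the empirical Beta prior (alpha_j = c * mean_j)
    """
    X = np.asarray(X, dtype=float)
    N = len(query_idx)
    mean = X.mean(axis=0)
    alpha = c * mean + eps              # Beta prior from feature frequencies
    beta = c * (1.0 - mean) + eps
    s = X[query_idx].sum(axis=0)        # per-feature counts within the query
    alpha_t = alpha + s                 # posterior Beta parameters given Q
    beta_t = beta + (N - s)
    # Beta-Bernoulli closed form: log score(x) = const + x . q
    const = np.sum(np.log(alpha + beta) - np.log(alpha + beta + N)
                   + np.log(beta_t) - np.log(beta))
    q = (np.log(alpha_t) - np.log(alpha)
         - np.log(beta_t) + np.log(beta))
    return const + X @ q

# Items 0 and 1 form the query concept; item 2 shares a feature, item 3 does not.
X = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 1, 1]]
scores = bayesian_sets_scores(X, [0, 1])
```

Because the log-score is linear in x, ranking an entire corpus reduces to a single (sparse) matrix-vector product, which is what makes the method efficient in practice.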

Read More

Sunday, April 4, 2010

S2ES 2010

The term Science 2.0 has been used with different but related meanings. It usually refers to Web 2.0-enabled scientific activities[1], but it has also been related to the expansion of science by means of new concepts and theories (Second Order Cybernetics[2] and the Systems Approach), or to a new mode of producing knowledge[3].
The purpose of the Organizing Committee of the International Symposium on Science 2.0 and Expansion of Science (S2ES 2010) is to bring together researchers and designers from the three perspectives of the proposed New Science in order 1) to share their reflections regarding each of these three perspectives, 2) to analyze what is common among them, and 3) to identify the ways in which they complement each other.
Accordingly, the Organizing Committee is planning to include in the symposium program 1) sessions with formal presentations, and/or 2) informal conversational sessions, and/or 3) hybrid sessions, which will have formal presentations first and informal conversations later.
S2ES 2010 will be held in the context of The World Multi-Conference on Systemics, Cybernetics, and Informatics (WMSCI 2010) in Orlando, Florida, USA, on June 29th-July 2nd, 2010.

Papers/Abstracts Submissions and Invited Sessions Proposals: April 16th, 2010
Authors Notifications: May 5th, 2010
Camera-ready, full papers: May 26th, 2010
Submissions for Face-to-Face or for Virtual Participation are both accepted. Both kinds of submissions will have the same reviewing process and the accepted papers will be included in the same proceedings.

Pre-Conference and Post-conference Virtual sessions (via electronic forums) will be held for each session included in the conference program, so that sessions papers can be read before the conference, and authors presenting at the same session can interact during one week before and after the conference. Authors can also participate in peer-to-peer reviewing in virtual sessions.
All submitted papers/abstracts will go through three reviewing processes: (1) double-blind (at least three reviewers), (2) non-blind, and (3) participative peer review. These three kinds of review will support the selection of the papers/abstracts to be accepted for presentation at the conference, as well as those to be selected for publication in the JSCI Journal.
Authors of accepted papers who registered for the conference can access the evaluations and any feedback provided by the reviewers who recommended acceptance of their papers/abstracts, so they can improve the final version accordingly. Non-registered authors will not have access to the reviews of their respective submissions.
Registration fees of an effective invited session organizer will be waived according to the policy described on the conference web page (click on 'Invited Session', then on 'Benefits for the Organizers of Invited Sessions'), where you can find information about the ten benefits for an invited session organizer. For Invited Session Proposals, please visit the conference web site.

Authors of the best 10%-20% of the papers presented at the conference (including those virtually presented) will be invited to adapt their papers for publication in the Journal of Systemics, Cybernetics and Informatics.

3DPVT'10: Program Set & Call for Participation

5th International Symposium on 3D Data Processing, Visualization and Transmission
Espace Saint Martin, Paris, France, May 17-20, 2010
This meeting presents new research ideas and results related to the capture, representation, compact storage, transmission, processing, editing, optimization and visualization of 3D data.  These topics span a number of research fields from applied  mathematics, computer science, and engineering: computer vision, computer graphics, geometric modeling, signal and image processing, bioinformatics, and statistics. This symposium follows previous highly successful events in Padova 2002, Thessaloniki 2004, Chapel Hill 2006 and Atlanta 2008.

The full conference program for 3DPVT is now available on the symposium web site.
Deadline for early registration: April 16th 2010   
Invited speakers:
Richard Szeliski (Microsoft Research Redmond, USA)
Alyosha Efros (CMU, USA)
George Drettakis (INRIA, France)
Christian Sminchisescu (University of Bonn, Germany) - Structured Prediction for Computer Vision
Bennett Wilburn (Microsoft Research Asia, China) - Photometric Methods for 3-D Modeling

São Paulo Advanced School of Computing Image Processing and Visualization

IME-USP, July 12-17, 2010

The São Paulo Advanced School of Computing is a biennial academic event organized by the Institute of Mathematics and Statistics of the University of São Paulo (IME-USP), the Institute of Mathematics and Computer Sciences of the University of São Paulo (ICMC-USP) and the Institute of Computing of the University of Campinas (IC-UNICAMP), with financial support from FAPESP. Its goal is to showcase Computer Science research carried out in the State of São Paulo and to attract young talent from Brazil and other countries to pursue PhD and post-doctoral studies at its institutions. The event covers themes in areas in which the three organizing institutions show excellence. This 1st edition of the School will take place at IME-USP, from July 12th to July 17th, 2010, and the theme is Image Processing and Visualization. The School will include the following activities: short intensive courses (minicourses), presentations of the graduate programs offered by the three São Paulo institutions and their current research projects, short presentations by advanced participating students, and some invited talks. The minicourses will be taught by the following experts:

Alexandre Xavier Falcão (IC-UNICAMP)
Alexandru C. Telea (University of Groningen, Netherlands)
Jayaram K. Udupa (University of Pennsylvania, Philadelphia)
Maria Cristina Ferreira de Oliveira, Agma Juci Machado Traina, Rosane Minghim (ICMC-USP)
Roberto Marcondes Cesar Junior (IME-USP)

More details about the schedule can be found on the conference web page. There are slots for a total of 65 students and 10 researchers. Among those, 30 foreign students and 15 Brazilian students will be selected to receive financial support for travel and hotel costs. The organizing committee will select students for financial support based on a letter in which the candidate describes his/her intention to attend the school, a description of the candidate's current research project, a CV, and recommendation letters. The selection of further participants will also be based on those criteria. For more details, visit the registration page. The deadline for registration is May 15th.