Friday, February 26, 2010

WISMA 2010 Deadline Extension - 11th International Workshop of the Multimedia Metadata Community

CALL for PAPERS Workshop on Interoperable Social
Multimedia Applications (WISMA 2010)

11th International Workshop of the Multimedia Metadata Community

Submission due: 14th March 2010 - Workshop dates: 19th-20th May 2010
Workshop venue: Universitat Politècnica de Catalunya, Barcelona (Spain)

With the Web 2.0, a growing amount of multimedia content is being shared on social networks. The dynamic and ubiquitous nature of this content (and of its associated descriptors) raises new and interesting challenges for indexing, access, and search and retrieval. In addition, there is growing concern about privacy protection, as a great deal of personal data is being exchanged. Teenagers (and even younger children), for example, require special protection applications, while adults want finer-grained control over access to their content. Furthermore, the integration of mobile technologies with Web 2.0 applications is an interesting area of research that needs to be addressed, not only in terms of content protection but also for the implementation of new and enriched context-aware applications. Finally, social multimedia is also expected to improve the performance of traditional multimedia search and retrieval approaches by helping to bridge the semantic gap. The integration of these aspects, however, is not trivial and has created a new interdisciplinary area of research. In any case, one issue is common to all the social multimedia applications identified above: interoperability and extensibility. The workshop is therefore particularly interested in research contributions based on standards.

Recommended topics include, but are not limited to, the following:
• Privacy in social networks
• Access control in social networks
• Social media analysis
• Social media retrieval
• Context-awareness in social networks
• Mobile applications scenario
• Social networks ontologies and interoperability
• Security and privacy ontologies
• Content distribution over social networks
• Multimedia ontologies and interoperability
• Multimedia search and retrieval
• Semantic metadata management
• Collaborative tagging
• Interaction between access control and privacy policies
• Social networks and policy languages
• Policy management

Research Papers: Papers should describe original and significant work in research or practice on the workshop topics.
(i) Long papers: up to 8 pages, will normally be focused on research studies, applications and experiments.
(ii) Short papers: up to 4 pages, will be particularly suitable for reporting work-in-progress, interim results, or as a position paper submission.

Applications and Industrial Presentations: Proposals for presentations of applications or tools, including project reports, industrial practices and models, or tools/systems demonstrations.
Abstract: 2 pages.

All submissions and proposals must be in English and submitted in PDF format through the WISMA paper submission web site on or before 14th March 2010. Papers should be formatted according to LNCS style. The workshop proceedings are to be published as a volume of the CEUR Workshop Proceedings.

General Chair: Jaime Delgado (UPC, Spain).

International Programme Committee:
Anna Carreras (Universitat Politècnica de Catalunya, Spain), Ansgar Scherp
(University of Koblenz-Landau, Germany), Bill Grosky (University of
Michigan, USA), Chris Poppe (Ghent University - IBBT, Belgium), Christian
Timmerer (Alpen-Adria-University Klagenfurt, Austria), Dominik Renzel (RWTH
Aachen University, Germany), Frédéric Dufaux (EPFL, Switzerland), Harald
Kosch (University of Passau, Germany), Herve Bourlard (Idiap, Switzerland),
Jaime Delgado (Universitat Politècnica de Catalunya, Spain), Laszlo
Böszörmenyi (Klagenfurt University, Austria), Marc Spaniol (MPI -
Saarbrücken, Germany), Markus Strohmaier (Know Center Graz, Austria),
Mathias Lux (Klagenfurt University, Austria), Michael Granitzer (Know Center
Graz, Austria), Oge Marques (Florida Atlantic University, USA), Ralf Klamma
(RWTH Aachen University, Germany), Richard Chbeir (Bourgogne University,
France), Romulus Grigoras (ENSEEIHT, France), Ruben Tous (Universitat
Politècnica de Catalunya, Spain), Savvas Chatzichristofis (Democritus
University of Thrace, Greece), Vincent Charvillat (ENSEEIHT, France),
Vincent Oria (NJIT, USA), Werner Bailer (Joanneum Research Graz, Austria),
Yu Cao (California State University, Fresno, USA).

Applied mathematics: The statistics of style

Nature 463, 1027-1028 (25 February 2010); published online 24 February 2010

Bruno A. Olshausen & Michael R. DeWeese

A mathematical method has been developed that distinguishes between the paintings of Pieter Bruegel the Elder and those of his imitators. But can the approach be used to spot imitations of works by any artist?

What makes the style of an artist unique? For paintings or drawings, what comes to mind are specific brush or pen strokes, the manner in which objects are shaded, or how characters or landscapes are portrayed. Art historians are skilled at identifying such details through visual inspection, and art collectors and museums currently rely on this type of expert analysis to authenticate works of art. Might it be possible to automate this process to provide a more objective assessment? Is it possible to teach a computer to analyse art? In an article in Proceedings of the National Academy of Sciences, Hughes et al.1 demonstrate that subtle stylistic differences between the paintings of Pieter Bruegel the Elder and those of his imitators, which were at one time misattributed by art historians, may be reliably detected by statistical methods.

Hughes and colleagues' work is the latest in a stream of research findings that have emerged over the past few decades in the field of 'image statistics'. The players in this field are an unlikely cadre of engineers, statisticians and neuroscientists who are seeking to characterize what makes images of the natural environment different from unstructured or random images (such as the 'static' on a computer monitor or television). Answering this question is central to the problem of coding and transmitting images over the airwaves and the Internet, and, it turns out, it is just as important for understanding how neurons encode and represent images in the brain.

The first image statisticians were television engineers, who, as early as the 1950s, were trying to exploit correlations in television signals to compress the signals into a more efficient format. Around the same time, pioneering psychologists and neuroscientists such as Fred Attneave and Horace Barlow were using ideas from information theory to work out how the particular structures contained in images shape the way that information is coded by neurons in the brain. Since then, others have succeeded in developing specific mathematical models of natural-image structure — showing, for example, that the two-dimensional power spectrum varies with spatial frequency, f, roughly as 1/f² (ref. 2), and that the distribution of contrast in local image regions is invariant across scale3, 4, 5.
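The 1/f² power-spectrum law can be checked numerically. The sketch below (plain NumPy; the synthetic image, seed and frequency-bin choices are our own, not from ref. 2) builds an image whose amplitude spectrum falls off as 1/f and verifies that the log-log slope of its radially averaged power spectrum is close to −2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Build a frequency grid and synthesize an image whose amplitude spectrum
# falls off as 1/f, so its power spectrum falls off as 1/f^2 -- the
# scale-invariant statistic described for natural images.
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
f = np.sqrt(fx**2 + fy**2)
f[0, 0] = 1.0  # avoid dividing by zero at the DC component
amplitude = 1.0 / f
phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

# Radially average the measured power spectrum into integer frequency bins.
power = np.abs(np.fft.fft2(img)) ** 2
r = (f * n).astype(int)
counts = np.bincount(r.ravel())
sums = np.bincount(r.ravel(), weights=power.ravel())
radial = sums / np.maximum(counts, 1)

# Fit log-power against log-frequency over mid frequencies; for a 1/f^2
# power spectrum the slope should come out close to -2.
freqs = np.arange(len(radial))
lo, hi = 4, n // 4
slope, _ = np.polyfit(np.log(freqs[lo:hi]), np.log(radial[lo:hi]), 1)
print(round(float(slope), 1))
```

A real test would of course fit the spectrum of photographs of natural scenes rather than synthetic noise.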

Investigators also began applying these and related models to characterize the statistical structure of paintings by particular artists. It was shown, for example, that Jackson Pollock's drip paintings have fractal structure6, and that Bruegel's drawings could be distinguished from those of his imitators by the shape of the histogram of wavelet filter outputs, which represent how much spatial structure is present at different scales and orientations7. It is this latter work that formed the basis for Hughes and colleagues' study1. Instead of using standard wavelet filters, they apply a set of filters that are adapted to the statistics of Bruegel's drawings through a method known as sparse coding.

In a sparse-coding model, local regions of an image are encoded in terms of a 'dictionary' of spatial features; importantly, the dictionary is built up, or trained, from the statistics of an ensemble of images, so that only a few elements from the dictionary are needed to encode any given region. Essentially, sparsity forces the elements of the dictionary to match spatial patterns that tend to occur in the images with frequencies significantly higher than chance, thus providing a snapshot of structure contained in the data. Neuroscientists have shown that such dictionaries, when trained on a large ensemble of natural scenes, match the measured receptive-field characteristics of neurons in the primary visual cortex of mammals. These and other empirical findings have lent support to the idea that sparse coding may be used by neurons for sensory representation in the cortex8.
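As a concrete, much-simplified illustration of encoding with a dictionary, the following sketch uses matching pursuit, a greedy stand-in for the sparse inference step; the toy orthonormal dictionary and the test signal are invented for the example and are not taken from the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedily encode `signal` with at most `n_atoms` dictionary elements.
    `dictionary` has unit-norm columns; returns coefficients and residual."""
    residual = np.array(signal, dtype=float)
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))   # best-matching element
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy dictionary: the standard basis of R^4 (columns are unit norm).
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -2.0])  # a sparse signal: two active elements

coeffs, residual = matching_pursuit(x, D, n_atoms=2)
print(np.count_nonzero(coeffs), float(np.linalg.norm(residual)))  # → 2 0.0
```

With a dictionary well adapted to the signal class, two elements suffice and the residual vanishes; a mismatched dictionary would need many more elements for the same accuracy.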

Rather than attempting to form a generic code adapted to natural scenes, Hughes et al.1 asked what sort of dictionary results from training on one specific class of image — the drawings of Pieter Bruegel the Elder. The dictionary that emerges, not surprisingly, differs from that adapted for natural scenes. In some sense, Hughes et al. have evolved an artificial visual system that is hyper-adapted to Bruegel's drawings. Such a visual system will be adept at representing other drawings from this class — that is, other authentic drawings by Bruegel — because they result in sparse encodings. However, it will not be so adept at representing images outside this class, such as drawings by other artists and even those attempting to imitate Bruegel, because they will result in denser encodings — more dictionary elements will be needed to describe each image region (Fig. 1). To put it another way, a picture may be worth a thousand words, but if it's an authentic Bruegel, it should take only a few Bruegel dictionary elements to represent it faithfully.
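The authentication intuition can be sketched with synthetic data: learn a small dictionary from "authentic" samples that share hidden low-dimensional structure, then compare how faithfully it represents in-class versus out-of-class patches. Here an SVD serves as a stand-in for the paper's sparse-coding training, and all data, dimensions and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: "authentic" patches share a hidden low-dimensional
# structure. We learn a 5-element dictionary from them with an SVD and
# compare reconstruction quality for in-class and out-of-class patches.
style = rng.standard_normal((64, 5))            # hidden "style" subspace
train = style @ rng.standard_normal((5, 200))   # 200 authentic training patches

U, _, _ = np.linalg.svd(train, full_matrices=False)
dictionary = U[:, :5]  # five learned elements capture the style exactly here

def relative_error(patch):
    """Reconstruction error after projecting onto the dictionary's span."""
    coeffs = dictionary.T @ patch
    return float(np.linalg.norm(patch - dictionary @ coeffs)
                 / np.linalg.norm(patch))

authentic = style @ rng.standard_normal(5)  # new patch in the same style
imitation = rng.standard_normal(64)         # patch from outside the style

print(relative_error(authentic) < relative_error(imitation))  # → True
```

The authentic patch is represented almost perfectly by the five learned elements, while the out-of-class patch leaves a large residual, mirroring the sparse-versus-dense encoding contrast described above.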

Figure 1: Sparse-coding analysis of artistic style.

Hughes and colleagues1 show that small image patches taken from a collection of authentic works by Pieter Bruegel the Elder (a) can be used to generate a 'dictionary' of visual elements attuned to the statistics of his style (b). A test image (c) can then be authenticated by recreating it with a combination of dictionary elements. If recreation of the test image requires only a few dictionary elements, it is sparse, and labelled 'authentic', whereas if accurate encoding of the test image requires many dictionary elements, it is labelled as an 'imitation'.


Can such an approach be used to authenticate works by any artist? And how robust can one expect it to be in practice? Key to the success of this study1 is the fact that all of the analyses were performed on one particular type of artwork produced by Bruegel — drawings of landscapes. However, Bruegel worked in a variety of media, and his subject matter spanned a wide range of content. Moreover, an individual artist may use various styles. Developing algorithms capable of generalizing across these variations presents a much more challenging problem. Another concern is that it may be possible to defeat this method by generating images that are sparse for a wide range of dictionaries. For example, a geometrical abstract painting by Piet Mondrian would presumably yield a highly sparse representation using a dictionary trained on nearly any artist. Worse still, images randomly generated from the learned dictionary elements would also exhibit high sparsity but would look nothing like a real Bruegel. Thus, sparsity alone may be too fragile a measure for authentication.

One might question other technical choices made by the authors, such as the exclusive use of kurtosis (a statistical measure often used to quantify the degree of 'peakedness' of a probability distribution) to characterize the sparsity of filter outputs; and the analysis of statistical significance is at times puzzling. But Hughes and colleagues have taken a bold step. This is an exciting area of research that goes even beyond forgery detection. Indeed, it raises the question of whether it might be possible to fully capture the style of an artist using statistics. The field of natural-image statistics has advanced beyond the simple sparse-coding models used here, and it is now possible to characterize complex relationships among dictionary elements9, 10. Intriguingly, all of these models are generative — that is, they can be used to synthesize images matching the statistics captured by the model, as has already been done successfully with textures11. One exciting possibility is that computers could generate novel images that convincingly emulate the style of a particular artist. Perhaps someday the best Bruegel imitators will be computers.
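For readers unfamiliar with the measure mentioned above: excess kurtosis compares a distribution's fourth standardized moment with that of a Gaussian, for which it is zero, so peaked ("sparse") coefficient distributions score higher. A minimal sketch, with sample sizes and reference distributions chosen purely for illustration:

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

rng = np.random.default_rng(0)
gaussian = rng.normal(size=100_000)    # dense, non-sparse filter outputs
laplacian = rng.laplace(size=100_000)  # peaked, sparse-like filter outputs

print(round(excess_kurtosis(gaussian), 1))   # near 0
print(round(excess_kurtosis(laplacian), 1))  # near 3 (its theoretical value)
```

A filter whose outputs on a test image have high kurtosis responds rarely but strongly, which is exactly the sparse behaviour the authors use as their diagnostic.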

Bruno A. Olshausen and Michael R. DeWeese are at the Redwood Center for Theoretical Neuroscience, Helen Wills Neuroscience Institute, School of Optometry (B.A.O.) and Department of Physics (M.R.D.), University of California, Berkeley, Berkeley, California 94720, USA.

Tuesday, February 23, 2010

ICCGI 2010 | Call for Papers

September 20-25, 2010 - Valencia, Spain

Submission deadline: April 20, 2010

Sponsored by IARIA.

Extended versions of selected papers will be published in IARIA Journals.

Publisher: CPS (IEEE Computer Society Conference Publishing Services)

Archived: IEEE CSDL (Computer Science Digital Library) and IEEE Xplore

Submitted for indexing: Elsevier's EI Compendex Database, EI's Engineering Information Index

Other indexes are being considered: INSPEC, DBLP, Thomson Reuters Conference Proceedings Citation Index

Please note the Poster Forum and Work in Progress options.

The topics suggested by the conference can be discussed in terms of concepts, state of the art, research, standards, implementations, running experiments, applications, and industrial case studies. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following (but not limited to) topic areas.

All tracks are open to both research and industry contributions, in terms of Regular papers, Posters, Work in progress, Technical/marketing/business presentations, Demos, Tutorials, and Panels.

Before submission, please check and conform to the Editorial rules.

ICCGI 2010 Tracks (tracks' topics and submission details: see CfP on the site)

Industrial systems

Control theory and systems; Fault-tolerance and reliability; Data engineering; Enterprise computing and evaluation; Electrical and electronics engineering; Economic decisions and information systems; Advanced robotics; Virtual reality systems; Industrial systems and applications; Industrial and financial systems; Industrial control electronics; Industrial IT solutions

Evolutionary computation

Algorithms, procedures, mechanisms and applications; Computer architecture and systems; Computational sciences; Computation in complex systems; Computer and communication systems; Computer networks; Computer science theory; Computation and computer security; Computer simulation; Digital telecommunications; Distributed and parallel computing; Computation in embedded and real-time systems; Soft computing; User-centric computation

Autonomic and autonomous systems

Automation and autonomous systems; Theory of Computing; Autonomic computing; Autonomic networking; Network computing; Protecting computing; Theories of agency and autonomy; Multi-agent evolution, adaptation and learning; Adjustable and self-adjustable autonomy; Pervasive systems and computation; Computing with locality principles; GRID networking and services; Pervasive computing; Cluster computing and performance; Artificial intelligence; Computational linguistics; Cognitive technologies; Decision making; Evolutionary computation; Expert systems; Computational biology


Biometrics technologies

Models and techniques for biometric technologies; Bioinformatics; Biometric security; Computer graphics and visualization; Computer vision and image processing; Computational biochemistry; Finger, facial, iris, voice, and skin biometrics; Signature recognition; Multimodal biometrics; Verification and identification techniques; Accuracy of biometric technologies; Authentication smart cards and biometric metrics; Performance and assurance testing; Limitations of biometric technologies; Biometric card technologies; Biometric wireless technologies; Biometric software and hardware; Biometric standards

Knowledge data systems

Data mining and Web mining; Knowledge databases and systems; Data warehouse and applications; Data warehousing and information systems; Database performance evaluation; Semantic and temporal databases; Database systems; Databases and information retrieval; Digital library design; Meta-data modeling

Mobile and distance education

Human computer interaction; Educational technologies; Computer in education; Distance learning; E-learning; Mobile learning; Cognitive support for learning; Internet-based education; Impact of ICT on education and society; Group decision making and software; Habitual domain and information technology; Computer-mediated communications; Immersive authoring; Contextual and cultural challenges in user mobility

Intelligent techniques, logics, and systems

Intelligent agent technologies; Intelligent and fuzzy information processing; Intelligent computing and knowledge management; Intelligent systems and robotics; Fault-tolerance and reliability; Fuzzy logic & systems; Genetic algorithms; Haptic phenomena; Graphic recognition; Neural networks; Symbolic and algebraic computation; Modeling, simulation and analysis of business processes and systems

Knowledge processing

Knowledge representation models; Knowledge languages; Cognitive science; Knowledge acquisition; Knowledge engineering; Knowledge processing under uncertainty; Machine intelligence; Machine learning; Making decision through Internet; Networking knowledge plan

Information technologies

Information technology and organizational behavior; Agents, data mining and ontologies; Information retrieval systems; Information and network security; Information ethics and legal evaluations; Optimization and information technology; Organizational information systems; Information fusion; Information management systems; Information overload; Information policy making; Information security; Information systems; Information discovery

Internet and web technologies

Internet and WWW-based computing; Web and Grid computing; Internet service and training; IT and society; IT in education and health; Management information systems; Visualization and group decision making; Web based language development; Web search and decision making; Web service ontologies; Scientific web intelligence; Online business and decision making; Business rule language; E-Business; E-Commerce; Online and collaborative work; Social eco-systems and social networking; Social decisions on Internet; Computer ethics

Digital information processing

Mechatronics; Natural language processing; Medical imaging; Image processing; Signal processing; Speech processing; Video processing; Pattern recognition; Pattern recognition models; Graphics & computer vision; Medical systems and computing

Cognitive science and knowledge agent-based systems

Cognitive support for e-learning and mobile learning; Agents and cognitive models; Agents & complex systems; Computational ecosystems; Agent architectures, perception, action & planning in agents; Agent communication: languages, semantics, pragmatics & protocols; Agent-based electronic commerce and trading systems; Multi-agent constraint satisfaction; Agent programming languages, development environments and testbeds; Computational complexity in autonomous agents; Multi-agent planning and cooperation; Logics and formal models for agency verification; Nomadic agents; Negotiation, auctions, persuasion; Privacy and security issues in multi-agent systems

Mobility and multimedia systems

Mobile communications; Multimedia and visual programming; Multimedia and decision making; Multimedia systems; Mobile multimedia systems; User-centered mobile applications; Designing for the mobile devices; Contextual user mobility; Mobile strategies for global market; Interactive television and mobile commerce

Systems performance

Performance evaluation; Performance modeling; Performance of parallel computing; Reasoning under uncertainty; Reliability and fault-tolerance; Performance instrumentation; Performance monitoring and corrections; Performance in entity-dependable systems; Real-time performance and near-real time performance evaluation; Performance in software systems; Performance and hybrid systems; Measuring performance in embedded systems

Networking and telecommunications

Telecommunication and Networking; Telecommunication Systems and Evaluation; Multiple Criteria Decision Making in Information Technology; Network and Decision Making; Networks and Security; Communications protocols (SIP/H323/MPLS/IP); Specialized networks (GRID/P2P/Overlay/Ad hoc/Sensor); Advanced services (VoIP/IPTV/Video-on-Demand); Network and system monitoring and management; Feature interaction detection and resolution; Policy-based monitoring and management systems; Traffic modeling and monitoring; Traffic engineering and management; Self-monitoring, self-healing and self-management systems; Man-in-the-loop management paradigm

Software development and deployment

Software requirements engineering; Software design, frameworks, and architectures; Software interactive design; Formal methods for software development, verification and validation; Neural networks and performance; Patterns/Anti-patterns/Artifacts/Frameworks; Agile/Generic/Agent-oriented programming; Empirical software evaluation metrics; Software vulnerabilities; Reverse engineering; Software reuse; Software security, reliability and safety; Software economics; Software testing and debugging; Tracking defects in the OO design; Distributed and parallel software; Programming languages; Declarative programming; Real-time and embedded software; Open source software development methodologies; Software tools and deployment environments; Software Intelligence; Software Performance and Evaluation

Knowledge virtualization

Modeling techniques, tools, methodologies, languages; Model-driven architectures (MDA); Service-oriented architectures (SOA); Utility computing frameworks and fundamentals; Enabled applications through virtualization; Small-scale virtualization methodologies and techniques; Resource containers, physical resource multiplexing, and segmentation; Large-scale virtualization methodologies and techniques; Management of virtualized systems; Platforms, tools, environments, and case studies; Making virtualization real; On-demand utilities; Adaptive enterprise; Managing utility-based systems; Development environments, tools, prototypes

Systems and networks on the chip

Microtechnology and nanotechnology; Real-time embedded systems; Programming embedded systems; Controlling embedded systems; High speed embedded systems; Designing methodologies for embedded systems; Performance on embedded systems; Updating embedded systems; Wireless/wired design of systems-on-the-chip; Testing embedded systems; Technologies for systems processors; Migration to single-chip systems

Context-aware systems

Context-aware autonomous entities; Context-aware fundamental concepts, mechanisms, and applications; Modeling context-aware systems; Specification and implementation of awareness behavioral contexts; Development and deployment of large-scale context-aware systems and subsystems; User awareness requirements; Design techniques for interfaces and systems; Methodologies, metrics, tools, and experiments for specifying context-aware systems; Tool evaluations; Experiment evaluations

Networking technologies

Next generation networking; Network, control and service architectures; Network signalling, pricing and billing; Network middleware; Telecommunication networks architectures; On-demand networks, utility computing architectures; Next generation networks [NGN] principles; Storage area networks [SAN]; Access and home networks; High-speed networks; Optical networks; Peer-to-peer and overlay networking; Mobile networking and systems; MPLS-VPN, IPSec-VPN networks; GRID networks; Broadband networks

Security in network, systems, and applications

IT in national and global security; Formal aspects of security; Systems and network security; Security and cryptography; Applied cryptography; Cryptographic protocols; Key management; Access control; Anonymity and pseudonymity management; Security management; Trust management; Protection management; Certification and accreditation; Viruses, worms, attacks, spam; Intrusion prevention and detection; Information hiding; Legal and regulatory issues

Knowledge for global defense

Business continuity and availability; Risk assessment; Aerospace computing technologies; Systems and networks vulnerabilities; Developing trust in Internet commerce; Performance in networks, systems, and applications; Disaster prevention and recovery; IT for anti-terrorist technology innovations (ATTI); Networks and applications emergency services; Privacy and trust in pervasive communications; Digital rights management; User safety and protection

Information Systems [IS]

Management Information Systems; Decision Support Systems; Innovation and IS; Enterprise Application Integration; Enterprise Resource Planning; Business Process Change; Design and Development Methodologies and Frameworks; Iterative and Incremental Methodologies; Agile Methodologies; IS Standards and Compliance Issues; Risk Management in IS Design and Development; Research Core Theories; Conceptualisations and Paradigms in IS; Research Ontological Assumptions in IS Research; IS Research Constraints, Limitations and Opportunities; IS vs Computer Science Research; IS vs Business Studies

IPv6 Today - Technology and deployment

IP Upgrade - An Engineering Exercise or a Necessity?; Worldwide IPv6 Adoption - Trends and Policies; IPv6 Programs, from Research to Knowledge Dissemination; IPv6 Technology - Practical Information; Advanced Topics and Latest Developments in IPv6; IPv6 Deployment Experiences and Case Studies; IPv6 Enabled Applications and Devices


Modeling

Continuous and Discrete Models; Optimal Models; Complex System Modeling; Individual-Based Models; Modeling Uncertainty; Compact Fuzzy Models; Modeling Languages; Real-time modeling; Performance modeling


Optimization

Multicriteria Optimization; Multilevel Optimization; Goal Programming; Optimization and Efficiency; Optimization-based decisions; Evolutionary Optimization; Self-Optimization; Extreme Optimization; Combinatorial Optimization; Discrete Optimization; Fuzzy Optimization; Lipschitzian Optimization; Non-Convex Optimization; Convexity; Continuous Optimization; Interior point methods; Semidefinite and Conic Programming


Complexity

Complexity Analysis; Computational Complexity; Complexity Reduction; Optimizing Model Complexity; Communication Complexity; Managing Complexity; Modeling Complexity in Social Systems; Low-complexity Global Optimization; Software Development for Modeling and Optimization; Industrial applications

Friday, February 19, 2010

IST 2010

A new set of deadlines

The 2010 IEEE International Conference on Imaging Systems and Techniques (IST 2010) will take place in Thessaloniki, Greece. Thessaloniki is one of the largest cities of Greece, with great historical and cultural significance, and the capital of the Greek region of Macedonia, owing its name to Thessaloniki, the sister of Alexander the Great. It is a major Mediterranean port city, built amphitheatrically in a unique geographical location in northern Greece, surrounded by sea and mountains, in the vicinity (50 km) of the secluded peninsula of Chalkidiki, where the great Greek philosopher Aristotle was born and taught. Thessaloniki is considered a cultural, tourist, industrial, commercial and political centre and a major transportation hub for the rest of southeastern Europe.

IST 2010 deals with the design, development, evaluation and applications of imaging systems, instrumentation, and measuring techniques, aimed at enhancing detection and image quality. Applications in aerospace, medicine and biology, molecular imaging, metrology, ladars and lidars, radars, homeland security, and industrial imaging, with emphasis on industrial tomography, corrosion imaging, and non-destructive evaluation (NDE), will be covered. The following areas will be particularly considered:

DETECTORS AND IMAGE FORMATION - design, development and characterization of high resolution electronic imaging detectors, such as optical detectors and cameras, ionizing radiation (x-rays, gamma rays) detectors, detector geometry and electrical parameters, physics, quantum efficiency, collection efficiency, semiconductor detectors, hybrid detectors, ultrasound transducers, MRI coils, phased array antenna elements, novel detection mechanisms, and image formation processes.

IMAGING SYSTEM DESIGN, INSTRUMENTATION AND MEASURING TECHNIQUES - imaging system design parameters, such as spectral response, spatial resolution, contrast resolution, temporal response, modulation transfer function (MTF), system efficiency, noise analysis, data acquisition systems, and measuring techniques. Imaging quality parameters as applied to optical imaging, Computed Tomography (CT), MRI, digital radiography, single-photon emission computed tomography (SPECT), positron emission tomography (PET), ultrasound, multi-fusion/multi-modality imaging, contrast agents, nano-imaging, nano-instrumentation imagery such as AFM, NSOM, SEM, confocal microscopy, multi-functional imaging.

LINEAR AND NONLINEAR TECHNIQUES FOR IMAGE PROCESSING - advanced image enhancement and processing algorithms, fuzzy neural and evolutionary techniques for image enhancement, noise estimation and filtering, image restoration, feature extraction, edge detection, image analysis and classification, figures of merit for assessing image quality, algorithms for image interpolation, post-processing techniques for correction of coding errors, data fusion, and high-level computer vision.

EMERGING TECHNOLOGIES - Novel imaging principles and/or concepts leading to the development of high-resolution, high-specificity imaging technological paradigms in areas such as active/passive imaging architectures, UV/V/NIR/IR imaging and arrays, THz imaging systems, environmental monitoring, imaging and spectroscopy, nano-imaging, quantum dots imaging, mine detection, biometric imaging, security imaging, cargo inspection, IED detection, efficient target detection, identification, discrimination techniques, defects and surface anomalies, corrosion imaging, imaging of composite structures, tomographic imaging, multi-modality imaging, microarray imaging chips, miniaturized portable imaging devices, physiological imaging, guided biopsy imaging, biomedical optics and cancer detection, optical polarimetric imaging, and advanced electromagnetic imaging techniques.

Tuesday, February 16, 2010

Call for Papers

The purpose of the 3rd International Conference on Agents and Artificial Intelligence (ICAART) is to bring together researchers, engineers and practitioners interested in the theory and applications of these areas. Two simultaneous but strongly related tracks will be held, covering both applications and current research work: one within the area of Agents, Multi-Agent Systems and Software Platforms, Distributed Problem Solving and Distributed AI in general, including web applications; the other within the area of non-distributed AI, including traditional areas such as Knowledge Representation, Planning, Learning, Scheduling and Perception, as well as less traditional areas such as Reactive AI Systems, Evolutionary Computing, other aspects of Computational Intelligence, and many other areas related to intelligent systems.
A substantial amount of research work is ongoing in these knowledge areas, in an attempt to discover appropriate theories and paradigms to use in real-world applications. Much of this important work is therefore theoretical in nature. However, there is nothing as practical as a good theory, as Kurt Lewin said many years ago, and some theories have indeed made their way into practice. Informatics applications are pervasive in many areas of Artificial Intelligence and Distributed AI, including Agents and Multi-Agent Systems. This conference intends to emphasize this connection; therefore, authors are invited to highlight the benefits of Information Technology (IT) in these areas. Ideas on how to solve problems using agents and artificial intelligence, both in R&D and industrial applications, are welcome. Papers describing advanced prototypes, systems, tools and techniques, as well as general survey papers indicating future directions, are also encouraged. Papers describing original work are invited in any of the areas listed below. Accepted papers, presented at the conference by one of the authors, will be published in the Proceedings of ICAART with an ISBN. A book with the best papers of the conference will be published by Springer-Verlag. Acceptance will be based on quality, relevance and originality. Both full research reports and work-in-progress reports are welcome. There will be both oral and poster sessions.
Special sessions, dedicated to case-studies and commercial presentations, as well as tutorials dedicated to technical/scientific topics are also envisaged: companies interested in presenting their products/methodologies or researchers interested in holding a tutorial are invited to contact the conference secretariat. Additional information can be found at



Each of these topic areas is expanded below but the sub-topics list is not exhaustive. Papers may address one or more of the listed sub-topics, although authors should not feel limited by them. Unlisted but related sub-topics are also acceptable, provided they fit in one of the following conference areas:
1. Artificial Intelligence
2. Agents


  • Knowledge Representation and Reasoning
  • Uncertainty in AI
  • Model-Based Reasoning
  • Machine Learning
  • Logic Programming
  • Ontologies
  • Data Mining
  • Constraint Satisfaction
  • State Space Search
  • Case-Based Reasoning
  • Cognitive Systems
  • Natural Language Processing
  • Intelligent User Interfaces
  • Reactive AI
  • Vision and Perception
  • Pattern Recognition
  • Ambient Intelligence
  • AI and Creativity
  • Artificial Life
  • Soft Computing
  • Evolutionary Computing
  • Neural Networks
  • Fuzzy Systems
  • Planning and Scheduling
  • Game Playing
  • Expert Systems
  • Industrial Applications of AI


  • Distributed Problem Solving
  • Agent Communication Languages
  • Agent Models and Architectures
  • Multi-Agent Systems
  • Brokering and Matchmaking
  • Cooperation and Coordination
  • Conversational Agents
  • Negotiation and Interaction Protocols
  • Programming Environments and Languages
  • Task Planning and Execution
  • Autonomous Systems
  • Cognitive Robotics
  • Group Decision Making
  • Web Intelligence
  • Web Mining
  • Semantic Web
  • Grid Computing
  • Agent Platforms and Interoperability
  • SOA and Software Agents
  • Simulation
  • Economic Agent Models
  • Social Intelligence
  • Security and Reputation
  • Mobile Agents
  • Pervasive Computing
  • Privacy, Safety and Security

Monday, February 15, 2010

Plink Art

Plink Art is an app for your mobile phone that lets you identify almost any work of art just by taking a photo of it.

Search for "PlinkArt" on the Android Market


The coolest feature of Plink Art is instant art identification. Just snap a photo and if the painting is in our database our visual search system will recognise it and tell you all about it. Currently Plink knows about tens of thousands of famous paintings. Try it and you might be surprised!
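Recognition of this kind boils down to nearest-neighbour search over image descriptors. Here is a toy sketch, with made-up painting names and descriptors; Plink's actual pipeline matches local features against a much larger index, so this only illustrates the idea:

```python
import math

# Toy painting database: name -> global feature descriptor (invented data).
# A real system would use local features and approximate nearest-neighbour
# search; a flat list of short vectors keeps the idea visible.
DATABASE = {
    "Starry Night": [0.9, 0.1, 0.4],
    "The Scream": [0.2, 0.8, 0.3],
    "Mona Lisa": [0.5, 0.5, 0.9],
}

def euclidean(a, b):
    """Euclidean distance between two descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query, database=DATABASE):
    """Return the painting whose stored descriptor is closest to the query."""
    return min(database, key=lambda name: euclidean(query, database[name]))
```

A query photo's descriptor will be close, but not identical, to the stored one, which is why a distance minimum rather than an exact match is used.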


Plink is great for exploring too. Browse by timeline, movement or gallery, or just hit random and let Plink surprise you.


When you find a painting you love, discuss it with others, share it with your friends, or order a print to hang on your wall.


Sunday, February 14, 2010

Springer Open Choice

Springer operates a program called Springer Open Choice for the majority of its journals. Open Choice allows authors to decide how their articles are published in the leading and highly respected journals that Springer publishes.

Choosing open access means making your journal article freely available to everyone, everywhere in exchange for your payment of an open access publication fee.

If authors choose open access in the Springer Open Choice program, they will not be required to transfer their copyright. The final published version of all articles can be archived in institutional or funder repositories and can be made publicly accessible immediately. For PubMed Central we are happy to deposit the article’s full-text XML simultaneously with its publication.

Open access articles which are ordered through the Open Choice program are clearly identified as open access – in the article PDF and HTML, in the article metadata and in the article display at SpringerLink.

No matter which option you choose, all Springer articles are peer-reviewed, professionally and quickly produced, and available on SpringerLink. In addition, every article is registered in CrossRef and included in the appropriate Abstracting and Indexing services.

Image Retrieval Prototype Development and Evaluation

Unit: Textual and Visual Pattern Analysis / Work Practice Technology

Proposers Tommaso Colombino
Luca Marchesotti

Duration: 4 to 6 months

Start Date: March / April 2010


Xerox Research Centre Europe (XRCE) is currently developing advanced image retrieval prototypes that leverage different technologies for analyzing all visual aspects of an image, namely content, aesthetic and emotional value. The prototypes were developed within OMNIA, a three-year research project funded by the French government, and PinView, a European Union funded project. The purpose of this internship, shared between the Work Practice Technology (WPT) and the Textual and Visual Pattern Analysis (TVPA) research groups, is to port a version of one of the image retrieval prototypes to a multi-touch table display and then participate in the conduct of a series of user preference and evaluation tests. The student will be involved both in the design and conduct of the tests, and in the subsequent analysis and presentation of the results.

The candidate should have knowledge of at least one of the following: Flash/Flex, C#/.NET, or C/C++. Experience in the design and conduct of psychological experiments or Voice of Customer studies is not required but would be advantageous. The student should be fluent in French and have a good level of written English.

XRCE provides an informal and relaxed working environment situated in the Parc de Maupertuis in Meylan. The successful candidate will be given the freedom and flexibility to find their own solutions and to work in a way that suits them but will have the guidance and support of experienced full-time Xerox researchers and thereby gain an introduction to the field of commercial research in a world-class research laboratory.

About XRCE

The Xerox Research Centre Europe (XRCE) is a young, dynamic research organization which aims at creating innovative document technologies to support growth in Xerox content and document management services across the different Xerox businesses.

XRCE: Château

XRCE is both a multicultural and multidisciplinary organization set in Grenoble, France. Our domains of research stretch from the social sciences to computing. We have renowned expertise in natural language applications, work practice studies, image-based document processing, distributed applications and knowledge management agents. The diversity of culture and disciplines at XRCE makes it an interesting and stimulating environment to work in, leading to often unexpected discoveries!

XRCE is part of the Xerox Innovation group made up of 800 researchers and engineers in four world-renowned research and technology centres. Xerox is an equal opportunity employer.

The Grenoble site is set in a park in the heart of the French Alps in a stunning location only a few kilometers from the city centre. The city of Grenoble has a large scientific community made up of national research institutes (CNRS, Universities, INRIA) and private industries. Stimulated also by the presence of a large student community, Grenoble has become a resolutely modern city, with a rich heritage and a vibrant cultural scene. It is a lively and cosmopolitan place, offering a host of leisure opportunities. Winter sports resorts just half an hour from campus and three natural parks at the city limits make running, skiing, trekking, climbing and paragliding easily available.
Grenoble is close to both the Swiss and Italian borders.

2010 International Conference on Image and Video Processing and Computer Vision (IVPCV 2010)

The scope of the conference includes all areas of computer vision, video and image processing. Sample topics include, but are not limited to:
* Architecture, systems and prototyping
* Authentication and watermarking
* Biomedical imaging
* Biomedical sciences
* Circuits and architectures
* Color and texture
* Color reproduction
* Display and printing systems
* Distributed source coding
* Document image processing and analysis
* Face and gesture recognition
* Feature extraction and analysis
* Geophysical and seismic imaging
* Geosciences and remote sensing
* Illumination and reflectance modeling
* Image and video databases
* Image and video retrieval
* Image filtering, restoration and enhancement
* Image indexing and retrieval
* Image representation and rendering
* Image segmentation
* Image/video transmission
* Image-Based Modeling
* Interpolation and super-resolution
* Medical image analysis
* Morphological processing
* Motion and tracking
* Motion detection and estimation
* Multimodality image/video indexing and retrieval
* Nanotechnologies
* Object recognition
* Optimal imaging
* Performance evaluation
* Physics-Based modeling
* Quantization and halftoning
* Remote sensing
* Retrieval and editing
* Scanning and sampling
* Segmentation and grouping
* Sensors and early vision
* Shape representation
* Statistical methods and learning
* Stereo and structure from motion
* Stereoscopic and 3-D processing
* Still image coding
* Synthetic-natural hybrid image systems
* Video analysis and event recognition
* Video coding
* Video indexing
* Video segmentation and tracking
* Wireless sensor networks

A BPT Application: Semi-automatic Image Retrieval Tool

This work presents a semi-automatic tool for content retrieval. In contrast to traditional content-based image retrieval systems that work with entire images, the tool we have developed handles individual objects. The availability of a pool of pre-segmented objects found using region analysis allows the human behaviour of pre-segmentation to be replicated. To generate defined objects in the object pool, segmentation is performed using multidimensional Binary Partition Trees (BPTs). The tree structure uses colour, spatial-frequency and edge histograms to form semantically meaningful tree nodes. The BPTs can be intuitively browsed and are stored within XML documents for ease of access and analysis. To find an object, a node from a query image is matched against the nodes of the BPT of the database image, according to a collection of MPEG-7 descriptors. Performance evaluation shows high-quality segmentations and reliable retrieval.

Download the paper
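The greedy region merging that underlies BPT construction can be sketched as follows. Regions here carry only a mean colour and a pixel count, whereas the paper's nodes also use spatial-frequency and edge-histogram cues; the data and function names are illustrative, not from the paper:

```python
# Minimal sketch of Binary Partition Tree (BPT) construction: repeatedly
# merge the two most similar regions and record the merge order, which
# defines the tree bottom-up.

def colour_distance(a, b):
    """Squared distance between the mean colours of two regions."""
    return sum((x - y) ** 2 for x, y in zip(a[0], b[0]))

def merge(a, b):
    """Merge two (mean_colour, pixel_count) regions."""
    (ca, na), (cb, nb) = a, b
    n = na + nb
    # Area-weighted mean colour of the merged region.
    return tuple((x * na + y * nb) / n for x, y in zip(ca, cb)), n

def build_bpt(regions):
    """regions: list of (mean_colour, pixel_count). Returns the merge order."""
    nodes = list(regions)
    history = []
    while len(nodes) > 1:
        # Find the most similar pair (quadratic search; fine for a sketch).
        i, j = min(
            ((i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))),
            key=lambda p: colour_distance(nodes[p[0]], nodes[p[1]]),
        )
        history.append((i, j))
        merged = merge(nodes[i], nodes[j])
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return history
```

Each entry in the returned history is an internal tree node; querying, as the abstract describes, then matches descriptor values stored at these nodes rather than whole images.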

The Google Image Treasure Hunt?
When a search engine indexes an image on the Web, it often has to rely upon the words that it finds associated with that picture. Those words could include the file name, alt text for the image, a caption, as well as other text on the same page.
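That word-association step can be sketched as follows: collect terms from each image's file name and alt text. This is a simplified, hypothetical extractor; real crawlers also weigh captions and surrounding page text:

```python
import re
from html.parser import HTMLParser

# Sketch of the text signals a crawler can associate with an image:
# the alt attribute and words from the file name.

class ImageTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []  # one sorted word list per <img> tag seen

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "")
        # Split the file name into lowercase words, dropping the extension.
        name_words = re.findall(r"[a-z]+", src.rsplit("/", 1)[-1].lower())
        alt_words = re.findall(r"[a-z]+", a.get("alt", "").lower())
        words = set(name_words + alt_words) - {"jpg", "png", "gif"}
        self.images.append(sorted(words))

page = '<p><img src="photos/red-barn.jpg" alt="old barn at sunset"></p>'
parser = ImageTextExtractor()
parser.feed(page)
```

Nothing in this process looks at the pixels, which is exactly why the signals can be misleading and why the approaches described next matter.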

Those words can be misleading, however, and search engines are trying other approaches to identifying the actual content contained within images. One of the approaches that Google has taken to index images is to have people play each other in a game to label those images. There’s a possibility that Google may add another image game, like the one seen in the screen shot below:

A browser screen shot asking a viewer to pick a car out of an image that also contains a building and some trees, with an address associated with the image.

The new game from Google is described in a patent filing published this week, incorporating a way to identify objects within images. The patent application is:

Object Identification in Images
Invented by Yushi Jing, Michael Fink, Michele Covell, Shumeet Baluja
Assigned to Google
US Patent Application 20100034466
Published February 11, 2010
Filed: August 10, 2009

Google is doing more than using games to understand images, and has started using automated ways to compare images to each other to identify different features contained within those images. A couple of years ago, we saw a Google paper come out titled PageRank for Product Image Search, which described how Google might use that technology. You can see that image technology in action in Google Image Swirl, which came out in Google Labs in November of last year.
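The idea behind that paper is a PageRank-style random walk over a graph of visually similar images, so that images similar to many well-connected images rank higher. A minimal power-iteration sketch over a toy similarity matrix (the numbers are made up, and this is not Google's implementation):

```python
def image_rank(sim, damping=0.85, iters=50):
    """sim[i][j]: visual similarity of image i to image j (0 on the diagonal)."""
    n = len(sim)
    rank = [1.0 / n] * n
    # Each image distributes its rank to neighbours in proportion to similarity.
    totals = [sum(row) for row in sim]
    for _ in range(iters):
        new = []
        for j in range(n):
            flow = sum(rank[i] * sim[i][j] / totals[i] for i in range(n) if totals[i])
            new.append((1 - damping) / n + damping * flow)
        rank = new
    return rank

# Three images; image 0 is strongly similar to both others, so it should
# accumulate the highest rank.
sim = [[0, 0.9, 0.9],
       [0.9, 0, 0.1],
       [0.9, 0.1, 0]]
ranks = image_rank(sim)
```

Because each row's outgoing similarity is normalised, the rank mass is conserved and the scores stay a probability distribution across iterations.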

Interestingly, a couple of the writers listed on that paper are also listed as inventors of this patent application.

The patent tells us about the possibility of Google releasing an “image treasure hunt” game, where players look through images to identify a particular object within a particular image:

In one example, a particular image including a dog is selected as the target of the “image treasure hunt,” and users who are playing the game are told that the target is a dog. Users then proceed to search through images to find the particular image of the dog, which is the target of the “image treasure hunt.”

Each time a user identifies an image with the dog, the user indicates the location of the dog in the image to see whether that dog in the image is the target of the “image treasure hunt.” As users identify dogs in various images, the locations of dogs in various images is stored to enable retrieval later of each image based on a dog being included in the image. As such, the “image treasure hunt” helps catalog the types of objects included in images.

A person finding an object might be asked to click upon it, or trace its outline. It’s also possible that they would be asked to provide more details about the object. For instance, if the object searched for is a dog, a person finding it might indicate that it’s a particular breed of dog, or that it might be in a certain setting or performing some kind of activity, such as a German Shepherd playing Frisbee at the beach.
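The catalog such a game would build can be sketched as a mapping from object labels to the image regions players have marked. The class and field names here are assumptions for illustration, not from the patent:

```python
from collections import defaultdict

class ObjectCatalog:
    """Stores player-reported object locations so images can later be
    retrieved by the objects they contain."""

    def __init__(self):
        self._by_label = defaultdict(list)

    def record(self, image_id, label, box):
        """box: (x, y, width, height) of the region the player traced."""
        self._by_label[label].append((image_id, box))

    def images_with(self, label):
        """All images where players have located this object."""
        return sorted({image_id for image_id, _ in self._by_label[label]})

catalog = ObjectCatalog()
catalog.record("img-17", "dog", (40, 60, 120, 90))
catalog.record("img-23", "dog", (10, 10, 50, 40))
catalog.record("img-17", "frisbee", (200, 30, 30, 30))
```

The stored boxes are also what would let a future display of the image draw the clickable region around the object that the patent describes.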

It’s possible that once an object has been identified in an image that future displays of that image might show a clickable region around the image that links to more information about the object within that image.

The patent also describes how this game might use incentives to encourage people to play, such as having contests that may or may not include prizes.

The images that might be used in this game could include photos, map images, and pictures from video. So it’s possible that Google could use this method of identifying objects within images to improve their image search, video search, and Google Maps.

Why would Google spend so much effort setting up a game to identify objects within images?

The inventor of the Google Image Labeler game, Luis von Ahn, has an interesting presentation that was given as a Google Tech Talk in 2006, titled Human Computation, where he describes the ability of humans to perform some tasks that are relatively easy for people and hard for computers.

Google Maps has a new feature this morning. If you go to Google Maps, and click on the green flask in the top right corner, you’ll see a new window open in the middle of your screen labeled Google Maps Labs. Amongst the features presently in the lab are:

  • Drag ‘n’ Zoom
  • Aerial Imagery
  • Back to Beta
  • Where in the World Game
  • Rotatable Maps
  • What’s Around Here?
  • LatLng Tooltip
  • LatLng Marker
  • Smart Zoom

Kind of interesting that at least one of those is a game. Since the image treasure hunt could potentially be used with images associated with maps, it’s possible that we might see it appear within the Google Maps Labs applications. Since it could also be used to help improve image search and video search by having people identify objects in images and videos, it might spring up over at Google Labs.

Monday, February 8, 2010

SIGGRAPH ASIA 2009 NVIDIA Presentations

SIGGRAPH Asia 2009 was held in Yokohama, Japan. Experts from around the world attended and presented the latest information in graphics technology between December 16th and 19th. NVIDIA, a gold sponsor of SIGGRAPH Asia 2009, presented several key papers on CUDA development as part of the "GPU Computing Master Class". Here are the presentations given during the class:

Title, Speaker

1. Languages, APIs and Development Tools for GPU Computing, Philip Miller

2. Programming for the CUDA Architecture, Tianyun Ni

3. Programming in OpenCL, Timo Stitch

4. CUDA in the VFX pipeline, Wil Braithwait

5. Development Tools, Takayuki Kazama

6. The Art of Performance Optimization, Wil Braithwait

7. Directions in GPU Computing, Toru Baji

SIGGRAPH 2009 NVIDIA Presentations

For 2009, NVIDIA sponsored or presented at 18 sessions, panels, and demos, including 6 special sessions designed to give you up-to-date information on the latest technical innovations from NVIDIA. Below you will find various presentations and videos that we are making publicly available. If you have any feedback about these links, feel free to discuss them on the developer forums at

Wednesday, February 3, 2010

New Version of img(Rummager)


SpCD Descriptor Improvements
Binarization using the Otsu Method
SURF Features

img Retrieval
New Menus. The descriptors are now better organized.
BTDH Bug Fixed
SpCD Bug Fixed
CCD Fusion
    -Using HIS*
        -Download Empirical (Historical) Files From the WEB.
    -Using Z-Score
    -Using Borda Count
    -Using IRP
    -Using Linear Sum
Search Using Multiple Queries and Evaluate the Results of all the queries
    -Help Menu Added
New Slide Show Method

Check the change log from the Help Menu
New Help Files
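The CCD fusion options listed in this change log combine ranked result lists produced by different descriptors. Here is a sketch of two of them on toy result lists: Borda count over rank positions and linear sum over max-normalised scores. The function names and data are illustrative, not img(Rummager)'s actual code:

```python
def borda_fusion(rankings):
    """rankings: list of ranked lists of image ids, best first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, image in enumerate(ranking):
            # The best rank earns n points, the next n-1, and so on.
            scores[image] = scores.get(image, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

def linear_sum_fusion(score_lists):
    """score_lists: list of dicts mapping image id -> similarity score."""
    fused = {}
    for scores in score_lists:
        top = max(scores.values()) or 1.0
        for image, s in scores.items():
            # Normalise each descriptor's scores before summing, so no
            # single descriptor's scale dominates the fused ranking.
            fused[image] = fused.get(image, 0.0) + s / top
    return sorted(fused, key=fused.get, reverse=True)
```

The Z-score option in the list works the same way as the linear sum, except each descriptor's scores are standardised by their mean and standard deviation instead of their maximum.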