Tuesday, February 26, 2013

This Amazing 3-D Desktop Was Born at Microsoft

Article from wired.com

LONG BEACH, California – The history of computer revolutions will show a logical progression from the Mac to the iPad to something like this SpaceTop 3-D desktop, if computer genius Jinha Lee has anything to say about it.

The Massachusetts Institute of Technology grad student earned some notice last year for the ZeroN, a levitating 3-D ball that can record and replay how it is moved around by a user. Now, following an internship at Microsoft Applied Science and some time off from MIT, Lee is unveiling his latest digital 3-D environment, a three-dimensional computer interface that allows a user to “reach inside” a computer screen and grab web pages, documents, and videos like real-world objects. More advanced tasks can be triggered with hand gestures. The system is powered by a transparent LED display and a pair of cameras, one tracking the user’s gestures and the other watching the user’s eyes to assess gaze and adjust the perspective of the projection.

Lee’s new 3-D desktop, which he just showed off at the annual TED conference in Long Beach, California, is still in the early stages. But it lights the way toward the sort of quantum leap that’s all too rare in computer interfaces. It took decades to get from the command-line interface to the graphical user interface and Apple’s Macintosh. It took decades more to get from the Mac to the touch interface of iPhones and iPads. Lee and people like him might just get us to the next revolution sooner.

Others are working along similar lines. Gesture-based control has been incorporated into Microsoft’s Kinect, Samsung’s Smart TV platform, and products from startups like Leap Motion and SoftKinetic (not to mention in cinema fantasyland). Three-dimensional display interfaces, meanwhile, have been brewing at the University of Iowa (home to “Leonar3Do”), in the Kickstarter gaming…

Read more

Augmented Reality SDK for App Development

Develop native Augmented Reality applications for iOS, Android and Windows deployment using the metaio SDK. The metaio SDK includes a powerful 3-D rendering engine in addition to plug-ins for Unity. Create your application once using the metaio SDK and deploy it to all major operating systems and devices via AREL, the Augmented Reality Experience Language. Take advantage of advanced tracking features such as markerless 2-D and 3-D tracking, client-based visual search, and SLAM. There are no hidden fees, costs, or extra plug-ins necessary to get started... The metaio SDK is the best-value AR developer solution on the market!

Wednesday, February 20, 2013

How It Feels [through Glass]

http://www.google.com/glass/start

Monday, February 18, 2013

The Argus® II Retinal Prosthesis System

The Argus II Retinal Prosthesis System (“Argus II”) is the world’s first and only approved device intended to restore some functional vision for people suffering from blindness. Argus II is approved for use in the United States and the European Economic Area.

HUMANITARIAN DEVICE: Authorized by Federal (U.S.) law to provide electrical stimulation of the retina to induce visual perception in blind patients with severe to profound retinitis pigmentosa and bare light or no light perception in both eyes. The effectiveness of this device for this use has not been demonstrated.


http://2-sight.eu/en/product-en

Friday, February 15, 2013

1st CFP - IRF Conference 2013

C A L L F O R P A P E R S

6th IRF Conference: October 7-9, 2013, Limassol, Cyprus. Organised by the Cyprus University of Technology and the MUMIA COST Action

Conference: http://cyprusconferences.org/irf2013/

T H E I R F C O N F E R E N C E

The 6th Information Retrieval Facility Conference 2013 once again provides a multidisciplinary scientific forum for researchers in Information Retrieval and related areas. The conference aims to bring young researchers into contact with industry at an early stage, emphasizing the applicability of IR solutions to real industry cases and their respective challenges.

The 6th IRF Conference addresses 3 complementary research areas:

* Information Retrieval

* Machine Translation for search solutions

* Interactive Information Access

The 6th IRF Conference targets researchers who are interested in:

* Learning about complementary technologies for the development of next generation search solutions

* Applying their results to real business needs

* Joining the international research network of the MUMIA COST Action

* Discussing results obtained by using the IRF or other public data resources

All papers will undergo a review process with each paper being reviewed by at least three members of the programme committee.

The proceedings of all previous IRF Conferences have been published by Springer in the LNCS series, and we have contacted Springer regarding the 2013 proceedings.

All paper submissions must be written in English following the LNCS author guidelines. We welcome two different types of submissions (Science, Industry).

- Science Papers

For researchers and students in the fields of Information Retrieval, Machine Translation for Search Solutions and Interactive Information Access.

Papers submitted must refer to novel, unpublished research. Full papers must not exceed 12 pages including references and figures.

- Industry Papers

For developers and implementers of novel technology in the fields of Information Retrieval, Machine Translation for Search Solutions and Interactive Information Access. For business and industry representatives using IR technologies to search and analyse large quantities of information. Papers of this type should not exceed 4 pages including references and figures.

- Competitive Demonstrations

Research groups and industry members are invited to demonstrate their information access tools. In previous years, the PatOlympics have proven to be an exciting driver of interaction between users and creators of patent search systems. This year, the competitive demo is open to general-purpose IR tools, and interested participants are invited to submit a 2-page description of their system. Details to follow.

O R G A N I S A T I O N

General Chair:

John Tait (JohnTait.net Ltd)

Programme Chairs:

Evangelos Kanoulas (Google)

Mihai Lupu (Vienna University of Technology)

T O P I C S T O B E A D D R E S S E D

We seek papers on novel, unpublished research in one or more topics mentioned below:

* IR Models

* IR Evaluation

* User Modeling, Personalization and Interactive IR

* Machine Learning, Categorization, and Clustering for IR

* Cross-Language IR

* Visualization of Search Results

* Ontologies

* Reasoning

* Semantic Annotation

* Information Extraction and Summarization

* Named Entity Recognition

* Machine Translation

* Question Answering

* Patent Analytics

* Scientific Paper Search

* Biomedical Information Search

* Enterprise Search

* Web Search

* Human Factors in IR

Multi-disciplinary papers combining topics from multiple areas are particularly welcome.

I M P O R T A N T D A T E S

Deadline for paper submission: May 31, 2013

Acceptance decision and reviews to authors: July 7, 2013

Deadline for submission of final paper: July 22, 2013

Speaker registration: July 22, 2013

Early registration deadline: July 22, 2013

IRF Conference: Oct 7-9, 2013

For more information, please consult our website http://cyprusconferences.org/irf2013/ or, contact the organising chairs via email: irfc@cyprusconferences.org

IRFC 2013 is organised by the Cyprus University of Technology and the MUMIA COST Action.

Visual Information Retrieval using Java and LIRE

Synthesis Lectures on Information Concepts, Retrieval, and Services

Mathias Lux, Alpen Adria Universität Klagenfurt, Austria

Oge Marques, Florida Atlantic University

Abstract

Visual information retrieval (VIR) is an active and vibrant research area that aims to provide means for organizing, indexing, annotating, and retrieving visual information (images and videos) from large, unstructured repositories.

The goal of VIR is to retrieve matches ranked by their relevance to a given query, which is often expressed as an example image and/or a series of keywords. During its early years (1995-2000), the research efforts were dominated by content-based approaches contributed primarily by the image and video processing community. During the past decade, it was widely recognized that the challenges imposed by the lack of coincidence between an image's visual contents and its semantic interpretation, known as the semantic gap, required a clever use of textual metadata (in addition to information extracted from the image's pixel contents) to make image and video retrieval solutions efficient and effective. The need to bridge (or at least narrow) the semantic gap has been one of the driving forces behind current VIR research. Additionally, other related research problems and market opportunities have started to emerge, offering a broad range of exciting problems for computer scientists and engineers to work on.

In this introductory book, we focus on a subset of VIR problems where the media consists of images, and the indexing and retrieval methods are based on the pixel contents of those images -- an approach known as content-based image retrieval (CBIR). We present an implementation-oriented overview of CBIR concepts, techniques, algorithms, and figures of merit. Most chapters are supported by examples written in Java, using Lucene (an open-source Java-based indexing and search implementation) and LIRE (Lucene Image REtrieval), an open-source Java-based library for CBIR.
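The core CBIR loop the abstract describes (extract a global feature from an image's pixels, then rank candidates by distance to the query's feature) can be sketched in plain Java without LIRE itself. The sketch below is only an illustration of the idea, not LIRE's or Lucene's API: the class name `HistogramCbir`, the 8-bin grayscale histogram, and the L1 distance are all illustrative assumptions; LIRE ships far richer descriptors (e.g. CEDD, FCTH) and handles indexing through Lucene.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class HistogramCbir {
    // Global feature: a normalized 8-bin grayscale histogram of the image.
    // (Real CBIR descriptors are richer; this is the simplest possible stand-in.)
    static double[] feature(BufferedImage img) {
        double[] h = new double[8];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int gray = (r + g + b) / 3;   // 0..255
                h[gray / 32]++;               // 8 bins of width 32
            }
        }
        double n = (double) img.getWidth() * img.getHeight();
        for (int i = 0; i < h.length; i++) h[i] /= n;  // normalize to sum 1
        return h;
    }

    // L1 (Manhattan) distance between two feature vectors; 0 means identical.
    static double l1(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }

    // Helper: a small solid-color test image standing in for a repository item.
    static BufferedImage solid(int rgb) {
        BufferedImage img = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(new Color(rgb));
        g.fillRect(0, 0, 16, 16);
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        double[] query = feature(solid(0x202020));      // dark query image
        double near   = l1(query, feature(solid(0x303030))); // dark candidate
        double far    = l1(query, feature(solid(0xE0E0E0))); // bright candidate
        // Ranking by ascending distance puts the dark candidate first.
        System.out.println(near < far); // prints "true"
    }
}
```

In LIRE proper, the same query-by-example flow is expressed by building Lucene documents from image features and asking an image searcher for the top-k nearest matches; the histogram-plus-distance structure above is the conceptual core.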

Table of Contents: Introduction / Information Retrieval: Selected Concepts and Techniques / Visual Features / Indexing Visual Features / LIRE: An Extensible Java CBIR Library / Concluding Remarks

http://www.morganclaypool.com/doi/abs/10.2200/S00468ED1V01Y201301ICR025