Pages

Thursday, November 29, 2012

How Google Plans to Find the UnGoogleable

Article from http://www.technologyreview.com – Author: Tom Simonite

The company wants to improve its mobile search services by automatically delivering information you wouldn’t think to search for online.

For three days last month, at eight randomly chosen times a day, my phone buzzed and Google asked me: “What did you want to know recently?” The answers I provided were part of an experiment involving me and about 150 other people. It was designed to help the world’s biggest search company understand how it can deliver information to users that they’d never have thought to search for online.
Billions of Google searches are made every day—for all kinds of things—but we still look elsewhere for certain types of information, and the company wants to know what those things are.

“Maybe [these users are] asking a friend, or they have to look up a manual to put together their Ikea furniture,” says Jon Wiley, lead user experience designer for Google search. Wiley helped lead the research exercise, known as the Daily Information Needs Study.

If Google is to achieve its stated mission to “organize the world’s information and make it universally accessible,” says Wiley, it must find out about those hidden needs and learn how to serve them. And he says experience sampling—bugging people to share what they want to know right now, whether they took action on it or not—is the best way to do it. “Doing that on a mobile device is a relatively new technology, and it’s getting us better information that we really haven’t had in the past,” he says.
Wiley isn’t ready to share results from the study just yet, but as a participant I found plenty of examples of relatively small pieces of information that I’d never turn to Google for: how long the line currently is at a local grocery store, for example. Some offline activities, such as reading a novel or cooking a meal, generated questions that I hadn’t turned to Google to answer, mainly because of the inconvenience of having to grab a computer or phone in order to sift through results.

Wiley’s research may take Google in new directions. “One of the patterns that stands out is the multitude of devices that people have in their lives,” he says. Just as mobile devices made it possible for Google to discover unmet needs for information through the study, they could also be used to meet those needs in the future.

Contextual information provided by mobile devices—via GPS chips and other sensors—can provide clues about a person and his situation, allowing Google to guess what that person wants. “We’ve often said the perfect search engine will provide you with exactly what you need to know at exactly the right moment, potentially without you having to ask for it,” says Wiley.

Google is already taking the first steps in this direction. Google Now offers unsolicited directions, weather forecasts, flight updates, and other information when it thinks you need them (see “Google’s Answer to Siri Thinks Ahead”). Google Glass—eyeglass frames with an integrated display (see “You Will Want Google’s Goggles”)—could also provide an opportunity to preëmptively answer questions or provide useful information. “It’s the pinnacle of this hands-free experience, an entirely new class of device,” Wiley says of Google Glass, and he expects his research to help shape this experience.

Google may be heading toward a new kind of search, one that is very different from the service it started with, says Jonas Michel, a researcher working on similar ideas at the University of Texas at Austin. “In the future you might want to search very new information from the physical environment,” Michel says. “Your information needs are very localized to that place and event and moment.”

Finding the data needed to answer future queries will involve more than just crawling the Web. Google Now already combines location data with real-time feeds, for example, from U.S. public transit authorities, allowing a user to walk up to a bus stop and pull out his phone to find arrival times already provided.

Michel is one of several researchers working on an alternative solution—a search engine for mobile devices dubbed Gander, which communicates directly with local sensors. A pilot being installed on the University of Texas campus will, starting early next year, allow students to find out wait times at different cafés and restaurants, or find the nearest person working on the same assignment.

Back at Google, Wiley is more focused on finding further evidence that many informational needs still go unGoogled. The work may ultimately provide the company with a deeper understanding of the value of different kinds of data. “We’re going to continue doing this,” he says. “Seeing how things change over time gives us a lot of information about what’s important.”

Article from http://www.technologyreview.com

Wednesday, November 28, 2012

Autonomous Flying Robots: Davide Scaramuzza at TEDxZurich

This talk is about autonomous, vision-controlled micro flying robots. Micro flying robots are vehicles that are less than 1 meter in size and weigh less than 1 kg. Potential applications of these robots are search and rescue, inspection, environment monitoring, etc. Additionally, they can complement human intervention in environments that people cannot access (such as searching for survivors in a damaged building after an earthquake), thus reducing the risk to human rescuers. In all these applications, current flying robots are still tele-operated by expert professionals.


Indeed, to be truly autonomous, current flying robots rely on GPS or motion-capture systems. Unfortunately, GPS does not work indoors, while motion-capture systems require modifying the environment in which the robots are supposed to operate beforehand, which is not possible in environments that are yet to be explored. Therefore, my idea is to use just cameras onboard the robot. Cameras do for a robot what eyes do for a human: they allow it to perceive the environment and safely navigate within it without bumping into obstacles. Additionally, they allow it to build a map of the environment, which can be used to plan the intervention of human rescuers. This talk presents our progress toward this goal, open challenges, and future applications.
Davide Scaramuzza (born 1980 in Italy) is Professor of Robotics at the Artificial Intelligence Lab of the University of Zurich, where he leads the Robotics and Perception Group, and adjunct faculty of the Master in Robotics, Systems and Control at ETH Zurich. He received his PhD in Robotics and Computer Vision from ETH Zurich in 2008. He was a postdoc at both ETH Zurich and the University of Pennsylvania, where he worked on autonomous navigation of micro aerial vehicles. From 2009 to 2012, he led the European project "sFly" (www.sfly.org), which focused on autonomous navigation of micro helicopters in GPS-denied environments using vision as the main sensor modality. For his research, he was awarded the Robotdalen Scientific Award (2009) and the European Young Researcher Award (2012), sponsored by the IEEE and the European Commission. He is coauthor of the 2nd edition of the book "Introduction to Autonomous Mobile Robots" (MIT Press). He is also the author of the first open-source Omnidirectional Camera Calibration Toolbox for MATLAB (a popular numerical computing environment), which, in addition to thousands of downloads worldwide, is currently in use at NASA, Philips, Bosch, and Daimler. His research interests are field and service robotics, intelligent vehicles, and computer vision. Specifically, he investigates the use of cameras as the main sensors for robot navigation, mapping, exploration, reasoning, and interpretation. His interests encompass both ground and flying vehicles.
In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)

Saturday, November 24, 2012

Microsoft’s Google Glass rival tech tips AR for live events

[Article from http://www.slashgear.com/microsofts-google-glass-rival-tech-tips-ar-for-live-events-22258053/]

Microsoft is working on its own Google Glass alternative, a wearable computer which can overlay real-time data onto a user’s view of the world around them. The research, outed in a patent application published today for “Event Augmentation with Real-Time Information” (No. 20120293548), centers on a special set of digital eyewear with one or both lenses capable of injecting computer graphics and text into the user’s line of sight, such as to label players in a sports game, flag up interesting statistics, or even identify objects and offer contextually-relevant information about them.

The digital glasses would track the direction in which the wearer was looking and adjust their on-screen graphics accordingly; Microsoft also envisages a system whereby eye-tracking is used to select areas of focus within the scene. Information shown could follow a preprogrammed script – Microsoft uses the example of an opera, where background detail about the various scenes and arias could be shown in order – or appear on an ad-hoc basis, according to contextual cues from the surrounding environment.

Actually opting into that data could be based on social network check-ins, Microsoft suggests, or on the headset simply using GPS and other positioning sensors to track the wearer’s location. The hardware itself could be entirely self-contained within the glasses, as per what we’ve seen of Google’s Project Glass, or the display section could be split off from a separate “processing unit” carried in a pocket or worn on the wrist, with either a wired or wireless connection between the two.

In Microsoft’s cutaway diagram – a top-down perspective of one half of the AR eyewear – there’s an integrated microphone (910) and a front-facing camera for video and stills (913), while video is shown to the wearer via a light guide (912). That (along with a number of lenses) works with standard eyeglass lenses (916 and 918), whether prescription or otherwise, while the opacity filter (914) helps improve light guide contrast by blocking out some of the ambient light. The picture itself is projected from a microdisplay (920) through a collimating lens (922). There are also various sensors and outputs, potentially including speakers (930), inertial sensors (932) and a temperature monitor (938).

Microsoft is keeping its options open when it comes to display types: as well as generic liquid crystal on silicon (LCOS) and LCD, there’s the suggestion that the wearable could use Qualcomm’s mirasol or a Microvision PicoP laser projector. An eye-tracker (934) could be used to spot pupil movement, using IR projection, an internally facing camera, or another method.

Whereas Google has focused on the idea of Glass as a “wearable smartphone” that saves users from pulling out their phone to check social networks, get navigation directions, and shoot photos and video, Microsoft’s interpretation of augmented reality takes a slightly different approach in building around live events. One possibility we could envisage is that the glasses might be provided by an entertainment venue, such as a sports ground or theater, just as movie theaters loan 3D glasses for the duration of a film.

That would reduce the need for users to actually buy the (likely expensive) glasses themselves, and – since they’d only be required to last the duration of the show or game – the battery demands would be considerably less than a full day. Of course, a patent application alone doesn’t mean Microsoft is intending a commercial release, but given the company’s apparently increasing focus on entertainment (such as the rumored Xbox set-top box) it doesn’t seem too great a stretch.

Suggested paper: “Conjunctive ranking function using geographic distance and image distance for geotagged image retrieval”

Nowadays, an enormous number of photographic images are uploaded on the Internet by casual users. In this study, we consider the concept of embedding geographical identification of locations as geotags in images. We attempt to retrieve images having certain similarities (or identical objects) from a geotagged image dataset. We then define the images having identical objects as orthologous images. Using content-based image retrieval (CBIR), we propose a ranking function--orthologous identity function (OIF)--to estimate the degree to which two images contain similarities in the form of identical objects; OIF is a similarity rating function that uses the geographic distance and image distance of photographs. Further, we evaluate the OIF as a ranking function by calculating the mean reciprocal rank (MRR) using our experimental dataset. The results reveal that the OIF can improve the efficiency of retrieving orthologous images as compared to using only geographic distance or image distance.

Published in:

GeoMM '12 Proceedings of the ACM multimedia 2012 workshop on Geotagging and its applications in multimedia

http://dl.acm.org/citation.cfm?id=2390795
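
As a rough sketch of the general idea behind such a conjunctive ranking (not the paper's actual OIF definition), the two distances can simply be normalized and combined with a weight; in the MATLAB sketch below, the Euclidean feature distance, the haversine geographic distance, the weight alpha, and all variable names are illustrative assumptions:

% Minimal sketch of a conjunctive geographic/image ranking (illustrative only).
% queryFeat   : 1-by-D image feature vector for the query photo
% queryLatLon : [lat lon] of the query photo, in degrees
% dbFeats     : N-by-D feature matrix for the database images
% dbLatLons   : N-by-2 [lat lon] matrix for the database images
% alpha       : weight in [0,1] trading off geographic vs. image distance
function ranking = rankGeotaggedImages(queryFeat, queryLatLon, dbFeats, dbLatLons, alpha)
    % Image distance: Euclidean distance between feature vectors
    dImg = sqrt(sum(bsxfun(@minus, dbFeats, queryFeat).^2, 2));

    % Geographic distance: great-circle (haversine) distance in kilometers
    R = 6371;                                  % mean Earth radius, km
    toRad = pi/180;
    lat1 = queryLatLon(1)*toRad;  lon1 = queryLatLon(2)*toRad;
    lat2 = dbLatLons(:,1)*toRad;  lon2 = dbLatLons(:,2)*toRad;
    a = sin((lat2-lat1)/2).^2 + cos(lat1).*cos(lat2).*sin((lon2-lon1)/2).^2;
    dGeo = 2*R*asin(sqrt(a));

    % Normalize both distances to [0,1] and combine them (lower score = better match)
    score = alpha*(dGeo/max(dGeo)) + (1-alpha)*(dImg/max(dImg));
    [~, ranking] = sort(score, 'ascend');      % database indices, best match first
end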

Playing Catch and Juggling with a Humanoid Robot

Entertainment robots in theme park environments typically do not allow for physical interaction and contact with guests. However, catching and throwing back objects is one form of physical engagement that still maintains a safe distance between the robot and participants. Using a theme park type animatronic humanoid robot, we developed a test bed for a throwing and catching game scenario. We use an external camera system (ASUS Xtion PRO LIVE) to locate balls and a Kalman filter to predict ball destination and timing. The robot's hand and joint-space are calibrated to the vision coordinate system using a least-squares technique, such that the hand can be positioned to the predicted location. Successful catches are thrown back two and a half meters forward to the participant, and missed catches are detected to trigger suitable animations that indicate failure. Human to robot partner juggling (three ball cascade pattern, one hand for each partner) is also achieved by speeding up the catching/throwing cycle. We tested the throwing/catching system on six participants (one child and five adults, including one elderly), and the juggling system on three skilled jugglers.
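
The abstract mentions that the robot's hand and joint-space are calibrated to the vision coordinate system with a least-squares technique. As a hedged illustration of what such a step can look like (not the authors' implementation), an affine map from camera coordinates to robot coordinates can be fit in MATLAB with the backslash operator; the paired points and variable names below are placeholder assumptions:

% Hypothetical calibration data: N corresponding 3-D points observed by the
% external camera (Pcam) and measured in the robot's frame (Probot), N-by-3 each.
N = 20;
Pcam   = rand(N,3);                 % placeholder camera-frame points
Probot = rand(N,3);                 % placeholder robot-frame points

% Fit an affine transform Probot ~ [Pcam 1]*M in the least-squares sense
A = [Pcam, ones(N,1)];              % homogeneous camera coordinates
M = A \ Probot;                     % 4-by-3 transform minimizing ||A*M - Probot||

% Map a newly predicted ball position from camera to robot coordinates
ballCam   = [0.2, 0.4, 1.1];        % e.g., a Kalman-filter prediction, camera frame
ballRobot = [ballCam, 1] * M;       % target position for the hand, robot frame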

Wednesday, November 21, 2012

ACM/IEEE Joint Conference on Digital Libraries 2013

The ACM/IEEE Joint Conference on Digital Libraries (JCDL 2013) is a major international forum focusing on digital libraries and associated technical, practical, organizational, and social issues. JCDL encompasses the many meanings of the term digital libraries, including (but not limited to) new forms of information institutions and organizations; operational information systems with all manner of digital content; new means of selecting, collecting, organizing, distributing, and accessing digital content; theoretical models of information media, including document genres and electronic publishing; and theory and practice of use of managed content in science and education.
JCDL 2013 will be held in Indianapolis, Indiana (USA), 23-25 July 2013. The program is organized by an international committee of scholars and leaders in the digital libraries field, and attendance is expected to include several hundred researchers, practitioners, managers, and students.
IMPORTANT DATES
* Full paper submissions due: 28 January 2013
* Short Papers, Panels, Posters, Demonstrations, Workshops, Tutorials due: 4 February 2013
* Doctoral Consortium submissions due: 15 April 2013
* Notification of acceptance for Workshops and Tutorials: 15 March 2013
* Notification for Papers, Panels, Posters, Demonstrations, Workshops, Tutorials: 29 March 2013
* Notification of acceptance for Doctoral Consortium: 6 May 2013
* Conference: 22-26 July 2013
** Tutorials and Doctoral Consortium: 22 July 2013
** Main conference: 23-25 July 2013
** Workshops: 25-26 July 2013
CONFERENCE FOCUS
The intended community for this conference includes those interested in all aspects of digital libraries such as infrastructure; institutions; metadata; content; services; digital preservation; system design; scientific data management; workflows; implementation; interface design; human-computer interaction; performance evaluation; usability evaluation; collection development; intellectual property; privacy; electronic publishing; document genres; multimedia; social, institutional, and policy issues; user communities; and associated theoretical topics. JCDL welcomes submissions in these areas.
Submissions that resonate with the JCDL 2013 theme of Digital Libraries at the Crossroads are particularly welcome; however, reviews, though they will consider relevance of proposals to digital libraries generally, will not give extra weight to theme-related proposals over proposals that speak to other aspects of digital libraries. The conference sessions, workshops and tutorials will cover all aspects of digital libraries.
Participation is sought from all parts of the world and from the full range of established and emerging disciplines and professions including computer science, information science, web science, data science, librarianship, data management, archival science and practice, museum studies and practice, information technology, medicine, social sciences, education and humanities. Representatives from academe, government, industry, and others are invited to participate.
JCDL 2013 invites submissions of papers and proposals for posters, demonstrations, tutorials, and workshops that will make the conference an exciting and creative event to attend. As always, the conference welcomes contributions from all the fields that intersect to enable digital libraries. Topics include, but are not limited to:
* Collaborative and participatory information environments
* Cyberinfrastructure architectures, applications, and deployments
* Data mining/extraction of structure from networked information
* Digital library and Web Science curriculum development
* Distributed information systems
* Extracting semantics, entities, and patterns from large collections
* Evaluation of online information environments
* Impact and evaluation of digital libraries and information in education
* Information and knowledge systems
* Information policy and copyright law
* Information visualization
* Interfaces to information for novices and experts
* Linked data and its applications
* Personal digital information management
* Retrieval and browsing
* Scientific data curation, citation and scholarly publication
* Social media, architecture, and applications
* Social networks, virtual organizations and networked information
* Social-technical perspectives of digital information
* Studies of human factors in networked information
* Theoretical models of information interaction and organization
* User behavior and modeling
* Visualization of large-scale information environments
* Web archiving and preservation
PAPER SUBMISSIONS
Paper authors may choose between two formats: Full papers and short papers. Both formats will be included in the proceedings and will be presented at the conference. Full papers typically will be presented in 20 minutes with 10 minutes for questions and discussion. Short papers typically will be presented in 10 minutes with 5 minutes for questions and discussion. Both formats will be rigorously peer reviewed. Complete papers are required -- abstracts and incomplete papers will not be reviewed.
Full papers report on mature work, or efforts that have reached an important milestone. Short papers will highlight efforts that might be in an early stage, but are important for the community to be made aware of. Short papers can also present theories or systems that can be described concisely in the limited space.
Full papers must not exceed 10 pages. Short papers are limited to at most 4 pages. All papers must be original contributions. The material must therefore not have been previously published or be under review for publication elsewhere. All contributions must be written in English and must follow the ACM formatting guidelines (http://www.acm.org/sigs/pubs/proceed/template.html; templates available for authoring in LaTeX2e and Microsoft Word). Papers are to be submitted via the conference's EasyChair submission page: http://www.easychair.org/conferences/?conf=jcdl13.
All accepted papers will be published by ACM as conference proceedings and electronic versions will be included in both the ACM and IEEE digital libraries.
POSTER AND DEMONSTRATION SUBMISSIONS
Posters permit presentation of late-breaking results in an informal, interactive manner. Poster proposals should consist of a title, extended abstract, and contact information for the authors, and should not exceed 2 pages. Proposals must follow the conference's formatting guidelines and are to be submitted via the conference's EasyChair submission page: http://www.easychair.org/conferences/?conf=jcdl13. Accepted posters will be displayed at the conference and may include additional materials, space permitting. Abstracts of posters will appear in the proceedings.
Demonstrations showcase innovative digital libraries technology and applications, allowing you to share your work directly with your colleagues in a high-visibility setting. Demonstration proposals should consist of a title, extended abstract, and contact information for the authors and should not exceed 2 pages. All contributions must be written in English, must follow the ACM formatting guidelines (http://www.acm.org/sigs/pubs/proceed/template.html; templates available for authoring in LaTeX2e and Microsoft Word), and are to be submitted via the conference's EasyChair submission page:
http://www.easychair.org/conferences/?conf=jcdl13.  Abstracts of demonstrations will appear in the proceedings.
PANELS AND INVITED BRIEFINGS
Panels and invited briefings will complement the other portions of the program with lively discussions of controversial and cutting-edge issues that are not addressed by other program elements. Invited briefing panels will be developed by the Panel co-chairs David Bainbridge (davidb@cs.waikato.ac.nz) and George Buchanan (George.Buchanan.1@city.ac.uk) and will be designed to address a topic of particular interest to those building digital libraries -- they can be thought of as being mini-tutorials. Panel ideas may be stimulated or developed in part from synergistic paper proposals (with consensus of involved paper proposal submitters).
This year stand-alone formal proposals for panels also will be accepted (http://www.easychair.org/conferences/?conf=jcdl13); however, please keep in mind that panel sessions are few and so relatively few panel proposals will be accepted. Panel proposals should include a panel title, identify all panel participants (maximum 5), include a short abstract as well as an uploaded extended abstract in PDF (not to exceed 2 pages) describing the panel topic, how the panel will be organized, the unique perspective that each speaker brings to the topic, and an explicit confirmation that each speaker has indicated a willingness to participate in the session if the proposal is accepted. For more information about potential panel proposals, please contact the Panel co-chairs named above.
TUTORIAL SUBMISSIONS
Tutorials provide an opportunity to offer in-depth education on a topic or solution relevant to research or practice in digital libraries. They should address a single topic in detail over either a half-day or a full day. They are not intended to be venues for commercial product training.
Experts who are interested in engaging members of the community who may not be familiar with a relevant set of technologies or concepts should plan their tutorials to cover the topic or solution to a level that attendees will have sufficient knowledge to follow and further pursue the material beyond the tutorial. Leaders of tutorial sessions will be expected to take an active role in publicizing and recruiting attendees for their sessions.
Tutorial proposals should include: a tutorial title; an abstract (1-2 paragraphs, to be used in conference programs); a description or topical outline of tutorial (1-2 paragraphs, to be used for evaluation); duration (half- or full-day); expected number of participants; target audience, including level of experience (introductory, intermediate, advanced); learning objectives; a brief biographical sketch of the presenter(s); and contact information for the presenter(s).
Tutorial proposals are to be submitted in electronic form via the conference's EasyChair submission page: http://www.easychair.org/conferences/?conf=jcdl13.
WORKSHOP SUBMISSIONS
Workshops are intended to draw together communities of interest -- both those in established communities and those interested in discussion and exploration of a new or emerging issue. They can range in format from formal, perhaps centering on presentation of refereed papers, to informal, perhaps centering on extended round-table discussions among the selected participants.
Submissions should include: a workshop title and short description; a statement of objectives for the workshop; a topical outline for the workshop; identification of the expected audience and expected number of attendees; a description of the planned format and duration (half-day, full-day, or one and a half days); information about how the attendees will be identified, notified of the workshop, and, if necessary, selected from among applicants; as well as contact and biographical information about the organizers. Finally, if a workshop or closely related workshop has been held previously, information about the earlier sessions should be provided -- dates, locations, outcomes, attendance, etc.
Workshop proposals are to be submitted in electronic form via the conference's EasyChair submission page: http://www.easychair.org/conferences/?conf=jcdl13.
DOCTORAL SUBMISSIONS
The Doctoral Consortium is a workshop for Ph.D. students from all over the world who are in the early phases of their dissertation work. Ideally, students should have written or be close to completing a thesis proposal, and be far enough away from finishing the thesis that they can make good use of feedback received during the consortium.
Students interested in participating in the Doctoral Consortium should submit an extended abstract describing their digital library research. Submissions relating to any aspect of digital library research, development, and evaluation are welcomed, including: technical advances, usage and impact studies, policy analyses, social and institutional implications, theoretical contributions, interaction and design advances, and innovative applications in the sciences, humanities, and education. See http://jcdl2013.org/doctoral-consortium for a more extensive description of the goals of the Doctoral Consortium and for complete proposal requirements.
Doctoral consortium proposals are to be submitted via the conference's EasyChair submission page: http://www.easychair.org/conferences/?conf=jcdl13
IMPORTANT NOTES FOR ALL SUBMISSIONS
All contributions must be submitted in electronic form via the JCDL 2013 submission Web page, following the ACM format guidelines (http://www.acm.org/sigs/pubs/proceed/template.html) and using the ACM template. Please submit all papers in PDF format.

Don’t Photoshop it…MATLAB it!

Article from http://blogs.mathworks.com

I'd like to welcome back guest blogger Brett Shoelson for the continuation of his series of posts on implementing image special effects in MATLAB. Brett, a contributor for the File Exchange Pick of the Week blog, has been doing image processing with MATLAB for almost 20 years now.

Contents

imadjust as an Image Enhancement Tool

In my previous post in this guest series, I introduced my image adjustment GUI, and used it to enhance colors in modified versions of images of a mandrill and of two zebras. For both of those images, I operated on all colorplanes uniformly; i.e., whatever I did to the red plane, I also did to green and blue. The calling syntax for imadjust is as follows:

imgOut = imadjust(imgIn,[low_in; high_in],[low_out; high_out],gamma);

The default inputs are:

imgOut = imadjust(imgIn,[0; 1],[0; 1],1);

Different input parameters will produce different effects. In fact, imadjust should often be the starting point for simply correcting illumination issues with an image:

URL = 'http://blogs.mathworks.com/pick/files/DrinkingZebra1.jpg';
img = imrotate(imread(URL),-90);
enhanced = imadjust(img,[0.00; 0.35],[0.00; 1.00], 1.00);
subplot(1,2,1);imshow(img);title('Original');
subplot(1,2,2);imshow(enhanced);title('|imadjust|-Enhanced');


You may recall that when I modified the image of two zebras in my previous post, I not only increased low_in, but I also reversed (and tweaked) the values for low_out and high_out:

imgEnhanced = imadjust(imgEnhanced,[0.30; 0.85],[0.90; 0.00], 0.90);

In reversing those input values, I effectively reversed the image. In fact, for a grayscale image, calling

imgOut = imadjust(imgIn,[0; 1],[1; 0],1); % Note the reversal of low_out and high_out

is equivalent to calling imgOut = imcomplement(imgIn):

img = imread('cameraman.tif');
img1 = imadjust(img,[0.00; 1.00],[1.00; 0.00], 1.00);
img2 = imcomplement(img);
assert(isequal(img1,img2)) % No error is thrown!
figure;subplot(1,2,1);imshow(img);xlabel('Original image courtesy MIT');
subplot(1,2,2);imshow(img1);


Now recognize that ImadjustGUI calls imadjust behind the scenes, using the standard syntax. If you read the documentation for imadjust carefully, you will learn that the parameter inputs low_in, high_in, low_out, high_out, and gamma need not be scalars. In fact, if those parameters are specified appropriately as 1-by-3 vectors, then imadjust operates separately on the red, green, and blue colorplanes:

newmap = imadjust(map,[low_in; high_in],[low_out; high_out],gamma)

% ...transforms the colormap associated with an indexed image.
% If low_in, high_in, low_out, high_out, and gamma are scalars, then the
% same mapping applies to red, green, and blue components.
%
% Unique mappings for each color component are possible when low_in and
% high_in are both 1-by-3 vectors, low_out and high_out are both 1-by-3 vectors,
% or gamma is a 1-by-3 vector.

That works for adjusting colormaps; it also works for adjusting images. As a result, you can readily reverse individual colorplanes of an input RGB image, and in doing so, create some cool effects!
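
For example, here is a minimal illustration (using a demo image that ships with the Image Processing Toolbox and arbitrary parameter values) that reverses only the red plane while leaving green and blue untouched:

img = imread('peppers.png');                                  % any RGB image will do
imgRedReversed = imadjust(img,[0 0 0; 1 1 1],[1 0 0; 0 1 1]); % reverse red plane only
figure;
subplot(1,2,1); imshow(img);            title('Original');
subplot(1,2,2); imshow(imgRedReversed); title('Red plane reversed');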

Andy Warhol Meets an Elephant

Andy Warhol famously created iconic images of Marilyn Monroe and other celebrities, casting them in startling, unexpected colors, and sometimes tiling them to create memorable effects. We can easily produce similar effects by reversing and saturating individual colorplanes of RGB images. (I wrote ImadjustGUI to facilitate, interactively, those plane-by-plane intensity adjustments.)


Reading and Pre-Processing the Elephant

First, of course, we read and display the elephant:

URL = 'http://blogs.mathworks.com/pick/files/ElephantProfile.jpg';
img = imread(URL);

He's a wrinkly old fellow (below left). I'd like to bring out those wrinkles by enhancing contrast in the image. There are a few ways to do that, but I learned about my favorite way by reading through the "Gray-Scale Morphology" section of DIPUM, 2nd Ed. Specifically, the authors of this (most excellent) book indicated (on page 529) that one could combine top-hat and bottom-hat filters to enhance contrast. (I built the appropriate combination of those filters behind the "Contrast Enhancement" button of MorphTool.) So, using MorphTool-generated code:

SE = strel('Disk',18);
imgEnhanced = imsubtract(imadd(img,imtophat(img,SE)),imbothat(img,SE));


Now, operating with imadjust plane by plane, reversing the red and blue planes, and modifying the gamma mapping, I can easily find my way to several interesting effects. For instance:

imgEnhanced1 = imadjust(imgEnhanced,[0.00 0.00 0.00; 1.00 0.38 0.40],[1.00 0.00 0.70; 0.20 1.00 0.40], [4.90 4.00 1.70]);
imgEnhanced2 = imadjust(imgEnhanced,[0.13 0.00 0.30; 0.75 1.00 1.00],[0.00 1.00 0.50; 1.00 0.00 0.27], [5.90 0.80 4.10]);


So, two more of those interesting effects, and then we can compose the four-elephants image above:

imgEnhanced3 = imadjust(img,[0.20 0.00 0.09; 0.83 1.00 0.52],[0.00 0.00 1.00; 1.00 1.00 0.00], [1.10 2.70 1.00]);
imgEnhanced4 = imadjust(img,[0.20 0.00 0.00; 0.70 1.00 1.00],[1.00 0.90 0.00; 0.00 0.90 1.00], [1.30 1.00 1.00]);

I also wanted to flip two of those enhanced images. fliplr makes it easy to flip a 2-dimensional matrix, but it doesn't work on RGB images. So I flipped them plane by plane and concatenated (with cat) the flipped planes along the third (z-) dimension into new RGB images:

r = fliplr(imgEnhanced2(:,:,1));
g = fliplr(imgEnhanced2(:,:,2));
b = fliplr(imgEnhanced2(:,:,3));
imgEnhanced2 = cat(3,r,g,b);

CompositeImage = [imgEnhanced1 imgEnhanced2; imgEnhanced3 imgEnhanced4]; % (Images 2 and 4 are flipped plane-by-plane.)
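
(An aside on that plane-by-plane approach: flipdim operates along any dimension of an N-D array, so an RGB image can also be flipped left-right in a single call. A quick illustration on imgEnhanced1, purely as an alternative and not part of the composite above:)

imgFlipped = flipdim(imgEnhanced1, 2); % flips the columns of all three color planes at once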

Next Up: Put Me In the Zoo!


All images except "cameraman" copyright Brett Shoelson; used with permission.

Get the MATLAB code

Article from http://blogs.mathworks.com

Call for Papers: WIAMIS 2013: The 14th International Workshop on Image and Audio Analysis for Multimedia Interactive Services

Topics of interest include, but are not limited to:

– Multimedia content analysis and understanding
– Content-based browsing, indexing and retrieval of images, video and audio
– Advanced descriptors and similarity metrics for multimedia
– Audio and music analysis, and machine listening
– Audio-driven multimedia content analysis
– 2D/3D feature extraction
– Motion analysis and tracking
– Multi-modal analysis for event recognition
– Human activity/action/gesture recognition
– Video/audio-based human behavior analysis
– Emotion-based content classification and organization
– Segmentation and reconstruction of objects in 2D/3D image sequences
– 3D data processing and visualization
– Content summarization and personalization strategies
– Semantic web and social networks
– Advanced interfaces for content analysis and relevance feedback
– Content-based copy detection
– Analysis and tools for content adaptation
– Analysis for coding efficiency and increased error resilience
– Multimedia analysis hardware and middleware
– End-to-end quality of service support
– Multimedia analysis for new and emerging applications
– Advanced multimedia applications

Important dates:

- Proposal for Special Sessions: 4th January 2013
- Notification of Special Sessions Acceptance: 11th January 2013
- Paper Submission: 8th March 2013
- Notification of Papers Acceptance: 3rd May 2013
- Camera-ready Papers: 24th May 2013

See http://wiamis2013.wp.mines-telecom.fr/ for more information.