
Wednesday, October 26, 2011

Nano-springs make transparent, super-stretchy skin-like sensors

Article from:

 http://www.sciencecodex.com/read/stanford_researchers_build_transparent_superstretchy_skinlike_sensor-80227


When the nanotubes are airbrushed onto the silicone, they tend to land in randomly oriented little clumps. When the silicone is stretched, some of the "nano-bundles" get pulled into alignment in the direction of the stretching.

When the silicone is released, it rebounds back to its original dimensions, but the nanotubes buckle and form little nanostructures that look like springs.

"After we have done this kind of pre-stretching to the nanotubes, they behave like springs and can be stretched again and again, without any permanent change in shape," Bao said.

Stretching the nanotube-coated silicone a second time, in the direction perpendicular to the first direction, causes some of the other nanotube bundles to align in the second direction. That makes the sensor completely stretchable in all directions, with total rebounding afterward.

Additionally, after the initial stretching to produce the "nano-springs," repeated stretching below the length of the initial stretch does not change the electrical conductivity significantly, Bao said. Maintaining the same conductivity in both the stretched and unstretched forms is important because the sensors detect and measure the force being applied to them through these spring-like nanostructures, which serve as electrodes.

The sensors consist of two layers of the nanotube-coated silicone, oriented so that the coatings are face-to-face, with a layer of a more easily deformed type of silicone between them.

The middle layer of silicone stores electrical charge, much like a battery. When pressure is exerted on the sensor, the middle layer of silicone compresses, which alters the amount of electrical charge it can store. That change is detected by the two films of carbon nanotubes, which act like the positive and negative terminals on a typical automobile or flashlight battery.
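In circuit terms, the "battery-like" middle layer behaves like a compressible parallel-plate capacitor: squeezing the silicone thins the dielectric between the two nanotube electrodes and raises the capacitance, which is what the readout senses. Just to illustrate that relationship, here is my own toy Python sketch; the geometry and permittivity values are made-up numbers for illustration, not measurements from the Stanford device.

# Minimal parallel-plate model of the sensor's middle layer.
# All geometry and material values below are illustrative assumptions.
EPS_0 = 8.854e-12        # vacuum permittivity, F/m
EPS_R = 2.8              # assumed relative permittivity of the silicone
AREA = 1e-4              # assumed electrode overlap area, m^2 (1 cm^2)
REST_T = 1e-3            # assumed unpressed layer thickness, m (1 mm)

def capacitance(thickness_m):
    """Capacitance of the compressible middle layer at a given thickness."""
    return EPS_0 * EPS_R * AREA / thickness_m

c_rest = capacitance(REST_T)
for squeeze in (0.0, 0.1, 0.3, 0.5):          # fractional compression
    c = capacitance(REST_T * (1.0 - squeeze))
    print(f"compressed {squeeze:4.0%}: C = {c*1e12:6.2f} pF "
          f"({c/c_rest:4.2f}x rest value)")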

The change sensed by the nanotube films is what enables the sensor to transmit what it is "feeling." Whether the sensor is being compressed or extended, the two nanofilms are brought closer together, which seems like it might make it difficult to detect which type of deformation is happening. But Lipomi said it should be possible to detect the difference by the pattern of pressure.

Using carbon nanotubes bent to act as springs, Stanford researchers have developed a stretchable, transparent skin-like sensor. The sensor can be stretched to more than twice its original length and bounce back perfectly to its original shape. It can sense pressure from a firm pinch to thousands of pounds. The sensor could have applications in prosthetic limbs, robotics and touch-sensitive computer displays. Darren Lipomi, a postdoctoral researcher in Chemical Engineering and Zhenan Bao, associate professor in Chemical Engineering, explain their work.

(Photo Credit: Steve Fyffe, Stanford News Service)

With compression, you would expect to see sort of a bull's-eye pattern, with the greatest deformation at the center and decreasing deformation as you go farther from the center.

"If the device was gripped by two opposing pincers and stretched, the greatest deformation would be along the straight line between the two pincers," Lipomi said. Deformation would decrease as you moved farther away from the line.

Bao's research group previously created a sensor so sensitive that it could detect pressures "well below the pressure exerted by a 20 milligram bluebottle fly carcass" the researchers used to test it. This latest sensor is not quite that sensitive, she said, but that is because the researchers focused on making it stretchable and transparent.

"We did not spend very much time trying to optimize the sensitivity aspect on this sensor," Bao said.

"But the previous concept can be applied here. We just need to make some modifications to the surface of the electrode so that we can have that same sensitivity."

Article from:

 http://www.sciencecodex.com/read/stanford_researchers_build_transparent_superstretchy_skinlike_sensor-80227

Artificial intelligence community mourns John McCarthy

Article from http://www.bbc.co.uk/news/technology-15444222

Artificial intelligence researcher John McCarthy has died. He was 84.

The American scientist invented the computer language LISP.

It went on to become the programming language of choice for the AI community, and is still used today.

Professor McCarthy is also credited with coining the term "Artificial Intelligence" in 1955 when he detailed plans for the first Dartmouth conference. The brainstorming sessions helped focus early AI research.

Prof McCarthy's proposal for the event put forward the idea that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

The conference, which took place in the summer of 1956, brought together experts in language, sensory input, learning machines and other fields to discuss the potential of information technology.

Other AI experts describe it as a critical moment.

"John McCarthy was foundational in the creation of the discipline Artificial Intelligence," said Noel Sharkey, Professor of Artificial Intelligence at the University of Sheffield.

"His contribution in naming the subject and organising the Dartmouth conference still resonates today."

LISP

Prof McCarthy devised LISP at Massachusetts Institute of Technology (MIT), which he detailed in an influential paper in 1960.

The computer language used symbolic expressions, rather than numbers, and was widely adopted by other researchers because it gave them the ability to be more creative.
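To give a flavor of what "symbolic expressions" means: a LISP program is itself a nested list of symbols that can be read and manipulated as data. The tiny evaluator below is my own Python sketch of that idea (not McCarthy's original eval); it walks nested lists such as ['+', 1, ['*', 2, 3]], the Python stand-in for the s-expression (+ 1 (* 2 3)).

import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def evaluate(sexpr):
    """Recursively evaluate an s-expression written as nested Python lists."""
    if isinstance(sexpr, (int, float)):
        return sexpr
    op, *args = sexpr
    values = [evaluate(a) for a in args]
    result = values[0]
    for v in values[1:]:
        result = OPS[op](result, v)
    return result

print(evaluate(['+', 1, ['*', 2, 3]]))   # (+ 1 (* 2 3)) -> 7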

"The invention of LISP was a landmark in AI, enabling AI programs to be easily read for the first time," said Prof David Bree, from the Turin-based Institute for Scientific Interchange.

"It remained the AI language, especially in North America, for many years and had no major competitor until Edinburgh developed Prolog."

Regrets

In 1971 Prof McCarthy was awarded the Turing Award from the Association for Computing Machinery in recognition of his importance to the field.

He later admitted that the lecture he gave to mark the occasion was "over-ambitious", and he was unhappy with the way he had set out his new ideas about how commonsense knowledge could be coded into computer programs.

However, he revisited the topic in later lectures and went on to win the National Medal of Science in 1991.

After retiring in 2000, Prof McCarthy remained Professor Emeritus of Computer Science at Stanford University, and maintained a website where he gathered his ideas about the future of robots, the sustainability of human progress and some of his science fiction writing.

"John McCarthy's main contribution to AI was his founding of the field of knowledge representation and reasoning, which was the main focus of his research over the last 50 years," said Prof Sharkey

"He believed that this was the best approach to developing intelligent machines and was disappointed by the way the field seemed to have turned into high speed search on very large databases."

Prof Sharkey added that Prof McCarthy wished he had called the discipline Computational Intelligence, rather than AI. However, he said he recognised his choice had probably attracted more people to the subject.

Article from http://www.bbc.co.uk/news/technology-15444222

Tuesday, October 25, 2011

Throwable Camera Creates 360-Degree Panoramic Images

Article from http://mashable.com/2011/10/25/throwable-ball-camera/

Are you, like so many others, tired of all those old-fashioned cameras you have to hold in order to take pictures? Well here’s a camera you get to throw.

The Throwable Panoramic Ball Camera is a foam-padded ball studded with 36 fixed-focus, 2-megapixel mobile phone camera modules capable of taking a 360-degree panoramic photo.

You use the camera by throwing it straight up into the air. When the camera reaches its apex — detected by an accelerometer inside the ball — all 36 cameras automatically take a picture. The individual pictures are then digitally stitched together and uploaded via USB, where they are presented in a spherical panoramic viewer that lets users interactively explore the photo, including a zoom function.
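The article does not say how the trigger logic actually works, so the following is only a plausible sketch (the rule and all the numbers are my assumptions, not details of the real firmware): estimate the launch velocity by integrating the acceleration measured during the throw, then fire the cameras v/g seconds after release, when the ball should be at its apex.

G = 9.81  # m/s^2

def time_to_apex(throw_accels, dt):
    """Seconds from release to apex, from vertical acceleration samples
    (gravity removed) recorded while the ball is still in the hand."""
    launch_velocity = sum(a * dt for a in throw_accels)   # integrate a -> v
    return launch_velocity / G

# Assumed example: a 0.2 s throw at an average of 40 m/s^2 upward.
samples = [40.0] * 20                 # 20 samples, 10 ms apart
print(f"fire all 36 cameras in {time_to_apex(samples, 0.01):.2f} s")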

 


The results — as seen in the video above — are pretty darn impressive, but the Ball Camera is definitely not meant for shaky hands. Any spin on the ball when it’s thrown could distort the final image and you certainly wouldn’t want to drop the thing despite its 3D-printed foam padding. The 2-megapixel cameras are adequate but the quality drops as soon as users try to zoom in on distant elements. Besides, it looks a little difficult to fit the thing into a purse, let alone your pocket.

Right now, the Throwable Panoramic Ball Camera is not available to buy, though its creators have a patent pending. Cool idea, but is it practical? Would you ever buy a camera you could throw? Let us know in the comments.

http://mashable.com/2011/10/25/throwable-ball-camera/

Monday, October 24, 2011

Rendering Synthetic Objects into Legacy Photographs

Kevin Karsch, Varsha Hedau, David Forsyth, Derek Hoiem
To be presented at SIGGRAPH Asia 2011

Abstract

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.

http://kevinkarsch.com/publications/sa11.html

Top 10 ACM SIGMM Downloads

http://sigmm.org/records/records1103/featured04.html

Here we present the top downloaded ACM SIGMM articles from the ACM Digital Library, from July 2010 to June 2011. We hope this list gives much-deserved exposure to ACM SIGMM's best articles.

  1. Guo-Jun Qi, Xian-Sheng Hua, Yong Rui, Jinhui Tang, Tao Mei, Meng Wang, Hong-Jiang Zhang. Correlative multilabel video annotation with temporal kernels. In ACM Trans. Multimedia Comput. Commun. Appl. 5(1), 2008
  2. Michael S. Lew, Nicu Sebe, Chabane Djeraba, and Ramesh Jain. Content-based multimedia information retrieval: State of the art and challenges. In ACM Trans. Multimedia Comput. Commun. Appl. 2(1), 2006
  3. Ba Tu Truong, Svetha Venkatesh. Video abstraction: A systematic review and classification. In ACM Trans. Multimedia Comput. Commun. Appl. 3(1), 2007
  4. Yu-Fei Ma, Hong-Jiang Zhang. Contrast-based image attention analysis by using fuzzy growing. In ACM Multimedia 2003
  5. Simon Tong and Edward Chang. Support vector machine active learning for image retrieval. In ACM Multimedia 2001
  6. J.-P. Courtiat, R. Cruz de Oliveira, L. F. Rust da Costa Carmo. Towards a new multimedia synchronization mechanism and its formal definition. In ACM Multimedia 1994
  7. Gabriel Takacs, Vijay Chandrasekhar, Natasha Gelfand, Yingen Xiong, Wei-Chao Chen, Thanos Bismpigiannis, Radek Grzeszczuk, Kari Pulli, Bernd Girod. Outdoors augmented reality on mobile phone using loxel-based visual feature organization. In ACM SIGMM MIR 2008
  8. Jiajun Bu, Shulong Tan, Chun Chen, Can Wang, Hao Wu, Lijun Zhang, Xiaofei He. Music recommendation by unified hypergraph: combining social media information and music content. In ACM Multimedia 2010
  9. Mathias Lux, Savvas A. Chatzichristofis. Lire: lucene image retrieval: an extensible java CBIR library. In ACM Multimedia 2008

Thursday, October 20, 2011

FaceLight – Silverlight 4 Real-Time Face Detection

This article describes a simple face detection method that searches for a skin-color region of a certain size in a webcam snapshot. The technique is not as robust as a professional computer vision library such as OpenCV with its Haar-like features, but it runs in real time and works in most webcam scenarios.
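FaceLight itself is written in C# for Silverlight, but the core idea is easy to sketch in a few lines of Python (my own illustration; the color thresholds below are common rule-of-thumb values, not FaceLight's exact ones): mark every pixel whose color looks like skin, and if the skin-colored region is large enough, treat its bounding box as the face.

import numpy as np

def skin_mask(rgb):
    """Rough RGB skin rule: bright, red-dominant pixels (rule-of-thumb thresholds)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)

def find_face_box(rgb, min_pixels=500):
    """Bounding box of the skin-colored region, or None if it is too small."""
    ys, xs = np.nonzero(skin_mask(rgb))
    if len(xs) < min_pixels:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[60:180, 120:220] = (200, 140, 110)      # synthetic skin-colored patch
print(find_face_box(frame))                    # -> (120, 60, 219, 179)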

Friday, October 14, 2011

C

‎#include<stdio.h>
int main(void)
{
    printf("Goodbye Dennis Ritchie\n");
    return 0;
}

Tuesday, October 11, 2011

ACM International Conference on Multimedia Retrieval (ICMR) 2012

Effectively and efficiently retrieving information based on user needs is one of the most exciting areas in multimedia research. The Annual ACM International Conference on Multimedia Retrieval (ICMR) offers a great opportunity for exchanging leading-edge multimedia retrieval ideas among researchers, practitioners and other potential users of multimedia retrieval systems. This conference, which brings together the long-standing experience of the former ACM CIVR and ACM MIR series, is set up to illuminate the state of the art in multimedia (text, image, video and audio) retrieval.

ACM ICMR 2012 is soliciting original high quality papers addressing challenging issues in the broad field of multimedia retrieval.


Topics of Interest (not limited to)
• Content/semantic/affective based indexing and retrieval
• Large-scale and web-scale multimedia processing
• Integration of content, meta data and social network
• Scalable and distributed search
• User behavior and HCI issues in multimedia retrieval
• Advanced descriptors and similarity metrics
• Multimedia fusion
• High performance indexing algorithms
• Machine learning for multimedia retrieval
• Ontology for annotation and search
• 3D video and model processing
• Large-scale summarization and visualization
• Performance evaluation
• Very large scale multimedia corpus
• Navigation and browsing on the Web
• Retrieval from multimodal lifelogs
• Database architectures for storage and retrieval
• Novel multimedia data management systems and applications
• Applications in forensic, biomedical image and video collections

Important Dates

Paper Submission: January 15, 2012
Notification of Acceptance: March 15, 2012
Camera-Ready Papers Due: April 5, 2012
Conference Date: June 5 - 8, 2012

http://www.icmr2012.org/index.html

Monday, October 10, 2011

Kinect Object Datasets: Berkeley's B3DO, UW's RGB-D, and NYU's Depth Dataset

Article from http://quantombone.blogspot.com/2011/10/kinect-object-datasets-berkeleys-b3do.html

Why Kinect?

The Kinect, made by Microsoft, is starting to become quite a common item in Robotics and Computer Vision research.  While the Robotics community has been using the Kinect as a cheap laser sensor which can be used for obstacle avoidance, the vision community has been excited about using the 2.5D data associated with the Kinect for object detection and recognition.  The possibility of building object recognition systems which have access to pixel features as well as 2.5D features is truly exciting for the vision hacker community!
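As a concrete example of what the 2.5D data buys you: every Kinect depth pixel can be back-projected to a 3D point with the standard pinhole model, so a detector can reason about metric size and shape as well as appearance. Below is a minimal sketch assuming nominal depth-camera intrinsics; the focal length and principal point are rough, commonly quoted values, and real devices need per-unit calibration.

import numpy as np

FX, FY = 570.0, 570.0     # assumed focal lengths in pixels
CX, CY = 320.0, 240.0     # assumed principal point for a 640x480 depth map

def depth_to_points(depth_m):
    """Back-project an (H, W) depth map in metres to (H, W, 3) camera-frame points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.dstack([x, y, depth_m])

depth = np.full((480, 640), 2.0)               # a flat wall 2 m away
pts = depth_to_points(depth)
print(pts[240, 320], pts[0, 0])                # centre pixel vs corner pixel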

 


Berkeley's B3DO

First of all, I would like to mention that it looks like the Berkeley Vision Group has jumped on the Kinect bandwagon.  But the data collection effort will be crowdsourced -- they need your help!  They need you to use your Kinect to capture your own home/office environments and upload them to their servers.  This way, a very large dataset will be collected, and we, the vision hackers, can use machine learning techniques to learn what sofas, desks, chairs, monitors, and paintings look like.  The Berkeley hackers have a paper on this at one of the ICCV 2011 workshops in Barcelona; here is the paper information:

A Category-Level 3-D Object Dataset: Putting the Kinect to Work
Allison Janoch, Sergey Karayev, Yangqing Jia, Jonathan T. Barron, Mario Fritz, Kate Saenko, Trevor Darrell
ICCV-W 2011
[pdf] [bibtex]


UW's RGB-D Object Dataset

On another note, if you want to use 3D for your own object recognition experiments then you might want to check out the following dataset: University of Washington's RGB-D Object Dataset.  With this dataset you'll be able to compare against UW's current state-of-the-art.

 

In this dataset you will find RGB+Kinect3D data for many household items taken from different views.  Here is the really cool paper which got me excited about the RGB-D Dataset:


A Scalable Tree-based Approach for Joint Object and Pose Recognition
Kevin Lai, Liefeng Bo, Xiaofeng Ren, and Dieter Fox
In the Twenty-Fifth Conference on Artificial Intelligence (AAAI), August 2011.


NYU's Depth Dataset

I have to admit that I did not know about this dataset (created by Nathan Silberman of NYU) until after I blogged about the other two datasets.  Check out the NYU Depth Dataset homepage. The internet is great, though, and only a few hours after I posted this short blog post, somebody let me know that I had left out this really cool NYU dataset.  In fact, it looks like this particular dataset might be at the LabelMe level regarding dense object annotations, but with accompanying Kinect data.  Rob Fergus & Co strike again!

Nathan Silberman, Rob Fergus. Indoor Scene Segmentation using a Structured Light Sensor. To Appear: ICCV 2011 Workshop on 3D Representation and Recognition

Sunday, October 9, 2011

The PHD Movie!!!

Screening @ Cyprus University of Technology
11/16/2011 - 7:00PM - CUT
Organized by: Cyprus University of Technology

Visit www.phdcomics.com/movie to find a screening at your school. Is The PHD Movie not coming to your school? Ask your administration to sponsor a screening!

Saturday, October 8, 2011

Exploring Photobios - SIGGRAPH 2011

Read more: http://grail.cs.washington.edu/photobios/

Friday, October 7, 2011

Panasonic unveils first robotic hairdresser

The annual CEATEC technology show in Tokyo gives Japanese technology companies a chance to let their hair down and show off robots far, far too odd for Western consumption.
Robot unicyclists and 'robot companions' are regulars at the show - often unveiled by otherwise normal technology companies. This year, Panasonic unveiled the first robotic hairdresser - as well as a robot 'doctor'.
Panasonic's robot hair washer uses advanced robot 'fingers' to massage the scalp while washing your head with jets of water and soap - rather like a car wash for your skull.
Information provided by cctv.com. Thank you, http://www.cctv.com

Thursday, October 6, 2011

Recent Image Retrieval Techniques


http://sglab.kaist.ac.kr/~sungeui/IR/Slides/

Tuesday, October 4, 2011

“Practical Image and Video Processing Using MATLAB®”

 


This is the first book to combine image and video processing with a practical MATLAB(R)-oriented approach in order to demonstrate the most important image and video techniques and algorithms. Utilizing minimal math, the contents are presented in a clear, objective manner, emphasizing and encouraging experimentation.

The book has been organized into two parts. Part I: Image Processing begins with an overview of the field, then introduces the fundamental concepts, notation, and terminology associated with image representation and basic image processing operations. Next, it discusses MATLAB(R) and its Image Processing Toolbox with the start of a series of chapters with hands-on activities and step-by-step tutorials. These chapters cover image acquisition and digitization; arithmetic, logic, and geometric operations; point-based, histogram-based, and neighborhood-based image enhancement techniques; the Fourier Transform and relevant frequency-domain image filtering techniques; image restoration; mathematical morphology; edge detection techniques; image segmentation; image compression and coding; and feature extraction and representation.

Part II: Video Processing presents the main concepts and terminology associated with analog video signals and systems, as well as digital video formats and standards. It then describes the technically involved problem of standards conversion, discusses motion estimation and compensation techniques, shows how video sequences can be filtered, and concludes with an example of a solution to object detection and tracking in video sequences using MATLAB(R).

Extra features of this book include:

More than 30 MATLAB(R) tutorials, which consist of step-by-step guides to exploring image and video processing techniques using MATLAB(R)

Chapters supported by figures, examples, illustrative problems, and exercises

Useful websites and an extensive list of bibliographical references

This accessible text is ideal for upper-level undergraduate and graduate students in digital image and video processing courses, as well as for engineers, researchers, software developers, practitioners, and anyone who wishes to learn about these increasingly popular topics on their own.

http://www.ogemarques.com/

Call for participation in the ICPR 2012 Contests

http://www.icpr2012.org/contests.html

We are happy to announce the opening of the six ICPR 2012 Contests, to be
held on November 11, 2012 in conjunction with the 21st International Conference on Pattern Recognition (www.icpr2012.org). The aim of the contests is to encourage better scientific development through comparing competing
approaches on a common dataset.

The Contests (see www.icpr2012.org/contests.html for full details and links):

    Gesture Recognition Challenge and Kinect Grand Prize
    HEp-2 Cells Classification
    Human activity recognition and localization
    Kitchen Scene Context based Gesture Recognition
    Mitosis Detection in Breast Cancer
    People tracking in wide baseline camera networks

Publications
================
There are no 'publications' for the contest participants other than what each contest organizer prepares. Contest participants are encouraged to submit their results as a normal paper to the main conference, where the paper
will be reviewed as normal. Short introductions for each contest are planned to be included in the main proceedings.

Registration
================
Attending the contest sessions requires registration for the contest, which can be done using the main conference registration form. Registration for the main conference is not obligatory, but is necessary if you want to also attend the main conference.

Dates
================
Each Contest has its own time schedule. See the website of each contest for the dates.
The results of the competitions will be announced at the conference: November 11, 2012.

CFP: VII Conf. on Articulated Motion and Deformable Objects (AMDO 2012)

Andratx, Mallorca, Spain
11-13 July, 2012
http://dmi.uib.es/~ugiv/AMDO
amdo@uib.es

The Spanish Association for Pattern Recognition and Image Analysis (AERFAI) and the Mathematics and Computer Science Department of UIB are organising the seventh international conference AMDO 2012, which will take place in Puerto de Andratx, Mallorca. This conference is the natural evolution of the previous AMDO workshops. The new goal of this conference is to promote interaction and collaboration among researchers working directly in the areas covered by the main tracks. New perceptual user interfaces and emerging technologies increase the relation between areas involved with human-computer interaction. The perspective of the AMDO 2012 conference will be to strengthen the relationship between the many areas that have as a key point the study of the human body using computer technologies as the main tool.
It is a great opportunity to encourage links between research in the areas of computer vision, computer graphics, advanced multimedia applications and multimodal interfaces that share common problems and frequently use similar techniques or tools. In this particular edition the related topics are divided into several tracks, including the topics proposed above.


AMDO 2012 will consist of three days of lecture sessions, with both regular and invited presentations, a poster session and international tutorials. The conference fee (approx. 450 euro) includes a social program (conference dinner, coffee breaks, snacks and cultural activities). Students, AERFAI and EG members can register at a reduced fee.


TOPICS INCLUDE (but not restricted to):
Track 1: Advanced Computer Graphics (Human Modelling & Animation)
Track 2: Human Motion (Analysis, Tracking, 3D Reconstruction & Recognition)
Track 3: Multimodal User Interaction & Applications
Track 4: Affective Interfaces (recognition and interpretation of emotions, ECAs - Embodied Conversational Agents in HCI)


PAPER SUBMISSION AND REVIEW PROCESS
Papers should describe original and unpublished work about the above or closely related topics. Please submit your paper electronically at our website (see URL above) using the software provided. All submissions should be in Adobe Acrobat (pdf). The AMDO2012 secretariat must receive your paper before March 12, 2012, 17:00 GMT
(London time). Up to ten pages will be considered. All submitted papers will be subjected to a blind review process by at least three members of the program committee. The submitted paper must not include author names and affiliations, and should include a title, a 150-word abstract, keywords and the paper manuscript. Accepted papers will
appear in the LNCS Springer-Verlag international proceedings that will be published and distributed to all participants at the workshop. For more details and news, visit our web page. Selected papers will be nominated to be published in an extended version
in a newsletter with impact index.


N.B. Submission implies the willingness of at least one of the authors to register and to present the communication at the conference, if accepted.


IMPORTANT DEADLINES:
Submission of papers March 12, 2012
Notification of acceptance April 12, 2012
Camera-ready April 30, 2012
Early registration May 31, 2012
Late registration June 30 2012
VII AMDO Conference 2012 11-13 July 2012