Monday, August 29, 2011

An eBay for science.

Reported by Zoë Corbyn, in Nature News, 19 Aug. 2011.

Last week, Science Exchange in Palo Alto, California, launched a website allowing scientists to outsource their research to ‘providers’ — other researchers and institutions that have the facilities and equipment to meet requesting scientists’ needs. Nature asked the company’s co-founder, researcher-turned-entrepreneur Elizabeth Iorns, how the website works, and what an online marketplace for experiments could mean for the future of research.

What is Science Exchange?

It is an online marketplace for scientific experiments. Imagine eBay, but for scientific knowledge. You post an experiment that you want to outsource, and scientific service providers submit bids to do the work. The goal is to make scientific research more efficient by making it easy for researchers to access experimental expertise from core facilities with underutilized capacity.

Where did the idea come from?

It was through my work as a breast-cancer biologist at the University of Miami in Florida. I wanted to conduct some experiments outside my field, and realized that I needed an external provider. What followed was an entirely frustrating process, and when I found the provider it was difficult to pay them because they were outside my university’s purchasing system. When I talked to other scientists, it became clear that this was a really big problem, but also one that could be solved with a marketplace. Development of the website started around a kitchen table in Miami in April.

Why would researchers want to participate?

So they can access technologies that their university doesn’t offer; if their own institutional facilities are too busy; if they just generally want to speed up the research process; or if they want a good deal. Prices can vary dramatically: for example, through our platform I have seen bids to perform a microRNA study ranging from US$3,500 to $9,000. Those who do the work can also build reputations independent of their publications by gaining feedback from those they work with.

Why might universities want their facilities to participate?

There are huge budget incentives. It allows institutions to make the most of their existing facilities, which means that they don’t have to subsidize them as much. Also, if researchers can use Science Exchange to access the latest equipment, institutions can be more flexible about when they buy new instruments.

How are you intending to make a profit?

We take a small commission if we match a researcher with a provider and they use us to do the transaction — 5% for projects under $5,000, which is tiny in comparison with what researchers can save by comparing prices from multiple providers. For projects costing more than $5,000, the commission is lower and works on a sliding scale: we aren’t going to charge $50,000 on a $1-million experiment.
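The sliding scale described above can be sketched in code. Only the 5% rate below $5,000 comes from the interview; the tiers and rates above that threshold are purely illustrative assumptions, not Science Exchange's actual pricing.

```python
def commission(project_cost: float) -> float:
    """Illustrative sliding-scale commission.

    Only the 5% rate below $5,000 is stated in the interview;
    the marginal rates above it are assumptions for illustration.
    """
    if project_cost <= 5_000:
        return 0.05 * project_cost
    fee = 0.05 * 5_000  # 5% on the first $5,000
    if project_cost <= 100_000:
        return fee + 0.03 * (project_cost - 5_000)  # assumed 3% tier
    fee += 0.03 * (100_000 - 5_000)
    return fee + 0.01 * (project_cost - 100_000)    # assumed 1% tier

print(commission(4_000))      # 200.0 — the flat 5% band
print(commission(1_000_000))  # 12100.0 — well under $50,000
```

Whatever the real tiers are, the point of a marginal-rate scheme like this is that the effective rate falls smoothly as project size grows.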

How are you funded?

By Y Combinator, a start-up accelerator programme in Mountain View, California, and angel investors. We have raised $320,000 so far and are looking to raise another $1 million. We have big plans to expand.

What has the response been like?

We launched after a short beta period and the growth is crazy. We now have close to 1,000 scientists using our site and 50–100 signing up every day. More than 70 institutions have providers registered with us, including Stanford University in California, Harvard University in Cambridge, Massachusetts, and, of course, Miami.

Is the service limited to particular regions of the world?

Anyone from anywhere can use it. We had initially thought our focus would be on the United States, but we have had a lot of interest from overseas researchers, particularly in the facilities that are available here.


Sunday, August 28, 2011


Script# brings productivity to Ajax and JavaScript development. Script# is a free tool that enables developers to author C# source code and subsequently compile it into regular script that works across all modern browsers, and in doing so, leverage the productivity and power of existing .NET tools as well as the Visual Studio IDE. Script# empowers you with a development methodology and approach that brings software engineering, long term maintainability and scalable development approaches for your Ajax applications, components and frameworks.

Script# is used extensively by developers within Microsoft building Ajax experiences in Windows Live and Office, to name just a couple, as well as by external developers and companies including Facebook. If you’re building Ajax-based RIA applications, you owe it to yourself to try Script# today and see if it can help improve your own Ajax development!


The Script# Project

Productivity and better tooling are the primary motivators behind Script#. At the same time, a fundamental design tenet and driving philosophy behind Script# is to produce script that resembles hand-written script and is faithful to the script runtime environment found in browsers. Specifically, the compiler does not introduce unnecessary layers of abstraction or indirection. The idea is that you’re simply writing script in a better and more pragmatic way, rather than trying to port a .NET application to the browser, which is more likely to produce impractical results.

Script# allows programming against the DHTML DOM APIs and JavaScript APIs, as well as Silverlight 1.0 script API. The compiler itself isn’t coupled to any one particular framework. You can use Script# to program against Microsoft ASP.NET Ajax as well as other 3rd party frameworks such as ExtJS (via Ext#). At the same time, the compiler is complemented by an optional ScriptFX framework, which is a small framework built using Script# itself. Finally, if you have existing scripts, they can be imported and then used from new C# code so you don’t have to rewrite everything from scratch to start using Script#.

Scripts generated using Script# are honest-to-goodness plain old JavaScript files that you can freely deploy in your applications, and there is no runtime dependency on the Script# compiler. This is further explained on the Understanding Script# page. You will need .NET 2.0+ and/or Visual Studio on your development machine. You can also use Visual C# Express, which is available for free.

Script# is an evolving project, but it is quite mature and ready for use in real-world projects such as those listed in the showcase. Script# is being used both internally within Microsoft and in external applications. It was first released in May 2006 (introductory blog post). Over the course of the last two-plus years, it has been regularly updated with new features and bug fixes based on actual usage and feedback from developers like you. You can read about the latest release on the release history page. Please do continue sending any feedback on Script# that you might have.

The content on this site will be updated periodically to include additional concepts and tutorials. Please subscribe to the Script# feed to stay up-to-date or check this page often.

Thursday, August 25, 2011

Compact Composite Descriptors for Content Based Image Retrieval

Bookcover of Compact Composite Descriptors for Content Based Image Retrieval

Authors: Savvas A. Chatzichristofis, Yiannis S. Boutalis

This book covers the state of the art in image indexing and retrieval techniques, paying particular attention to recent trends and applications. It presents the basic notions and tools of content-based image description and retrieval, covering all significant aspects of image preprocessing, feature extraction, similarity matching and evaluation methods. Particular emphasis is given to recent computational intelligence techniques for producing compact content-based descriptors comprising color, texture and spatial distribution information. Early and late fusion techniques are also used for improving retrieval results from large, possibly distributed, inhomogeneous databases. The book reports on the basic international standards and provides an updated presentation of current retrieval systems. Numerous utilities and techniques are implemented in software, which is provided as supplementary material under an open-source license agreement. The book is particularly useful for postgraduate students and researchers in the field of image retrieval who want to easily elaborate and test state-of-the-art techniques and possibly incorporate them in their own developments.
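To illustrate the compact-descriptor idea in the simplest possible terms (this is not the book's actual descriptors, which also fold in fuzzy texture and spatial information), here is a minimal sketch of a quantized color histogram compared with a Tanimoto coefficient, one common choice for comparing such histograms:

```python
import numpy as np

def compact_color_descriptor(image: np.ndarray, bins_per_channel: int = 4) -> np.ndarray:
    """Quantize each RGB channel into a few bins and histogram the
    resulting color codes: a fixed 64-value descriptor for any image
    size. A sketch of the color-quantization idea only."""
    assert image.ndim == 3 and image.shape[2] == 3
    q = image.astype(np.uint16) * bins_per_channel // 256  # 0..bins-1 per channel
    codes = q[..., 0] * bins_per_channel**2 + q[..., 1] * bins_per_channel + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins_per_channel**3)
    return hist / hist.sum()  # normalize so images of any size compare

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto coefficient: 1.0 for identical descriptors, 0.0 for disjoint ones."""
    return float(a @ b / (a @ a + b @ b - a @ b))

red = np.zeros((8, 8, 3), dtype=np.uint8); red[..., 0] = 255
red2 = red.copy()
blue = np.zeros((8, 8, 3), dtype=np.uint8); blue[..., 2] = 255
print(tanimoto(compact_color_descriptor(red), compact_color_descriptor(red2)))  # 1.0
print(tanimoto(compact_color_descriptor(red), compact_color_descriptor(blue)))  # 0.0
```

The compactness comes from quantization: 64 numbers describe an image of any size, which is what makes indexing and retrieval over large databases tractable.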

ISBN-13: 978-3-639-37391-2
ISBN-10: 363937391X
EAN: 9783639373912
Book language: English
Publishing house: VDM Verlag Dr. Müller
Number of pages: 216
Published at: 2011-08-24
Category: Informatics, IT
Price: 79.00 €

Order from: MoreBooks or Amazon

Tuesday, August 23, 2011

Face recognition in London 2012 Olympic Games

Original Article

LONDON (AP) -- Facial recognition technology being considered for London's 2012 Games is getting a workout in the wake of Britain's riots, a senior police chief told The Associated Press on Thursday, with officers feeding photographs of suspects through Scotland Yard's newly updated face-matching program.

Chief Constable Andy Trotter of the British Transport Police said the sophisticated software was being used to help find those suspected of being involved in the worst unrest London has seen in a generation.

But he cautioned that facial recognition makes up only a fraction of the police force's efforts, saying tips have mostly come from traditional sources, such as still images captured from closed circuit cameras, pictures gathered by officers, footage shot by police helicopters or images snapped by members of the public. One department was driving around a large video screen displaying images of suspects.

"There's a mass of evidence out there," Trotter said in a telephone interview. "The public are so enraged that people who wouldn't normally come forward are helping us - especially when they see their neighbors are coming back with brand new TVs."

Prime Minister David Cameron acknowledged Thursday that police were overwhelmed by rioting that began over the weekend in London and spread across the country over four days. Mobs of youths looted stores, set buildings aflame and attacked police officers and other people - a chaotic and humbling scene for a city a year away from hosting the Olympic Games.

At an emergency session of Parliament summoned to discuss the riots, Cameron said authorities were considering new powers, including allowing police to order thugs to remove masks or hoods, evicting troublemakers from subsidized housing and temporarily disabling cell phone instant messaging services. He said the 16,000 police deployed on London's streets to deter rioters and reassure residents would remain through the weekend.

A press officer with Scotland Yard - who spoke anonymously, in line with force policy - confirmed that facial recognition technology was at the police's disposal, although he gave few other details. He said that generally the technology would only be used to help identify those suspected of serious crimes, such as assault, and that in most cases disseminating photographs to the general public remains a far cheaper and more effective way of finding suspects.

The facial-recognition technology used by police treats the human face like a grid, measuring the distance between a person's nose, eyes, lips and other features. It has recently been upgraded, according to an article published last year in Scotland Yard's bimonthly magazine, "The Job."
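The grid-of-distances idea can be shown with a toy sketch. The landmarks, normalization and scoring below are illustrative assumptions for exposition only, not how the police software actually works:

```python
import math

# Toy "face grid": each face is reduced to a few landmark points (x, y),
# e.g. eyes, nose tip, mouth. Real systems use far richer features.

def feature_vector(landmarks: dict) -> list:
    """All pairwise distances between landmarks, normalized by the
    inter-eye distance so the vector is invariant to image scale."""
    pts = list(landmarks.values())
    eye_dist = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return [math.dist(pts[i], pts[j]) / eye_dist
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

def similarity(a: list, b: list) -> float:
    """Euclidean distance between feature vectors; smaller = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

face1 = {"left_eye": (30, 40), "right_eye": (70, 40),
         "nose": (50, 60), "mouth": (50, 80)}
# The same face photographed at twice the scale and shifted:
face2 = {"left_eye": (60, 80), "right_eye": (140, 80),
         "nose": (100, 120), "mouth": (100, 160)}

print(similarity(feature_vector(face1), feature_vector(face2)))  # 0.0
```

Because the distances are normalized, the same face geometry matches regardless of how large it appears in the frame, which is also why the article stresses the need for good, face-on source photographs: the landmark measurements degrade quickly with pose and image quality.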

The March 2010 article said that the new program has been shown to work far better than older versions of the technology, with one expert quoted as saying that it had shown promise in identifying people from high-quality, face-on shots taken off of surveillance photographs, mobile phones, passports or the Internet.

A law enforcement official told the AP that to use the technology "you have to have a good picture of a suspect and it is only useful if you have something to match it against. In other words, the suspect already has to have a previous criminal record."

He spoke on condition of anonymity because he was not authorized to discuss ongoing investigations.

In another effort to identify suspects, police have released two dozen photos and videos to the picture-sharing website Flickr, where they've already gathered more than 400,000 hits. Some of those photographs have also been published by Britain's brash tabloid press. The Sun recently plastered them across its front page, along with a headline urging readers to report looters to the police.

The photographs on Flickr are mainly grainy images pulled from surveillance cameras, which may not be of much use to face-matching software. But detectives are already scanning the Web for high-quality photographs of rioters' faces, according to photojournalist Guilherme Zauith, who witnessed some of the disturbances in London and later posted images of the clashes to the Internet.

Zauith said he was recently contacted by a London detective "saying that they saw my photos online and if I could send it to them to help to identify the people."

"They were looking for all kind of photographs showing faces," he said. Zauith, a 30-year-old Brazilian national, said he turned the photos over to the detective.

The West Midlands police were trying another approach: driving a van equipped with a large screen displaying 50 images of suspects through Birmingham.

Police said the "Digi-Van" will stop at key locations around the city to give shoppers and commuters a good look at the photographs in hopes they can help identify suspects.

Facial recognition technology is already widely employed by free-to-use websites such as Facebook and Google Inc.'s Picasa photo-sharing program.

Such programs have been of increasing interest to authorities as well. A person with the Olympic planning committee, speaking to the AP on condition of anonymity because of the sensitivity of security preparations, said that facial recognition software was being considered for use as a security measure during the Olympic Games.

Meanwhile, detectives are employing a host of other tactics to take aim at the rioters. Police departments across the country have made arrests linked to riot threats and boasts posted to social networking sites.

Trotter said that while investigations had been helped by looters "who publicize their actions on things like Facebook," a lot of arrests have come the old-fashioned way, through officers simply spotting suspects they'd seen before.

"It's not just the face that is recognizable," Trotter said. "It's been in the way they walk, or the clothes they're wearing or even tattoos."

Monday, August 22, 2011

Introduction to Artificial Intelligence

A bold experiment in distributed education, "Introduction to Artificial Intelligence" will be offered free and online to students worldwide during the fall of 2011. The course will include feedback on progress and a statement of accomplishment. Taught by Sebastian Thrun and Peter Norvig, the curriculum draws from that used in Stanford's introductory Artificial Intelligence course. The instructors will offer similar materials, assignments, and exams.

Artificial Intelligence is the science of making computer software that reasons about the world around it. Humanoid robots, Google Goggles, self-driving cars, even software that suggests music you might like to hear are all examples of AI. In this class, you will learn how to create this software from two of the leaders in the field. Class begins October 10.

Details on the course, including a syllabus, are available here. Sign up above to receive additional information about participating in the online version when it becomes available.

A high speed internet connection is recommended as most of the course content will be video based. Access to a copy of Artificial Intelligence: A Modern Approach may be helpful but is not required. Peter Norvig is co-author of this text and is donating all royalties earned from his text to charity. Any edition of the textbook may be used but the third edition is preferred.

Stanford University's School of Engineering also offers other complete online courses at no cost. Click here to access Stanford Engineering Everywhere.

Stop Motion Photographer

Who needs a video camera when you've got 2335 photos? Follow the adventures of two young photographers as they meet and create memories together.


Stop Motion Photographer (Behind The Scenes)

Friday, August 19, 2011

Call for MS students of UCS Lab@SeoulTech-National, Korea (Supervisor: James. J. Park)

Ubiquitous Computing and Security (UCS) Research Lab is seeking highly self-motivated full-time MS students to conduct cutting-edge research in the areas of Ubiquitous Computing, Security and Networks. UCS Lab is interested in the following research topics:

1. Security field:
Ubiquitous Security: Home network, RFID, WSN Security
Security Protocols: Key management, Access Control, Authentication, Privacy protection
Multimedia Security: DRM, MPEG-21 IPMP
Digital Forensics and Computer Security
Smartphone and Mobile Computing Security
IT Convergence Security

2. Intelligent Applications and Services field:
Context Awareness, Smart Home, Ubi-Home, Smartphone/Mobile Services, Ubiquitous and Pervasive Computing

3. Network field:
Wireless Sensor Networks, Mobile Ad hoc Networks, Network Management, Internet Technology, High Speed Networks.

** Research in this field will be conducted jointly with the Network Lab (Supervisor: Prof. Kilhung Lee).
The students will join UCS Research Lab and work on lab projects and FTRA-related activities.

We are looking for candidates who meet the following requirements:
- TOPIK level 4 or 5
- Bachelor's degree in Computer Science and Engineering (CSE), extended CSE, or Applied Mathematics
- Outstanding programming skills: C, C++, Java, etc.
- Good communication skills in English, both spoken and written
(**candidates from non-English-speaking countries should be prepared to prove their English language skills**)

Support Program for MS Students in UCS Lab@SeoulTech:
1. Scholarship & Dormitory Support
1) First semester: both 1,500,000 KRW and dormitory (depending on review results).
2) From the second semester:
    Both will be fully supported according to academic achievement (score: 4.3/4.5).
    Only the scholarship (1,500,000 KRW) will be supported according to academic achievement (score: 3.7/4.5).
2. Extra Support: depending on contributions to projects.
3. Incentives: depending on publication performance.

For consideration, applications should be received by **Sept. 20, 2011**.
Interested candidates should submit the following to the UCS Lab secretary (Mr. JS Park) by email:
  - Motivation letter
  - Detailed curriculum vitae with photo
  - Future research plan
  - Proof of English language skills (if applicable)

Wednesday, August 17, 2011

Orasis, a brain-inspired image-processing app, is now available on iTunes!

Its main objective is to make your photos look closer to what your eyes perceived at the exact moment the photo was taken. Orasis makes your photos look more realistic, extracting visual information from dark or bright areas that was not visible in the original image.


Orasis is based on PhD research carried out at the Electronics Lab of the Democritus University of Thrace. It incorporates neural characteristics of the Human Visual System, which ensures that the enhanced image will be much closer to what we perceive with our eyes. More information on Orasis, as well as an extended image database, can be found on the project website.

Thursday, August 11, 2011

BRISK: Binary Robust Invariant Scalable Keypoints

Stefan Leutenegger, Margarita Chli and Roland Siegwart, "BRISK: Binary Robust Invariant Scalable Keypoints", Proceedings of the IEEE International Conference on Computer Vision (ICCV) 2011.


Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms, which exhibit great performance under a variety of image transformations, with SURF in particular considered the most computationally efficient amongst the high-performance methods to date.

In this paper we propose BRISK, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK’s adaptive, high-quality performance on par with state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in some cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.
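Bit-string descriptors like BRISK's (512 bits, i.e. 64 bytes per keypoint) are compared with the Hamming distance, which is what makes matching so much cheaper than the floating-point distances used for SIFT/SURF. A minimal NumPy sketch of brute-force Hamming matching; the random 64-byte arrays stand in for real BRISK output:

```python
import numpy as np

def hamming(d1: np.ndarray, d2: np.ndarray) -> int:
    """Hamming distance between two packed binary descriptors
    (uint8 arrays; 64 bytes = 512 bits for a BRISK descriptor)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match(query: np.ndarray, database: np.ndarray):
    """Brute-force nearest neighbour of `query` in `database`."""
    dists = [hamming(query, d) for d in database]
    best = int(np.argmin(dists))
    return best, dists[best]

rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(5, 64), dtype=np.uint8)  # 5 fake descriptors
noisy = db[2].copy()
noisy[0] ^= 0b00000001  # corrupt a single bit of descriptor 2
idx, dist = match(noisy, db)
print(idx, dist)  # 2 1
```

On real hardware the XOR-and-popcount at the heart of the Hamming distance maps to a handful of machine instructions, which is where the order-of-magnitude speedup over floating-point descriptor matching comes from.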

Video Presentation:


Recognition Using Visual Phrases

CVPR 2011 Award Paper

Ali Farhadi (University of Illinois at Urbana-Champaign); Mohammad Amin Sadeghi (University of Illinois at Urbana-Champaign)



In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results.


Tuesday, August 9, 2011

Lire Demo 0.9 alpha 2 just released

Article from


Finally I found some time to go through Lire and fix several of the — for me — most annoying bugs. While this is still work in progress, I have uploaded a preview of the demo. New features are:

  • Auto Color Correlogram and Color Histogram features improved
  • Re-ranking based on different features supported
  • Enhanced results view
  • Much faster indexing (parallel, use -server switch for your JVM)
  • Much faster search (re-write of the search code in Lire)
  • New developer menu for faster switching of search features
  • Re-ranking of results based on latent semantic analysis

You can find the updated Lire Demo along with a Windows launcher here; Mac and Linux users please run it using “java -jar …” or double-click (if your window manager supports actions like that :)

The source is — of course — GPL and available in the SVN.

Congratulations Mathias! Γελαστούλης (“the smiley one”)

Thursday, August 4, 2011

CFP: EURASIP Journal on Advances in Signal Processing Special Issue On Social Media Processing and Semantic Modeling

Automatic image/video annotation is still imperfect and error-prone due to the semantic gap. As a result, collaborative image/video tagging (on sites such as Flickr and YouTube) has become a very popular way for people to share, tag and search images/videos. With the exponential growth of such social media, it has become increasingly important to have mechanisms that can support more effective searching of large-scale collections of social media. With the fast development of hardware technologies, users are looking for ever more sophisticated functions for social media processing and semantic modeling. To support these advanced functions, more sophisticated algorithms should be developed for understanding, processing and modeling the underlying semantics of social media, which may contain more than one medium of signal simultaneously. As a result, there is an urgent demand for more sophisticated algorithms for semantic modeling and processing of social media. With recent progress on computation infrastructure such as GFS, MapReduce, cloud computing and CUDA, it is now possible to develop more effective techniques for social media processing and semantic modeling. Topics for this special issue include, but are not limited to:

• Advanced semantic models for social media processing, especially for multimedia social media data

• Social media computation and applications on advanced semantic models, such as clustering, reasoning and retrieval

• Novel semantic models for non-traditional signals, such as touch models for haptic devices, gesture models for touch screens, and 3D object models

• Automatic extraction algorithms for semantic models, either model driven or data driven

• Computation algorithms and infrastructures for the problems of model extraction and applications

Before submission, authors should carefully read the journal's Author Guidelines. Prospective authors should submit an electronic copy of their complete manuscript through the journal's Manuscript Tracking System according to the following timetable:

Manuscript Due October 15, 2011
First Round of Reviews January 15, 2012
Publication Date April 15, 2012

Lead Guest Editor:

Hangzai Luo, Software Engineering Institute, East China Normal University, Shanghai, CHINA;

Guest Editors:

Xiaofei He, State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, Hangzhou, CHINA;

Shin'ichi Satoh, National Institute of Informatics, Tokyo, Japan;

Jianping Fan, Department of Computer Science, UNC-Charlotte, Charlotte, NC, USA;


Call for Participation: 7th International Summer School on Pattern Recognition (ISSPR)

Registration deadline: 10th August, 2011

It is a pleasure to announce the Call for Participation for the 7th International Summer School on Pattern Recognition. I write to invite you, your colleagues and students within your department to attend this event. In 2010, the 6th ISSPR School, held at Plymouth, was a major success with over 90 participants. The major focus of the 2011 summer school includes:

- A broad coverage of pattern recognition areas which will be taught in a tutorial style over five days by leading experts. The areas covered include statistical pattern recognition, Bayesian techniques, non-parametric and neural network approaches including Kernel methods, String matching, Evolutionary computation, Classifiers, Decision trees, Feature selection and Dimensionality reduction, Clustering, Reinforcement learning, and Markov models. For more details visit the event website.

- A number of prizes sponsored by Microsoft and Springer for best research demonstrated by participants and judged by a panel of experts. The prizes will be presented to the winners by Prof. Chris Bishop from Microsoft Research.

- Providing participants with knowledge and recommendations on how to develop and use pattern recognition tools for a broad range of applications.

Three corporate scholarships towards a discounted registration fee are still available for students until 10th August 2011, so this is an excellent opportunity for participants to register at an affordable cost. The fee includes registration, accommodation and meals at the event. The registration process is online through the school website, which has further details on registration fees. Please note that the number of participants registering each year is high and seats are limited, so early registration is highly recommended.

Should you need any help, please do not hesitate to contact the school secretariat.

Monday, August 1, 2011

Deb Roy: The birth of a word

MIT researcher Deb Roy wanted to understand how his infant son learned language -- so he wired up his house with video cameras to catch every moment (with exceptions) of his son's life, then parsed 90,000 hours of home video to watch "gaaaa" slowly turn into "water." Astonishing, data-rich research with deep implications for how we learn.

Deb Roy studies how children learn language, and designs machines that learn to communicate in human-like ways. On sabbatical from the MIT Media Lab, he's working with the AI company Bluefin Labs.