Thursday, November 17, 2011
A few months ago, we introduced a limited release of Google Scholar Citations, a simple way for authors to compute their citation metrics and track them over time. Today, we’re delighted to make this service available to everyone! Click here and follow the instructions to get started.
Here’s how it works. You quickly identify which articles are yours by selecting one or more statistically computed groups of articles. Then, we collect citations to your articles, graph them over time, and compute your citation metrics: the widely used h-index; the i10-index, which is simply the number of articles with at least ten citations; and, of course, the total number of citations to your articles. Each metric is computed over all citations and also over citations in articles published in the last five years.
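Both metrics are easy to compute from a list of per-article citation counts. Here is a minimal sketch (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of articles with at least ten citations."""
    return sum(1 for count in citations if count >= 10)

cites = [48, 33, 30, 12, 10, 9, 4, 2, 1, 0]
print(h_index(cites), i10_index(cites), sum(cites))  # 6 5 149
```

The five-year variants are the same computations restricted to citations appearing in articles published in the last five years.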
Your citation metrics will update automatically as we find new citations to your articles on the web. You can also set up automated updates for the list of your articles, or you can choose to review the suggested updates. And you can, of course, manually update your profile by adding missing articles, fixing bibliographic errors, and merging duplicate entries.
As one would expect, you can search for profiles of colleagues, co-authors, or other researchers using their name, affiliation, or areas of interest, e.g., researchers at US universities or researchers interested in genomics. You can add links to your co-authors, if they already have a profile, or you can invite them to create one.
You can also make your profile public, e.g., Alex Verstak, Anurag Acharya. If you choose to make your profile public, it can appear in Google Scholar search results when someone searches for your name, e.g., [alex verstak]. This will make it easier for your colleagues worldwide to follow your work.
We would like to thank the participants in the limited release of Scholar Citations for their detailed feedback. They were generous with their time and patient with an early version. Their feedback greatly helped us improve the service. The key challenge was to make profile maintenance as hands-free as possible for those of you who prefer the convenience of automated updates, while providing as much flexibility as possible for those who prefer to curate their profile themselves.
Here’s hoping that Google Scholar Citations will help researchers everywhere view and track the worldwide influence of their own and their colleagues’ work.
Monday, November 14, 2011
Read more at http://www.hughski.com/
The ColorHug is an open source display colorimeter. It allows you to calibrate your screen for accurate color matching.
The ColorHug is a small accessory that measures displayed colors very accurately. It is held on your display and plugged into a spare USB port on the computer for the duration of the calibration.
Have you ever taken a photo and wondered why it does not look the same on your screen as it did on the camera?
It's probably because the LCD display on your computer has never been calibrated. This means colors can look washed out, tinted toward certain shades, or shifted by a color cast.
About two years ago I began working on color management in Linux. It soon became apparent that there was no integrated color management system, and what support did exist was often disabled by default in many applications. I have worked hard ever since to make calibrating displays easy; my goal is to make color management accessible to end users. The existing hardware for color managing screens was bulky, slow and expensive. With a background in electronics, I thought I could create a device that was smaller, faster and cheaper.
Using the ColorHug, it takes about a minute to take several hundred measurements, from which the client software creates an ICC color profile. This profile can then be saved and used to make colors look correct on your monitor.
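To give a flavor of what the client software does with those measurements, here is a heavily simplified sketch (not the actual ColorHug client code; the gray-patch readings are hypothetical): it estimates the display's gamma from a few measured gray patches and builds a correction curve toward a target response.

```python
import math

# Hypothetical measurements: requested gray level (0-1) -> measured luminance (0-1).
measured = {0.25: 0.047, 0.5: 0.218, 0.75: 0.527, 1.0: 1.0}

# Least-squares fit in log space for the model: luminance ~= level ** gamma.
# Pure white (level 1.0) carries no gamma information, so it is skipped.
num = sum(math.log(lvl) * math.log(lum)
          for lvl, lum in measured.items() if lvl < 1.0)
den = sum(math.log(lvl) ** 2 for lvl in measured if lvl < 1.0)
gamma = num / den  # roughly 2.2 for these readings

# Build a tiny 1D correction curve (LUT) mapping the display to a target gamma.
target = 2.2
lut = [(i / 255) ** (target / gamma) for i in range(256)]
```

A real profiler measures hundreds of colored patches, not just grays, and encodes the full result (tone curves plus a colorimetric characterization) in an ICC profile file rather than a bare lookup table.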
Article from THE NEW YORK TIMES
FACIAL recognition technology is a staple of sci-fi thrillers like “Minority Report.”
But of bars in Chicago?
SceneTap, a new app for smart phones, uses cameras with facial detection software to scout bar scenes. Without identifying specific bar patrons, it posts information like the average age of a crowd and the ratio of men to women, helping bar-hoppers decide where to go. More than 50 bars in Chicago participate.
As SceneTap suggests, techniques like facial detection, which perceives human faces but does not identify specific individuals, and facial recognition, which does identify individuals, are poised to become the next big thing for personalized marketing and smart phones. That is great news for companies that want to tailor services to customers, and not so great news for people who cherish their privacy. The spread of such technology — essentially, the democratization of surveillance — may herald the end of anonymity.
And this technology is spreading. Immersive Labs, a company in Manhattan, has developed software for digital billboards using cameras to gauge the age range, sex and attention level of a passer-by. The smart signs, scheduled to roll out this month in Los Angeles, San Francisco and New York, deliver ads based on consumers’ demographics. In other words, the system is smart enough to display, say, a Gillette ad to a male passer-by rather than an ad for Tampax.
Those endeavors pale next to the photo-tagging suggestion tool introduced by Facebook this year. When a person uploads photos to the site, the “Tag Suggestions” feature uses facial recognition to identify that user’s friends in those photos and automatically suggests name tags for them. It’s a neat trick that frees people from the cumbersome task of repeatedly typing the same friends’ names into their photo albums.
“Millions of people are using it to add hundreds of millions of tags,” says Simon Axten, a Facebook spokesman. Other well-known programs like Picasa, the photo editing software from Google, and third-party apps like PhotoTagger, from face.com, work similarly.
But facial recognition is proliferating so quickly that some regulators in the United States and Europe are playing catch-up. On the one hand, they say, the technology has great business potential. On the other, because facial recognition works by analyzing and storing people’s unique facial measurements, it also entails serious privacy risks.
Using off-the-shelf facial recognition software, researchers at Carnegie Mellon University were recently able to identify about a third of college students who had volunteered to be photographed for a study — just by comparing photos of those anonymous students to images publicly available on Facebook. By using other public information, the researchers also identified the interests and predicted partial Social Security numbers of some students.
“It’s a future where anonymity can no longer be taken for granted — even when we are in a public space surrounded by strangers,” says Alessandro Acquisti, an associate professor of information technology and public policy at Carnegie Mellon who directed the studies. If his team could so easily “infer sensitive personal information,” he says, marketers could someday use more invasive techniques to identify random people on the street along with, say, their credit scores.
Today, facial detection software, which can perceive human faces but not identify specific people, seems benign.
Some video chat sites are using software from face.com, an Israeli company, to make sure that participants are displaying their faces, not other body parts, says Gil Hirsch, the chief executive of face.com. The software also has retail uses, like virtually trying on eyeglasses at eyebuydirect.com, and entertainment applications, like moustachify.me, a site that adds a handlebar mustache to a face in a photo.
But privacy advocates worry about more intrusive situations.
Now, for example, advertising billboards that use facial detection might detect a young adult male and show him an ad for, say, Axe deodorant. Companies that make such software, like Immersive Labs, say their systems store no images or data about passers-by nor do they analyze their emotions.
But what if the next generation of mall billboards could analyze skin quality and then publicly display an ad for acne cream, or detect sadness and serve up an ad for antidepressants?
Read more THE NEW YORK TIMES
Monday, November 7, 2011
Is the default ‘Photos’ app on the iPhone too limiting, too boring and not convenient enough for your needs? Do you want to use a novel and amazing 3D interface for browsing, searching, and presenting your photos with the iPhone?
Photo Ring turns your iPhone into a convenient 3D photo browser with a stunning interface that lets you keep track of hundreds of photos at a glance. Moreover, its color-sorting technology helps you find photos faster, and its 3D slideshow feature provides an automated presentation of your photos. Thanks to its innovative, natural 3D arrangement and powerful color-based organization, searching for photos on your iPhone and showing them to your friends becomes an exciting and fun task!
- Innovative and intuitive 3D browsing interface (zoomable 3D Ring)
- Interactive 3D slideshow (animated 3D Wall; with pause/fast-forward/reverse feature)
- Convenient interaction (e.g., kinetic ring rotation by wipe or tilt)
- Sorting of photos by recording time
- Sorting of photos by color
- Inspection of EXIF/TIFF metadata of photos
- Browsing of photos from different folders/events
- Fullscreen photo mode with convenient switching function
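Color sorting of the kind listed above can be approximated by ordering photos on the hue of a representative color. A rough sketch (the file names and dominant colors below are invented; Photo Ring's actual algorithm is not public):

```python
import colorsys

# Hypothetical photos with a precomputed "dominant color" as an RGB tuple (0-255).
photos = {
    "sunset.jpg": (230, 80, 30),   # orange-red
    "forest.jpg": (40, 150, 60),   # green
    "ocean.jpg":  (20, 90, 200),   # blue
    "rose.jpg":   (200, 30, 90),   # pink-red
}

def hue(rgb):
    """Hue in [0, 1) of an RGB color, for use as a sort key."""
    r, g, b = (v / 255 for v in rgb)
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h

ordered = sorted(photos, key=lambda name: hue(photos[name]))
print(ordered)  # ['sunset.jpg', 'forest.jpg', 'ocean.jpg', 'rose.jpg']
```

In practice the dominant color would be extracted from each thumbnail (for example, by averaging or clustering its pixels) before sorting.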
Important note (Nov 5, 2011): if your iPhone/iPad doesn’t run iOS 5 already, please wait until our next update (v2.2, available in a few days), which will fix a serious bug that only occurs for older iOS versions and prevents the app from loading your photos.
Saturday, November 5, 2011
Madrid, Spain, 10-13 July 2012
Papers due: January 15, 2012
Notification: March 15, 2012
Camera ready: April 15, 2012
Conference dates: July 10-13, 2012
New multimedia standards (for example, MPEG-21) facilitate the seamless integration of multiple modalities into interoperable multimedia frameworks, transforming the way people work and interact with multimedia data. These key technologies and multimedia solutions interact and collaborate with each other in increasingly effective ways, contributing to the multimedia revolution and having a significant impact across a wide spectrum of consumer, business, healthcare, education, and governmental domains. Moreover, emerging mobile computing and ubiquitous networking technologies enable users to access broadband mobile applications and new services anytime and anywhere. Continuous effort has been dedicated to research and development in this wide area, including wireless mobile networks, ad-hoc and sensor networks, smart user devices and advanced sensor devices, mobile and ubiquitous computing platforms, and new applications and services such as location-based, context-aware, and social networking services.
This conference provides an opportunity for academic and industry professionals to discuss recent progress in the area of multimedia and ubiquitous environments, including models and systems, new directions, and novel applications associated with the utilization and acceptance of ubiquitous computing devices and systems. MUE 2012 is the next event in the highly successful International Conference on Multimedia and Ubiquitous Engineering series: MUE-11 (Loutraki, Greece, June 2011), MUE-10 (Cebu, Philippines, August 2010), MUE-09 (Qingdao, China, June 2009), MUE-08 (Busan, Korea, April 2008), and MUE-07 (Seoul, Korea, April 2007).
Topics of interest
* Ubiquitous Computing and Technology
* Context-Aware Ubiquitous Computing
* Parallel/Distributed/Grid Computing
* Novel Machine Architectures
* Semantic Web and Knowledge Grid
* Smart Home and Generic Interfaces
* AI and Soft Computing in Multimedia
* Computer Graphics and Simulation
* Multimedia Information Retrieval (images, videos, hypertexts, etc.)
* Internet Multimedia Mining
* Medical Image and Signal Processing
* Multimedia Indexing and Compression
* Virtual Reality and Game Technology
* Current Challenges in Multimedia
* Protocols for Ubiquitous Services
* Ubiquitous Database Methodologies
* Ubiquitous Application Interfaces
* IPv6 Foundations and Applications
* Smart Home Network Middleware
* Ubiquitous Sensor Networks / RFID
* U-Commerce and Other Applications
* Databases and Data Mining
* Multimedia RDBMS Platforms
* Multimedia in Telemedicine
* Multimedia Embedded Systems
* Multimedia Network Transmission/Streaming
* Entertainment Industry
* E-Commerce and E-Learning
* Novel Multimedia Applications
* Computer Graphics
* Security in Commerce and Industry
* Security in Ubiquitous Databases
* Key Management and Authentication
* Privacy in Ubiquitous Environment
* Sensor Networks and RFID Security
* Multimedia Information Security
* Forensics and Image Watermarking
* Cyber Security
* Intrusion detection
* Biometric Security
* New developments in handheld and mobile information appliances
* New paradigms: mobile cloud, personal networks, social and crowd computing, etc
* Operating systems aspects for personal mobile devices
* New technological advances for personal mobile devices
* End-user interface issues in the design and use of personal technologies
* Enabling technologies for personal multimedia and ubiquitous computing
* Multimedia applications and techniques for personal computing devices
* Usage of personal devices for on-line learning
Submissions should not exceed 8 pages in IEEE CS proceedings paper format, including tables and figures. All paper submissions must represent original and unpublished work. Submission of a paper should be regarded as an undertaking that, should the paper be accepted, at least one of the authors will register for the conference and present the work. Submissions will be conducted electronically on the conference website.