
Tuesday, September 23, 2008

Next version of Windows Live Photo Gallery

In just a short while, brand-new beta versions of Windows Live Photo Gallery and the all-new Windows Live Movie Maker will be available for free at http://download.live.com! In addition to Photo Gallery and Movie Maker, this beta release includes significant updates to all of the Windows Live software applications for your Windows PC, including Messenger, Mail, Writer, Toolbar and Family Safety. You’ll find sweet new features across the products. If you want to hear more about what we’re delivering across Windows Live, check out this blog post from Chris Jones.
Here are some of the cool things you can do with the new beta version of Photo Gallery:

People tagging: Photo Gallery automatically finds the people in your photos so you can add a name to the face. Then later, all you need to do is type in someone’s name to see all the photos of that person.
Photos from friends: See new photos from the people you know as soon as they post them online. Their new shots come to you automatically in Photo Gallery.
Photo editing: Adjust exposure, color, or detail by hand, or use auto adjust—either way, your photos look great. You can even create amazing panoramas—Photo Gallery automatically stitches them together for you.
More support for 3rd parties! We’ve heard tons of requests for more publishing options, so this release includes a new Publishing API that enables the community to build plug-ins for virtually any sharing service. A new resource for developers will be at http://dev.live.com/photogallery.


Monday, September 22, 2008

Stanford Engineering Everywhere

For the first time in its history, Stanford is offering some of its most popular engineering classes free of charge to students and educators around the world. Stanford Engineering Everywhere (SEE) expands the Stanford experience to students and educators online; a computer and an Internet connection are all you need. View lecture videos, access reading lists and other course handouts, take quizzes and tests, and communicate with other SEE students, all at your convenience. This fall, SEE launches its programming by offering one of Stanford’s most popular sequences, the three-course Introduction to Computer Science taken by the majority of Stanford’s undergraduates, plus seven more advanced courses in artificial intelligence and electrical engineering.

Stanford Engineering Everywhere offers:

  • Anytime and anywhere access to complete lecture videos via streaming or downloaded media.
  • Full course materials including syllabi, handouts, homework, and exams.
  • Online social networking with fellow SEE students.
  • Support for PCs, Macs and mobile computing devices.

Stanford encourages fellow educators to use Stanford Engineering course materials in their own classrooms. A Creative Commons license allows for free and open use, reuse, adaptation and redistribution of Stanford Engineering Everywhere material.
http://see.stanford.edu/default.aspx

Wednesday, September 17, 2008

The First International Conference on Advances in Multimedia

The rapid growth of information on the Web, together with its ubiquity and pervasiveness, has made the WWW the largest information repository. While this volume of information can be useful, it creates new challenges for information retrieval, identification, understanding, and selection. Investigating the new platforms, tools, and principles offered by the Semantic Web opens another door: it enables humans, programs, or agents to understand what records are about, and allows integration between domain-dependent and media-dependent knowledge. Multimedia information has always been part of the Semantic Web paradigm, but integrating the two still requires substantial effort.
New technological achievements in speed and quality are expanding and creating a vast variety of multimedia services, from voice, email, short messages, Internet access, and m-commerce to mobile video conferencing and streaming video and audio.
Large and specialized databases, together with these technological achievements, have brought true mobile multimedia experiences to mobile customers. Multimedia implies the adoption of new technologies and poses challenges to operators and infrastructure builders in terms of ensuring fast and reliable services and improving the quality of web information retrieval.
Huge amounts of multimedia data are increasingly available. Knowledge of spatial and/or temporal phenomena becomes critical for many applications, which require techniques for the processing, analysis, search, mining, and management of multimedia data.
MMEDIA 2009 aims to provide an international forum for researchers, students, and professionals to present recent research results on advances in multimedia, mobile and ubiquitous multimedia, and to bring together experts from both academia and industry for the exchange of ideas and discussion of future challenges in multimedia fundamentals, mobile and ubiquitous multimedia, multimedia ontology, multimedia user-centered perception, multimedia services and applications, and mobile multimedia.


Thursday, September 4, 2008

Mpeg-7 Descriptors Are Available For Download

Download the latest version of the MPEG-7 descriptors for C#

The implementation of these descriptors is based on the LIRE image retrieval system; the source code is a modification of the implementation found in LIRE. The original version of the descriptors' implementation is written in Java and is available online as open source under the General Public License (GPL).
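As a flavor of what such content-based descriptors do, here is a toy global color histogram compared with an L1 distance. This is only an illustration of the general idea; it is not one of the MPEG-7 descriptors and not code from the LIRE port, and all names here are invented for the sketch:

```python
def color_histogram(pixels, bins_per_channel=4):
    """Quantize each RGB component into a few bins and count pixels,
    returning a normalized 64-bin color histogram (toy descriptor)."""
    step = 256 // bins_per_channel
    hist = [0] * (bins_per_channel ** 3)
    for r, g, b in pixels:
        # Map the quantized (r, g, b) triple to a single bin index.
        idx = ((r // step) * bins_per_channel + g // step) * bins_per_channel + b // step
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]


def l1_distance(h1, h2):
    """Smaller distance means more similar color distributions."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

Real descriptors such as MPEG-7's Scalable Color or Edge Histogram are considerably more elaborate, but retrieval systems in the LIRE family compare them with similarly simple distance functions.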

Introducing Picasa 3.0 (and big changes for Picasa Web Albums)

A little over two years ago, we launched Picasa Web Albums to make publishing photos online easy. Now Picasa Web Albums hosts billions of online photos from around the globe, with users adding millions of new snapshots every day. Each of these photos records a different moment, or a different perspective, but one thing they all have in common is that in each case, the person behind the camera wanted to share their experience with a friend, their extended family, or maybe the world.

Today, we're rolling out major technology upgrades to both Picasa and Picasa Web Albums. As you might have guessed, these are largely focused on how we share and enjoy our photos with others.

For starters, there's a brand-new feature called "name tags" in Picasa Web Albums that helps you quickly label all the people in your photos, so you can organize and share your photos based on who's in the picture. Name tags uses advanced technology to automatically group similar faces together. That way, you can quickly label all the people you care about in your photo collection. Once you've labeled your photos, it's then a snap to do things like create a slideshow with every picture of you and your best friend, or easily share party photos with everybody who appears in that photo album.

IADIS International Conference Informatics 2009

This conference event shall host fundamental topics in Informatics. Its scope is not limited to fundamental theory: it also covers the impact of Informatics on society and human life, and complements theoretical foundations with technical considerations and practice. Today, most research focuses on technical aspects of Informatics, but this event shall also serve as a platform for an exchange of ideas that respects that all technology primarily has to serve humans in their daily lives and ease everyday demands and problems. In this context, non-technical people, people with disabilities, and the developing world have to be addressed by appropriate solutions and adequate engineering. The human factor also plays an important role in the technical realization of the principles of Informatics, when Informatics is applied in manufacturing, engineering, and administration processes.
 

Main topics have been identified (see below). However, innovative contributions that don’t fit into these areas will also be considered since they might be of benefit to conference attendees.

 

Acceptance will be based primarily on the originality, significance and quality of the contribution.


Call For Papers

Special issue on "Data Semantics for Multimedia Systems" of MTAP

Multimedia Tools and Applications (Springer) Special Issue on Data Semantics for Multimedia Systems Manuscript due: December 15, 2008


In the last decade, substantial progress has been made in content-based analysis and multimedia streaming to facilitate the development of large-scale multimedia information systems. Together with recent progress on the Semantic Web, it is now possible to build a new generation of multimedia applications that enable large-scale semantic representation, analysis, and delivery of multimedia data from heterogeneous data sources. However, there is still a long way to go before mature multimedia database systems are capable of processing semantics-rich, large-volume multimedia data. It can be even more challenging if such systems are under stringent functional and non-functional (e.g., QoS) requirements.

The goal of this special issue is to bring the semantic web community and multimedia processing & computing community together and provide a forum for multidisciplinary research opportunities, with a focus on how to apply the semantic technologies to the acquisition, generation, transmission, storage, processing, and retrieval of multimedia information. Discussions on future challenges in multimedia information manipulation, as well as practical solutions for the design and implementation of multimedia database software systems are also encouraged.

 Topics of interest include but are not limited to practical areas that span both semantic technologies and multimedia processing & computing:

  • Automatic generation of multimedia presentations
  • Semantic multimedia metadata extraction
  • Annotation tools and methods for multimedia semantics
  • Media ontology generation/learning/reasoning
  • Content-based multimedia analysis
  • Multimedia indexing, searching, and retrieving
  • Multimedia streaming
  • Semantic-based QoS control and scheduling
  • Semantic-based Internet data streaming and delivery
  • Multimedia standards (e.g., MPEG-7 and XMP) and the Semantic Web
  • Semantics-enabled multimedia applications (including annotation, browsing)
  • Semantics-enabled networking and middleware for multimedia applications

Emgu.CV-1.3.0.0 is available

Emgu CV is a cross-platform .NET wrapper for the Intel OpenCV image-processing library, allowing OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++, and IronPython. The wrapper can be compiled in Mono and run on Linux, Solaris, and Mac OS X.

Change Log

  • Added Bgra color type
  • In Image class, added SByte for depth type.
  • Improved ImageBox functionality
  • Improved Histogram class
  • It is now possible to create Image<,> object from any type of Bitmap
  • Support for reading image from ".gif" and ".exig" file
  • Added MotionHistory class and Motion Detection Example
  • Added EigenObjectRecognizer class for PCA-based object recognition
  • Added PlannarSubdivision class, which can be used for Delaunay triangulation and Voronoi diagrams. Added a PlannarSubdivision example.
  • Fixed a bug in the MCvConnectedComponent structure
  • Bug fixes in the CvInvoke.cvCreateVideoWriter and CvInvoke.cvFloodFill function calls
  • Many more functions added to CvInvoke class
  • Many more structures wrapped in Emgu CV
  • The released assemblies are now strong signed.
  • Starting from this version of Emgu CV, ImageBox uses ZedGraph to display color histograms. If ImageBox is never used in your project, you can remove it from the dependencies.

Wednesday, September 3, 2008

Octree Color Quantization

In 1988, M. Gervautz and W. Purgathofer of Austria's Technische Universität Wien published an article entitled "A Simple Method for Color Quantization: Octree Quantization." They proposed an elegant new method for quantizing color bitmap images by employing octrees: tree-like data structures whose nodes contain pointers to up to eight subnodes. Properly implemented, octree color quantization is at least as fast as the median-cut method and more memory-efficient.

The basic idea in octree color quantization is to graph an image's RGB color values in a hierarchical octree. The octree can go up to nine levels deep (a root level plus one level for each bit in an 8-bit red, green, or blue value), but it's typically restricted to fewer levels to conserve memory. Lower levels correspond to less significant bits in RGB color values, so allowing the octree to grow deeper than five or six levels has little or no effect on the output. Leaf nodes (nodes with no children) store pixel counts and running totals of the red, green, and blue color components of the pixels encoded there, while intermediate nodes form paths from the topmost level in the octree to the leaves. This is an efficient way to count colors and the number of occurrences of each color because no memory is allocated for colors that don't appear in the image. If the number of leaf nodes happens to be equal to or less than the number of palette colors you want, you can fill a palette simply by traversing the octree and copying RGB values from its leaves.

The beauty of the octree method is what happens when the number of leaf nodes n exceeds the desired number of palette colors k. Each time adding a color to the octree creates a new leaf, n is compared to k. If n is greater than k, the tree is reduced by merging one or more leaf nodes into the parent. After the operation is complete, the parent, which was an intermediate node, is a leaf node that stores the combined color information of all its former children.

Because the octree is trimmed continually to keep the leaf count under k, you end up with an octree containing k or fewer leaves whose RGB values make ideal palette colors. No matter how many colors the image contains, you can walk the octree and pick leaves off it to formulate a palette. Better yet, the octree never requires memory for more than k+1 leaf nodes plus some number of intermediate nodes.
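The scheme described above can be sketched in a few Python functions. This is a simplified illustration under assumed names, not the article's implementation; in particular, the policy for choosing *which* intermediate node to reduce when the leaf count exceeds k is omitted:

```python
class Node:
    """Octree node. A node with no children acts as a leaf and
    accumulates a pixel count plus running RGB totals."""
    def __init__(self):
        self.children = [None] * 8
        self.count = 0
        self.r = self.g = self.b = 0

    def is_leaf(self):
        return all(c is None for c in self.children)


def child_index(r, g, b, level):
    """One bit of each component forms a 3-bit child index;
    level 0 uses bit 7, level 1 uses bit 6, and so on."""
    shift = 7 - level
    return (((r >> shift) & 1) << 2) | (((g >> shift) & 1) << 1) | ((b >> shift) & 1)


MAX_LEVELS = 5  # root plus four child levels: the top 4 bits of each component


def insert(root, r, g, b):
    """Walk from the root to a leaf, creating nodes as needed,
    then record the pixel in the leaf's running totals."""
    node = root
    for level in range(MAX_LEVELS - 1):
        i = child_index(r, g, b, level)
        if node.children[i] is None:
            node.children[i] = Node()
        node = node.children[i]
    node.count += 1
    node.r += r
    node.g += g
    node.b += b


def reduce_node(node):
    """Merge all of a node's children into it: the intermediate node
    becomes a leaf holding the combined color statistics. A full
    quantizer calls this whenever the leaf count exceeds k."""
    for i, child in enumerate(node.children):
        if child is not None:
            node.count += child.count
            node.r += child.r
            node.g += child.g
            node.b += child.b
            node.children[i] = None


def palette(root):
    """Traverse the octree and average each leaf's totals into
    one palette color."""
    colors = []

    def walk(node):
        if node.is_leaf():
            if node.count:
                colors.append((node.r // node.count,
                               node.g // node.count,
                               node.b // node.count))
        else:
            for child in node.children:
                if child is not None:
                    walk(child)

    walk(root)
    return colors
```

Practical implementations additionally keep per-level lists of reducible nodes, typically merging the deepest (least significant) nodes first so that visually similar colors are combined before dissimilar ones.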

There are two parts of an octree that I want to study: the parent-child relationship between nodes and the significance of the RGB data in each leaf. Figure 1 shows the parent-child relationship for each node. At a given level in the tree, a value from zero to 7, derived from the RGB color value, identifies a child node. At the uppermost (root) level, bit 7 of the red value is combined with bit 7 of the green value and bit 7 of the blue value to form a 3-bit index. Bit 7 from the red value becomes bit 2 in the index, bit 7 from the green value becomes bit 1 in the index, and bit 7 from the blue value becomes bit zero in the index. At the next level, bit 6 is used instead of bit 7, and the bit number keeps decreasing as the level number increases. For red, green, and blue color values equal to 109 (binary 01101101), 204 (11001100), and 170 (10101010), the index of the first child node is 3 (011), the index of the second child node is 6 (110), and so on. This mechanism places the more significant bits of the RGB values at the top of the tree. In this example, the octree's depth is restricted to five levels, which allows you to factor in up to 4 bits of each 8-bit color component. The remaining bits are effectively averaged together.
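The bit-slicing rule in the example above can be written out directly (a standalone sketch; `child_index` is an assumed name):

```python
def child_index(r, g, b, level):
    """Combine bit (7 - level) of each color component into a 3-bit
    index: the red bit becomes bit 2, the green bit becomes bit 1,
    and the blue bit becomes bit 0."""
    shift = 7 - level
    return (((r >> shift) & 1) << 2) | (((g >> shift) & 1) << 1) | ((b >> shift) & 1)


# The values from the text: R = 109 (01101101), G = 204 (11001100),
# B = 170 (10101010). Walking the top four levels gives the child indices:
path = [child_index(109, 204, 170, level) for level in range(4)]
print(path)  # [3, 6, 5, 0] -- 3 is binary 011, 6 is binary 110, ...
```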
