Jun 25, 2007

From Microsoft Labs ... Seadragon and Photosynth

Webstock Mini-June
We were given a brief glimpse of this technology at the Webstock Mini-June conference held here in Wellington last week. The presenter, Leigh Blackall, showed us some features of the Second Life virtual world, and continuing on that theme he also gave us a brief glimpse of Photosynth. Curiosity drove me to the source.

TED
With the competition in internet innovation hotting up these days, a lot of brilliant ideas are finding a voice. An exciting development to come out of Microsoft Labs was demonstrated at the TED (Technology, Entertainment, Design) conference held every year in California, United States.

Seadragon
This was the first of the two technologies presented. Using the new Windows Media Photo format, we were shown collages of photographs that can be zoomed endlessly down to the tiniest level, smoothly and effortlessly. At one stage the software architect, Blaise Aguera y Arcas, mentioned that they were using 300-megapixel images. To put that in perspective, most images we get from digital cameras are only around 3 to 5 megapixels, and those can already be printed in large poster format without distortion. I would normally assume that software capable of manipulating objects of this size would have to be very sophisticated and complex, requiring machines at supercomputer level with giant memory stores connected to a very high-bandwidth network. Yet this project, which is under incubation at Microsoft Live Labs, promises the following (a sketch of how such claims can hold appears after the list):

  1. Speed of navigation is independent of the size or number of objects.

  2. Performance depends only on the ratio of bandwidth to pixels on the screen.

  3. Transitions are smooth as butter.

  4. Scaling is near perfect and rapid for screens of any resolution.
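
How can performance be independent of image size? The trick, as I understand it, is that the image is stored as a pyramid of pre-rendered tiles, and the viewer only ever fetches the few tiles visible on screen at roughly screen resolution. Here is a minimal Python sketch of that idea (the tile size, function names, and level-selection rule are my own assumptions, not Seadragon's actual code):

```python
import math

TILE_SIZE = 256  # assumed tile size in pixels

def level_for_view(image_width, viewport_width, view_fraction):
    """Pick the pyramid level whose resolution just covers the screen.

    view_fraction is the fraction of the full image width currently
    visible. Level 0 is full resolution; each level halves the image.
    """
    source_pixels = image_width * view_fraction  # source px across viewport
    if source_pixels <= viewport_width:
        return 0  # zoomed in past 1:1 - use the full-resolution level
    return math.floor(math.log2(source_pixels / viewport_width))

def visible_tiles(level, view_box):
    """Yield (level, tx, ty) for the tiles intersecting the view box.

    view_box = (x0, y0, x1, y1) in full-image pixel coordinates.
    """
    scale = 2 ** level
    x0, y0, x1, y1 = (v / scale for v in view_box)
    for ty in range(int(y0 // TILE_SIZE), math.ceil(y1 / TILE_SIZE)):
        for tx in range(int(x0 // TILE_SIZE), math.ceil(x1 / TILE_SIZE)):
            yield level, tx, ty

# Whether the image is 3 megapixels or 300, the tiles fetched per frame
# are bounded by (viewport area / tile area): bandwidth scales with
# screen pixels, not image pixels.
```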

Seadragon Screenshot

PhotoSynth
This other project from Microsoft Live Labs lets you see photos in the context of a 3D virtual-world representation of the place where each photograph was taken. The model seems to map the position of the camera, the position of the photographed subject, and the photograph itself, and then integrates these with countless other photographs of the same area. What you effectively build up is a 3D world showing the view from any point, in any direction, using the available photographic detail of the area.
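
The geometry involved is the classic pinhole-camera relationship between a camera's pose and where a point in the world lands in a photograph; Photosynth, in effect, solves the inverse problem across many photos at once. A minimal sketch, assuming a simple pinhole model (the class and field names are mine, not Photosynth's):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPose:
    position: np.ndarray   # camera centre in world coordinates, shape (3,)
    rotation: np.ndarray   # world-to-camera rotation matrix, shape (3, 3)
    focal_length: float    # in pixels

def project(pose, world_point):
    """Pinhole projection: where a 3D world point lands in the photo,
    ignoring the principal-point offset and lens distortion."""
    p = pose.rotation @ (world_point - pose.position)
    return (pose.focal_length * p[0] / p[2],
            pose.focal_length * p[1] / p[2])

# Photosynth works the other way around: given matched features across
# many photos, recover every CameraPose and 3D point simultaneously
# (a technique known as structure from motion).
```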

Some of the latest digital cameras come with built-in Global Positioning System (GPS) sensors and could, theoretically, encode the photograph itself with the metadata of where it was taken. To get effective 3D coordinates, the metadata should also include the photographer's height above sea level. However, Photosynth works from the other end: it takes your photographs, analyses them, and, working from familiar features in each photo, such as the architectural lines of a known building, determines the position and angle of the camera. Thus the onus is lifted from the user to provide location metadata for the photo.
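
For cameras that do embed location data, it lives in the photo's EXIF header as a GPS block, with tags such as GPSLatitude, GPSLongitude and GPSAltitude (the height above sea level mentioned above). A quick sketch of reading that block with the Python Pillow imaging library (the file name is a placeholder):

```python
from PIL import Image               # Pillow imaging library
from PIL.ExifTags import TAGS, GPSTAGS

def gps_metadata(path):
    """Return the photo's EXIF GPS block as a name -> value dict."""
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            # Translate numeric sub-tag ids into names such as
            # GPSLatitude, GPSLongitude and GPSAltitude.
            return {GPSTAGS.get(k, k): v for k, v in value.items()}
    return {}

# print(gps_metadata("holiday_photo.jpg"))  # placeholder file name
```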

There are further benefits envisaged for this technology, such as Smart Photos - the idea of taking a photo of some item with your cellphone camera and sending it to a service that determines what is in the picture and sends you back information about the subject. Great for tourists!
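
A client for such a service could be as simple as an HTTP upload. The endpoint and response format in this Python sketch are purely hypothetical, since no such public service is described in the talk:

```python
import requests  # third-party HTTP client library

# Hypothetical endpoint - invented for illustration only.
RECOGNITION_URL = "https://example.com/api/recognize"

def identify(photo_path):
    """Upload a snapshot and return the service's description of it."""
    with open(photo_path, "rb") as f:
        response = requests.post(RECOGNITION_URL, files={"image": f})
    response.raise_for_status()
    return response.json().get("description", "unknown subject")

# print(identify("statue.jpg"))  # placeholder file name
```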

You can install and try out the software, but note the system requirements before you begin.

One particular word used by the presenter is going to stick around in my mind - metaverse. It seems to refer to a universe built by the amalgamation of all kinds of related objects, facilitated by the metadata of those objects.

The TED Presentation

2 comments:

monty said...

Yeah, M$ will have to ship some integrated tools and applications for their operating systems. Calculator, Paint and Notepad for the Ultimate Edition are just not enough.

They have to rival OS X's Core Image platform. Many developers and university students have jumped into Core Image because it's so easy to develop and work with, and since it's integrated with OS X it's faster and better. Here is info on the Core Image platform directly from Apple:

"Core Image provides a plug-in style architecture for accessing filters, transitions and effects packages called Image Units. Image Units provide centralized management for image processing plug-ins that can be shared across all host applications. Developers can spend more time creating new Image Units instead of porting code to work across heterogeneous applications. Image Units achieve unprecedented reliability and stability as well as performance.

Developers can easily create their own Image Units by describing filters and effects in simple expressions compiled at runtime. Or developers can freely access one hundred Image Units bundled in Tiger, including blurs, color adjustments, distortions, edge sharpeners and transitions. (See below.)"

For more, check it out here:
http://www.apple.com/macosx/features/coreimage/

Aditya Prateek Anand said...

This isn't about operating systems and graphics processing units. It's about leveraging internet platforms to deliver visual applications. There's also potential to integrate social networking and Web 2.0 concepts into virtual worlds. Apple is not moving into that area - it will probably leave it to Google, Sun and Adobe to compete with Microsoft on that front. Microsoft is aware that the future of the desktop could be the internet, and that's where it aims to compete. Apple does not have any competing platforms in that direction ... yet.