The best talk I saw at the Web 2.0 conference in Berlin this year was from Blaise Aguera y Arcas, Software Architect at Microsoft Live Labs. He showcased the latest updates to Photosynth, a new technology from Microsoft Live Labs that stitches together photos from any number of sources to create the illusion of a 3-D model of a building or landmark. If you’ve not seen this yet, do so. Here’s a brief video of Blaise showcasing Photosynth at the TED conference.

Basically, the software recognizes unique points on photos of a stationary geo-location and is able to align them with other photos. If you get enough photos in a collection, you effectively have a 3-D version of the original location. Take Notre Dame in Paris: you can point Photosynth at a collection of photos on Flickr, for instance, and Photosynth compiles a 3-D rendering of the building. Sure, there are some ugly seams, but it’s a pretty amazing result nonetheless. With the ubiquity of digital cameras these days, we could potentially have every place on earth represented in 3-D on the web in the future.
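To make the "unique points" idea concrete, here’s a toy sketch of feature matching in Python. This is not Microsoft’s actual algorithm — the descriptors, values, and the ratio-test threshold are all hypothetical — but it shows the core move: each distinctive point in one photo is matched to its nearest neighbor in another, and ambiguous matches are thrown away.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(photo_a, photo_b, ratio=0.6):
    """Match each descriptor in photo_a to its nearest neighbor in photo_b.
    A match is kept only if the best candidate is much closer than the
    second-best (a ratio test), which discards ambiguous points."""
    matches = []
    for i, fa in enumerate(photo_a):
        ranked = sorted(range(len(photo_b)), key=lambda j: dist(fa, photo_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(fa, photo_b[best]) < ratio * dist(fa, photo_b[second]):
            matches.append((i, best))
    return matches

# Two "photos" as lists of 2-D descriptors (made-up values for illustration)
a = [(1.0, 1.0), (5.0, 5.0), (9.0, 1.0)]
b = [(1.1, 0.9), (5.2, 4.8), (20.0, 20.0)]
print(match_features(a, b))  # the third point is ambiguous and gets rejected
```

With enough photos, chains of these pairwise matches are what let the software place every image in a common 3-D coordinate frame.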

The interesting thing would be to apply this principle to tagging. If you have a rich, complex folksonomy, would you be able to pick out unique descriptive points, and then “sew” the terms together to get a clearer semantic picture of the objects being described? I suppose that’s what things like Twine are trying to do, in a sense.
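One plausible way to "sew" tags together — purely my own sketch, with an invented mini-folksonomy — is to treat tags as related when they co-occur on the same objects, using something like Jaccard overlap as the stitching criterion:

```python
from collections import defaultdict

def jaccard(a, b):
    """Overlap between two sets of tagged objects, from 0 to 1."""
    return len(a & b) / len(a | b)

# Hypothetical folksonomy: each object mapped to the tags users applied
tagged = {
    "photo1": {"notredame", "paris", "cathedral"},
    "photo2": {"notre-dame", "paris", "gothic"},
    "photo3": {"cathedral", "gothic", "architecture"},
}

# Invert the index: tag -> set of objects carrying that tag
objects_by_tag = defaultdict(set)
for obj, tags in tagged.items():
    for t in tags:
        objects_by_tag[t].add(obj)

def related(tag, threshold=0.3):
    """Tags whose objects overlap enough with this tag's objects."""
    me = objects_by_tag[tag]
    return sorted(t for t, objs in objects_by_tag.items()
                  if t != tag and jaccard(me, objs) >= threshold)

print(related("paris"))
```

Notice that the overlap alone links “notredame” and “notre-dame” without anyone declaring them synonyms, much as Photosynth aligns photos nobody intended to fit together.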

Check out Blaise’s TED talk.

About Jim Kalbach

Head of Customer Experience at MURAL
