Very interesting demo (from the recent TED event). Essentially Photosynth is a new way of looking at photos and images on the computer. You can display a huge collection of photos, view particular images from different angles and zoom in to minute detail. So far so good. However, the software is also able to generate a collection on the basis of similarities between images; essentially, each image is related to the next one.
And this is where things get really interesting, because the software is able to work through the images and, when the same feature is found across multiple images, it creates a 3D representation by spatially relating the images to each other. It’s very hard to put into words, so I recommend viewing the video below: about 4 minutes in, the 3D aspect is demoed using images of Notre Dame Cathedral taken from Flickr (they simply typed ‘Notre Dame’ into Flickr). The presenter then ‘browses’ through the images.
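To make the feature-matching idea a little more concrete, here’s a minimal sketch (not Photosynth’s actual code) using OpenCV’s ORB detector to find features shared between two photos; the file names and parameters are placeholders. When the same feature turns up across many photos, those matches are what let structure-from-motion software place the photos in 3D space.

```python
# A minimal sketch of the core idea, not Photosynth's pipeline:
# detect local features in two photos and match them. Features that
# recur across many photos can be triangulated into a 3D point cloud.
import cv2

img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)

# ORB keypoints and descriptors for each photo.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate shared features between the two photos")
```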
The Flickr aspect is of course interesting since we are now talking of resources created through social networks, using photos taken with any kind of camera (cellphones through to SLRs). As the presenter states, we’re taking data from the collective memory of everyone in terms of what the world looks like and linking it all together. A model then emerges representing something greater than the sum of the parts (the photos), and it becomes more detailed as more photos are added. As a photo is added, it is tagged with metadata which allows the person who added it to use it as a portal into the rest of the associated photos.
So we’re talking about images being hyperlinked together on the basis of the image content. One possible future for the web? You enter a term in Google, it returns an image, and then you click on the image to travel through related images? Semantic network richness.
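One way to picture that hyperlinking is as a graph: each photo is a node, and two photos are linked whenever they share enough matched features. The sketch below is purely illustrative; the match counts are made up, and in practice they would come from a feature-matching step like the one above.

```python
# Hedged sketch of "hyperlinked images": link any two photos that share
# enough matched features, then browse by hopping between neighbours.
# The match counts are invented for illustration.
match_counts = {
    ("facade.jpg", "rose_window.jpg"): 180,
    ("facade.jpg", "towers.jpg"): 95,
    ("rose_window.jpg", "interior.jpg"): 12,
}

def build_photo_links(match_counts, min_shared=50):
    """Adjacency list linking photos that share at least min_shared features."""
    links = {}
    for (a, b), count in match_counts.items():
        if count >= min_shared:
            links.setdefault(a, set()).add(b)
            links.setdefault(b, set()).add(a)
    return links

links = build_photo_links(match_counts)
print(links.get("facade.jpg"))  # photos you could "click through" to from the facade
```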
I’ve looked at the actual demo software itself. There are a number of collections to view together with the 3D visualizations (here’s a link to it).
[youtube]s-DqZ8jAmv0[/youtube]
Thanks to Mike Mylles.