Real-world tagging has been around for a while in the computing space, used for planning and remote viewing, but Tonchidot showed how real-world tagging might look in a LIVE environment via a tag-augmented iPhone display. In their demo, you look down a street and the restaurants are listed along the sides; walk in or zoom in, and you can see a menu. Zoom in on a grocery shelf to learn about the products or find where you can buy them cheaper, or walk through a mall and view location-specific notes from your friends. You can even get warned with popups about hazards such as escalators (important if you are too busy looking at the camera phone to watch where you are going).
I suppose the best way, and maybe the only way, of showing what Tonchidot is all about is to share the YouTube video with you. If a picture is worth a thousand words, then in this case, a video expresses what words could never adequately describe.
The only thing we don't know about World Camera is whether they can really build it. As a technology demonstration it is fantastic, but the language barrier with the presenter at this event prevented us from learning whether this technology is really real, or just virtually virtual. Ahh, but isn't that the way the future is...