We are, any of us who post photos online anyway, laying the groundwork for a fully navigable, virtual copy of the entire physical world.
The combination of vast, ever-updating databases of digital photography on social networks and increasingly accurate digital maps has already resulted in a few prototypes of “Matrix”-like systems from large tech companies (more on these below).
But it’s critical to keep in mind that most photos these days are generated by amateur photographers (not pros or tech company employees or contractors), so the imagery is more raw, random and revealing. Social media has become the dominant repository for those photos, meaning most new photos are readily accessible to the public. Now all we need is for someone to come along and develop a program that scrapes publicly posted photographic imagery from social networks and compiles it (and eventually video) into a continuously updating, comprehensive, fully interactive and navigable reproduction of Earth.
Perhaps even more interesting, older imagery will serve as a virtual reality record of the world, allowing users to effectively time travel back to the end of the first decade of the 21st century, when cameraphones first became cheap and massively popular (and further back than that in some cases, once older imagery has been uploaded and appropriately tagged and categorized, which is easier than it sounds, actually).
There would be manifold uses for such a “Matrix” system, especially for journalists, historians, lawyers and law enforcement. If even a rudimentary system were in place, a user would be able to visit any specific place and time (within the bounds of the system’s chronology) and see what was occurring at that instant from multiple viewpoints. It may be scary to conceive, but all of us who take and upload photos are in some way contributing to the most massive surveillance apparatus ever known to humanity.
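What would that "visit any place and time" lookup take, mechanically? As a minimal, hedged sketch (all names here are hypothetical; it assumes each scraped photo arrives with a latitude, longitude and timestamp), the compiled imagery could be bucketed into a coarse spatial-temporal index like this:

```python
from collections import defaultdict
from datetime import datetime

class WorldIndex:
    """Toy index: photos bucketed by a coarse lat/lon grid cell and
    the hour they were taken, so a viewer can ask "show me this block
    at this moment" and get every overlapping viewpoint."""

    def __init__(self, cell_degrees=0.001):  # ~100 m cells at mid-latitudes
        self.cell_degrees = cell_degrees
        self.buckets = defaultdict(list)

    def _key(self, lat, lon, taken_at):
        cell = (round(lat / self.cell_degrees),
                round(lon / self.cell_degrees))
        hour = taken_at.strftime("%Y-%m-%d %H")
        return (cell, hour)

    def add_photo(self, photo_id, lat, lon, taken_at):
        self.buckets[self._key(lat, lon, taken_at)].append(photo_id)

    def photos_at(self, lat, lon, taken_at):
        """All known viewpoints of one place during one hour."""
        return list(self.buckets[self._key(lat, lon, taken_at)])

index = WorldIndex()
t = datetime(2012, 10, 29, 20)  # an evening during Hurricane Sandy
index.add_photo("instagram:123", 40.7069, -74.0113, t)  # two photos a few
index.add_photo("twitter:456", 40.7070, -74.0114, t)    # meters apart
print(index.photos_at(40.7069, -74.0113, t))  # → ['instagram:123', 'twitter:456']
```

A real system would obviously need fuzzier matching across cell boundaries and time windows, but the core idea — two strangers' photos landing in the same bucket and becoming two viewpoints on one scene — is just this simple.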
In the beginning…
But let’s back up: For those who haven’t seen it (spoiler alert), “The Matrix” was a 1999 sci-fi action movie, the seminal film by the Wachowskis. It’s the story of a cynical computer hacker and office drone known online as Neo, who stumbles across the ultimate of all mindfuck conspiracies: The world as we know it is actually a gigantic virtual reality simulation (the titular Matrix), constructed by an army of intelligent machines who long ago overthrew and enslaved humanity and now use us as a source of unwitting battery power. A few bright and odd human computer hackers have managed to escape the Matrix and are fighting back as underdogs from the “real world” – a bombed-out, gray husk of civilization – using post-apocalyptic, almost steampunk technology. Neo joins up with this crew. These rebels have also come up with ways to hack back into the Matrix and bend it to their will, albeit slightly – summoning up racks of weapons and skills they never had (kung fu and helicopter piloting, to name a few) to battle the virtual manifestations of the machine overlords back in the Matrix: a bunch of white guys who look like 1950s cold-war spies in dapper suits and sunglasses.
It’s a gloriously fun and fantastical movie, an orgy of feel-good, take-no-prisoners violence against a perfectly despicable, politically correct evil: artificial intelligence run amok and disguised as “The Man.” It’s also in many ways the perfect cinematic embodiment of the discontented 1990s: a grungy, gothic, nihilistic opera, an ode to the correctness of anarchism and black-helicopter, WTO paranoia that gives way to new-age spiritual optimism and revolutionary fervor, a paean to the self-righteous Gen Xer, set to Rage Against the Machine and Smashing Pumpkins.
The Matrix we’re all building in 2012 isn’t much like the movie version, not yet at least. We’re not in any danger of booting up a massive artificial intelligence network that could stab us in the back (“Et tu, Brute?”), not from what I can tell.
But I do think that we are on the verge of getting enough accurate, reliable and consistently updating visual imagery from around the globe online so that a planetwide virtual reality simulation could be constructed within a few years – at the least a crude one.
The evidence for this is mounting steadily.
Google and the desert of the real
Recall that in late 2011, a Dutch ad agency called Pool Worldwide released a modified version of Google Street View (the ground-level, 360-degree-rotatable photography that Google has been adding to its maps since 2007) that turned Street View into a first-person shooter of sorts, adding a gun sight and bullet sound effects to its views of the world. Forget “The Matrix Online” and the console videogames of the movie – this free add-on was closer to the film than any of those.
Google, not amused, quickly blocked the app. In doing so, Google’s message was clear: We want Google Maps to remain primarily a reliable resource for real, updated information about the world: A tool, not a toy. The desert of the real, if you will.
Or not. Maybe Google just wanted to control the game itself. After all, in early November 2012 Google released an augmented reality game called “Ingress” for Android smartphones that relies heavily on Google Maps, sending users to visit real-world structures and perform tasks on their smartphones to either aid or fight a fictional mind-warping energy force. The app doesn’t rely much on Street View yet, but that could and likely will change with future updates and games of this type.
It’s not just Google that offers 360-degree, ground-level views of the world anymore, though: Nokia HERE Maps, which powers Microsoft’s Bing Maps and Amazon’s Maps, does too.
There are indications that MapBox, a D.C.-based mapping startup that uses OpenStreetMap, a free, crowdsourced alternative world map to Google Maps, may be preparing to release something like this in the near future as well.
Back in 2010, Google added the ability for users of Picasa, its acquired photo-sharing social network, to geotag their photos – that is, assign them to a particular location on Google Maps, whether a city, a restaurant, a park, etc. Now, when users search for these places in Google Maps and check the “Photos” filter in the upper-right-hand corner (see screenshot below), they can see collections of millions of photos from users in any given area.
Google Maps also allows users to search for specific photo topics – art, architecture, nature, landscape, park, etc. – narrowing and specifying the immense volume of photos users have taken, uploaded and assigned to a particular spot.
Before that, in 2009, Google took another one of its acquisitions, the geosocial photo sharing website Panoramio, and began adding photos by its users to Google Street Views of famous landmarks and locations around the world.
In practical terms, this means that those Panoramio photos are now visible to any Google Street View user. The photos appear as tiny thumbnails floating in the sky and in front of buildings and structures, when a user has Google Street View and the Photos filter turned on. Hovering over one of these thumbnails blows the photo up to full size and re-orients it in the proper position and proper angle corresponding to the scene, producing a trippy, ghostly, fractured view of that specific landmark or structure from the recent past.
It’s not hard to imagine how something like this could later be extrapolated to show photos of the distant past. In fact, one artist, Shawn Clover, has already produced such still images, combining photos of the devastation caused by the great 1906 San Francisco earthquake with views of the present, but those haven’t been integrated into any digital map service yet, to my knowledge.
While not exactly the most practical view of the world, such layers of historical imagery from the distant and recent past would undoubtedly make an incredible application for Google Glass, the upcoming computerized glasses that the search giant will begin shipping in 2013 (to selected developers).
Already, Google has previewed the ability in Google Glass to show on its tiny, glasses frame-mounted screen some immersive, 360-rotatable views of areas completely different from those the wearer is currently inhabiting and gazing at, such as a jungle scene that David Pogue was allowed to demo.
Google is clearly interested in providing users with location-based information, both historic and current, as indicated by the launch of Google’s new local trivia app “Field Trip” (the app shows information from other websites and apps such as Atlas Obscura, Wikipedia and Zagat).
Microsoft = Morpheus
To give credit where it’s due, Microsoft has actually blazed the trail on the kind of image-synthesizing software that will be necessary to combine publicly posted photos into a “Matrix”: a product known as Photosynth. It began in 2006 as a project out of the University of Washington, using technology Microsoft acquired earlier that year from a company called Seadragon, founded by Blaise Aguera y Arcas, now a Microsoft Bing architect.
Photosynth is perhaps the single product most like the simulation part of “The Matrix” yet, albeit a still-image version: An algorithm recognizes image similarities and continuity, and stitches photos together into interactive panoramas – panoramas that jump from photo to photo as the viewer rotates, but which (unlike Google Maps) are entirely spatially free-form, permitting the viewer to rotate beyond the horizontal axis – over and around an object, zooming in on details, with the appropriately sized and angled user-generated photos automatically orienting themselves into place. Here’s a demo video from 2007 that illustrates the capabilities:
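The core linking step can be sketched in hedged toy form: suppose each photo is reduced to a set of local feature signatures (real systems of this kind derive these from computer-vision feature detectors; here they are just hypothetical integer hashes). Photos sharing enough features get linked, and the connected groups become separately navigable scenes:

```python
from itertools import combinations

def link_photos(features, min_shared=3):
    """Toy stand-in for the feature-matching step of a Photosynth-style
    stitcher. `features` maps photo name -> set of feature signatures."""
    # Link any two photos that share enough feature signatures.
    adjacency = {name: set() for name in features}
    for a, b in combinations(features, 2):
        if len(features[a] & features[b]) >= min_shared:
            adjacency[a].add(b)
            adjacency[b].add(a)
    # Connected components = separately navigable photo groups.
    groups, seen = [], set()
    for start in features:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node])
        seen |= component
        groups.append(component)
    return groups

photos = {
    "front.jpg":  {1, 2, 3, 4, 5},
    "left.jpg":   {3, 4, 5, 6, 7},   # overlaps front.jpg in 3 features
    "aerial.jpg": {9, 10, 11},       # no overlap with the others
}
groups = link_photos(photos)  # → [{'front.jpg', 'left.jpg'}, {'aerial.jpg'}]
```

The hard part Photosynth actually solves – recovering camera positions so the viewer can move smoothly between linked photos – is far beyond this sketch, but the grouping-by-overlap idea is the foundation.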
A public version of Photosynth was released by Microsoft in 2008 and is still available for free online, but it has failed to catch on in any broad sense, perhaps because making and navigating “synths” seems substantially more complicated than just snapping photos or panoramas. Microsoft itself admits that synths are “more complex to navigate than panoramas because you are moving from photo to photo.”
Microsoft has taken an important step toward securing more mainstream adoption and use of Photosynth, releasing a mobile app for iOS in 2011 that has since seen over 7 million downloads and received an extremely high rating of 4.7 out of 5 stars. Mobile, more so in photography and location-sharing than in any other online user activity, is clearly the path to future success.
Still, these are all pretty geeky, niche technologies and products. Their users are likely to be mostly amateur and pro photographers, mappers and developers.
I need cameras, lots of cameras
But this is rapidly changing. Photo-sharing, location-sharing and panoramic imagery – all the basic ingredients needed to create a virtual interactive copy of the world – are now going mainstream, thanks in part to the launch of successful products in the past few years, namely: Instagram, Facebook, Twitter, Apple’s Panorama feature in iOS 6 and Google’s Photo Sphere for Android.
It’s no coincidence that all of these – with the exception of Facebook – were designed for mobile devices, smartphones in particular.
Indeed, it’s hard to overstate the impact mobile has had on the popularity of photo- and location-sharing. The increasing availability of mobile phones, which now outnumber computers globally and many of which have GPS and cameras, means that much of Earth’s population is now capable of adding geo-tagged imagery to the collective repository dispersed across various social networks online.
Of course, many people still live in jurisdictions where free expression and amateur photography aren’t quite as accepted as they are in North America and many Western European democracies (and even in some cases in these regions we’ve seen lately a clampdown by police on press photography during politically contentious situations such as the Occupy Wall Street demonstrations in the fall of 2011). But the proliferation of cameraphones and the fact that, at present, citizens outnumber police, indicates that governments will never truly be able to stop photography they don’t approve of from being taken and uploaded to the Web.
Photos have proven to be among the most popularly uploaded and viewed kinds of media online. Give everyone a camera and an Internet connection in their pocket and watch as they do the work of creating the most comprehensive image library ever assembled. Give them the ability to share their location and many of them will also do so.
But even those users who don’t tag locations to their photos, even those who are uncomfortable with entirely location-based services like Foursquare, are sort of shit out of luck, because of image recognition software (e.g. Google Image Search, which finds visually similar photos as well as photos matching search keywords): even photos without a location tag can be fairly easily and quickly pinpointed to a particular spot (and, in many cases, to a particular photographer, thanks to the metadata produced along with each digital photo).
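That metadata often does the pinpointing on its own: EXIF data embedded by a GPS-equipped cameraphone stores the coordinates as degree/minute/second rational pairs plus a hemisphere reference. A minimal sketch of decoding them into ordinary decimal coordinates (the sample values below are hypothetical, chosen to land in lower Manhattan):

```python
from fractions import Fraction

def exif_gps_to_decimal(dms, ref):
    """Convert an EXIF-style GPS coordinate (degrees, minutes, seconds,
    each a (numerator, denominator) rational) plus its hemisphere
    reference ('N', 'S', 'E' or 'W') into signed decimal degrees."""
    degrees, minutes, seconds = (Fraction(*pair) for pair in dms)
    decimal = float(degrees + minutes / 60 + seconds / 3600)
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical tag values as they might appear in a phone photo:
# 40° 42' 46.20" N, 74° 0' 21.40" W
lat = exif_gps_to_decimal(((40, 1), (42, 1), (4620, 100)), "N")
lon = exif_gps_to_decimal(((74, 1), (0, 1), (2140, 100)), "W")
print(round(lat, 4), round(lon, 4))  # → 40.7128 -74.0059
```

Extracting those rationals from an actual JPEG requires an EXIF parser, but once extracted, every such photo slots straight onto a map whether the photographer intended it or not.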
So cameraphones are the most fundamental component necessary for creating a “Matrix,” but they alone aren’t enough: What’s also needed are easy and attractive options for uploading, disseminating and finding photos online.
Instagram, Facebook and Twitter are the clear leaders in this space. Instagram and Facebook have now joined forces, a natural alliance in some sense given that Facebook overtook Flickr to become the dominant photo-storing website online a few years ago.
But Instagram, which was only released in 2010, has had a remarkably swift ascent, with over 100 million users before the end of 2012. Instagram has also become something of a journalistic darling lately: in late October, users in the U.S. Northeast bombarded the social network with real-time photos of their situations during Hurricane Sandy.
The roughly 800,000 photos tagged to that disaster with #Sandy were eclipsed just weeks later by the 10 million photos shared on Thanksgiving. Importantly, in both cases Instagram’s geolocation feature played a major role. During Sandy, the role was obvious – showcasing the highly varying levels of damage around the Tri-State area. On Thanksgiving, users were able to share where they’d traveled for the holiday, and Instagram, the company itself, highlighted one location-based event: the Macy’s Thanksgiving Day Parade in NYC and the preceding balloon inflation.
New desktop Web profiles to show off and share Instagram users' photo collections will likely lead to further adoption and usage of the product. What Instagram, now just another division of Facebook, ends up doing with its ever-expanding library of geotagged imagery remains to be seen.
But a Google Maps-like photo product is not out of the question, especially after Instagram explicitly added a map view to user profiles with the release of Instagram 3.0 in August 2012. (Importantly for businesses, Instagram’s point-of-interest (POI) database also showcases all of the public user imagery tagged to a particular place, so it’s easy for users to see one of their friends’ photos at a cool-looking restaurant, for example, click the name of the restaurant, then browse all the photos that have been posted there to see if they like the overall look, ambiance and food available at the place. Or see all of the clichéd Instagram photos of people “Scream”-ing silently in front of Munch’s seminal painting, now at MoMA.)
Meanwhile, Instagram, praised for its simplicity and the frictionlessness of its sharing options, for now lacks at least one built-in feature that is poised to become just about as commonplace as multicolored, faux-vintage filters: panoramas.
Apple released a Panorama feature as a default camera option in iOS 6 in September, and Google a few weeks later released its own panorama feature, Photo Sphere, in its Android 4.2 Jelly Bean update for smartphones.
Third-party panorama apps have been available for some time for both iOS and Android phones, but the inclusion of the feature as a default going forward indicates just how Google and Apple think users want to be using (and will be using) their cameras, and where both companies think and want users to be spending their time. Panoramas, incidentally, are also a helpful component in assembling a fully rotatable, navigable, digital copy of the world – a Matrix.
Worlds within worlds
One key thing to keep in mind about Instagram, panoramas, and photo apps for smartphones in general is that many users of these products take photos of the indoors as much as or more than they do of outdoor scenes.
This is another huge advantage the public repositories of photos on social networks hold over commercial photography generated by Google’s Street View cars and photographers and their equivalents at Nokia and Microsoft. Whereas the big tech companies engaged in mapping projects must, in some jurisdictions, secure permission to photograph even the outside of private residences, let alone the interiors of businesses, users in most buildings in the West are at least free in practice to (and consistently do) capture imagery of their surroundings. Sure, there’s been a bit of noise about the legality of user photos of interiors in some unique situations (namely at polling places during the 2012 U.S. election), but by and large, users have been taking and sharing photos of wherever they are – inside a bar, restaurant, courthouse, bus, the White House, etc. – with impunity over the past several years.
There may be some legal and government pushback on using photos of private interiors in large-scale projects such as a Matrix simulation, but we haven’t seen much yet in Western democratic countries, and I bet that any strong pushback that arises will be toothless and quick to pass. The sheer amount of photography is a tidal wave that seems destined to smash through any such attempts to regulate or restrict it.
Google itself has begun actively courting interior panoramas and floor plans of businesses for use in public products, namely Google Maps and Google Plus.
As more businesses seek to embrace technology, social media and seem “with it” to their increasingly tech savvy customer bases, this trend of 360-degree panoramas of interiors and more detailed maps is only poised to accelerate. It’d also be easy to layer user photos snapped and uploaded to any of Google’s products, but namely Google Plus, onto these views as well.
Now all we need is someone to begin stitching all of these sources of imagery together, combining user-generated photos on social networks like Facebook, Instagram and Twitter with the Street View imagery captured by Google and its mapping competitors, and putting a Microsoft Photosynth-like navigation layer around them all.
Google appears best positioned to do this at this time, but the increasing scrutiny of legal and government bodies around the world on Google’s expanding reach across industries may have the effect of deterring or slowing such a project at the company.
Still, someone is going to create a Matrix at some point, either in the commercial or government sector. And I’d rather have it be publicly available and free for all to use and contribute to (or not, if they so choose).