If you’ve ever scrolled back through old travel photos but can no longer remember where they were taken, Google may soon be able to help.
Software engineers at Google have created a deep-learning system called PlaNet that can determine where a photograph was taken by looking only at its pixels. The system, still in its early stages, may already outperform humans at locating scenes that contain no recognisable landmark, reports the MIT Technology Review.
According to a report on the project by its developers, people rely on “informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details” to determine an approximate location, and the technology uses a similar process. They note that a photo of a typical beach could have been taken on many different coasts around the world; without a landmark, humans fall back on clues like road signs to work out where it is, while the PlaNet system works from the pixels alone.
The system divides the world into a grid of thousands of geographic cells. Each cell is then associated with geotagged images from that area – millions of images gathered from around the internet. PlaNet can then use visual cues to work out where a new image was taken, with “superhuman levels of accuracy in some cases”.
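The core idea – turning geolocation into classification over grid cells – can be sketched in a few lines. The sketch below assumes a fixed latitude/longitude grid with a made-up cell size; the real PlaNet uses an adaptive partition with finer cells where photos are dense, and a neural network to score the cells.

```python
# A minimal sketch of the grid idea described above, assuming a fixed
# latitude/longitude grid (hypothetical; the real system's cells adapt
# to how densely an area is photographed).

CELL_DEG = 2.0  # assumed cell size in degrees, for illustration only
COLS = int(360 // CELL_DEG)  # number of cells per row of the grid

def cell_for(lat, lon):
    """Map a coordinate to the index of the grid cell containing it."""
    row = int((lat + 90) // CELL_DEG)
    col = int((lon + 180) // CELL_DEG)
    return row * COLS + col

def cell_center(cell):
    """Return the (lat, lon) centre of a cell - the system's 'guess'."""
    row, col = divmod(cell, COLS)
    return (row * CELL_DEG - 90 + CELL_DEG / 2,
            col * CELL_DEG - 180 + CELL_DEG / 2)

# Training data: geotagged photos are labelled with their cell index,
# so geolocation becomes ordinary classification. At prediction time,
# a model scores every cell for an input photo and the top-scoring
# cell's centre is reported as the location.
paris_cell = cell_for(48.8584, 2.2945)      # Eiffel Tower
sydney_cell = cell_for(-33.8568, 151.2153)  # Sydney Opera House
```

With 2° cells the guess is at worst about a degree off in each direction; shrinking `CELL_DEG` trades coarser coverage for finer localisation, which is why an adaptive grid helps.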
“We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-travelled human to distinguish,” the researchers wrote.
For any traveller not convinced that a computer can know more than they do, the researchers tested that theory. They let the programme “compete against ten well-travelled human subjects in a game of GeoGuessr” – an online guessing game where users identify a location from an image – and PlaNet won 28 of the 50 rounds.