More recently, the Mountain View, Calif., tech giant bought a desktop mapping program from the Australian company Where 2 Technologies. That was in October 2004, and in February of the following year, Google launched its own mapping initiative, Google Maps, promising to chart the planet both from the air and in 360° street-level views.

A Google Street View car (above) travels through Austria.

Today, the app attracts more than one billion users per month and is available in several variations, including Google Maps, Google Earth, Street View, and Waze. (Google bought the very popular Israeli GPS navigation app Waze in June 2013 for $966 million.) From Google’s cartography shop, you can load onto your phone talking road maps to guide your travels; 360° curbside views of your destination; or a program that renders a 3D representation of Earth by “superimposing satellite images, aerial photography, and GIS (geographic information system) data onto a 3D globe, allowing users to see cities and landscapes from various angles” (Wikipedia).


For most of Google’s GIS development, Google engineers haven’t been very forthcoming about how they’re building the gigantic databases that map the land and chart the rivers, lakes, and seas. But two recent blog posts provide interesting insights into how the information is collected and used.

On July 22, 2019, director of engineering Andrew Lookingbill posted “Google Maps 101: how we map the world,” and on December 13, 2019, senior product manager Thomas Escobar posted “Google Maps 101: how imagery powers our maps.” Both offer ground-level views of how GIS is replacing traditional cartography.

Because no one has yet copyrighted the geographic features of the planet, Google Maps has made far more progress than the company’s attempt to create a comprehensive, free public world library. Google says that it has 36 million square miles of HD satellite imagery and that the collection covers areas where 98% of the world’s population resides. The company has also now disclosed that it has recorded more than 10 million miles of images for its Street View database. The wealth of these collections is available free to everyone in its four apps.

The two “how-to” blog posts discuss how Google uses its Street View cars and, where vehicles can’t travel, its Street View trekkers.


The process begins with imagery from Street View vehicles and satellites. Street View was launched in 2007, and in the intervening years Google has amassed more than 170 billion images from 87 countries “from the depths of Antarctica to the top of Mount Kilimanjaro.” Street View images from the cars are now supplemented with photos from trekkers: backpack-mounted, high-resolution camera rigs carried by walkers.

Because the world is ceaselessly changing, Google teams regularly revisit already covered sections of the landscape. In residential areas, the images are usually updated every two to three years, but rural areas don’t see the Street View cars returning nearly as often.

The next step is to add data to “bring the map to life.” The data comes from more than 1,000 trusted third-party sources around the world. These include the USGS (United States Geological Survey) and Mexico’s National Institute of Statistics and Geography (INEGI), along with smaller sources such as local municipalities and even housing developers.

Because imagery and data are static, Google deploys data operations teams all over the world to help gather the images, vet the authoritative data sources, and train the machine-learning models for accuracy. Communities of local guides and Google Maps users can also check and recommend changes through the Send Feedback button in the app.

Lookingbill explains the role of machine learning in the mapmaking process: “[It’s] to increase the speed of our mapping. Machine learning allows our team to automate our mapping process, while maintaining high levels of accuracy.” The technique also solves problems like buildings photographed so indistinctly that they might otherwise be mapped as featureless blobs. Because buildings are important landmarks for people using the maps, Google enhanced its software with building-recognition algorithms that can identify which parts of an image correspond to building edges and shapes.
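Google hasn’t published the algorithms behind this step, but the basic idea of turning a blurry overhead “blob” into a crisp footprint can be sketched in miniature. The toy example below (my own illustration, not Google’s code; the function name and 6×6 “aerial patch” are invented) thresholds a grayscale patch and takes the bounding box of the bright pixels as a crude rectangular building outline:

```python
import numpy as np

def footprint_bbox(image: np.ndarray, threshold: float) -> tuple:
    """Return (row_min, row_max, col_min, col_max) of pixels above threshold,
    a crude rectangular 'footprint' for a bright region in an overhead image."""
    rows, cols = np.where(image > threshold)
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

# A 6x6 synthetic aerial patch: a bright rooftop on darker ground.
patch = np.zeros((6, 6))
patch[2:5, 1:4] = 0.9   # the "building"
patch += 0.05           # faint background noise

print(footprint_bbox(patch, 0.5))  # -> (2, 4, 1, 3)
```

A real pipeline would of course use learned models rather than a fixed threshold, but the output is the same kind of object: a clean geometric shape recovered from fuzzy pixels.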


In the December 2019 blog post, Thomas Escobar discusses the equipment used to capture the imagery in Google Maps. If you haven’t seen an actual Street View car slowly cruising on local streets, you probably have seen photos of them. They’ve been working for more than a decade. Each one of these cars is equipped with nine cameras that capture high-definition images from every possible vantage point. The cameras are athermal, which means they can function in locales with extreme temperatures, from Death Valley in midsummer to the mountains in Nepal in the winter. The cars have their own photo-processing centers on board and LIDAR systems that use lasers to measure distances.
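The lidar principle mentioned above is simple enough to state in a few lines: a laser pulse travels to a surface and back, and halving the round-trip time gives the one-way distance. This sketch is purely illustrative (the function and sample timing are mine, not from the blog posts):

```python
# Time-of-flight principle behind lidar ranging: distance = c * t / 2.
C = 299_792_458.0  # speed of light, meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance in meters to the surface that reflected the pulse."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds has hit something
# about 10 meters away.
print(lidar_distance(66.7e-9))
```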

That’s for the cars, but there are more places without roads than with. Escobar explains that for all of those places, “There’s also the Street View trekker, a backpack that collects images from places where driving isn't possible.”

Image: Google

“These trekkers,” he adds, “are carried by boats, sheep, camels, and even scout troops to gather high-quality photos from multiple angles, often in some of the hardest-to-map places around the world.”

Images: Google

To produce an overall vista of the mapped areas, Google must align and stitch together individual sets of images. For this, Google turned not to a modern digital solution like its machine-learning algorithms, but to a photographic technique developed in the early 1900s called photogrammetry, which derives measurements from photographs. As Escobar explains, “Google’s approach is unique in that it utilizes billions of images, similar to putting a giant jigsaw puzzle together that spans the entire globe. By refining our photogrammetry technique over the last 10 years, we’re now able to align imagery from multiple sources—Street View, aerial, and satellite imagery, along with authoritative datasets—with accuracy down to a meter.”
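The jigsaw-puzzle analogy can be made concrete with a tiny alignment exercise. Google’s actual pipeline is far more sophisticated; the sketch below (my own illustration, with an invented function and synthetic data) reduces stitching to its core step: finding the horizontal offset at which two overlapping image strips best agree, by minimizing the squared difference over their overlap.

```python
import numpy as np

def best_offset(left: np.ndarray, right: np.ndarray, max_shift: int) -> int:
    """Column shift of `right` relative to `left` that minimizes the
    mean squared difference over the overlapping region."""
    best, best_err = 0, np.inf
    for shift in range(1, max_shift + 1):
        overlap_l = left[:, shift:]                    # right part of left strip
        overlap_r = right[:, : left.shape[1] - shift]  # left part of right strip
        err = np.mean((overlap_l - overlap_r) ** 2)
        if err < best_err:
            best, best_err = shift, err
    return best

# Two strips cut from one synthetic scene, offset by 3 columns.
scene = np.random.default_rng(0).random((4, 12))
left_strip = scene[:, :9]    # columns 0..8
right_strip = scene[:, 3:]   # columns 3..11

print(best_offset(left_strip, right_strip, max_shift=5))  # -> 3
```

Scaling this brute-force search to billions of photos, in three dimensions, with meter-level accuracy is what the decade of photogrammetry refinement Escobar describes amounts to.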

Promising to continue the tutorial in future blog posts as Google works to map the entire world, Escobar reminds us, “Mapmaking is never done—and we’re constantly working to build new tools and techniques to make imagery collection faster, more accurate, and safer for everyone.”
