My Google Map Blog

Tag: AI

Immersive view coming soon to Maps — plus more updates

by Miriam Daniel on May 11, 2022, under 3D Models, Argentina, Australia, Brazil, California, Denmark, England, Germany, Google Earth News, Google Earth Tips, Google Sky, Google maps, Hawaii, Indonesia, Ireland, Italy, Japan, Kenya, Mexico, Natural Landmarks, Netherlands, Sightseeing, Street Views, USA

Google Maps helps over one billion people navigate and explore. And over the past few years, our investments in AI have supercharged the ability to bring you the most helpful information about the real world, including when a business is open and how crowded your bus is. Today at Google I/O, we announced new ways the latest advancements in AI are transforming Google Maps — helping you explore with an all-new immersive view of the world, find the most fuel-efficient route, and use the magic of Live View in your favorite third-party apps.

A more immersive, intuitive map

Google Maps first launched to help people navigate to their destinations. Since then, it’s evolved to become much more — it’s a handy companion when you need to find the perfect restaurant or get information about a local business. Today — thanks to advances in computer vision and AI that allow us to fuse together billions of Street View and aerial images to create a rich, digital model of the world — we’re introducing a whole new way to explore with Maps. With our new immersive view, you’ll be able to experience what a neighborhood, landmark, restaurant or popular venue is like — and even feel like you’re right there before you ever set foot inside. So whether you’re traveling somewhere new or scoping out hidden local gems, immersive view will help you make the most informed decisions before you go.

Say you’re planning a trip to London and want to figure out the best sights to see and places to eat. With a quick search, you can virtually soar over Westminster to see the neighborhood and stunning architecture of places, like Big Ben, up close. With Google Maps’ helpful information layered on top, you can use the time slider to check out what the area looks like at different times of day and in various weather conditions, and see where the busy spots are. Looking for a spot for lunch? Glide down to street level to explore nearby restaurants and see helpful information, like live busyness and nearby traffic. You can even look inside them to quickly get a feel for the vibe of the place before you book your reservation.

The best part? Immersive view will work on just about any phone and device. It starts rolling out in Los Angeles, London, New York, San Francisco and Tokyo later this year with more cities coming soon.

Immersive view lets you explore and understand the vibe of a place before you go

An update on eco-friendly routing

In addition to making places easier to explore, we want to help you get there more sustainably. We recently launched eco-friendly routing in the U.S. and Canada, which lets you see and choose the most fuel-efficient route when looking for driving directions — helping you save money on gas. Since then, people have used it to travel 86 billion miles, saving more than an estimated half a million metric tons of carbon emissions — equivalent to taking 100,000 cars off the road. We’re on track to double this amount as we expand to more places, like Europe.
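The "100,000 cars" equivalence above can be sanity-checked with simple arithmetic. A minimal sketch, assuming a typical passenger car emits roughly 5 metric tons of CO2 per year (close to the US EPA's ~4.6 t/year estimate; the figure is our assumption, not stated in the post):

```python
# Sanity check of the stated equivalence (illustrative numbers only).
TONS_SAVED = 500_000          # "more than an estimated half a million metric tons"
TONS_PER_CAR_PER_YEAR = 5.0   # assumed average annual CO2 emissions per car

cars_equivalent = TONS_SAVED / TONS_PER_CAR_PER_YEAR
print(f"~{cars_equivalent:,.0f} cars off the road for a year")
```

At ~5 t per car per year, half a million tons works out to about 100,000 cars, matching the post's claim.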

Still image of eco-friendly routing on Google Maps

Eco-friendly routing has helped save more than an estimated half a million metric tons of carbon emissions

The magic of Live View — now in your favorite apps

Live View helps you find your way when walking around, using AR to display helpful arrows and directions right on top of your world. It's especially helpful when navigating tricky indoor areas, like airports, malls and train stations. Thanks to our AI-based technology called global localization, Google Maps can point you where you need to go in a matter of seconds. As part of our efforts to bring the helpfulness of Google Maps to more places, we’re now making this technology available to developers at no cost with the new ARCore Geospatial API.

Developers are already using the API to make apps that are even more useful and provide an easy way to interact with both the digital and physical worlds at once. Shared electric vehicle company Lime is piloting the API in London, Paris, Tel Aviv, Madrid, San Diego, and Bordeaux to help riders park their e-bikes and e-scooters responsibly and out of pedestrians’ right of way. Telstra and Accenture are using it to help sports fans and concertgoers find their seats, concession stands and restrooms at Marvel Stadium in Melbourne. DOCOMO and Curiosity are building a new game that lets you fend off virtual dragons with robot companions in front of iconic Tokyo landmarks, like the Tokyo Tower. The new Geospatial API is available now to ARCore developers, wherever Street View is available.

DOCOMO and Curiosity game showing an AR dragon, alien and spaceship interacting on top of a real-world image, powered by the ARCore Geospatial API.

Live View technology is now available to ARCore developers around the world

AI will continue to play a critical role in making Google Maps the most comprehensive and helpful map possible for people everywhere.


How AI and imagery build a self-updating map

by Liam Bolling on Apr 8, 2022

Building a map is complex, and keeping it up-to-date is even more challenging. Think about how often your city, town or neighborhood changes on a day-to-day basis. Businesses and shops open and close, stretches of highway are added, and roadways change. In today’s Maps 101 installment, we’ll dive into two ways Google Maps uses advancements in AI and imagery to help you see the latest information about the world around you every single day.

Automatically updating business hours

Over the past few years, businesses have experienced a lot of change — including constantly updating operating hours based on changing pandemic-related restrictions. To keep up with this pace of change, we developed a machine learning model that automatically identifies if business hours are likely wrong, then instantly updates them with AI-generated predictions.

Let’s look at Liam’s Lemonade Shop as an example. To start, our systems consider multiple factors — such as when Liam last updated their business profile, what we know about other shops' hours, and the Popular Times information for the shop, which uses location trends to show when it tends to be busiest. Since it appears that Liam’s business profile hasn’t been updated in over a year and its busiest hours are typically Thursday afternoons — even though Google Maps says that it's closed at that time — Liam’s business hours are likely out of date.

Still images of a business' hours and Popular Times information on Google Maps

To see if business hours need updating, we check a store’s Popular Times information and when its business profile was last updated.

So what’s next? Our algorithms analyze the business hours of other nearby lemonade shops, information from Liam’s website, and Street View images of Liam’s storefront, looking specifically for business hour signs, to determine the most accurate business hour prediction. At the same time, we enlist the help of the Google Maps community — including Local Guides and even the business owners themselves through their Google Business Profile — to verify the information we predicted. In Argentina, Australia, Chile, France, Japan, Mexico, New Zealand, Peru, and the United States, we also use Duplex conversational technology to call businesses just like Liam’s and ask for their hours directly. With this new AI-first approach, we’re on track to update the hours for over 20 million businesses around the globe in the next six months, helping you know exactly when your favorite store, restaurant or cafe is open for business.
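The staleness check in the Liam's Lemonade example can be sketched as a toy heuristic. Google's real system is a learned model over many more signals; the two signals and the one-year cutoff below are illustrative assumptions drawn from the example:

```python
from datetime import date

def hours_likely_stale(last_updated: date, today: date,
                       busy_while_listed_closed: bool,
                       max_age_days: int = 365) -> bool:
    """Toy heuristic: flag a business profile whose hours may be wrong.
    Signal 1: the profile has not been touched in over a year.
    Signal 2: Popular Times shows activity at hours the listing says it's
    closed (e.g. Liam's shop is busiest Thursday afternoons while listed
    as closed then)."""
    profile_age_days = (today - last_updated).days
    stale_profile = profile_age_days > max_age_days
    return stale_profile and busy_while_listed_closed

# Liam's profile: untouched for over a year, busy during "closed" hours.
print(hours_likely_stale(date(2021, 2, 1), date(2022, 4, 8), True))  # True
```

A profile updated last month, or one whose Popular Times agree with its listed hours, would not be flagged.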

Road information that reflects the real world

We’re also experimenting with ways we can use imagery to make updates to other helpful information. For instance, starting in the U.S., we’re launching a third-party imagery pilot to let you see the most up-to-date speed limit information in your town, which can help keep you safe while driving. Here’s how it works:

Say our systems think that the speed limit information on a particular highway needs to be updated. With the help of third-party imagery partners that already gather roadway imagery to improve delivery routes, we can request a photo of the specific stretch of road that also includes a speed limit sign. If the partner has this photo available, we then use a combination of AI and help from our operations team to identify the sign in the image, extract the new speed limit information, and update Google Maps.
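The update flow above — detect the sign in a partner photo, have the operations team confirm the reading, then write the new value to the map — can be sketched as follows. All names and the two-step confirmation gate are our assumptions, not Google's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadSegment:
    segment_id: str
    mapped_limit_mph: int

def update_speed_limit(segment: RoadSegment,
                       detected_limit_mph: Optional[int],
                       operator_confirmed: bool) -> RoadSegment:
    """Apply a new speed limit only when the sign detector produced a
    reading AND the operations team confirmed it; otherwise keep the
    existing mapped value."""
    if detected_limit_mph is not None and operator_confirmed:
        return RoadSegment(segment.segment_id, detected_limit_mph)
    return segment  # no confirmed reading: leave the map unchanged

seg = RoadSegment("hwy-101-s-12", 55)
print(update_speed_limit(seg, 65, operator_confirmed=True).mapped_limit_mph)   # 65
print(update_speed_limit(seg, 65, operator_confirmed=False).mapped_limit_mph)  # 55
```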

Picture of an intersection that has a speed limit sign

Representative imagery featuring a speed limit sign, with license plates blurred

Over time, this technology will bring more details to the map that can help make your drives safer and more efficient — like where potholes and school zones are or where new construction is happening. And as with all Google Maps features, we designed this pilot with privacy top of mind. For instance, we only reference images taken on public roads, and partners are required to blur information (like faces and license plates) to avoid potentially identifying someone. For an extra layer of privacy, we blur the photo again when we receive it and delete the photo after we use it to update the map.

AI, imagery and Duplex technology will continue to play a critical role in helping make Google Maps the most comprehensive and useful map possible. For more behind-the-scenes looks at the technology that powers Google Maps, check out the rest of our Maps 101 blog series.


How AI is making information more useful

by Prabhakar Raghavan on Sep 30, 2021

Today, there’s more information accessible at people’s fingertips than at any point in human history. And advances in artificial intelligence will radically transform the way we use that information, with the ability to uncover new insights that can help us both in our daily lives and in the ways we are able to tackle complex global challenges.

At our Search On livestream event today, we shared how we’re bringing the latest in AI to Google’s products, giving people new ways to search and explore information in more natural and intuitive ways.


Making multimodal search possible with MUM

Earlier this year at Google I/O, we announced we’ve reached a critical milestone for understanding information with Multitask Unified Model, or MUM for short.

We’ve been experimenting with using MUM’s capabilities to make our products more helpful and enable entirely new ways to search. Today, we’re sharing an early look at what will be possible with MUM. 

In the coming months, we’ll introduce a new way to search visually, with the ability to ask questions about what you see. Here are a couple of examples of what will be possible with MUM.

Animated GIF showing how you can tap on the Lens icon when you’re looking at a picture of a shirt, and ask Google to find you the same pattern — but on another article of clothing, like socks.

With this new capability, you can tap on the Lens icon when you’re looking at a picture of a shirt, and ask Google to find you the same pattern — but on another article of clothing, like socks. This helps when you’re looking for something that might be difficult to describe accurately with words alone. You could type “white floral Victorian socks,” but you might not find the exact pattern you’re looking for. By combining images and text into a single query, we’re making it easier to search visually and express your questions in more natural ways.

Animated GIF showing the point-and-ask mode of searching that can make it easier to find the exact moment in a video that can help you with instructions on fixing your bike.

Some questions are even trickier: Your bike has a broken thingamajig, and you need some guidance on how to fix it. Instead of poring over catalogs of parts and then looking for a tutorial, the point-and-ask mode of searching will make it easier to find the exact moment in a video that can help.


Helping you explore with a redesigned Search page

We’re also announcing how we’re applying AI advances like MUM to redesign Google Search. These new features are the latest steps we’re taking to make searching more natural and intuitive.

First, we’re making it easier to explore and understand new topics with “Things to know.” Let’s say you want to decorate your apartment, and you’re interested in learning more about creating acrylic paintings.

The search results page for the query “acrylic painting” that scrolls to a new feature called “Things to know”, which lists out various aspects of the topic like, “step by step”, “styles” and “using household items."

If you search for “acrylic painting,” Google understands how people typically explore this topic, and shows the aspects people are likely to look at first. For example, we can identify more than 350 topics related to acrylic painting, and help you find the right path to take.

We’ll be launching this feature in the coming months. In the future, MUM will unlock deeper insights you might not have known to search for — like “how to make acrylic paintings with household items” — and connect you with content on the web that you wouldn’t have otherwise found.

Two phone screens side by side highlight a set of queries and tappable features that allow you to refine to more specific searches for acrylic painting or broaden to concepts like famous painters.

Second, to help you further explore ideas, we’re making it easy to zoom in and out of a topic with new features to refine and broaden searches. 

In this case, you can learn more about specific techniques, like puddle pouring, or art classes you can take. You can also broaden your search to see other related topics, like other painting methods and famous painters. These features will launch in the coming months.

A scrolling results page for the query “pour painting ideas” that shows results with bold images and video thumbnails.

Third, we’re making it easier to find visual inspiration with a newly designed, browsable results page. If puddle pouring caught your eye, just search for “pour painting ideas" to see a visually rich page full of ideas from across the web, with articles, images, videos and more that you can easily scroll through. 

This new visual results page is designed for searches that are looking for inspiration, like “Halloween decorating ideas” or “indoor vertical garden ideas,” and you can try it today.

Get more from videos

We already use advanced AI systems to identify key moments in videos, like the winning shot in a basketball game, or steps in a recipe. Today, we’re taking this a step further, introducing a new experience that identifies related topics in a video, with links to easily dig deeper and learn more. 

Using MUM, we can even show related topics that aren’t explicitly mentioned in the video, based on our advanced understanding of information in the video. In this example, while the video doesn’t say the words “macaroni penguin’s life story,” our systems understand that topics contained in the video relate to this topic, like how macaroni penguins find their family members and navigate predators. The first version of this feature will roll out in the coming weeks, and we’ll add more visual enhancements in the coming months.

Across all these MUM experiences, we look forward to helping people discover more web pages, videos, images and ideas that they may not have come across or otherwise searched for. 

A more helpful Google

The updates we’re announcing today don’t end with MUM, though. We’re also making it easier to shop from the widest range of merchants, big and small, no matter what you’re looking for. And we’re helping people better evaluate the credibility of information they find online. Plus, for the moments that matter most, we’re finding new ways to help people get access to information and insights. 

All this work not only helps people around the world, but creators, publishers and businesses as well.  Every day, we send visitors to well over 100 million different websites, and every month, Google connects people with more than 120 million businesses that don't have websites, by enabling phone calls, driving directions and local foot traffic.

As we continue to build more useful products and push the boundaries of what it means to search, we look forward to helping people find the answers they’re looking for, and inspiring more questions along the way.


A smoother ride and a more detailed Map thanks to AI

by Russell Dicker on May 19, 2021

AI is a critical part of what makes Google Maps so helpful. With it, we’re able to map roads over 10 times faster than we could five years ago, and we can bring maps filled with useful information to virtually every corner of the world. Today, we’re giving you a behind-the-scenes look at how AI makes two of the features we announced at I/O possible.

Teaching Maps to identify and forecast when people are hitting the brakes

Let’s start with our routing update that helps you avoid situations that cause you to slam on the brakes, such as confusing lane changes or freeway exits. We use AI and navigation information to identify hard-braking events — moments that cause drivers to decelerate sharply and are known indicators of car crash likelihood — and then suggest alternate routes when available. We believe these updates have the potential to eliminate over 100 million hard-braking events in routes driven with Google Maps each year. But how exactly do we find when and where these moments are likely to occur?


That’s where AI comes in. To do this, we train our machine learning models on two sets of data. The first set of information comes from phones using Google Maps. Mobile phone sensors can determine deceleration along a route, but this data is highly prone to false alarms because your phone can move independently of your car. This is what makes it hard for our systems to decipher you tossing your phone into the cupholder or accidentally dropping it on the floor from an actual hard-braking moment. To combat this, we also use information from routes driven with Google Maps when it's projected on a car’s display, like Android Auto. This represents a relatively small subset of data, but it’s highly accurate because Maps is now tethered to a stable spot — your car display. Training our models on both sets of data makes it possible to spot actual deceleration moments from fake ones, making detection across all trips more accurate. 
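The core detection step — spotting a sharp deceleration in a sequence of speed samples — can be sketched simply. The 3.5 m/s² threshold and one-second sampling are assumed illustrative values; the post's real models are trained on both phone and car-display data precisely to separate true braking from a phone sliding off the seat:

```python
def hard_braking_events(speeds_mps: list[float], dt_s: float = 1.0,
                        threshold_mps2: float = 3.5) -> list[int]:
    """Return the indices of samples where deceleration between consecutive
    speed readings exceeds the threshold."""
    events = []
    for i in range(1, len(speeds_mps)):
        decel = (speeds_mps[i - 1] - speeds_mps[i]) / dt_s
        if decel >= threshold_mps2:
            events.append(i)
    return events

# A car cruising at ~25 m/s that suddenly drops to 15 m/s within one second.
print(hard_braking_events([25.0, 25.0, 24.5, 15.0, 14.0]))  # [3]
```

In the real system, an event like this from a loose phone would be down-weighted unless corroborated by the tethered car-display data the post describes.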


Understanding spots along a route that are likely to cause hard-braking is just one part of the equation. We’re also working to identify other contextual factors that lead to hard-braking events, like construction or visibility conditions. For example, if there’s a sudden increase in hard-braking events along a route during a certain time of day when people are likely to be driving toward the glare of the sun, our system could detect those events and offer alternate routes. These details inform future routing so we can suggest safer, smoother routes.

Using AI to go beyond driving

When you’re walking or biking or taking public transit, AI is also there helping you move along safely and easily. Last August we launched detailed street maps which show accurate road widths, along with details about where the sidewalks, crosswalks and pedestrian islands are in an area so people can better understand its layout and how to navigate it. Today, we announced that detailed street maps will expand to 50 more cities by the end of 2021. While this sounds straightforward, a lot is going on under the hood — especially with AI — to make this possible! 

A GIF that shows a before and after comparison of detailed streets maps built from satellite imagery

A before and after comparison of detailed streets maps built from satellite imagery

Imagine that you’re taking a stroll down a typical San Francisco street. As you approach the intersection, you’ll notice that the crosswalk uses a “zebra” pattern — vertical stripes that show you where to walk. But if you were in another city, say London, then parallel dotted lines would define the crosswalks. To account for these differences and accurately display them on the map, our systems need to know what crosswalks look like — not just in one city but across the entire world. It gets even trickier since urban design can change at the country, state, and even city level.

To expand globally and account for local differences, we needed to completely revamp our mapmaking process. Traditionally, we’ve approached mapmaking like baking a cake — one layer at a time. We trained machine learning models to identify and classify features one by one across our index of millions of Street View, satellite and aerial images — starting first with roads, then addresses, buildings and so on. 

But detailed street maps require significantly more granularity and precision than a normal map. To map these dense urban features correctly, we’ve updated our models to identify all objects in a scene at once. This requires a ton of AI smarts. The model has to understand not only what the objects are, but the relationships between them — like where exactly a street ends and a sidewalk begins. With these new full-scene models, we're able to detect and classify broad sets of features at a time without sacrificing accuracy, allowing us to map a single city faster than ever before. 
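The contrast between the old layer-by-layer approach and the new full-scene models can be illustrated with a toy example. Everything here is an invented stand-in (a flat list of labeled objects in place of imagery), not Google's actual models:

```python
def single_feature_pass(scene: list[str], feature: str) -> list[int]:
    """Old cake-layer style: a dedicated model finds one feature type,
    so building the map needs one pass per feature (roads, then
    addresses, then buildings, and so on)."""
    return [i for i, obj in enumerate(scene) if obj == feature]

def full_scene_pass(scene: list[str]) -> dict[str, list[int]]:
    """New style: classify every object in a single pass, so the model
    can also reason about adjacency — e.g. where a street ends and a
    sidewalk begins."""
    labels: dict[str, list[int]] = {}
    for i, obj in enumerate(scene):
        labels.setdefault(obj, []).append(i)
    return labels

scene = ["road", "road", "sidewalk", "crosswalk", "building"]
print(single_feature_pass(scene, "road"))  # one of many sequential passes
print(full_scene_pass(scene))              # every class located at once
```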


An image of Google Maps’ single-feature AI models

Single-feature AI model that classifies buildings.

An image of Google Maps’ full-scene AI models

Full-scene AI models that capture multiple categories of objects at once.


Once we have a model trained on a particular city, we can then expand it to other cities with similar urban designs. For example, the sidewalks, curbs, and traffic lights look similar in Atlanta and Ho Chi Minh City — despite being over 9,000 miles away. And the same model works in Madrid as it does in Dallas, something that may be hard to believe at first glance. With our new advanced machine learning techniques combined with our collection of high-definition imagery, we’re on track to bring a level of detail to the map at scale like never before.

AI will continue to play an important role as we build the most helpful map for people around the globe. For more behind-the-scenes looks at the technology that powers Google Maps, check out the rest of our Maps 101 blog series.

More from this Series

Maps 101

Google Maps helps you navigate, explore, and get things done every single day. In this series, we’ll take a look under the hood at how Google Maps uses technology to build helpful products—from using flocks of sheep and laser beams to gather high-definition imagery to predicting traffic jams that haven’t even happened yet.


Redefining what a map can be with new information and AI

by Dane Glasgow on Mar 30, 2021

Sixteen years ago, many of us held a printout of directions in one hand and the steering wheel in the other to get around, without information about the traffic along our route or details about when our favorite restaurant was open. Since then, we’ve been pushing the boundaries of what a map can do, propelled by the latest machine learning. This year, we’re on track to bring over 100 AI-powered improvements to Google Maps so you can get the most accurate, up-to-date information about the world, exactly when you need it. Here's a snapshot of how we're using AI to make Maps work better for you, with a number of updates coming this year.

Navigate indoors with Live View

We all know that awkward moment when you're walking in the opposite direction of where you want to go — Live View uses AR cues to avoid just that. Live View is powered by a technology called global localization, which uses AI to scan tens of billions of Street View images to understand your orientation. Thanks to new advancements that help us understand the precise altitude and placement of objects inside a building, we’re now able to bring Live View to some of the trickiest-to-navigate places indoors: airports, transit stations and malls. 


If you’re catching a plane or train, Live View can help you find the nearest elevator and escalators, your gate, platform, baggage claim, check-in counters, ticket office, restrooms, ATMs and more. Arrows and accompanying directions will point you the right way. And if you need to pick something up from the mall, use Live View to see what floor a store is on and how to get there so you can get in and out in a snap. Indoor Live View is live now on Android and iOS in a number of malls in Chicago, Long Island, Los Angeles, Newark, San Francisco, San Jose, and Seattle. It starts rolling out in the coming months in select airports, malls, and transit stations in Tokyo and Zurich, with more cities on the way. 


Find your way inside airports, train stations, and malls with Indoor Live View

Plan ahead with more information about weather and air quality 

With the new weather layer, you can quickly see current and forecasted temperature and weather conditions in an area — so you’ll never get caught in the rain without an umbrella. And the new air quality layer shows you how healthy (or unhealthy) the air is —  information that’s especially helpful if you have allergies or are in a smoggy or fire-prone area. Data from partners like The Weather Company, AirNow.gov and the Central Pollution Control Board power these layers that start rolling out on Android and iOS in the coming months. The weather layer will be available globally and the air quality layer will launch in Australia, India, and the U.S., with more countries to come. 

Head outside with the new weather and air quality layers

See helpful air quality and weather information with new layers in Google Maps

Find more eco-friendly options to get around

With insights from the U.S. Department of Energy’s National Renewable Energy Lab, we’re building a new routing model that optimizes for lower fuel consumption based on factors like road incline and traffic congestion. This is all part of the commitment we made last September to help one billion people who use our products take action to reduce their environmental footprint. Soon, Google Maps will default to the route with the lowest carbon footprint when it has approximately the same ETA as the fastest route. In cases where the eco-friendly route could significantly increase your ETA, we’ll let you compare the relative CO2 impact between routes so you can choose. Always want the fastest route? That’s OK too — simply adjust your preferences in Settings. Eco-friendly routes launch in the U.S. on Android and iOS later this year, with a global expansion on the way.
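The selection rule described above — default to the lowest-CO2 route when its ETA is roughly the same as the fastest route's, otherwise keep the fastest and let the user compare — can be sketched as follows. The tolerance value and data shape are assumptions for illustration:

```python
def pick_default_route(routes: list[tuple[str, int, int]],
                       eta_tolerance_s: int = 60) -> tuple[str, int, int]:
    """Choose a default route from (name, eta_seconds, co2_grams) tuples:
    prefer the greenest route when its ETA is within the tolerance of the
    fastest route; otherwise fall back to the fastest route."""
    fastest = min(routes, key=lambda r: r[1])
    greenest = min(routes, key=lambda r: r[2])
    if greenest[1] - fastest[1] <= eta_tolerance_s:
        return greenest
    return fastest

routes = [("via I-5", 1800, 2400), ("via Hwy 99", 1840, 1900)]
print(pick_default_route(routes)[0])  # "via Hwy 99": similar ETA, less CO2
```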

Eco-friendly routes let you choose the route with the lowest carbon footprint

More eco-friendly routes let you choose the route with the lowest carbon footprint

From Amsterdam to Jakarta, cities around the world have established low emission zones — areas that restrict polluting vehicles like certain diesel cars or cars with specific emissions stickers —  to help keep the air clean. To support these efforts, we’re working on alerts to help drivers better understand when they’ll be navigating through one of these zones. You can quickly know if your vehicle is allowed in the area, choose an alternative mode of transportation, or take another route. Low emission zone alerts launch this June in Germany, the Netherlands, France, Spain, and the UK on Android and iOS, with more countries coming soon. 
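At its core, the alert is a check of your vehicle's emission class against what the zone admits. A minimal sketch using Germany's Umweltzone sticker scheme as the example (the function and messages are illustrative, not Google's implementation):

```python
def lez_alert(vehicle_emission_class: str, zone_allowed_classes: set[str]) -> str:
    """Return the alert text for a driver about to enter a low emission
    zone, based on whether their emission class is admitted."""
    if vehicle_emission_class in zone_allowed_classes:
        return "Your vehicle is allowed in this low emission zone."
    return ("Your vehicle is restricted here; consider an alternative "
            "mode of transportation or another route.")

berlin_umweltzone = {"green"}  # most German city zones admit green stickers only
print(lez_alert("green", berlin_umweltzone))
print(lez_alert("red", berlin_umweltzone))
```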


Quickly know if your vehicle is allowed in the area, choose an alternative mode of transportation, or take another route with low emission zone alerts

But we know that getting around sustainably goes beyond driving. So we’re making it easier to choose more sustainable options when you’re on the go. Soon you’ll get a comprehensive view of all routes and transportation modes available to your destination — you can compare how long it’ll take to get there by car, transit or bike without toggling between tabs. Using advanced machine learning models, Maps will automatically prioritize your preferred modes  — and even boost modes that are popular in your city. For example, if you bike a lot, we’ll automatically show you more biking routes. And if you live in a city like New York, London, Tokyo, or Buenos Aires where taking the subway is popular, we’ll rank that mode higher. This rolls out globally in the coming months on Android and iOS.
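The preference-aware ranking described above can be sketched as a weighted score over two signals: how often you use each mode, and how popular it is in your city. The weights and signal names are assumptions, not Google's actual model:

```python
def rank_modes(modes: dict[str, int], personal_use: dict[str, float],
               city_popularity: dict[str, float],
               w_personal: float = 0.7, w_city: float = 0.3) -> list[str]:
    """Rank transport modes (keys of `modes`, which maps mode -> ETA in
    minutes) by a blend of personal usage and city-level popularity."""
    def score(mode: str) -> float:
        return (w_personal * personal_use.get(mode, 0.0)
                + w_city * city_popularity.get(mode, 0.0))
    return sorted(modes, key=score, reverse=True)

etas = {"drive": 22, "transit": 31, "bike": 28}
personal = {"bike": 0.8, "drive": 0.1, "transit": 0.1}   # you bike a lot
city = {"transit": 0.9, "drive": 0.5, "bike": 0.4}       # e.g. New York
print(rank_modes(etas, personal, city))  # ['bike', 'transit', 'drive']
```

Here biking ranks first because of heavy personal use, while transit beats driving on city popularity, mirroring the examples in the post.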


Easily compare different routes and modes of transportation with the new directions experience

Save time with curbside grocery pickup on Maps

Delivery and curbside pickup have grown in popularity during the pandemic — they’re convenient and minimize contact. To make this process easier, we’re bringing helpful shopping information to stores’ Business Profiles on Maps and Search, like delivery providers, pickup and delivery windows, fees, and order minimums. We’re rolling this out on mobile Search starting with Instacart and Albertsons Cos. stores in the U.S., with plans to expand to Maps and other partners.


Check out helpful information about grocery delivery providers, pickup and delivery windows, fees, and order minimums

This summer, we’re also teaming up with U.S. supermarket Fred Meyer, a division of The Kroger Co., on a pilot in select stores in Portland, Oregon to make grocery pickup easier. After you place an order for pickup on the store’s app, you can add it to Maps. We’ll send you a notification when it’s time to leave, and let you share your arrival time with the store. Your ETA is continuously updated, based on location and traffic. This helps the store prioritize your order so it’s ready as soon as you get there. Check in on the Google Maps app, and they’ll bring your order right out for a seamless, fast, no-contact pickup. 


Track your grocery order status, share your ETA, and let the store know you've arrived - all from Google Maps

All of these updates are possible thanks to AI advancements that have transformed Google Maps into a map that can reflect the millions of changes made around the world every day —  in the biggest cities and the smallest towns. Whether you’re getting around, exploring an area, or knocking out errands, let Google Maps help you find your way.


Rachel Malarich is planting a better future, tree by tree

by Alicia Cormie on Nov.19, 2020, under 3D Models, Argentina, Australia, Brazil, California, Denmark, England, Germany, Google Earth, Google Earth News, Google Earth Tips, Google Sky, Google maps, Hawaii, Indonesia, Ireland, Italy, Japan, Kenya, Mexico, Natural Landmarks, Netherlands, Sightseeing, Street Views, USA

Everyone has a tree story, Rachel Malarich says—and one of hers takes place on the limbs of a eucalyptus tree. Rachel and her cousins spent summers in central California climbing the 100-foot-tall trees and hanging out among the waxy blue leaves—an experience she remembers as awe-inspiring.

Now, as Los Angeles’ first-ever City Forest Officer, Rachel’s work is shaping the tree stories that Angelenos will tell. “I want our communities to go to public spaces and feel that sense of awe,” she says. “That feeling that something was there before them, and it will be there after them...we have to bring that to our cities.”

Part of Rachel’s job is to help the City of Los Angeles reach an ambitious goal: to plant and maintain 90,000 trees by the end of 2021 and to keep planting trees at a rate of 20,000 per year after that. This goal is about more than planting trees, though: It’s about planting the seeds for social, economic and environmental equity. These trees, Rachel says, will help advance citywide sustainability and climate goals, beautify neighborhoods, improve air quality and create shade to combat rising street-level temperatures. 

To make sure every tree has the most impact, Rachel and the City of Los Angeles use Tree Canopy Lab, a tool they helped build with Google that uses AI and aerial imagery to understand current tree cover density, also known as “tree canopy,” right down to street-level data. Tree inventory data, which is typically collected through on-site assessments, helps city officials know where to invest resources for maintaining, preserving and planting trees. It also helps pinpoint where new trees should be planted. In the case of LA, there was a strong correlation between a lack of tree coverage and the city's underserved communities. 

With Tree Canopy Lab, Rachel and her team overlay data such as population density and land use to understand what’s happening within the city’s 500 square miles and where new trees will have the biggest impact on a community. It helps them answer questions like: Where are highly populated residential areas with low tree coverage? Which thoroughfares that people commute along every day have no shade?
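In spirit, a question like “highly populated residential areas with low tree coverage” is a filter-and-sort over per-neighborhood records. A minimal sketch with invented numbers and thresholds (not Tree Canopy Lab’s actual data or interface):

```python
# Per-neighborhood records: population density (people per sq. mile) and
# canopy (fraction of land shaded by trees). All figures are invented.
neighborhoods = [
    {"name": "Neighborhood A", "pop_density": 12000, "canopy": 0.08},
    {"name": "Neighborhood B", "pop_density": 3000,  "canopy": 0.25},
    {"name": "Neighborhood C", "pop_density": 15000, "canopy": 0.05},
]

def priority_areas(records, min_density=10000, max_canopy=0.10):
    """Return dense, under-shaded neighborhoods, densest and least
    shaded first: the places where new trees reach the most people."""
    hits = [r for r in records
            if r["pop_density"] >= min_density and r["canopy"] <= max_canopy]
    hits.sort(key=lambda r: (-r["pop_density"], r["canopy"]))
    return [r["name"] for r in hits]

targets = priority_areas(neighborhoods)
# targets == ["Neighborhood C", "Neighborhood A"]
```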

And it also helps Rachel do what she has focused her career on: creating community-led programs. After more than a decade of working at nonprofits, she’s learned that resilient communities are connected communities. 

“This data helps us go beyond assumptions and see where the actual need is,” Rachel says. “And it frees me up to focus on what I know best: listening to the people of LA, local policy and urban forestry.” 

After working with Google on Tree Canopy Lab, she’s found that data gives her a chance to connect with the public. She now has a tool that quickly pools together data and creates a visual to show community leaders what’s happening in specific neighborhoods, what the city is doing and why it’s important. She can also demonstrate ways communities can better manage resources they already have to achieve local goals. And that’s something she thinks every city can benefit from. 

“My entrance into urban forestry was through the lens of social justice and economic inequity. For me, it’s about improving the quality of life for Angelenos,” Rachel says. “I’m excited to work with others to create that impact on a bigger level, and build toward the potential for a better environment in the future.”

And in this case, building a better future happens one well-planned tree at a time.
