Google Maps Brings Better Eco-Friendly Features and More at Google I/O 2022

[TL;DR]

Nearly 24 years ago, Google started with two graduate students, one product, and a big mission: to organize the world’s information and make it universally accessible and useful. In the decades since, we’ve been developing our technology to deliver on that mission.

The progress we’ve made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all. And once a year — on my favorite day of the year 🙂 — we share an update on how it’s going at Google I/O.

Today, I talked about how we’re advancing two key aspects of our mission — knowledge and computing — to create products that are built to help. It’s exciting to build these products; it’s even more exciting to see what people do with them.

Thanks to everyone who helps us do this work, and most especially our Googlers. We are grateful for the opportunity.

– Sundar


Editor’s note: Below is an edited transcript of Sundar Pichai’s keynote address during the opening of today’s Google I/O Developers Conference.

Hi, everyone, and welcome. Actually, let’s make that welcome back! It’s great to return to Shoreline Amphitheatre after three years away. To the thousands of developers, partners and Googlers here with us, it’s great to see all of you. And to the millions more joining us around the world — we’re so happy you’re here, too.

Last year, we shared how new breakthroughs in some of the most technically challenging areas of computer science are making Google products more helpful in the moments that matter. All this work is in service of our timeless mission: to organize the world’s information and make it universally accessible and useful.

I’m excited to show you how we’re driving that mission forward in two key ways: by deepening our understanding of information so that we can turn it into knowledge, and advancing the state of computing so that knowledge is easier to access, no matter who or where you are.

Today, you’ll see how progress on these two parts of our mission ensures Google products are built to help. I’ll start with a few quick examples. Throughout the pandemic, Google has focused on delivering accurate information to help people stay healthy. Over the last year, people used Google Search and Maps to find where they could get a COVID vaccine nearly two billion times.


A visualization of Google’s flood forecasting system, with three 3D maps stacked on top of one another, showing landscapes and weather patterns in green and brown colors. The maps are floating against a gray background.

Google’s flood forecasting technology sent flood alerts to 23 million people in India and Bangladesh last year.

We’ve also expanded our flood forecasting technology to help people stay safe in the face of natural disasters. During last year’s monsoon season, our flood alerts notified more than 23 million people in India and Bangladesh. And we estimate this supported the timely evacuation of hundreds of thousands of people.

In Ukraine, we worked with the government to rapidly deploy air raid alerts. To date, we’ve delivered hundreds of millions of alerts to help people get to safety. In March I was in Poland, where millions of Ukrainians have sought refuge. Warsaw’s population has increased by almost 20% as families host refugees in their homes, and schools welcome thousands of new students. Nearly every Google employee I spoke with there was hosting someone.

Adding 24 more languages to Google Translate

In countries around the world, Google Translate has been a crucial tool for newcomers and residents trying to communicate with one another. We’re proud of how it’s helping Ukrainians find a bit of hope and connection until they are able to return home again.


Two boxes, one showing a question in English — “What’s the weather like today?” — the other showing its translation in Quechua. There is a microphone symbol below the English question and a loudspeaker symbol below the Quechua answer.

With machine learning advances, we’re able to add languages like Quechua to Google Translate.

Real-time translation is a testament to how knowledge and computing come together to make people’s lives better. More people are using Google Translate than ever before, but we still have work to do to make it universally accessible. There’s a long tail of languages that are underrepresented on the web today, and translating them is a hard technical problem. That’s because translation models are usually trained with bilingual text — for example, the same phrase in both English and Spanish. However, there’s not enough publicly available bilingual text for every language.

So with advances in machine learning, we’ve developed a monolingual approach where the model learns to translate a new language without ever seeing a direct translation of it. By collaborating with native speakers and institutions, we found these translations were of sufficient quality to be useful, and we’ll continue to improve them.
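Google hasn’t published the production model behind these new languages, but you can get a feel for massively multilingual translation of low-resource languages with openly available research models. Here is a minimal sketch using Meta’s NLLB-200 (an open model whose language list includes Quechua, code quy_Latn) via the Hugging Face transformers library. This is an illustration of the idea, not Google’s implementation:

```python
# Minimal sketch: translating English to Quechua with an open multilingual
# model (NLLB-200). This is NOT Google's production system, just an
# illustration of multilingual translation covering low-resource languages.
# Requires: pip install transformers sentencepiece torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("What's the weather like today?", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    # Force the decoder to start in the target language (Quechua, Latin script).
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("quy_Latn"),
    max_new_tokens=40,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```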


A list of the 24 new languages Google Translate now has available.

We’re calculation 24 new languages to Google Translate.

Today, I’m excited to announce that we’re adding 24 new languages to Google Translate, including the first indigenous languages of the Americas. Together, these languages are spoken by more than 300 million people. Breakthroughs like this are powering a radical shift in how we access knowledge and use computers.

Taking Google Maps to the next level

So much of what’s knowable about our world goes beyond language — it’s in the physical and geospatial information all around us. For more than 15 years, Google Maps has worked to create rich and useful representations of this information to help us navigate. Advances in AI are taking this work to the next level, whether it’s expanding our coverage to remote areas, or reimagining how to explore the world in more intuitive ways.


An overhead image of a map of a dense urban area, showing gray roads cutting through clusters of buildings outlined in blue.

Advances in AI are helping to map remote and rural areas.

Around the globe, we’ve mapped around 1.6 billion buildings and over 60 million kilometers of roads to date. Some remote and rural areas have previously been difficult to map, due to scarcity of high-quality imagery and distinct building types and terrain. To address this, we’re using computer vision and neural networks to detect buildings at scale from satellite images. As a result, we have increased the number of buildings on Google Maps in Africa by 5X since July 2020, from 60 million to nearly 300 million.

We’ve also doubled the number of buildings mapped in India and Indonesia this year. Globally, over 20% of the buildings on Google Maps have been detected using these new techniques. We’ve gone a step further, and made the dataset of buildings in Africa publicly available. International organizations like the UN and the World Bank are already using it to better understand population density, and to provide support and emergency aid.
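The keynote doesn’t detail the detection pipeline, but the final step of any such system, turning a per-pixel “building” probability map into a building count, can be sketched in a few lines. A minimal illustration, assuming you already have a segmentation model’s output (the synthetic footprints below are stand-ins, not Google’s data or code):

```python
# Minimal sketch of the counting step in a building-detection pipeline:
# threshold a per-pixel building-probability map (the output of a
# segmentation network) and count connected components as buildings.
# Illustrative only; not Google's pipeline.
import numpy as np
from scipy import ndimage

# Stand-in for a segmentation model's output over one satellite tile.
prob_map = np.zeros((512, 512))
prob_map[100:120, 200:230] = 0.98   # synthetic "building" footprint 1
prob_map[300:340, 50:90] = 0.95     # synthetic "building" footprint 2

mask = prob_map > 0.9                       # keep confident "building" pixels
labeled, num_buildings = ndimage.label(mask)  # group touching pixels
print(f"detected {num_buildings} candidate buildings in this tile")  # -> 2
```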


Immersive view in Google Maps fuses together aerial and street level images.

We’re also bringing new capabilities into Maps. Using advances in 3D mapping and machine learning, we’re fusing billions of aerial and street level images to create a new, high-fidelity representation of a place. These breakthrough technologies are coming together to power a new experience in Maps called immersive view: it allows you to explore a place like never before.


Let’s go to London and take a look. Say you’re planning to visit Westminster with your family. You can get into this immersive view straight from Maps on your phone, and you can pan around the sights… here’s Westminster Abbey. If you’re thinking of heading to Big Ben, you can check if there’s traffic, how busy it is, and even see the weather forecast. And if you’re looking to grab a bite during your visit, you can check out restaurants nearby and get a glimpse inside.

What’s amazing is that isn’t a drone flying in the restaurant — we use neural rendering to create the experience from images alone. And Google Cloud Immersive Stream allows this experience to run on just about any smartphone. This feature will start rolling out in Google Maps for select cities globally later this year.

Another big improvement to Maps is eco-friendly routing. Launched last year, it shows you the most fuel-efficient route, giving you the choice to save money on gas and reduce carbon emissions. Eco-friendly routes have already rolled out in the U.S. and Canada — and people have used them to travel approximately 86 billion miles, helping save an estimated half million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road.


Still image of eco-friendly routing on Google Maps — a 53-minute driving route in Berlin is pictured, with text below the map showing it will add three minutes but save 18% more fuel.

Eco-friendly routes will expand to Europe later this year.

I’m happy to share that we’re expanding this feature to more places, including Europe later this year. In this Berlin example, you could reduce your fuel consumption by 18% by taking a route that’s just three minutes slower. These small decisions have a big impact at scale. With the expansion into Europe and beyond, we estimate carbon emission savings will double by the end of the year.
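To make the trade-off concrete, here is a hypothetical sketch of comparing a fastest route against a more fuel-efficient one, using the numbers from the Berlin example. The route data and scoring are illustrative assumptions, not how Maps actually ranks routes:

```python
# Hypothetical illustration of the eco-routing trade-off (not Maps' actual
# ranking logic): compare a fastest route against a more fuel-efficient one.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float
    fuel_liters: float

fastest = Route("fastest", minutes=53, fuel_liters=4.0)
# The Berlin example: roughly 3 minutes slower, 18% less fuel.
eco = Route("eco-friendly", minutes=56, fuel_liters=4.0 * (1 - 0.18))

fuel_saved = fastest.fuel_liters - eco.fuel_liters
extra_time = eco.minutes - fastest.minutes
print(f"{eco.name}: +{extra_time:.0f} min, saves {fuel_saved:.2f} L "
      f"({fuel_saved / fastest.fuel_liters:.0%} of fuel)")
```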

And we’ve added a similar feature to Google Flights. When you search for flights between two cities, we also show you carbon emission estimates alongside other information like price and schedule, making it easy to choose a greener option. These eco-friendly features in Maps and Flights are part of our goal to empower 1 billion people to make more sustainable choices through our products, and we’re excited about the progress here.

New YouTube features to help people easily access video content

Beyond Maps, video is becoming an even more fundamental part of how we share information, communicate, and learn. Often when you come to YouTube, you are looking for a specific moment in a video, and we want to help you get there faster.

Last year we launched auto-generated chapters to make it easier to jump to the part you’re most interested in.

This is also great for creators because it saves them time making chapters. We’re now applying multimodal technology from DeepMind. It simultaneously uses text, audio and video to auto-generate chapters with greater accuracy and speed. With this, we now have a goal to 10X the number of videos with auto-generated chapters, from eight million today to 80 million over the next year.

Often the fastest way to get a sense of a video’s content is to read its transcript, so we’re also using speech recognition models to transcribe videos. Video transcripts are now available to all Android and iOS users.
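YouTube’s speech models aren’t public, but you can see the same idea, turning audio into a timestamped transcript, with an open model. A minimal sketch using OpenAI’s Whisper through the Hugging Face transformers pipeline (an illustrative stand-in, not YouTube’s system; the audio filename is a placeholder):

```python
# Minimal sketch: transcribing an audio file with an open speech recognition
# model (Whisper). Illustrative stand-in for YouTube's internal models.
# Requires: pip install transformers torch, plus an audio file of your own.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
result = asr("my_video_audio.wav", return_timestamps=True)  # placeholder file

print(result["text"])           # full transcript
for chunk in result["chunks"]:  # timestamped segments, useful for chapters
    print(chunk["timestamp"], chunk["text"])
```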


Animation showing a video being automatically translated. Then text reads "Now available in sixteen languages."

Auto-translated captions on YouTube.

Next up, we’re bringing auto-translated captions on YouTube to mobile. This means viewers can now auto-translate video captions in 16 languages, and creators can grow their global audience. We’ll also be expanding auto-translated captions to Ukrainian YouTube content next month, part of our larger effort to increase access to accurate information about the war.

Helping people be more efficient with Google Workspace

Just as we’re using AI to improve features in YouTube, we’re building it into our Workspace products to help people be more efficient. Whether you work for a small business or a large institution, chances are you spend a lot of time reading documents. Maybe you’ve felt that wave of panic when you realize you have a 25-page document to read ahead of a meeting that starts in five minutes.

At Google, whenever I get a long document or email, I look for a TL;DR at the top — TL;DR is short for “Too Long, Didn’t Read.” And it got us thinking, wouldn’t life be better if more things had a TL;DR?

That’s why we’ve introduced automatic summarization for Google Docs. Using one of our machine learning models for text summarization, Google Docs will automatically parse the words and pull out the main points.

This marks a big leap forward for natural language processing. Summarization requires understanding long passages, compressing information and generating language, which used to be outside the capabilities of even the best machine learning models.
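Google hasn’t said which model powers Docs summaries, but abstractive summarization is easy to try with open models. A minimal sketch using the Hugging Face transformers summarization pipeline with an open checkpoint (illustrative, not the Docs model):

```python
# Minimal sketch of abstractive text summarization with an open model.
# Illustrative only; Google hasn't published the model behind Docs summaries.
# Requires: pip install transformers torch
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
document = (
    "Google I/O 2022 introduced automatic summarization for Google Docs. "
    "The feature parses a long document and pulls out the main points, "
    "so readers can get a TL;DR without reading all 25 pages."
)
summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```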

And Docs are just the beginning. We’re launching summarization for other products in Workspace. It will come to Google Chat in the next few months, providing a helpful digest of chat conversations, so you can jump right into a group chat or look back at the key highlights.


Animation showing summary in Google Chat

We’re bringing summarization to Google Chat in the coming months.

And we’re working to bring transcription and summarization to Google Meet as well, so you can catch up on important meetings you missed.

Visual improvements on Google Meet

Of course there are many moments where you really want to be in a virtual room with someone. And that’s why we continue to improve audio and video quality, inspired by Project Starline. We introduced Project Starline at I/O last year. And we’ve been testing it across Google offices to get feedback and improve the technology for the future. And in the process, we’ve learned some things that we can apply right now to Google Meet.

Starline inspired machine learning-powered image processing that automatically improves your image quality in Google Meet. And it works on all types of devices, so you look your best wherever you are.


An animation of a man looking directly at the camera then waving and smiling. A white line sweeps across the screen, adjusting the image quality to make it brighter and clearer.

Machine learning-powered image processing automatically improves image quality in Google Meet.

We’re also bringing studio quality virtual lighting to Meet. You can adjust the light position and brightness, so you’ll still be visible in a dark room or sitting in front of a window. We’re testing this feature to ensure everyone looks like their true selves, continuing the work we’ve done with Real Tone on Pixel phones and the Monk Skin Tone Scale.

These are just some of the ways AI is improving our products: making them more helpful, more accessible, and delivering innovative new features for everyone.


Gif shows a phone camera pointed towards a rack of shelves, generating helpful information about food items. Text on the screen shows the words ‘dark’, ‘nut-free’ and ‘highly-rated’.

Today at I/O, Prabhakar Raghavan shared how we’re helping people find helpful information in more intuitive ways on Search.

Making knowledge accessible through computing

We’ve talked about how we’re advancing access to knowledge as part of our mission: from better language translation to improved Search experiences across images and video, to richer explorations of the world using Maps.


Now we’re going to focus on how we make that knowledge even more accessible through computing. The journey we’ve been on with computing is an exciting one. Every shift, from desktop to the web to mobile to wearables and ambient computing, has made knowledge more useful in our daily lives.

As helpful as our devices are, we’ve had to work pretty hard to adapt to them. I’ve always thought computers should be adapting to people, not the other way around. We continue to push ourselves to make progress here.

Here’s how we’re making computing more natural and intuitive with the Google Assistant.

Introducing LaMDA 2 and AI Test Kitchen


Animation shows demos of how LaMDA can converse on any topic and how AI Test Kitchen can help create lists.

A demo of LaMDA, our generative language model for dialogue applications, and the AI Test Kitchen.

We’re continually working to advance our conversational capabilities. Conversation and natural language processing are powerful ways to make computers more accessible to everyone. And large language models are key to this.

Last year, we introduced LaMDA, our generative language model for dialogue applications that can converse on any topic. Today, we are excited to announce LaMDA 2, our most advanced conversational AI yet.

We are at the beginning of a journey to make models like these useful to people, and we feel a deep responsibility to get it right. To make progress, we need people to experience the technology and provide feedback. We opened LaMDA up to thousands of Googlers, who enjoyed testing it and seeing its capabilities. This yielded significant quality improvements, and led to a reduction in inaccurate or offensive responses.

That’s why we’ve made AI Test Kitchen. It’s a new way to explore AI features with a broader audience. Inside the AI Test Kitchen, there are a few different experiences. Each is meant to give you a sense of what it might be like to have LaMDA in your hands and use it for things you care about.

The first is called “Imagine It.” This demo tests if the model can take a creative idea you give it, and generate imaginative and relevant descriptions. These are not products, they are quick sketches that allow us to explore what LaMDA can do with you. The user interfaces are very simple.

Say you’re writing a story and need some inspirational ideas. Maybe one of your characters is exploring the deep ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You can ask LaMDA to imagine what kinds of creatures might live there. Remember, we didn’t hand-program the model for specific topics like submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost any topic: Saturn’s rings or even being on a planet made of ice cream.

Staying on topic is a challenge for language models. Say you’re building a learning experience — you want it to be open-ended enough to allow people to explore where curiosity takes them, but stay safely on topic. Our second demo tests how LaMDA does with that.

In this demo, we’ve primed the model to focus on the topic of dogs. It starts by generating a question to spark conversation, “Have you ever wondered why dogs love to play fetch so much?” And if you ask a follow-up question, you get an answer with some relevant details: it’s interesting, it thinks it might have something to do with the sense of smell and treasure hunting.

You can take the conversation anywhere you want. Maybe you’re curious about how smell works and you want to dive deeper. You’ll get a unique response for that too. No matter what you ask, it will try to keep the conversation on the topic of dogs. If I start asking about cricket, which I probably would, the model brings the topic back to dogs in a fun way.

This challenge of staying on topic is a tricky one, and it’s an important area of research for building useful applications with language models.

These experiences show the potential of language models to one day help us with things like planning, learning about the world, and more.

Of course, there are significant challenges to solve before these models can truly be useful. While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses. That’s why we are inviting feedback in the app, so people can help report issues.

We will be doing all of this work in accordance with our AI Principles. Our process will be iterative, opening up access over the coming months, and carefully assessing feedback with a broad range of stakeholders — from AI researchers and social scientists to human rights experts. We’ll incorporate this feedback into future versions of LaMDA, and share our findings as we go.

Over time, we intend to continue adding other emerging areas of AI into AI Test Kitchen. You can learn more at g.co/AITestKitchen.

Advancing AI language models

LaMDA 2 has incredible conversational capabilities. To explore other aspects of natural language processing and AI, we recently announced a new model. It’s called Pathways Language Model, or PaLM for short. It’s our largest model to date, with 540 billion parameters.

PaLM demonstrates breakthrough performance on many natural language processing tasks, such as generating code from text, answering a math word problem, or even explaining a joke.

It achieves this through greater scale. And when we combine that scale with a new technique called chain-of-thought prompting, the results are promising. Chain-of-thought prompting allows us to describe multi-step problems as a series of intermediate steps.

Let’s take an example of a math word problem that requires reasoning. Normally, how you use a model is you prompt it with a question and answer, and then you start asking questions. In this case: How many hours are in the month of May? As you can see, the model didn’t quite get it right.

In chain-of-thought prompting, we give the model a question-answer pair, but this time, with an explanation of how the answer was derived. Kind of like when your teacher gives you a step-by-step example to help you understand how to solve a problem. Now, if we ask the model again — how many hours are in the month of May — or other related questions, it actually answers correctly and even shows its work.


There are two boxes below a heading saying ‘chain-of-thought prompting’. A box headed ‘input’ guides the model through answering a question about how many tennis balls a person called Roger has. The output box shows the model correctly reasoning through and answering a separate question (‘how many hours are in the month of May?’)

Chain-of-thought prompting leads to better reasoning and more accurate answers.

Chain-of-thought prompting increases accuracy by a large margin. This leads to state-of-the-art performance across several reasoning benchmarks, including math word problems. And we can do it all without ever changing how the model is trained.
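The technique lives entirely in the prompt. Here is a minimal sketch of how the two prompt styles differ, with exemplars mirroring the figure above (the generate call at the end is a hypothetical stand-in for whatever large language model API you have access to):

```python
# Minimal sketch of chain-of-thought prompting: the only change versus
# standard few-shot prompting is that the exemplar answer includes its
# intermediate reasoning steps. `generate` is a hypothetical stand-in
# for a call to any large language model.

standard_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of
tennis balls. Each can has 3 tennis balls. How many tennis balls does he
have now?
A: The answer is 11.
Q: How many hours are in the month of May?
A:"""

chain_of_thought_prompt = """Q: Roger has 5 tennis balls. He buys 2 more
cans of tennis balls. Each can has 3 tennis balls. How many tennis balls
does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.
Q: How many hours are in the month of May?
A:"""

# With the worked example, the model is far more likely to reason:
# "May has 31 days. Each day has 24 hours. 31 * 24 = 744. The answer is 744."
# response = generate(chain_of_thought_prompt)
```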

PaLM is highly capable and can do so much more. For instance, you might be someone who speaks a language that’s not well-represented on the web today — which makes it hard to find information. It’s even more frustrating because the answer you are looking for is probably out there. PaLM offers a new approach that holds enormous promise for making knowledge more accessible for everyone.


Let me show you an example in which we can help answer questions in a language like Bengali — spoken by a quarter billion people. Just like before, we prompt the model with two examples of questions in Bengali with both Bengali and English answers.

That’s it. Now we can start asking questions in Bengali: “What is the national song of Bangladesh?” The answer, by the way, is “Amar Sonar Bangla” — and PaLM got it right, too. This is not that surprising because you would expect that content to exist in Bengali.

You can also try something that is less likely to have related information in Bengali, such as: “What are popular pizza toppings in New York City?” The model again answers correctly in Bengali. Though it probably just stirred up a debate among New Yorkers about how “correct” that answer really is.

What’s so impressive is that PaLM has never seen parallel sentences between Bengali and English. Nor was it ever explicitly taught to answer questions or translate at all! The model brought all of its capabilities together to answer questions correctly in Bengali. And we can extend the techniques to more languages and other complex tasks.
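Structurally, this is the same few-shot prompting trick as before, just across languages. A hypothetical sketch of how such a prompt could be laid out (the exemplar questions and answers here are placeholders I chose for illustration, not the exact prompts used in the keynote demo):

```python
# Hypothetical sketch of few-shot cross-lingual question answering: two
# exemplars pair a Bengali question with answers in both Bengali and
# English, then a new Bengali question is appended. The exemplars are
# placeholders, not the keynote's actual prompts.

def build_bilingual_qa_prompt(exemplars, new_question):
    """Assemble a few-shot prompt from (question, bn_answer, en_answer) triples."""
    parts = []
    for question, bn_answer, en_answer in exemplars:
        parts.append(f"Q: {question}\nA (Bengali): {bn_answer}\nA (English): {en_answer}")
    parts.append(f"Q: {new_question}\nA (Bengali):")
    return "\n\n".join(parts)

exemplars = [
    ("বাংলাদেশের রাজধানী কী?", "ঢাকা", "Dhaka"),          # capital of Bangladesh
    ("বাংলাদেশের জাতীয় ফুল কী?", "শাপলা", "Water lily"),  # national flower
]
prompt = build_bilingual_qa_prompt(
    exemplars,
    "বাংলাদেশের জাতীয় সংগীত কী?",  # "What is the national song of Bangladesh?"
)
print(prompt)  # feed this to a large language model of your choice
```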

We’re so optimistic about the potential for language models. One day, we hope we can answer questions on more topics in any language you speak, making knowledge even more accessible, in Search and across all of Google.

Introducing the world’s largest, publicly available machine learning hub

The advances we’ve shared today are possible only because of our continued innovation in our infrastructure. Recently we announced plans to invest $9.5 billion in data centers and offices across the U.S.

One of our state-of-the-art data centers is in Mayes County, Oklahoma. I’m excited to announce that, there, we are launching the world’s largest, publicly available machine learning hub for our Google Cloud customers.


Still image of a data center with Oklahoma map pin on bottom left corner.

One of our state-of-the-art data centers in Mayes County, Oklahoma.

This machine learning hub has eight Cloud TPU v4 pods, custom-built on the same networking infrastructure that powers Google’s largest neural models. They provide nearly 9 exaflops of computing power in aggregate — bringing our customers an unprecedented ability to run complex models and workloads. We hope this will fuel innovation across many fields, from medicine to logistics, sustainability and more.
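As a back-of-the-envelope check, that aggregate figure lines up with Google’s published TPU v4 specs (4,096 chips per pod and roughly 275 bfloat16 teraflops per chip; both figures are quoted here from public documentation and should be treated as assumptions):

```python
# Back-of-the-envelope check of the "nearly 9 exaflops" figure, using
# publicly stated TPU v4 specs (treated here as assumptions):
# 4,096 chips per pod, ~275 teraflops (bfloat16) per chip.
chips_per_pod = 4096
tflops_per_chip = 275          # bf16 peak, per public spec
pods = 8

total_exaflops = pods * chips_per_pod * tflops_per_chip * 1e12 / 1e18
print(f"{total_exaflops:.1f} exaflops")  # -> ~9.0 exaflops
```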

And speaking of sustainability, this machine learning hub is already operating at 90% carbon-free energy. This is helping us make progress on our goal to become the first major company to operate all of our data centers and campuses globally on 24/7 carbon-free energy by 2030.

Even as we invest in our data centers, we are working to innovate on our mobile platforms so more processing can happen locally on device. Google Tensor, our custom system on a chip, was an important step in this direction. It’s already running on Pixel 6 and Pixel 6 Pro, and it brings our AI capabilities — including the best speech recognition we’ve ever deployed — right to your phone. It’s also a big step forward in making those devices more secure. Combined with Android’s Private Compute Core, it can run data-powered features directly on device so that it’s private to you.

People turn to our products every day for help in moments big and small. Core to making this possible is protecting your private information each step of the way. Even as technology grows increasingly complex, we keep more people safe online than anyone else in the world, with products that are secure by default, private by design and that put you in control.

We also spent time today sharing updates to platforms like Android. They’re delivering access, connectivity, and information to billions of people through their smartphones and other connected devices like TVs, cars and watches.

And we shared our new Pixel portfolio, including the Pixel 6a, Pixel Buds Pro, Google Pixel Watch, Pixel 7, and Pixel tablet, all built with ambient computing in mind. We’re excited to share a family of devices that work better together — for you.

The next frontier of computing: augmented reality

Today we talked about all the technologies that are changing how we use computers and access knowledge. We see devices working seamlessly together, exactly when and where you need them, and with conversational interfaces that make it easier to get things done.

Looking ahead, there’s a new frontier of computing, which has the potential to extend all of this even further, and that is augmented reality. At Google, we have been heavily invested in this area. We’ve been building augmented reality into many Google products, from Google Lens to multisearch, scene exploration, and Live and immersive views in Maps.

These AR capabilities are already useful on phones, and the magic will really come alive when you can use them in the real world without the technology getting in the way.

That potential is what gets us most excited about AR: the ability to spend time focusing on what matters in the real world, in our real lives. Because the real world is pretty amazing!

It’s important we design in a way that is built for the real world — and doesn’t take you away from it. And AR gives us new ways to accomplish this.

Let’s take language as an example. Language is just so fundamental to connecting with one another. And yet, understanding someone who speaks a different language, or trying to follow a conversation if you are deaf or hard of hearing, can be a real challenge. Let’s see what happens when we take our advancements in translation and transcription and deliver them in your line of sight in one of the early prototypes we’ve been testing.

You can see it in their faces: the joy that comes with speaking naturally to someone. That moment of connection. To understand and be understood. That’s what our focus on knowledge and computing is all about. And it’s what we strive for every day, with products that are built to help.

Each year we get a little closer to delivering on our timeless mission. And we still have so much further to go. At Google, we genuinely feel a sense of excitement about that. And we are optimistic that the breakthroughs you just saw will help us get there. Thank you to all of the developers, partners and customers who joined us today. We look forward to building the future with all of you.


Source: https://blog.google/technology/developers/io-2022-keynote/