How AI looks to enrich Google’s search engine

Kenya and other African countries will soon be able to tap Artificial Intelligence (AI) when using Google’s search engine. The tech giant said it is integrating an AI application in Nairobi and 99 other cities across the world as it seeks to enrich the user experience.

Google says this is part of the changes it is making as it redesigns its dominant search engine using an AI technology dubbed the Multitask Unified Model (MUM). This, the firm says, will revolutionise the way people search for and engage with information.

The changes to the Google search page will also make it easier for users to explore information using image recognition technology across products such as Google Maps, Google Images, Google Lens, and the company’s shopping and e-commerce services, among others.

Google’s Senior Vice President of Search, Prabhakar Raghavan, noted that people looking for ‘green’ places to live within Nairobi can use the Environmental Insights Explorer (EIE) Tree Canopy Tool, which uses aerial imagery to identify the places at greatest risk of rising temperatures due to a lack of tree cover.

“Students, parents and entrepreneurs all have the world’s insights at their fingertips. We make all of this possible by helping you tap into the endless knowledge of creators, publishers and businesses from across the world,” Raghavan noted, adding that the MUM technology “will continue to deliver the highest quality of information, with new tools to evaluate the credibility of what you find online”.

He said that beyond making information more helpful in users’ daily lives, the new technology can help the world tackle global challenges such as climate change and disasters.

For instance, Google has updated a wildfire boundary map launched last year in Google Search and Maps to help people in the US monitor nearby wildfires, using satellite data and Google’s advanced mapping technologies.

“This gives people near real-time information in critical moments, like when and where a blaze is underway, so they can avoid it,” he said.

“Today we’re announcing a new wildfire layer overlaid directly on Google Maps, so users can better visualise the location and size of major wildfires around the world, along with quick links to local resources.”

When available, he said, users can also see details about a fire, such as its containment, how many acres have burnt, and when the information was last reported. Beyond providing critical insights in times of crisis, the technology is also being put to work on longer-term problems.

“AI is also helping communities tackle the complexities of climate change. This includes extreme temperatures which are causing public health concerns in cities around the world,” he said.

These capabilities are accessible through Android apps, allowing users to reach them from their smartphones.

Google connects people to well over 100 million different websites and millions of businesses, from large retailers to the local shop down the street. The new upgrade is being touted by Google executives as 1,000 times more powerful than the BERT model that has been powering Google Search.

The Tree Canopy Insights tool helps cities manage tree-planting projects that soak up carbon emissions in the fight against climate change.

“We use aerial imagery, public data and specialized tree detection AI to give cities insights on where they could plant more trees to tackle these challenges,” he said. Tree Canopy Insights is already available in 15 US cities and is being expanded to 100 more in the coming months, before being scaled to the rest of the world next year as part of the company’s commitment to help more places respond to climate change.

At Google’s 2021 Search On event in November, the search giant shared an early look at the future of multimodal search, which combines different inputs, such as camera or image input and text queries, to return more relevant results.

Pandu Nayak, Vice President of Search at Google, explained that with the upcoming multimodal update, users can use Google Lens to search what they see through the camera or in a picture. In the coming months, Google Lens will combine visual and text input to understand exactly what a user wants.

“Say I really like a pattern on a shirt but I feel the same would be much better on my socks. By posting the shirt, other similar shirts pop up, and I can add text requesting a pair of socks with a similar pattern and colour. More information pops up, with recommendations of stores, from local shops to large chains, that carry similar-looking socks, along with other related content from across the web,” he demonstrated.

Merchant Shopping Vice President Matt Madrigal says that when users find something they like, they can click to check out more reviews and ratings, and even compare prices to get the best deal. This is powered by the Google Shopping Graph, a comprehensive dataset of over 24 billion product offers from millions of merchants of all sizes, both online and off.

According to Raghavan, these innovations are a few of the ways Google is becoming radically more natural and intuitive, helping users unlock more insights than ever before.

He also demonstrated a more complex query on Google Search: how to fix a broken part of a bicycle.

“When my bicycle stops shifting gears, I can see something is off with the mechanism at the back, but I don’t have the words to describe the situation to Google, and calling the mechanism the gear thingamajig just might not work. Soon you’ll be able to point your camera at it, type ‘How do I fix this?’ into Google, and it’ll show you everything you need to get your bike back on the trail,” he said, taking a picture of the rear mechanism of a bike. “MUM’s advanced multimodal understanding can simultaneously identify the part in the image, based on similar images from across the web, and also understand the intent behind your question, to help you solve it.”

In this case, Google diagnoses the problem as a jerked-up derailleur. It then points to helpful information on how to fix it from a variety of videos, blogs, forums and websites.

According to Google, MUM is one of the first AI models that’s able to solve these types of complex multimodal questions.

“We’re actively testing new capabilities like this and look forward to bringing them to life next year. You might be asking, why can’t I try MUM in Lens today if it’s already working in these demos? Part of the reason is that we still have some work left to do, but really it’s because rigorous testing and evaluation is a crucial part of every new AI model that we deploy,” noted Nayak.
