
Rose La Prairie is group product manager at Google Search and product lead for Circle to Search. She is responsible for developing new ways for people to explore the world around them. Here, she speaks to Lou Wang, a co-founder of Google Lens and a senior director who leads Google Search’s multimodal AI products, including Lens, Circle to Search, and Voice Search.
Imagine getting the inside story on visual search from someone who not only helped build it, but uses it daily.
Meet Lou Wang.
He is a co-founder of Google Lens, which more than 1.5 billion people now use every month to search what they see. In fact, Lens grew 65% year over year, with more than 100 billion visual searches already this year.
But Wang is not just an architect of this technology; he’s also a self-proclaimed power user. He most recently used Lens with his six-year-old son to identify a baby opossum in their backyard.
Now, he shares his personal journey with visual search, alongside global trends and indispensable strategies for marketers to connect with audiences in this visual world.
Rose La Prairie (RLP): As a co-founder of Google Lens, you’ve been at the forefront of revolutionising how we interact with information. When Lens launched in 2017, what was the pivotal “eureka” moment that revealed the immense potential of visual search to you?
Lou Wang (LW): Visual search tech was pretty nascent when we started working on Lens. There were custom machine learning models to help identify animals (like dog and cat breeds) and plants. But, in general, visual search wasn’t very reliable yet.
Ironically, one “eureka” moment was realising that we could build visual search products by focusing on text. Text is everywhere — on signs, documents, menus. Google’s ability to understand text was ahead of understanding random objects. We even had a prototype in 2016 where people could photograph text, tap it, and get search results. They could, for instance, find out more about a band on a poster or learn about a pasta dish they were looking at on a menu.
This insight allowed us to build something truly useful fairly quickly. Since then, AI technology has improved tremendously and we’ve been able to introduce more and more capabilities into Lens. People still love using Lens for text, but now users can search pretty much anything visually. And we’re constantly incorporating new AI-powered features to make Lens even more impactful when responding to people’s visual search queries.
RLP: It has been fascinating to see the evolution from text to the expansive capabilities we know Google Lens has today. Has there been a particularly memorable way you’ve used it yourself?
LW: One of the coolest things was seeing my son understand how to use Google Lens even before he was two years old. If you have kids, you know they will point at anything they see and ask “What’s that?” Kids are inherently visual.
My son is now six years old and he’s a power user. He goes around the house asking questions about all the things he sees.
A fun moment happened recently when we saw a weird animal in our backyard. We used Google Lens to find out that it was a baby opossum. Then, using Lens’ multimodal search capabilities, we asked questions like how big they get. Apparently up to two feet, which is quite scary!
We also discovered that opossums live in North America, while possums live in Australia.

But my kid isn’t the only one using Google Lens. I often hear about families going on neighbourhood walks and seeing flowers and wondering what they are. Lens gives them a natural way to ask those questions.
RLP: As the product lead for Circle to Search, I was thrilled to launch this intuitive, gesture-based way to explore our world last year. It builds upon your pioneering work with Google Lens. How has it changed the game for visual search?
LW: Circle to Search has been transformative. People often think of Google Lens as taking a photo with their camera to ask a question. Circle to Search takes this further by allowing them to circle a part of an image or tap on an object featured in it. They can even “circle” screenshots or social media posts without ever switching apps.
The moment someone sees something and has a question, they can instantly ask it. This ease of use is incredibly powerful and users love it.
It’s been particularly impactful for shopping. People are constantly inspired by what they see on YouTube and social media. A common use case is seeing a cool outfit online that an influencer is wearing and wanting to know more about it. Circle to Search makes this seamless. Users can “circle” a part of the outfit to find it, and even explore similar styles in their price range from nearby shops.
RLP: It’s great to know that people are using visual search for their shopping. What other common use cases have you seen?
LW: In Europe and Africa, shopping is definitely the hero use case, particularly for items that are hard to describe in words — think apparel, home goods, and accessories.
An area where we also see a lot of usage is education. In countries like India and Indonesia, schoolwork is often given in English, so students use Google Lens to translate it into languages like Hindi to better understand the questions. They can also get homework help directly from Lens and Search. This use case is growing in other markets too.
Another fun trend we’ve observed is that Google Lens queries related to the natural world — like animals and plants — are particularly strong in Germany and Japan. It seems there’s a special appreciation for nature in those countries!
And we’re seeing people increasingly use multisearch in Lens to express more complex intents. They don’t just share a photo of a leaf, for example, but they also ask an accompanying question, such as “How do I take care of this plant?”

RLP: From a business perspective, how can marketers make the most of visual search?
LW: For discoverability, it’s important that brands have many great images of products, with accurate and specific metadata to describe what that imagery contains.
For example, if you are selling a dress, provide images that show the item from multiple angles both on its own and on different models with diverse body shapes. Users take photos from different angles themselves as well, so they find this visual variety and context valuable. And, as a retailer, this variety makes your dress images more likely to stand out in search.
Another point, particularly relevant in apparel, is that seasons change very quickly. Items sold six months ago might no longer be available. Yet we see a lot of people still trying to find them. Maybe they saw someone on TV wearing a particular dress and want to know what it is and where to buy it.
If a product completely disappears from the internet, we can no longer help with that question. It’s beneficial to keep those older product pages and images online so they’re accessible, and to add information about a “new season” version or suggest similar items that are in stock. Even if the original item is no longer available, it’s an opportunity to direct customers to something comparable.
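To make that advice concrete, here is a minimal sketch of the kind of product metadata Wang describes, expressed as schema.org Product structured data and generated with Python. The product name, URLs, prices, and the current-season alternative are hypothetical illustrations; the schema.org fields themselves (multiple images, an Offer with an explicit availability value, a pointer to a similar product) are the standard vocabulary for describing this information to search engines.

```python
import json

# Hypothetical example: metadata for a dress, with several images
# (different angles and models) and an explicit availability signal.
# Field names follow the schema.org Product and Offer vocabulary.
product_metadata = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Linen Midi Dress (last season)",      # hypothetical product
    "description": "Sleeveless linen midi dress in sage green.",
    "image": [                                     # multiple angles and models
        "https://example.com/img/dress-front.jpg",
        "https://example.com/img/dress-back.jpg",
        "https://example.com/img/dress-model-a.jpg",
        "https://example.com/img/dress-model-b.jpg",
    ],
    "offers": {
        "@type": "Offer",
        "price": "89.00",
        "priceCurrency": "EUR",
        # Keep the page live even when the item sells out,
        # rather than removing it from the internet entirely.
        "availability": "https://schema.org/OutOfStock",
    },
    "isSimilarTo": {                               # point to an in-stock alternative
        "@type": "Product",
        "name": "Linen Midi Dress (new season)",
        "url": "https://example.com/dress-new-season",
    },
}

# Emit the JSON-LD block that a page template would embed.
print(json.dumps(product_metadata, indent=2))
```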
RLP: Finally, what does the future hold for visual search?
LW: The recent announcement of Search Live — rolling out in English to the U.S. this summer — adds a way to talk back and forth with Search that feels even more natural.
Imagine you’re looking at a painting and can just say out loud: “This style is really interesting, what can you tell me about it?” And then, as you learn the style is Impressionism, you might ask, “Who are some well-known artists who work in that style?” That’s the kind of seamless, real-time capability we’re building, making interaction simpler and more aligned with how people actually think and explore.
It’s part of the new AI-driven experiences we’re bringing to our products. A significant development has been the integration of AI Overviews. Where we once might have shown a grid of visually similar images, now we can provide nuanced understandings and respond to more complex questions.
Another key focus is making it easier to search “in the moment”. Circle to Search has been fantastic for Android users and we’re expanding these experiences to desktop. Google Lens is now integrated into the Chrome browser, which means that when inspiration strikes or a question arises from something you see, the ability to explore is right there.
We’ll continue to enhance these experiences as we empower everyone to explore their curiosity as naturally as they think. That way, people can point and ask questions — just like a two-year-old might — to get the information they’re looking for.