Google became the gateway to the internet by perfecting its search engine. For two decades, it surfaced 10 blue links that gave people access to the information they were looking for.
But after a quarter century, the tech giant is betting that the future of search will be artificial intelligence. On Tuesday, Google said it was introducing a new feature in its search engine called A.I. Mode. The tool will function like a chatbot, allowing people to start a query, ask follow-up questions and use the company’s A.I. system to deliver comprehensive answers.
“It’s a total reimagining of search,” said Sundar Pichai, the chief executive of Google, in a press briefing ahead of the company’s annual conference for software developers. In tests of the feature, he said, people dramatically “changed the nature of how they are interacting with search.”
The feature headlined a list of new A.I. abilities, including more personalized and automated email replies and a shopping tool to automatically purchase clothing after it’s put on sale.
With the introduction of A.I. Mode, Google is essentially trying to disrupt its traditional search business before upstart A.I. competitors can disrupt it. The search giant has been nervous about that possibility since declaring a “code red” two years ago after the arrival of ChatGPT, a chatbot from OpenAI that ignited a race to add generative A.I. into tech products.
But Google has been hesitant to fully embrace A.I. because it has so much to lose. The company’s search business generated nearly $200 billion last year, more than half of its total sales. And the bedrock of that business has been how it has reliably provided people with the best answers to questions.
Though they are a technical leap, A.I. systems have one big shortcoming: They are prone to giving incorrect answers, like recommending people eat rocks, which one of Google’s A.I. systems did last year.
A.I. might have already begun to cut into Google’s popularity as the default for finding digital information. During testimony in a Justice Department antitrust case against Google this month, one of Apple’s top executives, Eddy Cue, said Google search traffic had declined for the first time in 22 years because more people were using artificial intelligence. Google said afterward that it continued to “see overall query growth” in search.
“They hesitated on this for a long time because they didn’t think the quality was good or know how to monetize it,” said Pete Meyers, the principal innovation architect at Moz, a software company focused on search engine optimization that tracks changes to Google Search. “Now they’re doing what they think they have to do to compete, and it’s uncomfortable.”
A.I. Mode, which launched in the United States on Tuesday, won’t serve ads initially. Google is holding an event for marketers and advertisers on Wednesday where it could unveil more.
The A.I. transition comes amid mounting antitrust pressures to break up Google’s business. Over the past two years, Google has lost a string of antitrust cases after being found to have a monopoly over its app store, search engine and advertising technology. The U.S. government argued this month that the company should have to give competing search engines and A.I. companies access to its data on what users search for and click on.
Google’s new A.I. features are also bound to deepen tensions between Google and web publishers who are concerned about traffic. Chatbots often lift information from websites and deliver it directly to people, upending the traditional search model that has sent people across the web to find material.
The company has sought to downplay publishers’ concerns that A.I. will disrupt their businesses. Mr. Pichai said A.I. Overviews, a feature the company introduced last year to generate summaries above traditional search results, has increased the number of searches people do and often leads people to spend more time on suggested websites.
At its Silicon Valley event, Demis Hassabis, the chief executive of Google DeepMind, the company’s artificial intelligence lab, presented Google’s newest model, Gemini 2.5 Pro, which does more “reasoning” to deliver more accurate results than its predecessor. He said the lab had continued its work to turn Gemini, its A.I. chatbot and app, into a virtual assistant that can identify and address real-world problems, like fixing a broken bicycle.
The Gemini system is the backbone of a new personalized smart reply ability in Gmail. With users’ permission, the system pulls from past emails to see how a user writes and suggests automated responses that reflect that person’s tone and style. For example, Mr. Pichai showed the system automatically drafting an email by scanning his inbox and calendar to piece together suggestions from a recent road trip through Colorado for a friend making a similar drive. It can also automatically delete and archive emails with a new tool called inbox cleanup.
Google is also bringing Gemini to the Google Search shopping experience with a new chatbot that allows users to refine their searches. A user can ask the system to surface a rug that matches a gray couch and then refine the results to show rugs that are easy to clean. There’s also an A.I. agent that will let people set the maximum price they’re willing to pay for an item, like a purse, and automatically purchase it the moment it goes on sale.
Gemini’s chatbot abilities will be added to the company’s Chrome web browser. By touching a button, users can open a box to converse with an A.I. assistant that can review and answer questions about the information on a website.
The company introduced a new subscription service for developers called Google A.I. Ultra, which provides access to the company’s premier models and features, including tools to develop A.I. agents, videos and coding. Priced at $250 a month, it is designed to compete with similar services like OpenAI’s ChatGPT Pro and Anthropic’s Claude Max, which both cost $200 a month.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
In an interview before the event, Mr. Hassabis said the company was trying to preserve traditional search while also bringing new A.I. abilities to the experience with its A.I. summaries, A.I. Mode chatbot and Gemini assistant.
“They’re all very interesting and exciting and have some complementary aspects,” he said. The company will watch how each one develops over the next two years, he added, “but for now, we’ve got to make sure we’re winning on all those fronts.”
While most of the event was dedicated to software, Google showed that it hoped to bring its A.I. offerings to devices in the future. The company introduced Android XR glasses equipped with cameras and speakers that a Gemini virtual assistant could use to identify and comment on people and places backstage at Shoreline Amphitheatre.
The glasses, which were only a prototype, could display texts and take photos. The video was choppy at times during the live demonstration.
Google has been working on and off on glasses for more than a decade. Google Glass, which it introduced in 2013, was widely considered to be a failure and abandoned by the company. Warby Parker and Gentle Monster are teaming up with Google to build glasses with its Android XR software system.
The glasses thrust Google into a competition with Meta, Apple and OpenAI, which are all hoping to move the capabilities of people’s phones to their faces and complement them with A.I. assistants that are processing the world in real time.
“There’s something here with these A.I. glasses, and at the right price, it can go mainstream,” said Ben Bajarin, the chief executive of Creative Strategies, a tech research firm. “It will be a long time before this replaces the smartphone, but we’re close to a day when these A.I. assistants will always be there.”