Google I/O 2025: Live updates on Gemini, Android XR, Android 16 and more
A cavalcade of new Gemini AI services and enhancements has already been announced.
The AI announcements are coming at a breakneck pace as Google I/O gets underway in Mountain View, California. A parade of Google executives, led by CEO Sundar Pichai, has taken the stage to explain how the search giant is weaving its Gemini artificial intelligence model throughout its entire portfolio of online services as it battles OpenAI, Microsoft and numerous other competitors for dominance in the AI space. Among the announcements so far: Google Meet will soon feature real-time AI-powered translation, Project Astra’s computer vision capabilities are becoming significantly more sophisticated and Google’s Veo video generation model is getting a significant upgrade.
Join our on-the-ground reporter Karissa Bell (supported by Engadget staff off-site) for real-time updates in the liveblog below as the announcements continue on stage. Our liveblog will mostly concentrate on the headline keynote, but, as is customary, a developer-focused keynote follows the main presentation at 4:30 PM ET/1:30 PM PT, and we’ll be watching to make sure we don’t miss any news from it.
You can watch Google’s keynote on the company’s YouTube channel or in the livestream embedded above. Note that the company will also host breakout sessions covering a wide range of developer topics through May 21.
Thanks to everyone who watched and joined us on this wild AI ride.
Please visit Engadget for more stories about everything Google announced today.
You can upload a single photo to Google’s AI Mode to virtually try on clothing.
During today’s I/O 2025 keynote, Google revealed new features designed to bring AI into online shopping. Described as part of its new shopping experience in AI Mode, the three new tools cover the exploration, try-on and checkout phases of the process. They will be available to shoppers in the US “in the coming months.”
The first update kicks in when you’re trying to find a specific item to purchase; Google’s examples were searches for travel bags and for a rug that blends in with a room’s other furnishings. Google’s AI combines Gemini’s reasoning powers with its Shopping Graph database of products, using your query to determine that you want to see lots of pictures, and displays a new panel full of images.
Read more: You can upload a single photo to Google’s AI Mode to virtually try on clothing.
Alright, now that the flood of AI has subsided: I understand why people are excited about this technology, but it almost seems as though Google is throwing features at a wall to see what sticks. There’s a lot going on, and trying to cover a million different applications in a two-hour presentation quickly gets disorganized.
And that appears to be it. Whew!
Wifi has been slipping here since that AR glasses demo, so I’m a little late, but it’s worth taking a moment to reflect on it. Google Glass, the company’s first attempt at augmented reality glasses, arrived more than ten years ago. A lot has changed in the technology (and how we view it) since then, so it’s great to see Google attempting something this ambitious. Hiccups aside, the demo looked comparable to Meta’s Orion prototype from last year.
Pichai is now telling a story about riding in a driverless Waymo with his father and reflecting on the potential power of this technology.
With the show wrapping up, what did we think of the announcements Google made today?
According to Pichai, Google is developing a new product called FireSat, which can monitor fire outbreaks more closely, a capability people in states like California will particularly appreciate.
Google’s Veo 3 AI model can generate videos with sound.
Google unveiled its latest media generation models at its I/O developer conference this year. Perhaps the most noteworthy is Veo 3, the first version of the model capable of producing videos with sound; it can generate a video of birds singing, for example, or a city street with traffic noise in the background. Google claims Veo 3 is also very good at lip syncing and real-world physics. The model is currently available to Gemini Ultra subscribers in the US via the Gemini app, as well as to enterprise users on Vertex AI. It’s also offered in Flow, Google’s new AI filmmaking tool.
Read more: Google’s Veo 3 AI model can generate videos with sound.
There’s Sundar Pichai again, presumably to close out the show.
It appears that during the keynote, Gemini was mentioned more than AI itself.
Google says it’s working hard to build a platform for these kinds of glasses, with retail devices possibly arriving later this year.
Warby Parker and Gentle Monster are expected to be the first eyewear companies to build on the Android XR platform.
It appears that Google has tempted the gods of live demos too often, as one pair of glasses mistook Izadi’s speech for Hindi.
Google Search Live lets you ask questions about what your camera sees.
At I/O 2025, Google announced a new AI feature for Search that lets you talk with it in real time about what it sees through your camera. More than 1.5 billion people use Google Lens for visual search, according to Google, and the company is now pushing multimodality further by integrating Project Astra’s live capabilities into Search. With the new feature, called Search Live, you can have a back-and-forth conversation with Search about what’s in front of you. For example, you could point your camera at a challenging math problem and ask it to solve it, or to explain a concept you’re struggling to understand.
Read more: Google Search Live lets you ask questions about what your camera sees.
Izadi is now attempting what he calls a “risky demo”: communicating in real time with another presenter using live translation from Farsi.
Naturally, the glasses are also capable of taking pictures.
During the keynote, Izadi even admits to using his glasses as a personal teleprompter.
Update, May 19, 2025, 1:01 PM ET: This story has been revised with a new timestamp and information about the developer keynote later in the day. Wording changes have also been made throughout.
Update, May 20, 2025, 9:45 AM ET: A liveblog of the event has been added to this story.
Update, May 20, 2025, 2:08 PM ET: An initial set of headlines from the I/O keynote has been added to this story.