Google’s annual keynote kicked off the I/O developer event today, making it one of the most significant days on the tech calendar. As usual, the company had a lot of updates to share across a variety of products.
A special edition of The Android Show last week covered most of the Android news, but Tuesday’s keynote still ranged widely, including, of course, a ton of news about artificial intelligence. Our team provided live blog coverage of the event, complete with professional analysis (and even a few jokes!).
If you’re looking for a rundown of everything Google revealed at the I/O keynote, look no further. Here are all the juicy details worth knowing about.
The AI Mode chatbot will soon be available to all US users.
It should come as no surprise that Google is pushing more generative AI features into its core products. AI Mode, which the company describes as a new chatbot, is rolling out to all US users in Search.
AI Mode, which lives in a separate tab, can handle more complex queries than people have traditionally used Search for. You could use it to compare different fitness trackers or hunt down the cheapest event tickets. Soon, AI Mode will also be able to create custom charts and graphics based on your specific queries, and it can field follow-up questions too.
The chatbot now runs on Gemini 2.5, and Google plans to fold some of AI Mode’s features into the core Search experience via AI Overviews. Labs users will get first access to the new features before Google rolls them out more broadly.
Meanwhile, AI Mode powers some new shopping features. You’ll soon be able to upload a single photo of yourself and see how an item of clothing would look on a virtual version of you.
Google will also be able to notify you when an item you want (in your preferred size and color) goes on sale at a price you’re willing to pay, much like Google Flights does when it tracks fare drops. If you like, it can even complete the purchase for you.
1.5 billion people view AI Overviews every month.
According to Google, more than 1.5 billion people see AI Overviews each month. These Gemini-powered summaries appear at the top of search results despite their numerous issues. The company claims the “overwhelming majority” of users engage with them in a meaningful way, which could mean clicking on an item in an overview or leaving it on screen for a while (presumably to read it).
Nevertheless, some people dislike AI Overviews and would rather just have a list of links to the information they need. You know, like Search used to be. As it turns out, there are a few simple ways to declutter the results.
Another look at Google’s all-purpose AI assistant.
Google first showed us Project Astra, its vision for a universal AI assistant, at last year’s I/O, and this time the company shared more details. In a demo, Astra helped fix a mountain bike by digging through emails to find the bike’s specifications, looking up information online, and calling a nearby shop to ask about a replacement part.
Some people may find Astra’s capabilities (such as giving it access to Gmail) too invasive, but it already feels like the pinnacle of Google’s efforts in the AI assistant and agent space. In any case, Google wants to turn Gemini into a multipurpose AI assistant that can handle everyday tasks, and the Astra demo is the first time we’ve really seen that vision in action.
Google says the new Gemini 2.5 offers enhanced functionality, improved security and transparency, more control, and better cost efficiency. Gemini 2.5 Pro also gains Deep Think, a new enhanced reasoning mode. The model can transform a grid of photos into a 3D sphere of images and add narration to each one, and Gemini 2.5’s text-to-speech can now switch between languages on the fly. Naturally, there’s far more to it than that; our Gemini 2.5 story has the details.
Gmail’s “smart replies,” which let you quickly respond to an email with an acknowledgement, are getting personalized versions designed to match your writing style. Gemini digs through your emails and Drive files to make this work. Understandably, some people will find that uncomfortable. If nothing else, you’ll have to grant Gemini permission before it can rummage through your personal data. The feature will be available to subscribers in Gmail starting this summer.
A real-time translation feature is coming to Google Meet, which should be very helpful for some people. In a demo, Meet translated from Spanish to English while preserving the speaker’s tone and cadence.
Real-time translation between Spanish and English is rolling out in beta this week to subscribers on Google AI Pro and Ultra plans (more on those momentarily). Other languages will follow soon.
Gemini Live, the feature Google brought to Pixel phones last month, is coming to all compatible Android and iOS devices through the Gemini app, which now has more than 400 million monthly active users. It lets you ask Gemini questions about live video from your phone’s camera as well as whatever is on your screen. Google is rolling Gemini Live into the Gemini apps for iOS and Android starting today.
Google Search Live is a similar-sounding feature. You’ll be able to “talk” with Search about what your phone’s camera can see. It will be available via AI Mode and Google Lens.
Flow, which builds on VideoFX, is a new AI filmmaking app with features like perspective and camera-movement controls, a way to bring Veo-generated content into projects, and options to edit and extend existing shots. Google AI Pro and Ultra subscribers in the US can access Flow starting today, and Google says it will come to more markets soon.
Google is also updating Veo, its video generation model. Veo 3, the latest version, is the first that can generate video with sound, though whether it can give the footage any soul or real meaning is another matter. The company also says its Imagen 4 model is better than previous versions at producing photorealistic images and handling fine details like fur and upholstery.
Google has a handy tool to help you figure out whether a piece of content was made with its AI tools. As the name suggests, it’s called SynthID Detector, after SynthID, the technology that adds digital watermarks to AI-generated content.
According to Google, SynthID Detector can scan an image, audio clip, video or piece of text for the SynthID watermark and highlight the portions most likely to contain one. Early testers can try it out starting today, and Google has opened a waitlist for media professionals and researchers. (Generative AI companies really ought to get a version of this technology into educators’ hands as soon as possible.)
The monthly cost of the new AI Ultra plan is $250.
Google wants to charge $250 per month for its new AI Ultra plan, which grants access to all of its AI features. “LOL” is about the only appropriate response to that. LMAO, even. I hardly ever use either of those abbreviations, which should underline how ridiculous this is. Why are we even here? That is outrageously expensive.
In any case, the plan offers unlimited use of features that are costly for Google to run, like Deep Research, along with early access to the company’s newest tools. It includes 30TB of storage across Gmail, Drive and Google Photos, plus YouTube Premium, which is arguably the best product Google makes. The existing AI Premium plan, meanwhile, has been renamed Google AI Pro.
A second Android XR device has been announced.
As promised during last week’s episode of The Android Show, Google gave another look at Android XR. The company’s goal with the platform is to do for virtual, mixed and augmented reality what Android did for smartphones. Despite its earlier efforts in those areas, Google is currently lagging behind the likes of Apple and Meta.
The first Android XR demo at I/O didn’t offer much to get excited about for now. It showed off features like 360-degree immersive videos and a mini Google Map you can pull up on an integrated display. Hardware that can actually run this stuff is still a ways off.
Google also unveiled the second Android XR device: Project Aura, a pair of tethered smart glasses being developed by Xreal. We’ll have to wait a little longer for more details on Google’s own Android XR headset, which it’s building in partnership with Samsung and which is slated to arrive later this year.
The second Android XR demo was even more intriguing. Using a smart glasses prototype developed in collaboration with Samsung, Google showed off a live translation feature for Android XR. Like many accessibility-focused AI applications, it looks like a genuinely useful tool. Warby Parker and Gentle Monster are also making smart glasses that run Android XR. Just please don’t call it Google Glass.
A new version of Chrome’s password manager is coming.
Google is giving Chrome’s password manager a genuinely useful weapon against hackers: it will automatically change passwords on accounts that have been compromised in data breaches. If a website, app or business is breached, user data is exposed and Google learns about it, the password manager will let you generate a new password and update a compatible account with a single click.
The main catch is that this only works with websites that have joined the program; Google is working with developers to add support. Still, anything that makes it easier to lock down your accounts is unquestionably a good thing. (And if you aren’t using a password manager already, you really should be.)
Speaking of Chrome, Google is also cramming Gemini into the browser. You’ll be able to ask the AI assistant questions about the tabs you have open, and you’ll find it in both a new menu at the top of the browser window and in the taskbar.
Google’s 3D video conference booths are now called Beam.
Project Starline is a 3D video conferencing initiative we first learned about a few years ago, and we had a great time with the technology when we tried it out at I/O 2023.