Want proof that Google really has gone all-in on AI? Then look no further than today’s Google I/O 2025 keynote.
Forget Android, Pixel devices, Google Photos, Maps and all the other Google staples – none were anywhere to be seen. Instead, the entire two-hour keynote was devoted to Gemini, Veo, Flow, Beam, Astra, Imagen and a host of other tools designed to help you navigate the new AI landscape.
There was a lot to take in, but don’t worry – we’re here to give you the essential round-up of everything that was announced at Google’s big party. Read on for the highlights.
1. Google Search got its biggest AI upgrade yet
‘Googling’ is no longer the default in the ChatGPT era, so Google has responded. It’s brought its AI Mode for Search (previously just an experiment) to everyone in the US, and that’s only the start of its plans.
Within that new AI Mode tab, Google has built several new Labs tools that it hopes will stop us from jumping ship to ChatGPT and others.
A ‘Deep Search’ mode lets you set it working on longer research projects, while a new ticket-buying assistant (powered by Project Mariner) will help you score entry to your favorite events.
Unfortunately, the less popular AI Overviews are also getting a wider rollout, but one thing’s for sure: Google Search is going to look and feel very different from now on.
2. Shopping with AI is about to get a lot smarter
Shopping online can go from easy to chaotic in moments, given the sheer number of brands, retailers, sellers and more – but Google is aiming to use AI to streamline the process.
That’s because the aforementioned AI Mode for Search now offers a mode that can react to shopping-based prompts, such as ‘I’m looking for a cute handbag’, serving up products and images for inspiration and letting users narrow down huge ranges of goods; that is, if you live in the US, as the mode is rolling out there first.
The key new feature in the AI-powered shopping experience is a try-on mode that lets you upload a single picture of yourself, from which Google’s combination of its Shopping Graph and Gemini AI models will let you virtually try on clothes.
The only caveat here is that the try-on feature is still at the experimental stage, and you need to opt in to the ‘Search Labs’ program to give it a go.
Once you have the product or outfit in mind, Google’s agentic checkout feature will essentially buy the product on your behalf, using the payment and delivery details stored in Google Pay; that is, if the price meets your approval – you can set the AI to track the cost of a particular product and only have it buy the item when the price is right. Neat.
3. Beam could reinvent video calls
Video calls are the bane of many people’s lives, particularly if you work in an office and spend 60% of your time in them. But Google’s new Beam could make them a lot more interesting.
The idea here is to present calls in 3D, as if you’re in the same room as the person you’re talking to; a bit like VR. However, there’s no need for a VR headset or glasses here, with Beam instead using cameras, mics and – of course – AI to work its magic.
If that all sounds rather familiar, that’s because Google has teased this before, under the name Project Starline. But it’s no longer a far-off concept, because it’s here, and almost ready for people to use.
The caveat is that both callers will need to sit in a custom booth that can generate the required 3D renders. But it’s all pretty impressive nevertheless, and the first business customers will be able to get the kit from HP later in 2025.
4. Veo 3 just changed the game for AI video
AI video generation tools are already hugely impressive, given that they didn’t even exist a year or two ago, but Google’s new Veo 3 model looks like taking things to the next level.
As with the likes of Sora and Pika, the tool’s third-generation version can create video clips and then tie them together to make longer movies. But unlike those other tools, it can also generate audio at the same time – and expertly sync sound and vision together.
Nor is this capability limited to sound effects and background noise, because it can even handle dialogue – as demonstrated in the clip above, which Google demoed in its I/O 2025 keynote.
“We’re emerging from the silent era of video generation,” said Google DeepMind CEO Demis Hassabis – and we’re not going to argue with that.
5. Gemini Live is here – and it’s free
Google Gemini Live, the search giant’s AI-powered voice assistant, is now available for free on both Android and iOS. Previously a paid-for option, this move opens up the AI to a wealth of users.
With Gemini Live, you can talk to the generative AI assistant using natural language, as well as use your phone’s camera to show it things, from which it will extract information to serve up related answers. Plus, the ability to share your phone’s screen and camera with Gemini Live, previously available on Android, has now been extended to compatible iPhones.
Google will start rolling out Gemini Live for free from today, with iOS users able to access the AI and its screen-sharing features in the coming weeks.
6. Flow is a new AI tool for filmmakers
Here’s one for all the budding film directors out there: at I/O 2025, Google took the covers off Flow, an AI-powered tool for filmmakers that can create scenes, characters and other movie assets from a natural language text prompt.
Let’s say you want to see doctors perform an operation in the back of a 1970s taxi; well, pop that into Flow and it will generate the scene for you, using the Veo 3 model, with surprising realism.
Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow will be available to subscribers on the Google AI Pro and Google AI Ultra plans in the US, with more countries to come.
And it could be a tool that lets budding directors and cinematic video makers test out scenes and storytelling more effectively, without needing to shoot a load of clips.
Whether this will improve filmmaking planning or yield a whole new generation of cinema, where most scenes are created using generative AI rather than physical sets and traditional CGI, remains to be seen. But it looks like Flow could open up moviemaking to more than just keen amateurs and Hollywood directors.
7. Gemini’s creative skills are now even more impressive
Gemini is already a pretty smart choice for AI image generation; depending on who you ask, it’s either slightly better or slightly worse than ChatGPT, but essentially in the same ballpark.
Well, now it may have moved ahead of its rival, thanks to a big upgrade to its Imagen model.
For starters, Imagen 4 brings with it a resolution boost, to 2K – meaning you’ll be better able to zoom into and crop its images, or even print them out.
What’s more, it will also have “remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles”, Google says – and judging by the image above, that looks pretty spot on.
Finally, Imagen 4 will give Gemini improved abilities at spelling and typography, which has bizarrely remained one of the hardest puzzles for AI image generators to solve so far. It’s available from today, so expect even more AI-generated memes in the very near future.
8. Gemini 2.5 Pro just got a groundbreaking new ‘Deep Think’ upgrade
Enhanced image capabilities aren’t the only upgrades coming to Gemini, either – it’s also got a dose of extra brainpower with the addition of a new Deep Think Mode.
This basically augments Gemini 2.5 Pro with a function that means it will effectively think harder about the queries posed to it, rather than trying to kick out an answer as quickly as possible.
This means the latest pro version of Gemini will run multiple possible lines of reasoning in parallel, before deciding on how to respond to a query. You could think of it as the AI looking deeper into an encyclopaedia, rather than winging it when coming up with information.
There is a catch here, in that Google is only rolling out Deep Think Mode to trusted testers for now – but we wouldn’t be surprised if it got a much wider release soon.
9. Gemini AI Ultra is Google’s new ‘VIP’ plan for AI obsessives
Would you spend $3,000 a year on a Gemini subscription? Google thinks some people will, because it’s rolled out a new Gemini AI Ultra plan in the US that costs a whopping $250 a month.
The plan isn’t aimed at casual AI users, obviously; Google says it offers “the highest usage limits and access to our most capable models and premium features” and that it’ll be a must if “you’re a filmmaker, developer, creative professional or simply demand the best of Google AI with the highest level of access.”
On the plus side, there’s a 50% discount for the first three months, while the previously available Premium plan also sticks around for $19.99 a month, though it’s now renamed AI Pro. If you like the sound of AI Ultra, it will be available in more countries soon.
10. Google just showed us the future of smart glasses
Google finally gave us the Android XR showcase it has been teasing for years.
At its core is Google Gemini – on-glasses Gemini can find and direct you towards cafes based on your food preferences, perform live translation, and answer questions about things you can see. On a headset, it can use Google Maps to guide you around the world.
Android XR is coming to devices from Samsung, Xreal, Warby Parker, and Gentle Monster, though there’s no word yet on when they’ll be in our hands.
11. Project Astra also got an upgrade
Project Astra is Google’s powerful mobile AI assistant that can react and respond to the user’s visual surroundings, and this year’s Google I/O has given it some serious upgrades.
We watched as Astra gave a user real-time advice to help him fix his bike, speaking in natural language. We also saw Astra argue against incorrect information as a user walked down the street mislabeling the things around her.
Project Astra is coming to both Android and iOS today, and its visual recognition function is also making its way to AI Mode in Google Search.
12. …As did Chrome
Is there anything that hasn’t been given an injection of Gemini’s AI smarts? Google’s Chrome browser was one of the few tools that hadn’t, it seems, but that’s now changed.
Gemini is now rolling out in Chrome for desktop from tomorrow to Google AI Pro and AI Ultra subscribers in the US.
What does that mean? You’ll apparently now be able to ask Gemini to clarify any complex information that you’re researching, or get it to summarize web pages. If that doesn’t sound too exciting, Google also promised that Gemini will eventually work across multiple tabs and also navigate websites “on your behalf”.
That gives us slight HAL vibes (“I’m sorry, Dave, I’m afraid I can’t do that”), but for now it seems Chrome will remain dumb enough for us to be considered worthy of operating it.
13. …And so did Gemini Canvas
As part of Gemini 2.5, Canvas – the so-called ‘creative space’ inside the Gemini app – has been given a boost via the upgraded AI models in this new version of Gemini.
This means Canvas is more capable and intuitive, with the tool able to take data and prompts and turn them into infographics, games, quizzes, web pages and more within minutes.
But the real kicker here is that Canvas can now take complex ideas and turn them into working code at speed and without the user needing to know specific coding languages; all they need to do is describe what they want in the text prompt.
Such capabilities open up the world of ‘vibe coding’, where you can create software without needing to know any programming languages, and they also make it possible to prototype new app ideas at speed, using nothing but prompts.