Google’s Latest AI Model: A Game-Changer in the Making?
Building the Future of AI
Google is racing to build AI into practically every product it owns, to ship models other developers want to use, and to stand up the infrastructure that makes all of it possible without the cost spiraling out of control. But is the latest effort, Gemini 2.0, the answer to Google’s AI prayers?
A Major Upgrade
Demis Hassabis, the CEO of Google DeepMind and the head of all the company’s AI efforts, is excited about the new Gemini 2.0 model. It’s still in an "experimental preview," but Hassabis says it’s a big deal. "Effectively, it’s as good as the current Pro model is. So you can think of it as one whole tier better, for the same cost efficiency and performance efficiency and speed. We’re really happy with that."
New Capabilities
Gemini 2.0 can now natively generate audio and images, and it brings new multimodal capabilities that lay the groundwork for the next big thing in AI: agents. Agentic AI, as everyone calls it, refers to AI bots that can actually go off and accomplish things on your behalf. Google has been demoing one, Project Astra, since this spring — it’s a visual system that can identify objects, help you navigate the world, and tell you where you left your glasses. Gemini 2.0 represents a huge improvement for Astra.
Agents and More
Google is also launching Project Mariner, an experimental new Chrome extension that can quite literally use your web browser for you. There’s also Jules, an agent specifically for helping developers find and fix bad code, and a new Gemini 2.0-based agent that can look at your screen and help you better play video games. Hassabis calls the game agent "an Easter egg" but also points to it as the sort of thing a truly multimodal, built-in model can do for you.
The Future of AI
"We really see 2025 as the true start of the agent-based era," Hassabis says, "and Gemini 2.0 is the foundation of that." He’s careful to note that the performance isn’t the only upgrade here; as talk of an industrywide slowdown in model improvements continues, he says Google is still seeing gains as it trains new models, but he’s just as excited about the efficiency and speed improvements.
Conclusion
Google’s plan for Gemini 2.0 is to use it absolutely everywhere. It will power AI Overviews in Google Search, which Google says now reach 1 billion people and will become more nuanced and complex thanks to Gemini 2.0. It’ll be in the Gemini bot and app, of course, and will eventually power the AI features in Workspace and elsewhere at Google. The multimodality, the different kinds of outputs, the features — the goal is to get all of it into the foundational Gemini model.
FAQs
Q: What is Gemini 2.0?
A: Gemini 2.0 is Google’s newest AI model. It can natively generate audio and images, and its multimodal capabilities lay the groundwork for AI agents.
Q: What is agentic AI?
A: Agentic AI refers to AI systems that can go off and accomplish tasks on your behalf, rather than simply responding to prompts.
Q: What are some examples of agentic AI?
A: Google has demoed Project Astra, a visual system that can identify objects, help you navigate the world, and tell you where you left your glasses. Others include Project Mariner, an experimental Chrome extension that can use your web browser for you; Jules, an agent for helping developers find and fix bad code; and a new Gemini 2.0-based agent that can look at your screen and help you play video games better.
Q: What are the risks of agentic AI?
A: There are both old and new problems to solve. The old ones are eternal: performance, efficiency, and inference cost. The new ones are in many ways unknown — to name just one, what safety risks will these agents pose out in the world, operating of their own accord? Google is taking some precautions with Mariner and Astra, but Hassabis says there’s more research to be done.