Google conquers the AI world

A changing AI "world order"?

In today's piece...

What a week—AI news whiplash.

I got halfway through a post about auth in MCP, then Google came out with their latest Gemini 2.5 Pro model and beefed-up Deep Research offering.

Then I got halfway through a post about Google when OpenAI dropped o3 and o4-mini on us…

[Image: me trying to keep up with AI]

AI is outrunning my keyboard. My brain hurts—in a good way 😛

Today, let’s stick with Google. They’ve been working some magic recently and deserve a dedicated post (don’t worry, I’ll get to OpenAI’s latest craziness next time).

In just the last month, Google has announced a tsunami of cool innovations. I started “bullet point”-ing these, but they quickly morphed into a longer table.

| Thing | Summary | Relevant for CloudSec |
| --- | --- | --- |
| Gemini 2.5 Pro | Arguably the best model in the world right now | Definitely |
| Upgraded Gemini Deep Research | A significantly better Deep Research offering powered by their new model | Definitely |
| Agent2Agent (A2A) | A new open protocol standardising AI-to-AI communication | Kinda |
| AI Agent Development Kit (ADK) | An open-source framework that simplifies building sophisticated multi-agent systems | Kinda |
| Google Unified Security | A solution that integrates their security capabilities across threat intelligence, security operations and cloud security | Definitely or Kinda |
| Ironwood TPU | Their seventh-generation chip optimised for AI, achieving a 10x improvement over its predecessor and a wild 42.5 exaflops of compute per pod | Kinda |
| New generative media suite | A market-leading suite of generative media capabilities, spanning image generation (Imagen 3), audio generation (Chirp 3), text-to-music (Lyria) and video generation (Veo 2) | Not really but cool |

💡Key:

  • Definitely - Relevant to day-to-day usage of AI in our jobs as cloud security specialists

  • Kinda - Great to understand, but may not directly affect our day-to-day just yet

  • Not really but cool - Probably not immediately useful for our work, but cool and great for personal usage

Let’s take a quick tour of these new capabilities, focusing on the two that I expect to have an immediate payoff for us professionally.

Then we’ll end with some ruminations about Google’s market position within the AI world and where things might be going.

Gemini 2.5 Pro

First, they released Gemini 2.5 Pro Experimental around two weeks ago, which is their new state-of-the-art reasoning model. It leads the market across multiple benchmarks and is rated #1 on the LMArena leaderboard.

It’s arguably the best model in the world right now.

On the tricky, multi-modal benchmark Humanity's Last Exam, it achieves state-of-the-art performance for a model without tool access, scoring a stellar 18.8%.

This new model also kicks butt at coding, scoring 63.8% on SWE-Bench Verified; watch it one-shot a 3D Rubik's Cube simulator.

So how does this help us cloud security folks? Myriad ways, but here are a few examples:

  • Quick technical answers - as the new top-ranked reasoning model, it’s now my go-to for getting quick answers to technical questions and other general queries.

  • Keeping up with industry news - a massive context window of 1M+ tokens lets it chew through PDFs, books, whole codebases, or YouTube videos and produce timestamped summaries. Perfect for keeping up with the news.

  • Large-scale log and Infrastructure-as-Code analysis - Great for large-scale cloud log analysis or compliance scans of your IaC codebases. Just be sure your enterprise agreement covers using company data ;)
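To make that last point concrete, here's a minimal sketch of one practical hurdle: even with a 1M-token window, big log exports won't always fit in a single request, so you end up pre-chunking them. The ~4-characters-per-token heuristic, the token budget, and the log format below are all assumptions for illustration, not official figures.

```python
# Rough sketch: greedily pack log lines into chunks that fit an estimated
# token budget, before sending each chunk to a large-context model.
# Heuristic: ~4 characters per token (an assumption, not an official figure).

def chunk_log_lines(lines, token_budget=1_000_000, chars_per_token=4):
    """Split log lines into chunks whose estimated token count fits the budget."""
    chunks, current, current_tokens = [], [], 0
    for line in lines:
        est = max(1, len(line) // chars_per_token)
        if current and current_tokens + est > token_budget:
            chunks.append("\n".join(current))
            current, current_tokens = [], 0
        current.append(line)
        current_tokens += est
    if current:
        chunks.append("\n".join(current))
    return chunks

# Hypothetical CloudTrail-ish log lines; a tiny budget forces multiple chunks
logs = [f'{{"eventName": "AssumeRole", "id": {i}}}' for i in range(100)]
chunks = chunk_log_lines(logs, token_budget=100)
```

Each chunk can then go out as its own prompt ("flag anything anomalous in these events"), with the per-chunk findings merged in a final pass.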

Gemini Deep Research (new & improved)

Gemini Deep Research, Google's AI-powered research assistant, has become significantly more capable now that it runs on their new model.

What’s involved in Deep Research?

Well, you’ve got to be great at synthesising information across a large set of data sources. This is precisely the sort of thing that Gemini 2.5 Pro excels at, with its boosted reasoning abilities plus massive context window.

So in terms of my day-to-day as a cloud security professional, I now turn to Gemini Deep Research first to research niche technical questions that come up on the job.

The quota is more generous than OpenAI's Deep Research, and queries run much faster (5-15 minutes vs 30+).

Gemini Deep Research is also great for researching new vulnerabilities that might affect your cloud services. It’ll scour CVE feeds and vendor blogs, then come back with a summary and suggested mitigations.

Right now, my workflow by “question type” looks something like:

| Scenario | Preferred tool |
| --- | --- |
| Low-value questions with time constraint | perplexity.ai 'pro' search |
| Mid-value questions with time constraint | Gemini 2.5 Pro Experimental and/or o3 |
| Mid-value questions | Gemini 2.5 Pro Experimental Deep Research and/or o3 |
| High-value questions | Gemini 2.5 Pro Experimental Deep Research + ChatGPT Deep Research |

Disclaimer: I'm sure this workflow will change again in approximately five minutes as new models emerge 😅

In fact, I’ve started using the new o3 model halfway through writing this post. More to say on this one after I’ve had more time to play with it.

Multi-agent ecosystem wins: Agent2Agent & new Agent Development Kit

Google's new Agent Development Kit (ADK) makes it simple to build and deploy AI agents on top of the latest reasoning models. It's designed to integrate well with the Google ecosystem and Gemini family of models, but works with other leading LLMs as well.

They also announced their Agent2Agent (A2A) protocol, a new open protocol to help AI agents communicate with each other regardless of their underlying technology.

A2A appears to be complementary with Anthropic’s Model Context Protocol (MCP), which focuses more on AI-to-tool communication.

That said, there may be some interesting overlaps in functionality between A2A and MCP in the future, so watch this space.
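To give a feel for what A2A discovery looks like, here's an illustrative sketch of an "Agent Card", the JSON document an agent publishes so other agents can find it and learn what it can do. The field names are simplified from the draft spec, and the agent, skill, and endpoint URL are all hypothetical.

```python
import json

# Illustrative A2A-style "Agent Card" (simplified field names; the agent,
# skill, and URL below are hypothetical examples, not from the spec).
agent_card = {
    "name": "cloud-log-triage-agent",
    "description": "Triages suspicious cloud audit-log events",
    "url": "https://agents.example.com/a2a",  # hypothetical A2A endpoint
    "version": "0.1.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "triage-logs",
            "name": "Log triage",
            "description": "Flag anomalous events in a batch of audit logs",
        }
    ],
}

# Serialise the card as it might be served for discovery by other agents
card_json = json.dumps(agent_card, indent=2)
```

The interesting contrast with MCP: an MCP server describes tools for one model to call, while an A2A card advertises a whole agent that other agents can delegate tasks to.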

Clearly, the AI world is looking increasingly agentic; we’re well out of “chatbot that writes cute poems” territory in 2025.

Underscoring the point is the new “Moore’s Law for AI agents”: the length of tasks AI can complete has been exponentially increasing over the last 6 years.

A recent study from METR found that the length of tasks AIs can complete is doubling roughly every 7 months.

If this trend continues, AI agents will be able to reliably manage month-long human projects by 2030.
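A quick back-of-envelope check shows why that projection is plausible. Assuming a current reliable task horizon of roughly one hour (in the ballpark of METR's early-2025 figure) and a "month-long project" of about 167 working hours, the 7-month doubling period gets you there in a handful of years:

```python
import math

# Back-of-envelope projection of the METR doubling trend.
# Assumptions: current horizon ~1 hour, a month-long project ~167 working
# hours, and a constant 7-month doubling period.
current_horizon_hours = 1
target_hours = 167
doubling_months = 7

doublings_needed = math.log2(target_hours / current_horizon_hours)  # ~7.4
months_needed = doublings_needed * doubling_months                  # ~52
years_needed = months_needed / 12                                   # ~4.3
```

Roughly 4-5 years from 2025, which lands right around the 2030 figure, though of course the whole exercise leans on the trend holding.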

Honourable Mentions

Another Google Cloud Next 25 announcement was their Google Unified Security solution, which is an “everything in one spot” platform integrating threat intelligence, security operations and cloud security.

This sort of unified security data platform will be a powerful unlock for AI models, giving them greater context to reason across. They mention several new security-focused AI agents currently in the works, including an alert triage agent and a malware analysis agent.

Not a surprising strategic direction and I’m sure the hyperscalers are all looking to develop similar agentic capabilities.

As a cloud security professional, this platform will either be very relevant to your day-to-day or not at all depending on whether your workplace runs on GCP.

Then there's Ironwood, Google's seventh-generation Tensor Processing Unit (TPU), their in-house chip family purpose-built for AI workloads.

It delivers more than 24x the compute of the world's largest supercomputer and a 10x improvement over their previous design. 'Nuff said.

I filed this under “kinda” relevant only because there’s nothing specifically for us to take advantage of as cloud security professionals.

It’s more a big wave that will lift all boats 🌊 Read more about it here.

Google’s Vertex AI platform now offers an “all in one” shop for generative media models across all modalities.

This includes image generation (Imagen 3), audio generation (Chirp 3), text-to-music (Lyria), video generation (Veo 2).

Not as directly relevant for our day jobs in cybersecurity but fun to play around with 😛 And helpful for side hustle content creation if that’s something you’re into. Read more here.

Where Google’s At & What’s Next

They've gone from lagging behind with lacklustre AI offerings to recapturing the market lead.

In the last year, Vertex AI - their development platform for generative AI - has seen a 20x increase in usage.

Even more impressively, Gemini usage specifically on the platform has increased 40x over that time.

It feels like they were caught off guard initially, ill-prepared for AI's shift from pure R&D to consumer products.

But they’ve since woken up and put their powerful mix of world-class talent, user data, and in-house TPU chips to work.

I'm on the edge of my seat to see who wins the AI arms race over the next few years.

Hope you got something out of today's post.

Till next time,

Nelson