The Wrong Way to Use AI (and What to Do Instead)
Hint: it starts by flipping the usual script

Howdy my friend!
In today’s piece:
How to use AI effectively by “working it backwards”
Two (or three) AI “zones of competence” relevant for cloud security
An AI “master psychologist” that helps salespeople close deals
Big security vendors furiously jumping on the AI train
Other handpicked things I think you’ll like
Find out why 1M+ professionals read Superhuman AI daily.
In 2 years you will be working for AI
Or an AI will be working for you
Here's how you can future-proof yourself:
Join the Superhuman AI newsletter – read by 1M+ people at top companies
Master AI tools, tutorials, and news in just 3 minutes a day
Become 10X more productive using AI
Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.
Use AI effectively by “working it backwards”.
I’ve been turning a concept over in my brain recently about how to use AI most effectively.
I’m calling it “working AI backwards.”
Most of us start by looking at the tasks we already do and then try to streamline them with AI.
The “working AI backwards” approach flips that: start with what AI is already great at—its “zone of competence”—then figure out how to bring more of those things into your work.
It sounds simple, but it’s a huge mental unlock.
When I first dabbled with YouTube for my newsletter, I took the usual route.
I thought: “I want to repurpose my newsletter content as videos—how can I automate that with AI?”
Cue weeks of experiments, building a custom AI video agent, and discovering that forcing AI into an existing workflow can be frustrating as heck.
(If you want the full cautionary tale, see my write-up on my failed YouTube AI agent.)
After dusting myself off, I asked a different question:
“What kind of YouTube content would AI naturally excel at creating?”
The difference was night and day. Views jumped. People hit the ‘like’ button. And yes… my ADHD brain happily chased that shiny gold object 😛
Granted, this latest experiment involved content completely unrelated to this newsletter.
So yes, you could argue it didn’t meet the original objective of “repurposing my newsletter content”.
But does it meet my broader meta-objective of “creating content on the Internet people like, that I might be able to generate an income from one day?”
Yes.
And that’s exactly my point. Part of “working AI backwards” is being flexible in your execution strategy so that you can best use AI to meet your higher-level goals.
So how does “working AI backwards” apply to Cloud Security—or any technical field?
Right now, I see two big AI competence zones worth working backwards from:
AI for coding
Building portfolio projects to showcase your skills
Writing automation scripts for repetitive security tasks (see the sketch just after this list)
Prototyping solutions to tricky problems fast
(I touched on this in 3 Strategies for Cloud Security Specialists to Thrive in a Post-AI World.)
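To make that middle bullet concrete, here's the kind of script an AI coding assistant will happily draft in seconds. It's a minimal sketch, assuming boto3 is installed and AWS credentials are configured, that flags S3 buckets with no public access block:

```python
# Flag S3 buckets that have no public access block configured.
# Minimal sketch: assumes boto3 is installed and AWS credentials are set up.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[!] {name}: no public access block configured")
        else:
            raise
```

The point isn't this exact script; it's that chores like this sit squarely inside AI's coding zone, so asking for them costs you almost nothing.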
AI for information gathering / research
Quickly finding relevant SDKs, APIs, or config examples
Summarising sprawling documentation (another sketch follows below)
Surfacing relevant industry best practices
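And here's what the research zone looks like in practice: feeding a sprawling doc dump to an LLM and asking for just the parts you care about. A minimal sketch, assuming the official openai Python SDK with an OPENAI_API_KEY set; the input file and model name are illustrative:

```python
# Summarise a sprawling documentation dump for a cloud security engineer.
# Assumes the openai Python SDK and an OPENAI_API_KEY environment variable;
# the input file and model name are illustrative.
from openai import OpenAI

client = OpenAI()

with open("entra_id_auth_docs.txt") as f:  # hypothetical doc dump
    docs = f.read()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You summarise technical docs for cloud security engineers."},
        {"role": "user", "content": f"Summarise the key auth flows and config options:\n\n{docs}"},
    ],
)
print(resp.choices[0].message.content)
```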
There’s also an emerging third zone—specialised SOC automation AI—that could be huge.
I haven’t been hands-on enough to give a verdict yet, but given the repetitive, high-volume nature of SOC analysis, this might be one of AI’s ripest opportunities.
A key nuance: AI research works best when the verification cost is far lower than the generation cost.
Example: imagine you need to find SDKs that support Microsoft Entra ID authentication.
You can ask ChatGPT or Gemini for candidate SDKs. If it gives you one, verifying that support is as quick as skimming the library’s README or docs page for the words “Microsoft Entra”—a 30-second check.
If the thing you’re verifying takes as long to confirm as it would to find yourself, you’ve left the competence zone.
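If you want to see just how cheap that verification is, here's a toy version of the 30-second check. The repo URL is hypothetical, and it assumes the requests library:

```python
# Toy version of the 30-second verification step: grep a candidate SDK's
# README for "Microsoft Entra". The repo URL is hypothetical; assumes requests.
import requests

README_URL = "https://raw.githubusercontent.com/example-org/example-sdk/main/README.md"
readme = requests.get(README_URL, timeout=10).text

if "microsoft entra" in readme.lower():
    print("README mentions Microsoft Entra: worth a closer look.")
else:
    print("No mention found: treat the AI's suggestion as unverified.")
```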
The AI competence zone is expanding quickly. That’s why using AI for in-scope work—like agentic coding with Claude Code—can feel like working with baby AGI, while trying it for out-of-scope work—like most current “AI computer use” agents—feels clunky.
If your AI trials have mostly been outside that zone, it’s easy to write the tech off as hype.
But once you start working AI backwards, starting inside the sweet spots, AI stops feeling like a novelty—and starts feeling like leverage.
What’s one “working AI backwards” experiment you could try this week?
A word from our partner…
Unmanaged AI = Unmanaged Risk. Shadow IT Could Be Spreading in Your Org
You wouldn’t allow unmanaged devices on your network, so why allow unmanaged AI into your meetings?
Shadow IT is becoming one of the biggest blind spots in cybersecurity.
Employees are adopting AI notetakers without oversight, creating ungoverned data trails that can include confidential conversations and sensitive IP.
Don't wait until it's too late.
This Shadow IT prevention guide from Fellow.ai gives Security and IT leaders a playbook to prevent shadow AI, reduce data exposure, and enforce safe AI adoption, without slowing down innovation.
It includes a checklist, policy templates, and internal comms examples you can use today.
My Favourite Finds
Substrata: Fascinating AI platform that helps salespeople interpret nonverbal cues to close more deals. Maybe not as relevant to our jobs as engineers or technical people, but I find this application of AI quite fascinating, if a tad unsettling. Almost any job role involves some level of negotiating, persuading, and so on, so if AI can help in this area… there are massive implications.
‘Scouts’ by AI startup Yutori: always-on agents that monitor the web for anything you care about. Seems to only be in the waitlist stage for me, so no opinion on how good this product is. But a cool concept that caught my eye this week.
CSA Valid‑AI‑ted names Google Cloud as first AI‑assurance badge holder: The Cloud Security Alliance launched Valid‑AI‑ted, an AI‑powered quality‑assurance tool that uses large language models to automatically validate STAR Level 1 assessments and produce graded reports; Google Cloud became the first provider to earn this designation.
NTT DATA and Google Cloud team on agentic‑AI and sovereign cloud: A global partnership aims to co‑develop industry‑specific agentic‑AI solutions using Google’s Agentspace and Gemini models and modernize applications.
Booz Allen shows Vellox Reverser AI malware‑analysis engine: At Black Hat, Booz Allen demonstrated Vellox Reverser, an AI‑first, cloud‑native tool that uses agentic AI orchestration to reverse‑engineer malware at machine speed.
CrowdStrike updates Falcon Shield with AI‑agent visibility and ChatGPT integration: Falcon Shield now integrates OpenAI’s ChatGPT Enterprise Compliance API to provide secure AI interactions and enhanced visibility into AI agents.
Snyk’s Secure‑at‑Inception protects AI coding assistants: Snyk introduced a Secure At Inception suite that uses the Model Context Protocol (MCP) to give AI‑powered coding assistants real‑time vulnerability scanning and to detect MCP‑specific issues such as prompt injection and model poisoning.
Cyera launches AI Guardian for AI‑asset management and runtime protection: AI Guardian combines an AI‑asset posture‑management module that inventories AI models and data with AI Runtime Protection for continuous monitoring and enforcement of AI‑data policies at run time.
Before you go, I’d love to know what you thought of today’s newsletter. That way I can improve the experience for you and minimise suckiness.
What'd you think of this email?
Take care.
Nelson, fellow AI enthusiast and cloud security dude.