How Selling Stink Bombs Helped Me Learn AI

A framework for learning AI as Cloud Security pros

In today's post...

The Stink Bomb Approach To Learning

In high school, I hatched a money-making scheme with two friends: selling stink bombs around the school.

It was the latest million-dollar idea in a line of failed ventures. My previous hustles included selling playing cards, virtual items in Diablo II, and a lawn care service called “The Leaf Brothers” (tagline: “Leaf it to us”).

None made me a millionaire. Or even a hundredaire. But stink bombs? “This is the way,” I thought.

Giddy with excitement, we bought a pack of 72 stink bombs from eBay.

Then we had to work out what to do next. We hadn’t planned that far ahead.

Step one was to test the product on unsuspecting friends to make sure it worked. One friend, Brian^, was top of our “they’d be fun to annoy” list, so he really copped it. He was not impressed.

Step two was marketing the product.

We put up some badly drawn posters (or at least talked about it; we might have chickened out, my memory’s hazy) and tried asking a few people if they wanted to buy some.

We sold 1 stink bomb. To my brother’s friend, who complained it didn’t work well.

The rest went in the bin. Sadly, our stink bomb business wasn’t turning any of us into Richard Branson.

Recently, a friend took a picture of a stink bomb on the ground, captioning it “culture you started still going strong.”

That’s when I realized that it was totally worth it, even though we made negative dollars.

I got more important things than money from the experience.

I got a fun memory to laugh about with friends almost two decades later.

I learned what not to do in business. Next time, I’d test my idea in small-scale, low-risk ways first (rather than going all in on a batch of 72 stink bombs).

I learned that marketing takes work and is important - brain-exploding epiphany.

How does this relate to using AI as cloud security professionals?

It’s an example of a concept I’m calling “Mental Model Manure” (MMM).

Mental Model Manure

I just came up with the concept five minutes ago, and I’m not sure if it’ll work as a metaphor. But it’s poo-themed, so I’ve got to try.

Mental Model Manure is an experience that might not yield a direct payoff, but it fertilizes the ground for new mental models to grow.

Your mental models are your ideas about how the world works.

They’re like plants, sprouting from the soil of your life experiences.

Without care, the plants grow slowly and look ragged. They’re also somewhat random; you can’t precisely predict the movements, shape, or height of a plant.

You can create the right environment for plants to flourish.

You can water them at the right times, expose them to the right sunlight, and fertilize them with the right manure.

In the context of using AI, Mental Model Manure is any experience that expands your understanding of what modern AI can do.

For example, I recently built a proof-of-concept Model Context Protocol (MCP) server that helps AI applications review privileged access in Azure.

This quickly unearthed several insights:

  1. Building a basic MCP server isn’t hard (shoutout to Anthropic’s Python SDK for MCP; there’s a rough sketch after this list).

  2. MCP is a super powerful innovation for the AI ecosystem.

  3. Managing authorization securely in MCP will be a big challenge (and is currently poorly solved).
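To give you a feel for point 1, here’s roughly the skeleton of that kind of server using the MCP Python SDK. This is a minimal sketch rather than my actual proof of concept: the server name, tool name, and hard-coded assignments are placeholders, and a real version would query the Azure RBAC/PIM APIs instead of returning stub data.

```python
# Minimal MCP server sketch using the MCP Python SDK ("pip install mcp").
# Everything specific below is hypothetical; swap the stub for real Azure calls.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("azure-privileged-access-review")  # hypothetical server name


@mcp.tool()
def list_privileged_role_assignments(subscription_id: str) -> list[dict]:
    """Return privileged role assignments for a subscription so the AI client
    can review them. Stubbed data stands in for an Azure RBAC/PIM lookup."""
    return [
        {"principal": "alice@example.com", "role": "Owner",
         "scope": f"/subscriptions/{subscription_id}"},
        {"principal": "build-pipeline-sp", "role": "User Access Administrator",
         "scope": f"/subscriptions/{subscription_id}"},
    ]


if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client (e.g. Claude Desktop) can connect.
    mcp.run()
```

Point an MCP-capable client at that script and it can call the tool, reason over the results, and flag anything that looks overly privileged.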

As I went down the rabbit hole, I stumbled across new startups trying to seize the opportunity here.

For example, Arcade provides a platform for securely calling AI tools. It’s raised $12M in venture funding to tackle this problem.

Which brand of manure to choose?

How do you distinguish high-quality manure from the cheap stuff that turns your plants a weird purple color?

The best Mental Model Manure experiences involve:

  1. A new technology

  2. Broad applicability

  3. Solving a problem

A new technology

Mature technologies are overrun gardens. High awareness and competition mean everyone has planted something, and the plants are competing for sunlight.

The conditions to thrive in this garden are well understood and practiced by many. It’s nearly impossible for novice gardeners to compete.

New technologies are undiscovered land.

Someone just invented a boat and arrived on this new land, so people haven’t had time to plant things.

No one knows which plants will thrive here or how to care for them.

Or if these new lands are inhabited by flying pink blobby animals that’ll eat your plants and fart on you.

oh boy, these flying blobby things suck… better learn how to deal with ‘em

New technologies are fertile ground for developing unique, useful insights.

These technologies break assumptions about how problems can be solved.

They expand the space of the adjacent possible.

The adjacent possible is a concept introduced by theoretical biologist Stuart Kauffman.

It’s the set of possibilities one step away from the current state of the world. It’s the realm of possible innovations accessible based on our current mix of knowledge, tools, and circumstances.

The adjacent possible is where the magic happens.

Only early tech adopters can see it.

They develop mental models that capture the new state of the world and spot opportunities in the adjacent possible that others miss.

Broad applicability

The second characteristic of high-grade Mental Model Manure is broad applicability.

You’re learning about a technology, process, or skill that applies across many things.

For instance, learning about a widely-used protocol or platform fertilizes more plant species than mastering a specific library or tool.

In AI and cloud security, this might mean building an MCP server to analyze vulnerabilities in a cloud environment.

Or building an AI agent that makes an annoying noise at random intervals when it finds a cloud vulnerability.

This will give you an understanding of how to build agentic solutions and how to annoy people. Both important skills.

You might learn to navigate new AI SDKs like Google’s Agent Development Kit (ADK) or OpenAI’s Agents SDK.
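For a taste of what that looks like, here’s roughly the shape of a tiny tool-using agent with OpenAI’s Agents SDK. It’s only a sketch under my own assumptions: the agent and tool names are made up, and the hard-coded findings stand in for a real call to your cloud scanner’s API.

```python
# Minimal agent sketch using OpenAI's Agents SDK ("pip install openai-agents",
# with OPENAI_API_KEY set). Names and data below are hypothetical placeholders.
from agents import Agent, Runner, function_tool


@function_tool
def get_open_findings(severity: str) -> str:
    """Return open cloud vulnerability findings at the given severity (stubbed)."""
    return (
        "Storage bucket 'logs-prod' is publicly readable; "
        "security group allows 0.0.0.0/0 on port 22"
    )


agent = Agent(
    name="Cloud Vuln Triage Agent",  # hypothetical agent
    instructions="Summarize open findings and suggest a likely owner for each.",
    tools=[get_open_findings],
)

if __name__ == "__main__":
    result = Runner.run_sync(agent, "What critical findings are open right now?")
    print(result.final_output)
```

Swap the stub for your scanner’s API (and maybe add the annoying-noise tool) and you’ve got the bones of the agent described above.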

These experiences are potent Mental Model Manure because they help you understand the infrastructure powering the next wave of opportunities.

Solving a problem

The third calling card of quality Mental Model Manure is solving a problem.

This might be building an Identity and Access Management (IAM) AI agent that denies all access requests unless the requestor includes a sufficiently funny joke.

Or an AI-powered “Cyber Food Reviewer” app that assesses the quality of vegetable dishes.

Whenever it finds poorly cooked peas, it complains the meal violates the principle of least pea-vilege.

Don’t steal these ideas, please. They’re mine. They’re gonna make me millions.

Focus your learning on real problems and you’ll filter out low-value learning.

And if you build something useful, you might make a bajillion bucks.

In summary

If you’ve read this far, you’ve wasted more time than you should have reading about a stupid poo-themed analogy.

But hopefully the MMM framework helps you. Look for novelty, widely applicable stuff, and solutions to problems.

You’ll grow majestic plants. And hopefully won’t get farted on by weird blobby things.

^ name changed for privacy