The Law of the Inverted Pyramid

A framework for focusing on the right things in AI & security

Sponsored by

Howdy my friend!

In today’s piece:

  • How to prioritise your learning in AI and security using the “law of the inverted pyramid”

  • We hit 1000 newsletter subscribers — yay!

  • Some key security controls and products relevant for AI security

  • Other handpicked things I think you’ll like

A word from our partner…

Not All AI Notetakers Are Secure. Here’s the Checklist to Prove It.

You wouldn’t let an unknown vendor record your executive meetings, so why trust just any AI?

Most AI notetakers offer convenience. Very few offer true security.

This free checklist from Fellow breaks down the key criteria CEOs, IT teams, and privacy-conscious leaders should consider before rolling out AI meeting tools across their org.

The Law of the Inverted Pyramid: A Focus Framework

We recently passed 1000 newsletter subscribers!

I’m stoked. If you’re reading this, thanks for being one of those 1000! It means a lot.

Today, I want to talk about a framework for prioritising what to learn in the world of AI, as well as how to apply this to AI security.

I’m calling the framework the “law of the inverted pyramid.”

This is the concept that in the modern AI ecosystem, much of the value lies in understanding things at a high level.

Going “deep” has diminishing returns.

Let me explain.

If you think about what a “knowledge pyramid” might look like when it comes to AI, you might have something like:

AI knowledge pyramid

  • Top layer: what the main AI apps are, how to use them as a consumer, etc

  • Middle layer: historical evolution of AI (good old-fashioned AI vs modern deep learning), key AI innovations like the Transformer architecture, general properties of deep learning systems (i.e. stochasticity, black box nature).

  • Bottom layer: deep technical understanding of AI research (deep learning and neural network architectures, reinforcement learning, etc.)

Each layer in this pyramid is a higher-level abstraction of the layer beneath it.

The top layer is the surface-level stuff most end consumers of AI would want to know. This is stuff like “ChatGPT exists and has key models like o3 and GPT-4o” or “ChatGPT and Gemini have Deep Research capabilities.”

This layer is about knowing which consumer products solve which problems.

The middle layer is about understanding the key technical characteristics of modern generative AI systems and the industry’s evolution over recent decades.

It’s the “big picture” of how these systems work and have evolved into their present form.

Finally, the bottom layer is understanding at a deep technical level how modern AI systems work. Think understanding neural network architectures like the Transformer architecture, how training algorithms for these systems work, and other sophisticated machine learning techniques.

Law of the inverted pyramid

The “law of the inverted pyramid” says most career value comes from learning the top levels of this pyramid.

Why?

Extracting value from modern AI systems is now an engineering problem, not a research problem.

It’s the time of the “AI engineer.”

It’s about taking existing innovations and putting them to work in the real world.

Unless you’re a frontier lab (the OpenAIs, Anthropics, or Googles of the world), companies aren’t looking to do fundamental R&D.

They want to use models from the frontier labs to work more efficiently.

They need people that understand what “LEGO blocks” are available and how they can best be used.

Therefore, most of us should focus our energy on the top levels of this pyramid.

the inverted pyramid of value

We’re seeing a Cambrian explosion of new AI models and services. Keeping up is becoming a full-time job.

People who dive too deep into advanced AI theory often fail to keep up with new capabilities and announcements at the “higher levels.”

Of course, if you’re an AI researcher at a frontier lab, this advice does not apply (in fact probably the opposite).

But for the rest of us, the law of the inverted pyramid is a helpful framework for what to focus on.

Applying the inverted pyramid law to AI security

This concept applies to securing AI as well.

What might the “top layer” of the pyramid look like for securing AI at your workplace?

I’d suggest two key items:

  • Understanding your company’s main AI use cases

  • Understanding the fundamental security controls for each use case

Let’s go through each one.

Understanding your company’s main AI use cases

You can divide AI use into two broad categories.

Use case #1: Employees accessing externally hosted AI models

The first is employees accessing externally hosted AI models from their laptops.

This includes browser-based AI SaaS apps like ChatGPT, Google’s Gemini Web App, Anthropic’s Claude, or Perplexity.

It also includes local applications that communicate with an external cloud AI model.

This use case is about employee productivity: help with work research tasks, coding, etc.

First, I’d try to understand:

  1. What technical and process controls prevent data exfiltration risks (i.e. employees feeding sensitive company data into their prompts)?

  2. How can we control what AI applications can be used?

The good news is we don’t have to start from scratch. We’ve got decades of existing industry wisdom we can apply.

Data Loss Prevention (DLP) solutions can mitigate data exfiltration risks, ensuring attempts to attach files containing sensitive data (e.g. credit card numbers) are blocked.

This might be third-party vendor products like Zscaler’s DLP solution, or first-party cloud solutions like Microsoft Purview’s DLP capability.

Start by reviewing your current security stack for DLP capabilities in the context of AI.

Big security vendors like Zscaler are rushing to incorporate GenAI capabilities.

It’s therefore worth looking at what your existing product set can offer and what the licensing considerations are.

You’ll also want to check out emerging start-ups like Harmonic built to protect sensitive data in AI tools.

Compare the options to understand what makes sense for your environment.

Similarly, application allowlisting solutions like ThreatLocker can be used to help govern which AI applications can be used locally.

You can combine this with a secure web gateway like Zscaler’s Internet Access to restrict which AI applications can be accessed via the browser.

Use case #2: AI applications built by your company running in the cloud

The second big use case is in-house AI applications built by your company.

Think customer service chatbots that help users understand your product.

Here, the user is often external, and the risk is less about the user divulging sensitive information and more about them exploiting your application.

In this context, key security controls include:

  • Protecting against malicious user inputs, like prompt injections or jailbreak attempts

  • Ensuring the principle of least privilege access is followed (i.e. both in terms of what the end user can access as well as what the AI application itself has access to)

  • Ensuring thorough observability and monitoring of your AI applications, including updated detective controls for emerging AI threats

Again, you can start by reviewing what your existing product set offers.

If you’re a Microsoft Azure shop, for example, familiarise yourself with Azure AI Content Safety, including Prompt Shields.

You’ll also want to be across the new Microsoft Defender for AI workloads plan. I’d imagine the other cloud providers will have similar capabilities.

Then, you can compare against newer start-ups like Lakera Guard to find the right approach for your environment.

Be aware of new innovations like the Model Context Protocol (MCP), and how AI app architectures will evolve to accommodate them.

For example, check out this video from Microsoft on protecting MCP servers with Azure API Management (APIM). This is an emerging pattern — sticking an API gateway in front of your MCP servers to handle authorisation and observability.

Understanding these high-level use cases and basic security controls available is how you apply the law of the inverted pyramid to AI security.

Once you’ve got a handle on the “top layer,” then you can start to go deeper.

My Favourite Finds

Cloud Security Alliance’s secure vibe coding guide: excellent “checklist”-style guide to embedding security into using GenAI for code generation A.K.A “vibe coding”.

BaxBench security benchmark: novel benchmark evaluating LLMs on secure and correct code generation. Their leaderboard is a must-see. Apparently, 62% of generated solutions even by the best model are either incorrect or contain a security vulnerability. Ooft! That’s massive.

Grok 4 capability analysis: AI Explained’s new video giving the rundown on Grok 4. By many benchmarks, it’s now the best model in the world, but looks like it’s been a bit “benchmaxxed” and doesn’t appear thaaaat different based on vibe checks. Also, xAI has some serious reputational issues right now

Guide to picking the right OpenAI model: Helpful post by Zvi Mowshowitz unpacking the dizzying array of OpenAI models and providing some guidance on picking the right one.

Centaur, a human psychology model: Research scientist Marcel Binz helped created Centaur, a model trained to predict and simulate human behaviour. They finetuned a state-of-the-art LLM on a novel, large-scale data set called Psych-101. Interesting stuff.

Creative association tool: tool that lets you save things you find online and automatically finds interesting related content across the web. Looks interesting as a creative aid.

A+ example of AI art: artsy and evocative example of creatively using AI tools to make a video that tells a story. Just a fun glimpse into the future of entertainment.

Before you go, I’d love to know what you thought of today’s newsletter. That way I can improve the experience for you and minimise suckiness.

What'd you think of this email?

Login or Subscribe to participate in polls.

Take care.

Nelson, fellow AI enthusiast and cloud security dude.