Platform engineering is what makes AI enterprise-ready

As we’ve found, writing AI-powered software is the easy part. Testing it, securing it, operating it at enterprise scale - that’s where things get interesting. That’s what all the “guardrails” talk is about.

Purnima lays this out: a shift from deterministic systems (input goes in, predictable output comes out) to probabilistic ones where agents wander around exploring multiple paths to get stuff done.

Sure, there’s all the freaking out about deleting data, bringing down production, exposing your precious secrets. Those are real and top of the list of things to worry about.

But when the guardrails work, they also protect against something more mundane: just getting goofy results. This is likely the more common failure. If you’ve used AI for a while, you know you sometimes get “a dumb one,” as several of my AI-crazed pals put it - the results are just bad for no discernible reason.

She lays out a platform engineering approach to managing AI, very Tanzu-y:

  1. Governance baked into the dev framework, not bolted on after. Spring AI does this by enforcing security controls and observability hooks by default.
  2. Deny by default at runtime. Agents get zero access to anything unless explicitly granted. Zero trust, but for AI agents instead of humans.
  3. Centralized capability catalogs - curated MCP servers and APIs instead of letting agent skills proliferate like unmanaged microservices circa 2017.
  4. Private infrastructure so you’re not shipping regulated financial data to someone else’s API.
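The deny-by-default principle in point 2 can be sketched in a few lines. This is a hypothetical illustration, not a real Spring AI or Tanzu API - the class, method names, and capability strings are all made up for the example. The point is simply that an agent's access check consults an explicit grant table, and the absence of an entry means "no":

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a deny-by-default capability gate for AI agents.
// All names here are illustrative, not part of any real framework.
public class CapabilityGate {
    // Each agent's explicitly granted capabilities; no entry means no access.
    private final Map<String, Set<String>> grants;

    public CapabilityGate(Map<String, Set<String>> grants) {
        this.grants = grants;
    }

    // Deny by default: only an explicit grant returns true.
    public boolean isAllowed(String agentId, String capability) {
        return grants.getOrDefault(agentId, Set.of()).contains(capability);
    }

    public static void main(String[] args) {
        CapabilityGate gate = new CapabilityGate(Map.of(
            "billing-agent", Set.of("invoices:read")
        ));
        System.out.println(gate.isAllowed("billing-agent", "invoices:read"));   // true: explicitly granted
        System.out.println(gate.isAllowed("billing-agent", "invoices:delete")); // false: never granted
        System.out.println(gate.isAllowed("unknown-agent", "invoices:read"));   // false: unknown agent
    }
}
```

Note the asymmetry with traditional role-based access: there’s no “default” role an agent falls back to. Zero trust, but for agents instead of humans.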

She also lays out a five-layer enterprise AI platform stack: agent runtimes, middleware/gateway, governed model brokerage, standardized dev frameworks, and low-latency data products. It’s a useful mental model if you’re trying to figure out what “AI platform” actually means beyond “we bought some GPUs.”
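To make the five-layer model a bit more concrete, here’s a minimal sketch of a request flowing from an agent runtime through the middleware/gateway and governed model brokerage layers before any model is called. Everything here is an assumption for illustration - the layer transforms, names, and strings are invented, not drawn from any real platform:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch: a prompt passes through each platform layer in order.
// Layer names and transforms are illustrative only.
public class LayeredStack {
    public record Layer(String name, UnaryOperator<String> handle) {}

    // Apply each layer's handling to the request, in order.
    public static String route(String prompt, List<Layer> layers) {
        String out = prompt;
        for (Layer layer : layers) {
            out = layer.handle().apply(out);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Layer> stack = List.of(
            // Middleware/gateway: authentication, rate limits, audit hooks.
            new Layer("gateway", p -> "[authenticated] " + p),
            // Governed model brokerage: route only to vetted models.
            new Layer("broker", p -> "[model=approved-llm] " + p)
        );
        System.out.println(route("summarize Q3 revenue", stack));
    }
}
```

The value of the mental model is the ordering: nothing reaches a model without passing the gateway and broker layers, which is where the governance from the list above actually lives.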

As with every new technology, the path is familiar: the technology itself gets commoditized fast, while the hard, valuable work is the governance, the guardrails, the platform engineering that makes it safe to use at scale. Which is, you know, what we do at Tanzu: why not TryTanzu.ai?