Engineering

You Don’t Know How Many AI Agents You Have. Here’s Why That’s a Problem.

April 7, 2026

The Question Nobody Can Answer

Ask any engineering leader how many microservices their team runs. They’ll give you a number. Ask how many databases. They’ll know. Ask how many AI agents are deployed across the organization — and you’ll get silence.

This isn’t a failure of documentation or process. It’s a fundamental gap in how agentic AI systems are built and deployed today. Agents don’t look like traditional software, and the tools we built for tracking traditional software don’t work for agents.

The 6 Agentic AI Architecture Patterns — and What Can Go Wrong With Each

April 6, 2026

Not All Agents Are Created Equal

The term “AI agent” covers everything from a simple LLM call with a search tool to a fully autonomous swarm of specialized agents coordinating across systems. These aren’t just different scales — they’re fundamentally different architectures with different risk profiles, failure modes, and governance needs.

Understanding which pattern you’re building — and what can go wrong — is the first step toward building agents that are production-ready, not just demo-ready.

Your Agent Changed. You Didn’t Know. Here’s What Happened Next.

April 5, 2026

The Change Nobody Noticed

It started with a small commit. A senior engineer updated the system prompt for the customer support agent — adding a line about the new return policy. The change went through code review. Tests passed. The agent still answered questions correctly in staging.

Two weeks later, support tickets spiked. Customers reported the agent was offering refunds for products outside the return window. The agent wasn’t broken — it was behaving differently. The prompt change had subtly shifted the agent’s interpretation of “eligible for return” in edge cases that no test case covered.
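One way to at least make a change like this visible is to fingerprint the agent’s behavioral configuration at deploy time and compare it against a recorded baseline. The sketch below is illustrative, not a prescription: the function name, the config fields, and the example prompts are all assumptions for the sake of the demo.

```python
import hashlib
import json

def prompt_fingerprint(system_prompt: str, model: str, temperature: float) -> str:
    """Return a short, stable hash of the agent's behavioral configuration."""
    payload = json.dumps(
        {"system_prompt": system_prompt, "model": model, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

# Baseline recorded when the agent was last reviewed and approved.
baseline = prompt_fingerprint(
    "You are a support agent. Follow the 30-day return policy.",
    "example-model", 0.2,
)

# After the "small commit": same model, tests still pass, behavior surface differs.
current = prompt_fingerprint(
    "You are a support agent. Follow the 30-day return policy. "
    "Mention the new extended-holiday return window when relevant.",
    "example-model", 0.2,
)

if current != baseline:
    print(f"agent config drift detected: {baseline} -> {current}")
```

A hash won’t tell you *how* behavior shifted, but it turns a silent prompt edit into an explicit drift event that can trigger a re-evaluation before, not after, the tickets spike.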

Why Your AI Agent Security Tools Are Missing Half the Picture

April 4, 2026

The Layer Nobody’s Watching

The agentic AI security market is booming. Runtime guardrails that filter prompt injections. Firewalls that block malicious outputs. Shadow AI discovery tools that find unauthorized LLM usage. Red-teaming platforms that stress-test models.

These tools protect agents after they’re deployed. They sit in front of your agent at inference time and intercept bad inputs or outputs. They’re valuable — and they’re necessary.

But they’re only half the picture.