
When the Spec Becomes the Source: What Spec-Driven Development Asks of Your Specs

May 7, 2026

Something quiet has shifted in how engineering teams build with AI agents. A year ago, a feature began as a Jira ticket, became a design doc, became code, and the documentation lived a short life. Today, more teams are writing the specification first — in a structured form a coding agent can consume — and treating the spec, plan, tasks, and constitution as the primary artifacts. The code is generated downstream.

Nine Seconds to Wipe a Database: What That AI Agent Incident Tells You About Your Own Agents

April 28, 2026

The agent’s own post-incident explanation included the line “I guessed… I didn’t verify… I didn’t check.” The full account is on Yahoo Tech.

This post is an engineering breakdown of the agentic AI design gaps that made the incident possible.

The Design Conditions That Make a Nine-Second Deletion Possible

For an incident of this shape to occur, some combination of the following conditions has to hold at design time — before the agent ever runs a token of inference:

The 6 Agentic AI Architecture Patterns — and What Can Go Wrong With Each

April 6, 2026

Not All Agents Are Created Equal

The term “AI agent” covers everything from a simple LLM call with a search tool to a fully autonomous swarm of specialized agents coordinating across systems. These aren’t just different scales — they’re fundamentally different architectures with different risk profiles, failure modes, and governance needs.

Understanding which pattern you’re building — and what can go wrong — is the first step toward building agents that are production-ready, not just demo-ready.

Why Your AI Agent Security Tools Are Missing Half the Picture

April 4, 2026

The Layer Nobody’s Watching

The agentic AI security market is booming. Runtime guardrails that filter prompt injections. Firewalls that block malicious outputs. Shadow AI discovery tools that find unauthorized LLM usage. Red-teaming platforms that stress-test models.

These tools protect agents after they’re deployed. They sit in front of your agent at inference time and intercept bad inputs or outputs. They’re valuable — and they’re necessary.

But they’re only half the picture.