The Question Nobody Can Answer
Ask any engineering leader how many microservices their team runs. They’ll give you a number. Ask how many databases. They’ll know. Ask how many AI agents are deployed across the organization — and you’ll get silence.
This isn’t a failure of documentation or process. It’s a fundamental gap in how agentic AI systems are built and deployed today. Agents don’t look like traditional software, and the tools we built for tracking traditional software don’t work for agents.
Why Agent Inventory Is Different
Traditional software inventory is straightforward. Services are defined in deployment manifests. Containers are listed in registries. APIs are documented in OpenAPI specs. Each component has a clear boundary: a repository, a Docker image, an endpoint.
Agents don’t work this way.
Agents are defined inline. A LangChain agent might be instantiated in a single function call inside a larger application. There’s no separate repository, no deployment manifest, no registry entry. It’s 15 lines of Python inside a 500-line file.
Agents span frameworks. One repository might contain an OpenAI function-calling agent, a LangGraph workflow, and an AutoGen multi-agent conversation — all coexisting, none declared as “agents” in any formal sense.
Agents have invisible capabilities. A traditional service has defined API endpoints. An agent has tools — and those tools might include database write access, email sending, web browsing, or code execution. These capabilities are defined in code, not in infrastructure, and they change with every commit.
Agents delegate. A service calls another service through a well-defined API. An agent delegates to another agent through natural language. The delegation chain is implicit, dynamic, and invisible to traditional monitoring.
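The flip side of "an agent is just a function call" is that agent instantiations can be found statically. A minimal sketch using Python's `ast` module; the constructor names and the sample source are illustrative stand-ins, not a real codebase or ARIAS's actual detector:

```python
import ast

# A toy "500-line file" reduced to its interesting lines: nothing here
# declares itself an agent, yet one is constructed inline.
SOURCE = '''
def handle_ticket(ticket):
    agent = initialize_agent(tools=[db_tool, email_tool], llm=llm)
    return agent.run(ticket)
'''

# Call names that suggest an agent is being constructed. A real scanner
# needs per-framework patterns; these two are illustrative.
AGENT_CONSTRUCTORS = {"initialize_agent", "create_react_agent"}

def find_agent_calls(source: str):
    """Return (constructor_name, line_number) for each suspected agent."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handles both bare names and attribute calls like mod.fn(...).
            name = getattr(node.func, "id", getattr(node.func, "attr", None))
            if name in AGENT_CONSTRUCTORS:
                hits.append((name, node.lineno))
    return hits

print(find_agent_calls(SOURCE))  # [('initialize_agent', 3)]
```

Grep can approximate this, but an AST walk survives aliasing, multi-line calls, and formatting changes that defeat text search.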
What Happens Without Inventory
Without knowing what agents exist, engineering teams face a cascade of problems:
You can’t govern what you can’t see. When a security review asks “which agents have access to customer data?” — if you don’t have an inventory, you can’t answer. You’d need to grep every repository, understand every framework’s agent definition pattern, and manually trace tool permissions.
You can’t detect drift. If you don’t know what an agent looked like yesterday, you can’t tell if it changed today. A prompt modification, a tool addition, a model swap — these are invisible without a baseline.
You can’t enforce standards. Are all agents using approved models? Do they all have error handling? Are any running with unrestricted autonomy? Without inventory, every answer is “we don’t know.”
You can’t respond to incidents. When production behavior goes wrong, the first question is “which agent did this?” Without inventory, you’re debugging in the dark.
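Once an inventory exists, the security-review question above ("which agents have access to customer data?") collapses into a one-line filter. A sketch assuming a minimal list-of-dicts inventory; agent and table names are hypothetical:

```python
# A minimal inventory: each entry maps tool names to access modes.
inventory = [
    {"name": "support-triage", "tools": {"orders_db": {"read"}}},
    {"name": "billing-bot",    "tools": {"customers_db": {"read", "write"}}},
    {"name": "doc-summarizer", "tools": {"web_search": {"read"}}},
]

# "Which agents have access to customer data?" — answerable in one pass
# only because the inventory exists.
touching_customers = [
    agent["name"] for agent in inventory if "customers_db" in agent["tools"]
]
print(touching_customers)  # ['billing-bot']
```

Without the inventory, the same answer requires grepping every repository and reading every framework's agent-definition idiom by hand.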
What a Real Agent Inventory Looks Like
An effective agent inventory captures more than just “agent exists.” For each agent, you need:
- Identity: name, framework, location in code
- Capabilities: which tools it has, what they can access (read/write/delete)
- Autonomy level: is it human-in-the-loop, semi-autonomous, or fully autonomous?
- Architecture pattern: single agent, pipeline, multi-agent coordinator?
- Guardrails: what safety mechanisms are in place (output validation, rate limiting, iteration limits)?
- Dependencies: does it delegate to other agents? Does it share memory or state?
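As a concrete shape, the fields above map naturally onto a record type. A sketch of one possible schema (field and value names are illustrative, not ARIAS's actual data model):

```python
from dataclasses import dataclass, field
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = "hitl"
    SEMI_AUTONOMOUS = "semi"
    FULLY_AUTONOMOUS = "full"

@dataclass
class AgentRecord:
    # Identity
    name: str
    framework: str
    source_path: str
    # Capabilities: tool name -> access modes it grants
    tools: dict
    autonomy: Autonomy
    pattern: str                              # "single", "pipeline", "coordinator"
    guardrails: list = field(default_factory=list)
    delegates_to: list = field(default_factory=list)   # other agent names

rec = AgentRecord(
    name="support-triage",
    framework="langchain",
    source_path="app/support/triage.py",
    tools={"orders_db": {"read"}, "email": {"send"}},
    autonomy=Autonomy.SEMI_AUTONOMOUS,
    pattern="single",
    guardrails=["output_validation", "iteration_limit"],
)
```

The point of a typed record is that governance questions ("any fully autonomous agents with write access?") become queries over structured data instead of archaeology.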
This inventory needs to be automatic — maintained by scanning code, not by asking engineers to fill out spreadsheets. It needs to update with every commit. And it needs to work across the 30+ agentic AI frameworks that teams are actually using.
How ARIAS Solves This
ARIAS’s scanner runs locally against your codebase and automatically detects every agent — regardless of framework. LangChain, AutoGen, CrewAI, Pydantic AI, OpenAI function calling, custom implementations — the scanner identifies them all.
For each detected agent, ARIAS generates:
- A unique Agent Behavioral Fingerprint (ABF) capturing goals, tools, memory, orchestration, and error posture
- A 6-dimension maturity score assessing prompt engineering, agent design, memory architecture, orchestration soundness, observability, and governance alignment
- A capability map showing exactly what each agent can access and do
The inventory updates automatically with every scan. No manual tracking, no spreadsheets, no stale documentation.
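Conceptually, a behavioral fingerprint can be as simple as a stable hash over an agent's observable definition, so that any change to prompt, tools, or model surfaces as drift between scans. A toy sketch of the idea (not ARIAS's actual ABF algorithm):

```python
import hashlib
import json

def fingerprint(agent_def: dict) -> str:
    """Stable short hash of an agent's definition."""
    # Canonical JSON so key ordering doesn't affect the hash.
    canonical = json.dumps(agent_def, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

baseline = fingerprint(
    {"model": "gpt-4o", "tools": ["query_db"], "prompt_version": 1}
)
# The same agent after a commit quietly adds an email tool:
current = fingerprint(
    {"model": "gpt-4o", "tools": ["query_db", "send_email"], "prompt_version": 1}
)

print(baseline != current)  # True: the new tool is visible as drift
```

A real fingerprint would capture goals, memory, orchestration, and error posture as the article describes; the mechanism — compare today's hash against yesterday's baseline — is the same.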
The first step to governing your agents is knowing they exist. Everything else — drift detection, CI/CD gates, certification — builds on this foundation.
ARIAS is the control plane for AI agents. Start your free trial and discover what’s in your codebase.