Deploy Securely is thrilled to share this guest post from Kaleb Walton, Principal Security Architect at FICO. In this piece, he explores some of the unique security and compliance challenges posed by AI agents. Enjoy!
As organizations race to integrate AI agents into their workflows, a pressing security question arises: How do we ensure agents can act on our behalf without introducing unacceptable risk, accountability gaps, or compliance liabilities?
AI agents operating without strong guardrails could become one of the largest risks organizations face in the coming years — introducing new failure modes across identity, security, auditability, and trust.
An intriguing lens to examine this through is the Law of Agency, a well-established legal framework governing relationships where one party (the “agent”) acts on behalf of another (the “principal”). Applying this model to AI offers a powerful foundation for designing agentic systems with structured delegation, trust, and clear accountability.
AI as Delegate: A New Take on Agency
In traditional agency law, an agent is authorized to perform tasks for a principal, but must act within the scope of their authority and prioritize the principal’s interests. Translating this to AI, we can imagine systems where:
The AI agent explicitly acts on behalf of a human employee.
Each action is traceable back to the principal.
Authority is limited, conditional, and revocable.
Technically, this could involve OAuth-like delegation frameworks, or more sophisticated dynamic delegation protocols, where employees can authorize, monitor, and revoke AI agent actions in real time.
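To make that concrete, here is a minimal sketch in Python of what such a delegation grant might look like: a short-lived token modeled loosely on OAuth 2.0 Token Exchange (RFC 8693), where the human principal is the subject and the agent is recorded as the actor. The function names, signing key, and in-memory revocation store are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch (not production code): a delegated-authority token modeled on
# OAuth 2.0 Token Exchange (RFC 8693) semantics, where the human principal is
# the subject ("sub") and the AI agent is recorded as the actor ("act").
# Function and variable names here are illustrative, not a real product API.
import time
import uuid

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-real-secret-or-asymmetric-key"
REVOKED_TOKEN_IDS: set[str] = set()  # stand-in for a real revocation store


def issue_delegated_token(principal: str, agent_id: str, scopes: list[str],
                          ttl_seconds: int = 900) -> str:
    """Issue a short-lived token the agent can use on the principal's behalf."""
    now = int(time.time())
    claims = {
        "sub": principal,              # who the action is attributed to
        "act": {"sub": agent_id},      # who actually performs it (the agent)
        "scope": " ".join(scopes),     # explicitly granted, limited authority
        "iat": now,
        "exp": now + ttl_seconds,      # time-bound by construction
        "jti": str(uuid.uuid4()),      # unique ID so the grant can be revoked
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")


def revoke(token: str) -> None:
    """The principal (or security team) revokes the delegation in real time."""
    claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    REVOKED_TOKEN_IDS.add(claims["jti"])


def is_still_valid(token: str) -> bool:
    """Resource servers check signature, expiry, and revocation on every call."""
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        return False
    return claims["jti"] not in REVOKED_TOKEN_IDS


if __name__ == "__main__":
    token = issue_delegated_token("alice@example.com", "expense-agent-01",
                                  ["expenses:read", "expenses:submit"])
    print(is_still_valid(token))  # True
    revoke(token)
    print(is_still_valid(token))  # False
```

The important design property is that the principal, the agent, the scope, and the expiry all travel together in one verifiable artifact, so any downstream system can answer “who authorized this, for whom, and for how long?” without extra lookups.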
But this delegation opens up serious technical and operational challenges — especially around identity management, auditability, and incident response.
Automating Identity and Access: A Hidden Minefield
If an AI agent is to operate properly on behalf of a human, it must be able to assume their identity securely and narrowly. In practice, this often means:
Scoped, time-bound API tokens issued dynamically based on user grants.
Real-time IAM policy enforcement, ensuring the agent’s access never exceeds the human’s permissions (and, ideally, is even more tightly constrained), as sketched below.
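As one illustration of that second point, the sketch below (with hypothetical permission strings and stores) derives an agent’s effective scopes by intersecting what the agent requests with what the principal actually holds, then removing actions we never want an agent to perform autonomously.

```python
# Minimal sketch (illustrative, not a real IAM API): derive an agent's effective
# permissions by intersecting what the agent requests with what the human
# principal actually holds, then subtracting actions we never allow agents to
# perform autonomously. The permission strings and stores are hypothetical.
HUMAN_PERMISSIONS = {
    "alice@example.com": {"expenses:read", "expenses:submit", "payments:approve"},
}

# Actions considered too sensitive for any agent without explicit human approval.
AGENT_DENYLIST = {"payments:approve"}


def effective_agent_scopes(principal: str, requested: set[str]) -> set[str]:
    """Agent access must be a subset of the principal's, and usually tighter."""
    granted_to_human = HUMAN_PERMISSIONS.get(principal, set())
    return (requested & granted_to_human) - AGENT_DENYLIST


if __name__ == "__main__":
    scopes = effective_agent_scopes(
        "alice@example.com",
        {"expenses:read", "expenses:submit", "payments:approve", "admin:delete"},
    )
    print(sorted(scopes))  # ['expenses:read', 'expenses:submit']
```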
Today’s IAM systems aren’t built for dynamic, delegated identity impersonation at runtime. Without extreme care, agents could introduce:
Privilege escalation risks
Orphaned tokens or credentials
Persistent latent attack surfaces
Least privilege, short-lived credentials, and delegation audit trails must become mandatory design features — not optional hardening steps after the fact.
The Audit Trail Paradox
Another huge operational challenge: auditing.
When something critical happens — say, a financial transaction, a database deletion, or a customer communication — we must ask:
Was it the employee directly?
Was it the agent acting on the employee’s behalf?
Was the agent operating within its authorized scope?
Current audit systems often capture only who initiated an action — not whether it was direct or delegated, nor the context of approvals.
Without multi-layered audit trails capturing:
The initiator (the principal)
The executor (the agent)
The scope and approval chain
…organizations will find post-incident forensics, regulatory audits, and internal investigations impossible or incomplete.
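One way to picture such a record: the sketch below (field names are illustrative, not a standard schema) logs the initiator, the executor, the scopes held at the time, and the approval chain for every action, so an investigator can tell at a glance whether an action was direct or delegated.

```python
# Minimal sketch of a delegation-aware audit record. Field names are
# illustrative; the point is that every event captures both the principal and
# the executor, plus the scope and approvals, so forensics can distinguish
# direct actions from delegated ones.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class DelegatedAuditEvent:
    action: str                      # e.g. "database.record_deleted"
    principal: str                   # the human on whose behalf it happened
    executor: str                    # "self" for direct actions, else agent ID
    granted_scopes: list[str]        # authority the executor held at the time
    approval_chain: list[str] = field(default_factory=list)  # who approved what
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)


if __name__ == "__main__":
    event = DelegatedAuditEvent(
        action="database.record_deleted",
        principal="alice@example.com",
        executor="cleanup-agent-07",
        granted_scopes=["records:delete:stale"],
        approval_chain=["alice@example.com", "dba-oncall@example.com"],
    )
    print(event.to_log_line())
```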
For a deeper discussion on why it’s critical to maintain clear human ownership of all AI-driven actions — and why even using terms like “Non-Human Identity” (NHI) risks diluting accountability — see Walter’s article.
As he argues, every system — even those powered by autonomous AI agents — must have a designated human owner who remains responsible for the agent’s behavior, credentials, and risks.
The Hardest Problem: Building Trust
Above all, the real bottleneck will be trust.
Employees won’t willingly adopt AI agents to assist them if they feel that:
The agent might take actions that expose them to professional risk.
Mistakes could get them reprimanded, embarrassed, or even fired.
In an agentic model, if an AI acts improperly under a user’s name, it’s often the human principal who bears the consequences — and that’s appropriate when the system is designed correctly. Accountability must ultimately rest with a human.
The real danger arises when systems fail to clearly link AI actions back to responsible human owners, or when accountability is misaligned — falling unfairly on employees for failures outside their reasonable control.
Without clear delegation models and fair ownership structures, employees will be rightfully hesitant to trust agentic AI tools — and organizational adoption will stall.
Building trust will require radical transparency at every level:
Trust in the designers who architect the delegation mechanisms.
Trust in the developers who build the agents.
Trust in the operators who manage the AI in production.
Trust in the interface that allows humans to delegate, monitor, and approve agent actions.
Trust in the accountability model itself — that responsibility for agentic actions will be clear, fair, and aligned with organizational governance structures.
Trust isn’t earned through promises — it’s earned through consistent safety, visibility, and a history of not putting people at risk.
Incremental Rollout: Safety First, Speed Later
The only responsible way to roll out agentic AI systems is incrementally:
Heavy safety measures at first: every sensitive action requires human approval, backed by frequent reviews and easily auditable logs (see the sketch after this list).
Tight scopes and slow expansion: agents begin handling only low-risk workflows.
Transparent error reporting: all mistakes, however minor, are surfaced, analyzed, and communicated.
Progressive relaxation: only as trust builds — through weeks and months of safe, reliable performance — should organizations allow agents more autonomy.
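A rollout like this can be enforced mechanically. The sketch below, with placeholder thresholds and action tiers, gates every sensitive action behind human approval and relaxes supervision only once an agent has accumulated a reviewed, incident-free track record.

```python
# Minimal sketch of an incremental-autonomy gate. Thresholds, action tiers, and
# the track-record store are all placeholders: the idea is that every sensitive
# action starts behind a human approval step, and autonomy expands only as a
# reviewed track record accumulates.
SENSITIVE_ACTIONS = {"payments:approve", "records:delete", "customer:email"}

# Hypothetical per-agent track record: (successful reviewed actions, incidents).
TRACK_RECORD = {"expense-agent-01": (412, 0), "cleanup-agent-07": (9, 1)}

AUTONOMY_THRESHOLD = 200  # reviewed, incident-free actions before relaxation


def requires_human_approval(agent_id: str, action: str) -> bool:
    successes, incidents = TRACK_RECORD.get(agent_id, (0, 0))
    if incidents > 0:
        return True                      # any incident resets to full review
    if action in SENSITIVE_ACTIONS:
        return True                      # sensitive actions are always gated
    return successes < AUTONOMY_THRESHOLD  # new agents start fully supervised


if __name__ == "__main__":
    print(requires_human_approval("expense-agent-01", "expenses:submit"))  # False
    print(requires_human_approval("expense-agent-01", "payments:approve"))  # True
    print(requires_human_approval("cleanup-agent-07", "records:delete"))    # True
```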
This will be a long game. But in the end, if we treat agency as a relationship that must be earned and maintained, we’ll create AI systems that employees genuinely want to work alongside — not ones they fear.
A Note on Broader AI Contexts
While this article frames AI agent design primarily around assisting human principals, it’s important to recognize that every AI system — regardless of where or how it operates — must have clear human ownership. Some agents will operate system-to-system, supporting integrated infrastructure or background processes without a visible user interaction. But even in those cases, accountability cannot be abstracted away: there must always be a designated human owner responsible for the agent’s behavior, risks, and outcomes.
Clear delegation, transparency, and accountability remain critical — whether the agent is assisting a person, operating within infrastructure, or interacting autonomously with other systems.
Final Thoughts
By grounding AI agent design in the Law of Agency, we ensure that automation stays accountable, human-centered, and resilient. It’s not blind automation with accountability handed off to a faceless bot; it’s trusted delegation, with AI assisting real humans who remain responsible.
The future of AI isn’t simply a race to make agents smarter or faster. It’s a challenge to make them safe, transparent, auditable, and genuinely aligned with the interests of their human principals.
If we get that right, we’re not just automating workflows — we’re building systems people can genuinely trust.