From Impersonation to Delegation: How to Build Trust in the Age of AI Agents
AI agents are here, and they are already acting on our behalf. But most digital infrastructure was never built to support them. The first wave of agentic services relies on impersonation rather than delegation: screen scraping in place of a secure trust framework.
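To make that contrast concrete, here is a minimal illustrative sketch (all type names and fields are hypothetical, not drawn from the white paper). An impersonating agent holds the user's own credentials and is indistinguishable from the user, while a delegated agent presents its own identity together with an explicit, scoped, expiring grant that a service can verify and revoke.

```typescript
// Hypothetical shapes only: illustrating the two access models.

// Impersonation: the agent stores the user's own credentials and logs in as them.
// The service sees "the user"; it cannot tell an agent is acting, cannot scope its
// access, and cannot revoke the agent without locking out the human.
interface ImpersonatedSession {
  username: string;        // the human's own login
  password: string;        // shared secret handed to the agent
}

// Delegation: the agent authenticates as itself and carries a grant the user issued.
// The service can verify who the agent is, what it may do, and when authority expires.
interface DelegatedAccess {
  agentId: string;         // the agent's own declared identity
  delegatorId: string;     // the human (or organisation) that granted authority
  scopes: string[];        // e.g. ["payments:read", "orders:create"]
  expiresAt: string;       // ISO 8601 expiry of the grant
  grantSignature: string;  // proof the delegator actually issued this grant
}
```

The practical difference: impersonated access can only be revoked by resetting the human's credentials, while delegated access can be scoped, expired, and revoked independently of the user's own account.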
In our new white paper, From Impersonation to Delegation, we explore why this matters—and what to do about it.
By 2028, AI agents are expected to drive 20% of digital storefront interactions and make 15% of day-to-day decisions autonomously. Yet today, most AI agents operate without declared identity or constrained authority, introducing significant risk to data security, regulatory compliance, and consumer trust.
To meet this challenge, we must shift to explicit delegation—grounded in robust identity verification, enforceable entitlements, and conformance frameworks that can govern AI activity. The white paper offers a strategic framework based on:
Human-in-the-loop: Proof-of-personhood and privacy-preserving credentials
Know Your Agent (KYA): Agent identity, delegated authority, and auditability (see the sketch after this list)
Model Context Protocol (MCP): Verifiable agent context at runtime
Trusted Standards: OpenID, CDR, AGDIS, and other building blocks
Digital Public Infrastructure (DPI) in Practice: What Australia’s regulatory and infrastructure approach means for trusted AI
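As a rough illustration of how agent identity, delegated authority, and auditability could come together at a relying service, the sketch below is a hypothetical example under assumed names and deliberately simplified verification; it is not the white paper's reference architecture or any published API.

```typescript
// Hypothetical Know Your Agent (KYA) check at a relying service.
// All types and function names are illustrative assumptions.

interface AgentCredential {
  agentId: string;
  issuer: string;          // party that attested to the agent's identity
  signature: string;
}

interface DelegationGrant {
  agentId: string;         // agent the grant was issued to
  delegatorId: string;     // human or organisation granting authority
  scopes: string[];        // actions the agent is entitled to perform
  expiresAt: string;       // ISO 8601 expiry
  signature: string;       // issued by the delegator
}

interface AuditEvent {
  at: string;
  agentId: string;
  delegatorId: string;
  action: string;
  allowed: boolean;
  reason: string;
}

// Placeholder verifiers: in practice these would validate cryptographic
// signatures against a trust framework rather than just check for presence.
const verifyAgentIdentity = (c: AgentCredential): boolean => c.signature.length > 0;
const verifyGrantSignature = (g: DelegationGrant): boolean => g.signature.length > 0;

function authorizeAgentAction(
  credential: AgentCredential,
  grant: DelegationGrant,
  requestedAction: string,
  audit: AuditEvent[],
): boolean {
  let allowed = false;
  let reason = "ok";

  if (!verifyAgentIdentity(credential)) {
    reason = "agent identity not verified";
  } else if (grant.agentId !== credential.agentId || !verifyGrantSignature(grant)) {
    reason = "delegation grant invalid or issued to a different agent";
  } else if (new Date(grant.expiresAt).getTime() < Date.now()) {
    reason = "delegation grant expired";
  } else if (!grant.scopes.includes(requestedAction)) {
    reason = "requested action outside delegated scope";
  } else {
    allowed = true;
  }

  // Auditability: every decision is recorded, whether allowed or denied.
  audit.push({
    at: new Date().toISOString(),
    agentId: credential.agentId,
    delegatorId: grant.delegatorId,
    action: requestedAction,
    allowed,
    reason,
  });

  return allowed;
}
```

In a production setting the placeholder verifiers would check signatures against an established trust framework, for example an OpenID-based federation, and the audit log would feed conformance and compliance reporting.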
This isn’t just theory. The white paper maps out the real-world architecture, risks, and investment pathways for AI-ready services across financial, government, and commercial sectors.
Looking to move beyond AI hype and into secure, scalable implementation?
Let’s talk about how your organisation can architect open, interoperable, and future-ready digital trust solutions.