Rethinking governance for agentic AI

April 14, 2026

Many approaches to agentic artificial intelligence (AI) governance rely on reactive oversight: controls are applied after systems are deployed rather than embedded in system design. The article argues that, like established “trust assurance” practices in security, agentic AI requires clearly defined behaviors, explicit success and failure conditions, and enforceable, pre-defined limits built into the system from the start to minimize silent and recurring failures.
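As a purely illustrative sketch (not drawn from the publication), such limits can be declared as a design-time policy object rather than documented after the fact; every name here, including AgentPolicy and the refund-agent example, is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical design-time policy: limits are declared before deployment,
# not bolted on after the agent is already running.
@dataclass(frozen=True)
class AgentPolicy:
    allowed_tools: frozenset[str]        # tools the agent may invoke at all
    max_steps: int                       # hard cap on actions per task
    max_spend_usd: float                 # budget ceiling per task
    success_condition: str               # what "done" means for a run
    failure_conditions: tuple[str, ...]  # conditions that must halt the run

# Example policy for a hypothetical customer-refund agent.
REFUND_AGENT_POLICY = AgentPolicy(
    allowed_tools=frozenset({"lookup_order", "issue_refund", "send_email"}),
    max_steps=20,
    max_spend_usd=500.0,
    success_condition="refund issued and confirmation email sent",
    failure_conditions=(
        "refund exceeds original order total",
        "customer identity could not be verified",
    ),
)
```

Because the policy is an explicit, versioned artifact, a change to what the agent may do becomes a reviewable change to the policy rather than a silent shift in behavior.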

It critiques claims that agents can (or should) operate without fixed behaviors, predefined workflows, or controlled tool usage, noting that such assumptions increase the risk of cascading errors and exploitation. Instead, reliable agentic AI depends on enforced boundaries around actions, API calls, and tool use, with legal, cybersecurity, and compliance teams playing a key role in defining trust boundaries, limiting scope, and preventing unchecked scope expansion over time.
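Building on the hypothetical AgentPolicy sketch above, a minimal runtime guard might check every tool call against the declared policy and deny anything not covered; BoundedAgentRuntime, ToolBoundaryError, and call_tool are invented names for this illustration, not an API from the publication.

```python
from typing import Any, Callable

class ToolBoundaryError(Exception):
    """Raised when the agent attempts an action outside its declared policy."""

class BoundedAgentRuntime:
    """Wraps tool dispatch so every call is checked against the policy."""

    def __init__(self, policy: AgentPolicy, tools: dict[str, Callable[..., Any]]):
        self.policy = policy
        self.tools = tools
        self.steps_taken = 0
        self.spend_usd = 0.0

    def call_tool(self, name: str, cost_usd: float = 0.0, **kwargs: Any) -> Any:
        # Deny by default: unlisted tools and exhausted budgets fail loudly,
        # so violations surface immediately instead of failing silently.
        if name not in self.policy.allowed_tools:
            raise ToolBoundaryError(f"tool {name!r} is outside the trust boundary")
        if self.steps_taken >= self.policy.max_steps:
            raise ToolBoundaryError("step budget exhausted; halting run")
        if self.spend_usd + cost_usd > self.policy.max_spend_usd:
            raise ToolBoundaryError("spend ceiling reached; escalating to a human")
        self.steps_taken += 1
        self.spend_usd += cost_usd
        return self.tools[name](**kwargs)
```

A deny-by-default guard like this resists scope creep: granting the agent a new tool or a larger budget requires an explicit policy change that legal, cybersecurity, and compliance teams can review before it takes effect.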

Read our full publication, “How to approach governance of AI agents.”