Moving Beyond Static Prompts to Living Playbooks: What ACE Means for Enterprise AI Governance

Hanah-Marie Darley
Co-founder & CAIO

ACE moves enterprise AI governance beyond static prompts to living playbooks. This research note explains the model, why prompts don't scale, and where enterprises can apply it today.


AI agents are moving quickly into enterprise operations. They are writing reports, querying data stores, helping teams make decisions, and triggering actions across systems. What many organisations are now discovering is that it's one thing to deploy an agent and quite another to govern how it behaves, learns, and adapts over time.

Traditional AI governance tools focus on model access, data protection, and compliance audits. These are still essential foundations, but they are no longer sufficient on their own. AI agents do not just generate outputs. They reason, chain decisions, and act across SaaS platforms, codebases, APIs, and endpoints. That creates a new category of responsibility for CISOs and CIOs: agentic AI governance.

A recent research paper from Stanford, Agentic Context Engineering (ACE), offers a thoughtful path forward. It explores how to guide agent behaviour through structured, evolving context rather than constant model retraining. For enterprises, this research provides useful insight into the future of AI governance systems, AI risk management, and how to build AI platforms that are transparent, controllable, and audit-ready.

Why prompts alone don’t scale in enterprise AI governance

Most AI agents today rely on fixed prompts, guardrails, or policies to shape their behaviour. Over time, this creates two problems that the ACE paper defines clearly:

  • Brevity bias - Optimised prompts often become shorter and more generic. Important domain rules and compliance requirements fall away.
  • Context collapse - As memory grows and is rewritten repeatedly, past knowledge is lost. Agents forget prior mistakes or changes in policy. There is no reliable audit trail.

For CISOs, CIOs, and risk leaders, this leads to a familiar frustration: systems that perform well in isolation but offer limited traceability, accountability, or consistency in higher-risk workflows.

Agent governance requires more than static prompts; it requires context that evolves like an organisational playbook: version-controlled, reviewable, and aligned to policy.

ACE is a model for context-driven AI governance

ACE proposes a simple but powerful architecture: instead of retraining agents or rewriting entire system prompts, update only the relevant parts of their context in small, structured entries.

It uses three functional roles:

  • Generator - Executes the task. Enterprise analogy: the agent or automation system.
  • Reflector - Evaluates outcomes and identifies lessons or risks. Enterprise analogy: QA, red team, or security analyst.
  • Curator - Updates context with structured learnings. Enterprise analogy: the governance or policy management layer.
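
To make the loop concrete, here is a minimal Python sketch of how the three roles could fit together. Everything in it - the ContextEntry schema, the call_model stub, and the function signatures - is an illustrative assumption, not the paper's reference implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextEntry:
    """One playbook entry: a rule plus the metadata governance needs."""
    rule: str
    source: str
    domain: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model API."""
    return f"[model output for: {prompt[:40]}...]"

def generator(task: str, playbook: list[ContextEntry]) -> str:
    """Generator: executes the task with the playbook injected as context."""
    rules = "\n".join(f"- {e.rule}" for e in playbook)
    return call_model(f"Rules:\n{rules}\n\nTask: {task}")

def reflector(task: str, outcome: str) -> list[ContextEntry]:
    """Reflector: reviews the outcome and proposes lessons as new entries.
    This stub returns a canned lesson; a real Reflector would use a model
    or human review to evaluate the outcome."""
    return [ContextEntry(
        rule="Request confirmation when required fields are missing.",
        source=f"Reflection on task: {task}",
        domain="example",
    )]

def curator(playbook: list[ContextEntry],
            candidates: list[ContextEntry]) -> list[ContextEntry]:
    """Curator: merges lessons as small, incremental additions,
    never a wholesale rewrite of the context."""
    return playbook + candidates

# One governance loop: act, reflect, then update the context.
playbook: list[ContextEntry] = []
outcome = generator("Prepare onboarding summary", playbook)
playbook = curator(playbook, reflector("Prepare onboarding summary", outcome))
print(len(playbook), "entries in the playbook")
```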

Each update to context is stored as a structured unit with metadata such as rule, source, timestamp, and domain. This creates a persistent, reviewable memory system that supports governance, audit and risk assessment.

For example:

Rule: If an onboarding document is missing employee tax jurisdiction, request confirmation before submission.
Source: HR compliance issue, logged 12 Feb 2026
Owner: HR Governance

This provides a clear audit log of how an agent learned, why a change occurred, and how it should behave in future.
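
Stored as data, that entry might look like the following sketch, which appends one JSON line per update to a local log file. The field names and the agent_context_log.jsonl path are hypothetical; ACE prescribes structured entries with metadata, not this exact schema.

```python
import json
from datetime import datetime, timezone

# The onboarding rule above, captured as a structured, reviewable record.
# Field names are illustrative, not a schema from the paper.
entry = {
    "rule": ("If an onboarding document is missing employee tax "
             "jurisdiction, request confirmation before submission."),
    "source": "HR compliance issue, logged 12 Feb 2026",
    "owner": "HR Governance",
    "domain": "hr-onboarding",
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

# Appending one JSON line per update yields an audit trail showing
# what the agent learned, when, and on whose authority.
with open("agent_context_log.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
```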

Why this matters for enterprise security and AI governance

This approach aligns closely with what security leaders are asking for: AI systems that improve over time while staying accountable.

ACE-style context engineering supports several pillars of enterprise AI governance:

  • AI governance audit logs: Every update has a source, reason and timestamp - essential for compliance with frameworks such as the EU AI Act, ISO 42001, NIST AI RMF, DORA, and NIS2.
  • AI governance risk assessment: Security teams can review how agents handle sensitive data, escalate decisions, or deviate from expected behaviour.
  • Policy-aware agents: Governance systems can update agent context with policy changes rather than triggering full model retraining.
  • Selective unlearning and change control: Individual rules can be removed or amended when regulations, ethics policies, or legal requirements change - see the sketch after this list.
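
As a rough sketch of that change-control pattern (the rule ids, schema, and retire_rule helper below are illustrative assumptions, not part of the paper): retiring a rule removes it from the active playbook while the reason and timestamp stay on record.

```python
from datetime import datetime, timezone

# Hypothetical active playbook keyed by rule id.
playbook = {
    "hr-001": {
        "rule": "Request confirmation when tax jurisdiction is missing.",
        "status": "active",
    },
}
audit_log = []

def retire_rule(rule_id: str, reason: str) -> None:
    """Selective unlearning: deactivate one rule without touching the
    rest, and record why and when for change control."""
    playbook[rule_id]["status"] = "retired"
    audit_log.append({
        "rule_id": rule_id,
        "action": "retired",
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

retire_rule("hr-001", "Policy superseded by updated tax guidance")
active = {k: v for k, v in playbook.items() if v["status"] == "active"}
```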

This is where AI governance platforms must evolve from securing models to governing behaviour over time.

Where enterprises can apply this today

Some early enterprise use cases that align with ACE include:

  • Financial reporting agents - Update compliance rules in memory instead of retraining models.
  • HR or legal assistants - Learn approved phrasing and escalation paths through Curator updates.
  • DevSecOps / coding agents - Embed new secure coding practices as structured memory when vulnerabilities are found.
  • Compliance / audit agents - Capture policy exceptions, legal interpretations, and decision rationales for review.

These use cases have something in common: each requires continuous improvement with control, and each benefits from behavioural visibility rather than static, templated instructions.

From static agents to governed, evolving intelligence

The ACE framework shows a credible path forward. It demonstrates that AI agents can learn safely when their context is treated as a living asset: structured, audited, and governed.

For CISOs and CIOs, this is the foundation of modern AI governance platforms:

  • Visibility of agent behaviour across systems
  • AI governance policy management through context updates
  • Real-time monitoring and AI governance metrics
  • Audit-ready records of how an agent reasoned, acted and adapted

At Geordie, this is exactly where we are focused. We help security teams gain behavioural observability across agents - whether they are built, bought or embedded in SaaS platforms - so they can scale innovation without losing control of risk.

Enterprise AI does not need more opacity. It needs systems that can see, reason, and govern as they evolve.

Original paper: https://www.arxiv.org/pdf/2510.04618
