The Agentic State 

How Agentic AI Will Revamp 10 Functional Layers of Public Administration

Whitepaper | Version 1.0 | May 2025

Lead Author: Luukas Ilves

Contributors: Manuel Kilian, Tiago C. Peixoto, Ott Velsberg

6. Policy and Rulemaking

From static rulebooks to living policies, continuously monitored
and adapted by AI agents.


  6.1 How It (Doesn’t) Work Today

Lawmaking and regulatory rule-making typically operate on slow, reactive cycles. Policies are drafted, debated, decided upon, and only then implemented. Once enacted, they often remain static for long periods, even as conditions change. Updating them usually requires new legislation or a full regulatory process, both of which are costly and time-consuming. As a result, many regulatory actors understandably prefer stability over adaptability.

This rigidity leads to critical mismatches. Benefits formulas and tax codes lag behind economic realities; environmental thresholds may fail to reflect the latest scientific data. When laws are eventually adjusted, the changes are usually based on retrospective data, political compromise, or expert judgment rather than real-time feedback.


  6.2 A Vision for Agentic Policymaking

In a world where agentic AI is fully embedded in government, the very fabric of governance can change. Laws, currently static code written once and amended rarely, can develop into a far more dynamic living system, continuously interpreted, tested, and refined by agents operating within clearly defined public mandates.

The idea of ‘law as code’ is not new, but an agentic government will have the capability to rewrite laws as easily as agents rewrite code. AI agents can simulate complex systems, run policy scenarios, and test and red-team alternative designs at staggering volume and speed; quantum computing will unlock a further leap. Moreover, AI can course-correct ‘at runtime’, detecting drift, bias, and systemic failures.
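
To make the scenario-testing step concrete, the following minimal Python sketch shows an agent scoring alternative policy designs against a toy outcome model before any rule goes live. The model, the candidate subsidy rates, and the selection rule are invented for illustration and stand in for a far richer digital twin.

```python
# Purely illustrative sketch: score alternative policy designs against a toy
# simulated outcome model and keep the cheapest design that meets the target.
# The model, candidate rates, and thresholds are invented, not real policy data.

def simulate_outcome(subsidy_rate: float, households: int = 100_000) -> dict:
    """Toy model: a higher subsidy lowers the hardship rate but raises fiscal cost."""
    hardship_rate = max(0.02, 0.15 - 0.8 * subsidy_rate)
    fiscal_cost = subsidy_rate * 12_000 * households
    return {"hardship_rate": hardship_rate, "fiscal_cost": fiscal_cost}

candidate_rates = [0.05, 0.08, 0.12]          # alternative designs to test
results = [(r, simulate_outcome(r)) for r in candidate_rates]

# Keep only designs that hold hardship below 10%, then pick the cheapest of those.
viable = [(r, o) for r, o in results if o["hardship_rate"] < 0.10]
best_rate, best_outcome = min(viable, key=lambda x: x[1]["fiscal_cost"])
print(best_rate, best_outcome)  # 0.08 is the lowest-cost design that meets the target
```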

This opens the door to far more dynamic systems of regulation and legislation, where broad societal goals are set by humans (legislators), while specific rules, thresholds and requirements are adjusted dynamically by agents with limited or no human intervention. Much of the current content of laws and regulations (and the time-consuming process of legislation) could be replaced by mandates for agents to achieve outcomes. 
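
What such a mandate might look like in machine-readable form is sketched below, purely as an illustration: lawmakers fix the outcome goal and hard bounds, and an agent adjusts the operative threshold within that corridor. All class names, fields, and figures are hypothetical.

```python
# Purely illustrative sketch: a legislated mandate fixes the goal and hard bounds,
# while an agent tunes the operative threshold from monitored data, never leaving
# the legal corridor. Names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Mandate:
    goal: str               # outcome set by legislators
    legal_min: float        # hard lower bound written into law
    legal_max: float        # hard upper bound written into law
    operative_value: float  # parameter the agent may adjust

def adjust(mandate: Mandate, observed_gap: float, step: float = 0.01) -> Mandate:
    """Nudge the operative value toward closing the observed gap, clamped to legal bounds."""
    proposed = mandate.operative_value + step * observed_gap
    clamped = max(mandate.legal_min, min(mandate.legal_max, proposed))
    return Mandate(mandate.goal, mandate.legal_min, mandate.legal_max, clamped)

# Example: benefit indexation drifts upward while measured need exceeds the target.
indexation = Mandate("keep real benefit value within 2% of price growth", 0.00, 0.08, 0.03)
indexation = adjust(indexation, observed_gap=0.5)
print(indexation.operative_value)  # 0.035: adjusted, but still inside the legislated corridor
```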

This shift will be most profound in areas where public and private sector agents can collaborate and negotiate. In regulatory areas such as environmental protection or financial services, public agents embodying public mandates and goals could negotiate with private agents in sophisticated, real-time markets for risk, emissions, or compliance. A city, for instance, might task its agent with minimising the total financial cost of reducing particulate pollution below health-critical thresholds, while a financial regulator might permit firms to offer higher-risk products, provided that systemic risk remains within acceptable bounds.
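
A deliberately simplified sketch of the particulate-pollution example follows: a city agent accepts the cheapest abatement offers from private agents until the concentration falls below a health threshold. The firms, reductions, prices, and greedy clearing rule are invented for illustration; a real market would be considerably richer.

```python
# Purely illustrative sketch: a city agent buys the cheapest mix of abatement
# offers until particulate concentration drops below a health-critical threshold.
# Firms, reductions, and prices are invented for illustration.

offers = [
    # (firm, reduction in µg/m³ it can deliver, asking price)
    ("bus_fleet_retrofit", 4.0, 120_000),
    ("construction_dust_controls", 2.5, 40_000),
    ("industrial_filter_upgrade", 6.0, 300_000),
]

def clear_market(current_level: float, threshold: float, offers):
    """Accept offers in order of cost per unit of reduction until the threshold is met."""
    needed = current_level - threshold
    accepted, total_cost = [], 0.0
    for firm, reduction, price in sorted(offers, key=lambda o: o[2] / o[1]):
        if needed <= 0:
            break
        accepted.append(firm)
        total_cost += price
        needed -= reduction
    return accepted, total_cost, needed <= 0

accepted, cost, target_met = clear_market(current_level=32.0, threshold=27.0, offers=offers)
print(accepted, cost, target_met)  # cheapest-per-unit offers first; stops once the target is met
```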

Agentic policymaking challenges contemporary notions of inclusion and participatory decision-making, but it need not be less democratic. Rather than operating solely through top-down regulatory adjustments, agentic policy systems could also learn from citizen signals. Feedback loops, such as appeals, time-to-resolution metrics, or even emotion detection in digital interactions, could become inputs for agent-guided policy refinement. In this model, the boundary between policy implementation and adjustment becomes porous: agents adjust rules not only on the basis of macro-level KPIs but also on bottom-up input and friction indicators.
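
One possible, purely illustrative way such friction signals could feed the loop: decisions are grouped by the rule that produced them, and a rule is flagged for agent-guided review once appeal rates or resolution times exceed configured limits. The thresholds and field names below are hypothetical.

```python
# Purely illustrative sketch: aggregate bottom-up friction signals (appeals,
# time to resolution) per rule and flag rules for review when limits are exceeded.
from statistics import mean

def flag_for_review(decisions, max_appeal_rate=0.05, max_days_to_resolve=30):
    """Return the rules whose citizen-friction signals exceed the configured limits."""
    by_rule = {}
    for d in decisions:  # each decision: {"rule": str, "appealed": bool, "days_to_resolve": int}
        by_rule.setdefault(d["rule"], []).append(d)
    flagged = []
    for rule, ds in by_rule.items():
        appeal_rate = sum(d["appealed"] for d in ds) / len(ds)
        avg_days = mean(d["days_to_resolve"] for d in ds)
        if appeal_rate > max_appeal_rate or avg_days > max_days_to_resolve:
            flagged.append(rule)
    return flagged

decisions = [
    {"rule": "housing_benefit_art_12", "appealed": True, "days_to_resolve": 45},
    {"rule": "housing_benefit_art_12", "appealed": False, "days_to_resolve": 38},
    {"rule": "parking_permit_art_3", "appealed": False, "days_to_resolve": 5},
]
print(flag_for_review(decisions))  # ['housing_benefit_art_12']
```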

Realising this vision will require governments to update legal and governance frameworks to ensure continuous, transparent oversight of agentic services and rules. Key elements of such a framework might include (a possible record structure combining them is sketched after this list):

  • Chain-of-thought logging: Every agent decision must be traceable back to reasoning steps.

  • Fallback mechanisms: High-risk decisions must allow human contestation or override at runtime.

  • Identity binding: Every agent must be legally linked to a responsible person or institution.

  • Public outcome metrics: Services must report fairness, error rates, and service quality over time, not only at launch.
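
By way of illustration only, the sketch below combines these four elements into a single auditable decision record; the structure and field names are hypothetical rather than a proposed standard.

```python
# Purely illustrative sketch: one possible shape for an auditable agent-decision
# record covering the four elements listed above. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    decision_id: str
    responsible_entity: str            # identity binding: legally accountable person or institution
    reasoning_trace: list[str]         # chain-of-thought logging: ordered reasoning steps
    human_override_allowed: bool       # fallback mechanism: contestable or overridable at runtime
    outcome_metrics: dict[str, float]  # public outcome metrics: fairness, error rates, quality
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AgentDecisionRecord(
    decision_id="2025-04-17/emissions-threshold-update",
    responsible_entity="City Environmental Agency",
    reasoning_trace=["sensor drift detected", "simulated three alternatives", "selected lowest-cost compliant option"],
    human_override_allowed=True,
    outcome_metrics={"error_rate": 0.012, "appeal_rate": 0.004},
)
```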

  6.3 Key Questions

Where can governments safely begin experimenting with dynamic, AI-assisted policy adjustment? While full automation of policymaking is not the starting point, what are the most appropriate early use cases for AI-assisted adaptation? Could executive agencies, which already operate below the level of legislative decision-making, serve as testbeds?

What kind of (democratic) oversight preserves legitimacy? What combination of public dashboards, citizens’ juries, or parliamentary committees legitimises continuous parameter updates?

How do we test and roll back self-adjusting rules? What simulation standards, validation thresholds, and legal rollback procedures apply when a digital-twin-driven change misfires?

How do we harmonise agent-driven policy across borders? When neighbouring jurisdictions adopt agentic policymaking at different tempos, how do we prevent regulatory arbitrage while respecting sovereignty?

If evidence-based policymaking and ‘law as code’ have struggled to gain traction, what makes agentic policymaking different? Why should we expect it to succeed where those earlier efforts struggled to take hold?

© 2025 Global GovTech Centre GmbH

Imprint
