4. Crisis Response and Resilience
From legacy responses to agentic readiness in an era of polycrisis.
How It (Doesn’t) Work Today
When force majeure hits, whom do you turn to? Responding to large-scale emergencies is one of the original tasks of government and remains a core function in every policy area, from defence and public safety to financial markets and public health. Yet today’s emergency systems are struggling to keep pace. Most governments still rely on siloed command structures, manually updated dashboards, and human-in-the-loop-focused decision-making. Coordination across agencies is slow. Real-time data often does not exist. When it does, it is fragmented across systems, underused, or too complex to process and act on quickly.
This would be challenging in any environment, but especially so given today's extraordinarily complex threat landscape. In an era increasingly characterised by polycrisis, in which interconnected and cascading shocks range from pandemics and extreme weather events to cyber-physical attacks, financial instability, disinformation campaigns and even conventional warfare, traditional crisis management models are under strain. Threat actors are already adapting. With AI, they can automate, scale, and personalise attacks at unprecedented speed. Governments, by contrast, are often still operating with institutional reflexes shaped for a slower, more linear world.
A Vision for Agentic State Resilience
In a world increasingly shaped by AI, the best form of preparedness is to excel at using AI. Agentic government means equipping the state with intelligent systems that can anticipate, respond, and adapt across the entire crisis lifecycle: from prevention and preparedness to response, recovery, and continuous learning. In an environment where threat actors are already leveraging AI to disrupt and destabilise, governments must match speed with greater speed, and intelligence with higher intelligence.
Resilient statecraft in an agentic era will depend on the following capabilities:
Predictive early warnings and proactive preparedness: AI systems trained on global and hyperlocal data streams identify precursor signals and provide warnings with actionable lead times, while AI agents run a near-infinite variety of virtual stress tests. As the already overwhelming fire hydrant of data turns into a tsunami, AI becomes crucial for maintaining a workable signal-to-noise ratio; without it, meaningful decision-making, resource allocation, prioritisation and cross-agency coordination break down.
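At its simplest, surfacing precursor signals means flagging readings that deviate sharply from a recent baseline. The sketch below illustrates the idea with a rolling z-score detector over a single stream; the window size, threshold, and sensor values are all illustrative assumptions, and a production system would combine many such streams with learned models.

```python
from collections import deque
import math

def make_detector(window=50, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline."""
    history = deque(maxlen=window)

    def check(value):
        alert = False
        if len(history) >= 10:  # require a minimal baseline first
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var) or 1e-9
            alert = abs(value - mean) / std >= threshold
        history.append(value)
        return alert

    return check

# Example: a steady sensor stream with one sudden spike at the end.
detect = make_detector(window=20, threshold=3.0)
stream = [10.0 + 0.1 * (i % 5) for i in range(30)] + [25.0]
alerts = [i for i, v in enumerate(stream) if detect(v)]  # only the spike fires
```

The point of the threshold is exactly the signal-to-noise trade-off described above: set it too low and operators drown in alerts, too high and the lead time disappears.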
Simulation infrastructure: Governments should treat simulation infrastructure as critical public infrastructure. Agentic models can continuously simulate crisis scenarios across domains, producing synthetic datasets that reveal systemic fragilities. More than stress tests, these simulations become generative foresight mechanisms. Similar to how flight simulators improved aviation safety, crisis simulators, if public and participatory, could preempt cascading failures in everything from climate to supply chains.
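A minimal sketch of such a simulator, under heavily simplified assumptions: infrastructure dependencies are modelled as a graph, a failure spreads along edges with some probability, and Monte Carlo runs estimate how far cascades travel from each starting point. The sector names, dependency edges, and spread probability are all hypothetical.

```python
import random

# Hypothetical dependency graph: an edge a -> b means a failure of a
# can cascade to b (e.g. power outage -> water pumping -> hospitals).
DEPENDENTS = {
    "power":     ["water", "telecom"],
    "water":     ["hospitals"],
    "telecom":   ["finance", "logistics"],
    "finance":   [],
    "logistics": ["hospitals"],
    "hospitals": [],
}

def simulate_cascade(initial, p_spread, rng):
    """Propagate one failure through the graph; return all failed nodes."""
    failed = {initial}
    frontier = [initial]
    while frontier:
        node = frontier.pop()
        for dep in DEPENDENTS[node]:
            if dep not in failed and rng.random() < p_spread:
                failed.add(dep)
                frontier.append(dep)
    return failed

def stress_test(initial, p_spread, runs=10_000, seed=0):
    """Monte Carlo estimate of the expected cascade size from one failure."""
    rng = random.Random(seed)
    total = sum(len(simulate_cascade(initial, p_spread, rng)) for _ in range(runs))
    return total / runs

# Comparing starting points reveals which nodes are systemically fragile:
avg_power = stress_test("power", 0.5)
avg_finance = stress_test("finance", 0.5)
```

Even this toy version shows the foresight value: ranking expected cascade sizes across starting points identifies which failures deserve the most preparedness investment.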
Hyper-aware AI-orchestrated first response: When a crisis begins to unfold, AI initiates the first steps in crisis response before human-in-the-loop structures have time to react. What is already happening today in technical domains like automated distributed denial-of-service (DDoS) attack mitigation will be increasingly used for complex tasks. This ranges from raising alert levels and dispatching repair teams to fielding inbound help requests and managing initial public communication. These agents will work alongside increasingly autonomous physical systems such as drones and robots, forming the backbone of a responsive, adaptive crisis infrastructure.
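The pattern described here is essentially automated triage against pre-approved playbooks: a classifier maps an incoming signal to an alert level, and the system executes that level's first steps before any human is paged. The sketch below is a deliberately simple rule-based stand-in; the playbook entries, severity thresholds, and signal format are assumptions for illustration.

```python
# Illustrative playbook mapping alert levels to pre-approved first steps;
# real playbooks would be domain-specific and signed off in advance.
PLAYBOOK = {
    "minor":    ["log_event"],
    "elevated": ["raise_alert_level", "notify_duty_officer"],
    "severe":   ["raise_alert_level", "dispatch_repair_team",
                 "publish_holding_statement", "notify_duty_officer"],
}

class FirstResponder:
    def __init__(self):
        self.actions_taken = []

    def classify(self, signal):
        # Stand-in for a trained classifier: threshold on a severity score.
        if signal["severity"] >= 0.8:
            return "severe"
        if signal["severity"] >= 0.4:
            return "elevated"
        return "minor"

    def respond(self, signal):
        """Execute the playbook for this signal before any human reacts."""
        level = self.classify(signal)
        for action in PLAYBOOK[level]:
            self.actions_taken.append((action, signal["source"]))
        return level

responder = FirstResponder()
level = responder.respond({"source": "grid-sensor-7", "severity": 0.9})
```

The key design choice, as with automated DDoS mitigation today, is that only actions approved in advance run autonomously; anything outside the playbook still escalates to humans.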
Coordinated machine-speed response: Governments will not be the only actors fielding AI agents. Private firms, NGOs, and even individuals may deploy agents to assist or intervene in emergencies. At best, this enables the entire ecosystem to respond at machine speed. At worst, uncoordinated agents will work at cross-purposes and undermine a coherent response. Avoiding this scenario will require new technical and institutional protocols for agent alignment, coordination, and conflict resolution.
Human-on-the-loop: Human oversight will increasingly move from making decisions to supervising them. This shift may feel uncomfortable, but those who let machines take the first step, especially in fast-moving situations, will gain an edge, with mental capacity freed up for critical thinking under pressure.
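One way to operationalise human-on-the-loop oversight is a veto window: the agent schedules its action immediately, and it executes automatically unless a supervisor overrides it before a deadline. The sketch below shows this pattern under assumed names and timings; real deadlines would vary by action severity.

```python
import time

class OnTheLoopAgent:
    """Acts first; humans supervise and may veto within a grace window."""

    def __init__(self, veto_window_s):
        self.veto_window_s = veto_window_s
        self.pending = {}    # action_id -> (action, execute_at)
        self.executed = []

    def propose(self, action_id, action, now=None):
        now = time.monotonic() if now is None else now
        self.pending[action_id] = (action, now + self.veto_window_s)

    def veto(self, action_id):
        # A human supervisor cancels a pending action before its deadline.
        self.pending.pop(action_id, None)

    def tick(self, now=None):
        """Execute every proposal whose veto window has expired."""
        now = time.monotonic() if now is None else now
        for action_id, (action, execute_at) in list(self.pending.items()):
            if now >= execute_at:
                self.executed.append(action)
                del self.pending[action_id]

# Hypothetical actions; explicit timestamps make the example deterministic.
agent = OnTheLoopAgent(veto_window_s=30)
agent.propose("a1", "reroute_traffic", now=0)
agent.propose("a2", "shut_down_substation", now=0)
agent.veto("a2")     # a human overrides one proposal in time
agent.tick(now=31)   # the window lapses; the remaining action proceeds
```

Inverting the default in this way is what buys speed: the machine's first step is never blocked waiting for sign-off, while humans retain a bounded window to stop it.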
Key Questions
How do we harden information supply chains and models against adversaries and outages? What safeguards detect spoofed or poisoned data, or compromised autonomy? And how do agentic systems respond when networks degrade or go dark?
What does operational resilience look like at machine speed? Can agents function safely and effectively in degraded environments or under contested conditions, and how do we design for ‘graceful fallback’?
Can governments develop a shared doctrine for agentic crisis management? Do we need new playbooks, rules of engagement, or even treaties that define how autonomous systems coordinate in multi-actor emergencies? What would shared operating principles between allied agents, public and private, look like in practice?
How can we prevent agentic lines of defence from escalating crises unnecessarily? For example, how do we ensure that misclassified intent, or automated countermeasures triggered by ambiguous signals, do not lead to escalation? What governance mechanisms can ensure that autonomous protection does not become a source of provocation or geopolitical miscalculation?