As we move toward 2026, I find myself increasingly uneasy with how many organizations talk about operational resilience. Not because they are ignoring it; quite the opposite. Most financial institutions, and a growing number of organizations beyond financial services, have invested heavily in resilience over the past several years. Frameworks are in place. Programs exist. Governance structures have been approved. The language of resilience has entered boardrooms and regulatory conversations.

And yet, when I look beneath the surface, I see a growing gap between readiness on paper and resilience in practice.

That gap is not caused by a lack of effort or intent. It is caused by a misunderstanding of what resilience actually is.

Resilience is not the absence of disruption

One of the most persistent and dangerous myths I encounter is the belief that resilience is something you earn once you have done enough “good work.” Enough controls. Enough documentation. Enough maturity.

What I have learned — both professionally and personally — is that resilience is not a reward for doing things right. It is the capacity to continue when things go wrong anyway.

Healthy organizations experience failure. Well-governed institutions suffer outages. Strong control environments still face cascading disruptions when dependencies behave in unexpected ways. Resilience does not mean disruption will not happen. It means the organization can absorb shock, adapt under pressure, and continue delivering what matters most without causing unacceptable harm.

This distinction is fundamental, yet many operational resilience programs are still designed as if the goal were to prevent failure rather than operate through it.

Why resilience has become a strategic capability

What I am seeing across industries is a fundamental shift in how disruption manifests. Disruption today is rarely localized. It is rarely singular. And it is rarely contained within organizational boundaries.

Modern organizations operate through complex ecosystems, particularly digital ones. Critical services depend on cloud platforms, SaaS providers, managed service providers, data aggregators, identity services, and increasingly specialized third parties. Each of these dependencies enables scale and innovation. Collectively, they create shared vulnerability.

When something breaks in that ecosystem, the impact propagates. Failures cascade across services, firms, and jurisdictions. Recovery becomes nonlinear. Decisions must be made with incomplete information and under real-time pressure.

This is why operational resilience has moved out of the operational basement and into strategic discussions. It now shapes how services are designed, how technology architectures are approved, how outsourcing decisions are made, and how executives think about acceptable trade-offs under stress.

Resilience is no longer something you “activate.” It is something you design for.

The global regulatory signal, and what matters more than the rules

I spend a great deal of time with regulations, guidance, and supervisory expectations. But what stands out to me most right now is not the differences between regulatory regimes; it is their convergence.

Across Europe, North America, and Asia-Pacific, regulators are independently arriving at the same conclusions. Whether through digital resilience regimes, broader critical-entity frameworks, prudential standards, or supervisory guidance, the message is consistent:

Organizations must understand what matters most, how it is delivered, how it fails, and how it recovers; and they must be able to demonstrate that understanding continuously.

The specific regulation matters less than the principle behind it. Organizations that treat each new requirement as a standalone compliance exercise miss the point entirely. Those that internalize the underlying resilience principles find themselves better prepared, not just for one regulator but for an increasingly volatile operating environment.

What worries me most: fragmentation disguised as maturity

The biggest weakness I see in operational resilience programs today is not a lack of controls. It is fragmentation masquerading as maturity.

Risk management teams have their models. ICT teams have theirs. Third-party risk teams focus on contracts and due diligence. Business continuity teams focus on plans and exercises. Each discipline may be competent — even sophisticated — in isolation.

But resilience does not fail within silos. It fails between them.

When business services are defined differently across functions, when dependencies are modeled inconsistently, when testing results are not connected to tolerances, and when incidents do not inform investment decisions, the organization appears resilient on paper while remaining brittle in reality.

Fragmentation allows organizations to produce documentation without producing capability. That is increasingly visible as expectations mature.

The shift I believe defines the next phase: from readiness to demonstration

Over the past few years, most organizations focused on readiness. That made sense. You cannot demonstrate what you have not built.

But readiness is now table stakes.

What I believe defines the next phase of operational resilience is demonstration. Demonstration means showing — credibly and repeatedly — that the organization can deliver critical services within tolerable limits under adverse conditions.

This is where many programs struggle, because demonstration requires things that documentation alone cannot provide:

  • Impact tolerances grounded in operational reality rather than aspiration,
  • Recovery strategies tested under compound stress,
  • Third-party exit plans that work when markets are constrained, and
  • Evidence that is consistent over time, not manually assembled when asked.

Demonstration exposes assumptions. And assumptions are often where resilience quietly breaks.
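
To make this concrete, here is a minimal sketch of what demonstration can look like when evidence is structured rather than assembled on demand: observed disruption durations from tests and incidents are compared against a stated impact tolerance for each critical service. The service names, tolerance values, and field names are illustrative assumptions, not a prescribed model.

    from dataclasses import dataclass
    from datetime import timedelta

    # Illustrative only: names and tolerance values are assumptions, not a standard.

    @dataclass
    class ImpactTolerance:
        service: str              # critical business service
        max_outage: timedelta     # maximum tolerable disruption

    @dataclass
    class ObservedDisruption:
        service: str
        source: str               # "scenario test" or "incident"
        duration: timedelta       # how long delivery actually stopped

    def breaches(tolerances: list[ImpactTolerance],
                 observations: list[ObservedDisruption]) -> list[str]:
        """Flag every observation that exceeded the stated tolerance."""
        limits = {t.service: t.max_outage for t in tolerances}
        return [
            f"{o.service}: {o.source} ran {o.duration} vs tolerance {limits[o.service]}"
            for o in observations
            if o.service in limits and o.duration > limits[o.service]
        ]

    if __name__ == "__main__":
        tolerances = [ImpactTolerance("payments", timedelta(hours=2))]
        observations = [
            ObservedDisruption("payments", "scenario test", timedelta(hours=3)),
            ObservedDisruption("payments", "incident", timedelta(minutes=45)),
        ]
        for line in breaches(tolerances, observations):
            print("TOLERANCE BREACH:", line)

The point of the sketch is not the code itself. It is that a breach is something the organization can surface repeatedly and automatically, rather than explain away when asked.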

Testing as a discipline of truth, not reassurance

Few aspects of resilience are more misunderstood than testing.

In many organizations, testing is still treated as a validation exercise, a way to confirm that plans work and reassure stakeholders. Scenarios are narrow. Outcomes are framed positively. Lessons learned are modest.

But meaningful resilience testing does something very different. It reveals uncomfortable truths.

It shows where recovery objectives are unrealistic, where dependencies were overlooked, where decision-making breaks down under pressure, and where third-party assurances collapse when stressed.

I believe organizations must fundamentally change how they think about testing. Testing is not about passing. It is about learning faster than the next disruption. Organizations that avoid uncomfortable results delay maturity. Those that embrace them build resilience that actually holds.

Third parties: where theory meets reality

If there is one area where I see the largest disconnect between theory and reality, it is third-party risk.

Most organizations now acknowledge that they are deeply dependent on external providers. Yet many still manage those dependencies using tools and assumptions designed for a different era.

When dozens — or hundreds, or thousands — of institutions depend on the same providers, resilience is no longer an individual firm problem. It is systemic. Exit strategies that look plausible on paper may collapse when multiple firms attempt to act simultaneously. Substitution plans may fail when alternatives are scarce.

What I recommend is a fundamental reframing of third-party risk management: from a procurement and compliance function into an enterprise resilience capability. That means understanding shared dependencies, testing failure scenarios honestly, and governing third-party relationships as part of critical service delivery, not as vendor checklists.
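
As a small illustration of what understanding shared dependencies can mean in practice, the sketch below maps critical services to their external providers and flags any provider that several services rely on at once. The service and provider names are hypothetical, and real concentration analysis would draw on far richer data.

    from collections import defaultdict

    # Hypothetical mapping of critical services to external providers.
    service_providers = {
        "payments":        ["CloudCo", "IdentityCo", "PaymentsSaaS"],
        "customer_portal": ["CloudCo", "IdentityCo"],
        "reporting":       ["CloudCo", "DataAggregatorCo"],
    }

    # Invert the mapping: which services depend on each provider?
    provider_services = defaultdict(set)
    for service, providers in service_providers.items():
        for provider in providers:
            provider_services[provider].add(service)

    # Flag concentration: providers that multiple critical services share.
    for provider, services in sorted(provider_services.items()):
        if len(services) > 1:
            print(f"{provider} is a shared dependency of: {', '.join(sorted(services))}")

Even this trivial inversion makes the concentration visible. The harder question, which no script answers, is whether the organization could actually exit or substitute those shared providers under stress.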

Evidence, data, and the illusion of manual control

Another area where I see growing strain is evidence. Many organizations believe they understand their resilience posture because experienced individuals can explain it. But supervisory confidence does not rest on explanation. It rests on traceable, consistent evidence.

When resilience data is fragmented across systems, manual aggregation becomes the norm. That approach might survive an audit. It does not survive ongoing supervision or real disruption.

My call to action here is simple but not easy: organizations must treat resilience evidence as a byproduct of operations, not a reporting exercise. That requires integrated data models that connect services, dependencies, risks, tests, incidents, and outcomes.

This is not a technology problem. It is an operating model problem.
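
To illustrate what an integrated data model might look like, even in miniature, here is a sketch that links a critical service to its dependencies, the tests run against it, and the incidents it has suffered, so that evidence can be queried rather than assembled by hand. The structure and field names are assumptions for illustration, not a reference schema.

    from dataclasses import dataclass, field

    # Illustrative entities only; structure and field names are assumptions.

    @dataclass
    class Dependency:
        name: str
        kind: str                 # e.g. "cloud platform", "SaaS", "identity"

    @dataclass
    class TestResult:
        scenario: str
        met_tolerance: bool

    @dataclass
    class Incident:
        summary: str
        tolerance_breached: bool

    @dataclass
    class CriticalService:
        name: str
        dependencies: list[Dependency] = field(default_factory=list)
        tests: list[TestResult] = field(default_factory=list)
        incidents: list[Incident] = field(default_factory=list)

        def evidence_summary(self) -> dict:
            """Evidence as a byproduct of the model, not a manual report."""
            return {
                "service": self.name,
                "dependencies": [d.name for d in self.dependencies],
                "tests_passed": sum(t.met_tolerance for t in self.tests),
                "tests_failed": sum(not t.met_tolerance for t in self.tests),
                "tolerance_breaches": sum(i.tolerance_breached for i in self.incidents),
            }

    payments = CriticalService(
        name="payments",
        dependencies=[Dependency("CloudCo", "cloud platform")],
        tests=[TestResult("regional cloud outage", met_tolerance=False)],
        incidents=[Incident("identity provider degradation", tolerance_breached=False)],
    )
    print(payments.evidence_summary())

What matters is the connection between entities, not the specific tooling: when services, dependencies, tests, and incidents live in one model, the evidence supervisors expect exists before anyone asks for it.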

Resilience as an operating model, not a program

The organizations I see making real progress share a common trait: they stop treating resilience as a program and start treating it as an operating model.

Resilience informs how services are designed, how architectures are approved, how outsourcing decisions are made, and how trade-offs are evaluated. It becomes embedded in decision-making rather than activated during crisis.

This is the point where resilience stops being a regulatory obligation and starts becoming a source of confidence.

My call to action as we approach 2026

As we move into 2026, my call to action is clear.

Stop asking whether your organization is compliant.
Start asking whether it can perform under stress.

Stop treating resilience as documentation.
Start treating it as design.

Stop optimizing within silos.
Start integrating across services, dependencies, and decisions.

Operational resilience is not about avoiding disruption. It is about reliably achieving objectives amid uncertainty and continuing with integrity when disruption arrives anyway. Organizations that internalize this principle will be better prepared not just for regulatory scrutiny, but for the world as it is now.
