CAPTAIN’S LOG: Choose Your Own Risk Adventure

When I stepped onto the keynote stage in Miami at Riskonnect Konnect 2025, it felt less like a ballroom and more like a bridge. The room hummed the way a starship does before a jump to warp: alive with expectation, crewed by leaders who navigate complex systems every day. I introduced the mission simply: we would not talk about risk management; we would do risk management — together — through a Choose Your Own Adventure simulation where every decision would change the story. Because that is how it works in real life. You do not get the luxury of a single timeline. You choose, you commit, you face the branches.

I framed the session the way I frame my podcast: risk is not the enemy, it is the mission. Too many organizations still steer using the rearview mirror: audit findings, stale registers, and red–yellow–green heatmaps that tell us where we have been, not where we are going. Real navigation requires foresight — connecting internal telemetry to external signals, aligning decisions to objectives, and operating with integrity even when the turbulence hits.

To make that point tangible, I called four “Trekkies” from the audience to the bridge and gave them their roles. Costumes included (Vulcan ears for the Science Officer — irresistible).

  • Captain (CEO): Bob Bowman, Chief Risk Officer & Chief Ethics and Compliance Officer, The Wendy’s Company
  • Science Officer (Risk): Drew Stipe, Director, Professional Services, Riskonnect, Inc.
  • Security Officer (Compliance): Fritz Hess, Chief Technology Officer, Riskonnect, Inc.
  • Engineering/Ops (IT): Janet Dold, Corporate Data System Analyst, Fairview Health Services

They did not know what was coming. That was the point. We rarely do.


The Mission Begins: Expansion into Country Zed

Our board — yours and mine, in the simulation — had approved outsourcing expansion into a promising new market. The question was not “Is there risk?” The question was “Which risk will we choose to own?” The Captain set tone and objective. The Science Officer surfaced geopolitical stability and corruption indices. Security mapped regulatory exposure and ethical tripwires. Engineering checked capacity, resilience, and digital trust. The audience voted on pace: fast, phased, or delay for more assurance.

The vote split, as it often does in real committees. Speed has a cost. Caution has a cost. Not deciding is also a decision. That was Lesson One: every path trades one risk profile for another.

  • Strategic choice framing helps: objective, appetite, threshold, constraint.
  • Forward telemetry beats backward reporting: what could happen next, not only what did.
  • Shared language reduces friction: scenario, exposure, control, consequence.

First Shockwave: A Modern Slavery Exposure

Two months into expansion, the first shock hit: an exposé tied our outsourcer to modern slavery. Phones lit up. Investors wanted reassurance. Regulators wanted answers. Internal teams wanted a plan. The Captain weighed options, the Science Officer modeled impacts, Security reviewed legal obligations and values, Engineering tested whether we could re-platform quickly.

The dilemma was not academic, and the audience felt it. Cut ties immediately and absorb sunk cost? Audit and remediate with transparency and risk the optics? Pause for certainty and risk reputational collapse? The room leaned toward “act with integrity and rebuild” — not because it was easy, but because it aligned with purpose and preserved long-term value.

  • Integrity is a control — not just a slogan — and protects license to operate.
  • ESG is operational when it drives supplier governance, not just disclosure.
  • Remediation readiness (playbooks, partners, KPIs) determines whether “fix” is credible.

Second Shockwave: An Activist Ransomware Strike

Then the second shockwave: a coordinated ransomware attack by an activist group demanding we sever ties or suffer a data breach. This is how risks really behave — they cluster. A social/ethical exposure becomes cyber, becomes operational, becomes financial. The bridge got very quiet. The Captain asked for probabilities of recovery and time-to-restore. The Science Officer calculated; Security confirmed disclosure triggers; Engineering reported containment limits. We debated whether to pay, stall, or resist.

No option was clean. Paying invited recidivism. Resisting meant downtime and headlines. Negotiating bought time but not certainty. The audience discussed cyber insurance posture, segmentation, and tabletop preparedness as if we were actually under fire — again, the point. Exercises beat memos.

  • Interconnected risk is the rule: one event, many domains.
  • Preparedness is evidence: segmented backups, crown-jewel mapping, breach comms, insurance terms.
  • Transparency beats silence: timely, fact-based updates build trust even in failure.

The Final Fork: Retreat, Rebuild, or Pivot

With regulators, media, and investors watching, we faced the last branch: pull out of Country Zed entirely; stay and rebuild with strict governance and transparency; or pivot to a new region with stronger controls but strained resources. The vote settled on stay and rebuild — a choice that accepts pain now to build competence later. It is also where real programs separate themselves: rebuilding is not a press release; it is architecture and muscle.

  • Rebuild playbook: supplier offboarding/onboarding rigor, continuous control monitoring, third-party assurance, board-level oversight.
  • Metrics that matter: mean-time-to-detect, mean-time-to-remediate, % of critical suppliers with independent assurance, loss exceedance curves.
  • Culture signals: leaders who front the issue, incentives that reward reporting, consequences that are consistent.

Debrief: What the Adventure Proved

When the applause faded and the crew returned to their seats, we closed the loop. The adventure worked not because it was theatrical but because it was familiar. Everyone in the room had lived some version of it. The difference between “we survived” and “we created durable value” is usually not a single hero; it is orchestration.

Here is what the simulation made concrete:

  • Risk is in the decision, not just the register. Strategic choices (market entry, M&A, products) need the same discipline we bring to operational risks: scenarios, distributions, and thresholds — not just traffic lights.
  • Objectives are the north star. ISO 31000’s definition — risk is the effect of uncertainty on objectives — forces clarity: what are we actually trying to achieve, what will we accept, and what will we never trade away?
  • Compliance and risk are complementary, not hierarchical. Risk analysis is neutral; compliance draws the boundary lines of law and ethics. Collaboration with segregation of duties keeps the ship on course.
  • Quant beats color. Move from heatmaps to histograms; from likelihood × impact guesswork to loss exceedance curves, control efficacy, and ROI of mitigation (a minimal simulation sketch follows this list).
  • Resilience is the business case. After the last five years, no process owner wants “more risk.” Every one of them wants less surprise and faster recovery.
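
To make “histograms over heatmaps” concrete, here is a minimal Monte Carlo sketch: Poisson event frequency, lognormal severities, and exceedance probabilities read straight off the simulated distribution. Every parameter below is an illustrative placeholder, not a calibrated value.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def simulate_annual_loss(n_years=20_000, freq_lambda=3.0,
                         sev_mu=11.0, sev_sigma=1.2):
    """Simulate annual aggregate loss: Poisson event counts,
    lognormal severities. Parameters are illustrative only."""
    counts = rng.poisson(freq_lambda, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=c).sum()
                     for c in counts])

losses = simulate_annual_loss()
for threshold in (500_000, 1_000_000, 5_000_000):
    print(f"P(annual loss > ${threshold:>9,}) = "
          f"{(losses > threshold).mean():.1%}")
```

The same simulated distribution yields a loss exceedance curve and a defensible ROI comparison for mitigations, which a red-yellow-green cell never can.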

Practical Tools You Can Lift Tomorrow

Because a keynote should leave you with handles, not just headlines:

  • Decision pre-briefs for big bets: objective → scenarios → exposures → controls → “tripwires” (KRIs) → go/hold criteria (a data-shaped sketch follows this list).
  • Third-party lifecycle discipline: intake, due diligence depth by criticality, continuous monitoring, and a real offboarding playbook.
  • Cyber tabletop with ethics overlay: run the technical drill and the disclosure and integrity decisions side by side.
  • Risk rhythm with the business: quarterly sessions with each function on their objectives and the risks to those objectives; build dashboards they actually use.
  • Story + stats: pair Monte Carlo or Bayesian outputs with a bow-tie narrative; the board funds what it understands.
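
As promised in the first bullet, here is a sketch of a decision pre-brief expressed as data rather than a memo, with tripwire KRIs driving the go/hold call. The field names, KRIs, and thresholds are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Tripwire:
    """A KRI threshold that flips the decision from 'go' to 'hold'."""
    kri: str
    threshold: float
    current: float

    def breached(self) -> bool:
        return self.current >= self.threshold

@dataclass
class DecisionPreBrief:
    objective: str
    scenarios: list
    exposures: list
    controls: list
    tripwires: list = field(default_factory=list)

    def recommendation(self) -> str:
        breached = [t.kri for t in self.tripwires if t.breached()]
        return f"HOLD ({', '.join(breached)} breached)" if breached else "GO"

brief = DecisionPreBrief(
    objective="Enter Country Zed via outsourced operations",
    scenarios=["supplier ethics exposure", "ransomware on outsourcer"],
    exposures=["reputation", "regulatory", "continuity"],
    controls=["supplier due diligence", "segmented integration"],
    tripwires=[Tripwire("corruption_index", 60, 72),
               Tripwire("attrition_rate_pct", 25, 12)],
)
print(brief.recommendation())  # HOLD (corruption_index breached)
```

The value is less the code than the discipline: every big bet carries its scenarios, exposures, controls, and tripwires in one reviewable object.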

Why the Starfleet Motif Works

Star Trek gives us a clean frame: a mission, a crew, a code, a universe that will test us. It keeps us honest about trade-offs, because space is indifferent to our intentions. It also keeps us optimistic: the point is not to avoid the unknown, but to reach it well — with clarity of objectives, disciplined curiosity, and integrity.

That is why we played Choose Your Own Adventure on stage. Not as theater, but as a mirror. In your organization, the pages you will turn next month are already numbered. The only question is who will decide, how they will decide, and what data, ethics, and controls will sit beside them when they do.

Risk is not the handbrake. It is the navigation system.

Set objectives. Tune your sensors. Orchestrate your crew.

Engage. 

If I Were a CRO: The Risk Platform I Would Demand (Through the Lens of an Analyst)

Technology does not give you good risk management. Strategy does.

Risk is everywhere—and that’s not a problem. As I say on the Risk Is Our Business podcast, the organization that is not taking risk is already out of business. The job is not to eliminate risk; it’s to take the right risks, at the right time, with eyes wide open.

Yet too much of what passes for “risk management” is a compliance exercise. In the United States in particular, risk has been conflated with Sarbanes‑Oxley controls. Necessary, perhaps; sufficient? Absolutely not. Managing issues and losses after the fact is like driving with your eyes glued to the rearview mirror. You might learn from what you hit, but you won’t avoid the next one.

In my workshops, one of the best summaries I’ve heard is: risk management’s role is to ensure there are no surprises in achieving objectives. I agree—and I’d go further. Risk management is about making better decisions. Not just reporting on whether prior decisions met their objectives.

On the podcast, we’ve explored this repeatedly—from Renee Murphy on the slipperiness of reputational risk and the poverty of metrics beyond financials, to guests who challenge the orthodoxy of defensive risk. Tony argued we should be risk seekers—strategically, not recklessly. I’m with him. The modern risk leader is less a “risk cop” and more a risk strategist and facilitator who enables the business to take calculated risk in pursuit of value. EY’s recent work on the risk strategist echoes this pivot.

So if I were the Chief Risk Officer—or advising one as I do daily—what would I require from a risk management platform? Below is my buyer’s manifesto, grounded in GRC 7.0 – GRC Orchestrate, infused with hard‑won lessons from client engagements and conversations on Risk Is Our Business.


TL;DR — The Non‑Negotiables

  1. Model the business (strategy, objectives, value streams, processes, services, assets).
  2. Performance & Objective Management comes first; risks live in that context.
  3. Strategic Risk & Resilience (Decisions) — risk as a strategy shaper.
  4. Objective‑Centric ERM — performance‑aligned, proactive, integrated.
  5. Operational Risk & Resilience — day‑to‑day reliability that enables strategy.
  6. Risk Analysis, Aggregation & Visualization — distributions, not heat maps.
  7. Risk Quantification that actually works (credible math, tested models).
  8. Rich Visualization — incl. bow‑tie, event/fault trees, loss exceedance.
  9. Digital Twins of the enterprise and extended enterprise.
  10. Scenario Modeling & Simulation — war‑games, tabletops, stress tests.
  11. Collaboration & Accountability — owner, control owner, payer of risk.
  12. Insurance & Risk Transfer — integrated with quantification.
  13. Risk Intelligence — external/internal signals feeding foresight.
  14. Integration with ERP/OPS/Cyber/TPRM/H&S/etc. via a data fabric & ontology.
  15. Artificial Intelligence — explainable, governed, and agentic as it matures.

First Principles: Strategy → Frameworks → Process → Then Technology

GRC 7.0 – GRC Orchestrate starts with the operating model, not the tool. The sequence matters:

  1. Strategy & Governance. Clarify the mission, risk culture, decision rights, and the roles/responsibilities across business and risk functions. Risk belongs on the bridge, not in the boiler room.
  2. Frameworks. Anchor in standards that emphasize objectives and uncertainty.
  3. Processes. Define how sensing, analyzing, deciding, acting, and learning flow across the lines of the business.
  4. Technology. Choose a platform that enables and orchestrates the above—not one that forces your organization to color inside its heat‑map lines.

If the platform can’t model how your business creates value and how decisions propagate through that model, it can’t help you manage risk — it can only inventory it, and inventories are most often out of date and of little value.


What I Would Demand From the Platform (and How I Would Test It)

1) Model the Business (Strategy → Value Streams → Processes → Assets → Obligations)

  • Why it matters. Risk doesn’t float in the ether; it attaches to objectives, processes, services, products, vendors, locations, technology, and people.
  • What good looks like. A native business architecture: objectives and KPIs/KRIs; value streams and processes (with owners); services; assets; third‑parties; and obligations mapped to each. A graph/ontology under the hood to keep relationships first‑class.
  • Red flags to avoid. A flat risk register with custom fields pretending to be a model.
  • Ask vendors. Show me a graph of how a change in a supplier’s risk posture propagates to service performance and strategic objectives in real time.
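
To ground that vendor ask, here is a toy dependency graph and a breadth-first propagation walk. A real platform would hold this in a semantic graph store with typed, versioned relationships; the node names below are invented.

```python
from collections import deque

# Toy relationship graph: edges point from a node to what depends on it.
depends_on_me = {
    "supplier:AcmeLogistics": ["service:order-fulfilment"],
    "service:order-fulfilment": ["process:e-commerce-checkout"],
    "process:e-commerce-checkout": ["objective:revenue-growth",
                                    "objective:customer-retention"],
}

def propagate(changed_node: str) -> list[str]:
    """Walk the dependency graph and return everything downstream of a change."""
    affected, queue, seen = [], deque([changed_node]), {changed_node}
    while queue:
        node = queue.popleft()
        for dependent in depends_on_me.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                affected.append(dependent)
                queue.append(dependent)
    return affected

print(propagate("supplier:AcmeLogistics"))
# ['service:order-fulfilment', 'process:e-commerce-checkout',
#  'objective:revenue-growth', 'objective:customer-retention']
```

A flat register with custom fields cannot answer this question; a graph answers it in one traversal.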

2) Performance & Objective Management (Context Before Risk)

  • Why it matters. Objectives provide the frame for uncertainty. Starting with risk is starting in the middle, like putting the cart before the horse. This dovetails into #4 below.
  • What good looks like. First‑class objectives with measurable KPIs, tolerance bands, and explicit linkage to risk, controls, scenarios, and initiatives. Ability to do objective‑level risk appetite and track risk‑adjusted performance.
  • Red flags to avoid. “We support objectives”—but only as a picklist on a risk form.
  • Ask vendors. Create a new strategic objective live. Link three KRIs, two initiatives, and a scenario. Now show me the risk‑adjusted forecast for that objective.

3) Strategic Risk & Resilience (Decisions)

  • Why it matters. Risk doesn’t only protect strategy; it shapes it.
  • What good looks like. A decision intelligence layer: option analysis, assumptions management, stress testing, and strategy simulations. Ability to quantify upside risk and optionality. Governance for how strategic decisions are logged, evidenced, and reviewed.
  • Podcast tie‑in. We often highlight how boards fixate on downside while ignoring the risk of missed upside. “Risk seeking” (hat tip, Tony) lives here.
  • Ask vendors. Demonstrate how the platform compares strategic options (build/buy/partner) using scenarios, quantification, and sensitivity analysis.

4) Objective‑Centric ERM

  • Why it matters. ERM must be performance‑aligned, not control‑centric.
  • What good looks like. Risks owned where work happens; KRIs/KPIs joined at the hip; near‑misses and weak signals captured and learned from; thematic risk aggregation that rolls from objective to objective, not from forms to forms.
  • Red flags to avoid. Quarterly risk reviews that never change the plan.
  • Ask vendors. Show me how a deteriorating KRI automatically triggers re‑forecasting of the objective and proposes mitigations with owners and funding.

5) Operational Risk & Resilience (ORM)

  • Why it matters. Strategy rides on the rails of operations.
  • What good looks like. Process‑level risks, controls, and impact tolerance mapped to important business services; automated controls & evidence where feasible; incident/near‑miss capture; playbooks tied to scenarios; resilience tests with learning loops.
  • Ask vendors. Run a tabletop on a payment outage. Show me the stress on impact tolerances, customer outcomes, and the handoff to issue/cause/corrective action management.

6) Risk Analysis, Aggregation & Visualization (Distributions, Not Dots)

  • Why it matters. Risk is not a color. Risk is a distribution over outcomes.
  • What good looks like. Histograms, cumulative loss curves, tornado/sensitivity charts; correlation/aggregation that is explicit and explainable; ability to roll up by structure (org) and function (themes) without double counting.
  • Red flags. A heat map as the main screen. Even worse, a stoplight.
  • Ask vendors. Quantify a scenario, show the distribution, and explain aggregation assumptions. Change an assumption; show sensitivity in real time.

7) Risk Quantification (Credible Math)

  • Why it matters. Decisions require scale and trade‑offs.
  • What good looks like. Transparent models (e.g., Monte Carlo where appropriate), parameter estimation from internal/external data plus expert judgment with credibility weighting; support for heavy tails; scenario libraries with calibration; model validation and versioning. I appreciate approaches like Graeme Keith’s work on robust estimation and aggregation—because they respect uncertainty rather than wish it away. A toy credibility blend follows this list.
  • Red flags. One‑size‑fits‑all scoring engines and black‑box “AI risk scores.”
  • Ask vendors. Walk me through your model risk management: documentation, testing, drift monitoring, and auditability.
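
As referenced above, here is a toy Bühlmann-style credibility blend of sparse internal loss data with an expert estimate. The credibility constant k is an assumption that would normally be estimated from the data, not hard-coded.

```python
import numpy as np

def credibility_weighted_mean(internal_losses, expert_mean, k=20.0):
    """Blend internal data with expert judgment.
    Z = n / (n + k) is a classic Buhlmann-style credibility factor:
    more internal observations means more weight on the data."""
    n = len(internal_losses)
    z = n / (n + k)
    return z * np.mean(internal_losses) + (1 - z) * expert_mean

internal = [120_000, 80_000, 310_000, 95_000]   # four observed losses
print(f"{credibility_weighted_mean(internal, expert_mean=200_000):,.0f}")
# ~191,875: with only four data points, the expert prior still dominates
```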

8) Risk Visualization (Make It Think & Feel)

  • Why it matters. The right picture shortens the distance to a good decision.
  • What good looks like. Bow‑tie analysis (causes/controls/consequences), fault and event trees, causal maps, control effectiveness cones, loss exceedance curves. Executive views that are decision‑forward, not dashboard‑pretty.
  • Ask vendors. Build a bow‑tie live; link controls to testing/evidence and show how a failed test reshapes the consequence distribution.
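
A minimal sketch of that last ask: preventive controls on the cause side of a bow-tie scale down the probability that the top event reaches its consequences, and a failed control test immediately widens the residual. It assumes independent controls, an assumption a real model should challenge.

```python
def residual_probability(p_event, controls):
    """Bow-tie sketch: each *passing* preventive control scales down the
    chance the top event reaches the consequence side."""
    p = p_event
    for effectiveness, last_test_passed in controls:
        if last_test_passed:
            p *= (1 - effectiveness)
    return p

p_event = 0.30   # illustrative annual chance of the top event
controls = [(0.6, True), (0.5, True)]
print(f"All controls passing: {residual_probability(p_event, controls):.3f}")

controls_failed = [(0.6, True), (0.5, False)]   # one test just failed
print(f"One failed test:      {residual_probability(p_event, controls_failed):.3f}")
# 0.060 -> 0.120: the failed test doubles the residual probability
```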

9) Digital Twins (Of the Organization and the Extended Enterprise)

  • Why it matters. You can’t simulate what you haven’t modeled.
  • What good looks like. A living digital twin of your organization’s value streams, services, sites, suppliers, data, and dependencies. Twins support what‑if analysis: supplier outage, regulatory change, cyber event, demand surge. They learn as new data arrives. Twins extend to third parties and fourth parties via shared data and attestations.
  • How it works in GRC 7.0. The twin is driven by a semantic graph/ontology; an orchestration engine sustains synchronization across systems (ERP, cyber, H&S, TPRM). Agentic AI can probe the twin with experiments, surface nonlinearities, and propose mitigations with cost/benefit.
  • Ask vendors. Show me the twin of an important business service. Knock out a critical supplier. Quantify customer impact, regulatory exposure, and the mitigation portfolio with cost, time, and residual risk.

10) Scenario Modeling & Analysis

  • Why it matters. Scenarios are the wind tunnel for strategy and operations.
  • What good looks like. Stress and reverse stress testing; war‑gaming and tabletop exercises that are instrumented (evidence, timings, decisions); scenario trees with branching; Bayesian updating as facts accumulate (a minimal updating sketch follows this list); playbook linkage.
  • Ask vendors. Run a geopolitical escalation scenario affecting logistics. Show the branching decisions, updated probabilities, and funding trade‑offs.
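
The Bayesian updating mentioned above can be as simple as a beta-binomial: start with a prior on a branch probability and revise it as each confirming or disconfirming fact arrives. The prior and the evidence stream here are made up.

```python
# Beta-binomial updating of a branch probability as facts accumulate.
# Prior Beta(2, 8) encodes "escalation is unlikely" (mean 0.2); illustrative.
alpha, beta = 2.0, 8.0

evidence = [1, 1, 0, 1]   # 1 = signal consistent with escalation, 0 = not
for signal in evidence:
    alpha += signal
    beta += 1 - signal
    print(f"P(escalation) now estimated at {alpha / (alpha + beta):.2f}")
```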

11) Collaboration & Accountability (Owner, Control Owner, Payer)

  • Why it matters. Risk is everyone’s job but not no one’s job.
  • What good looks like. Clear RACI across risk, control, and budget ownership (who pays for mitigations and residual risk). In‑flow collaboration for executives and frontline managers, not just risk staff. Human‑centered UX; mobile capture for incidents/near‑misses; conversation linked to decisions.
  • Ask vendors. Assign an accountable executive, a control owner, and a payer to a mitigation. Route for approval; evidence funding and benefits realization.

12) Insurance & Risk Transfer

  • Why it matters. Transfer is one lever in the portfolio.
  • What good looks like. Policies, limits, exclusions, and claims data tied to scenarios and quant models; optimization of retain vs transfer; integration with brokers/insurers; evidence for insurability and premium negotiations.
  • Ask vendors. Show me how cyber control maturity shifts expected loss and the optimal retention/limit selection.
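
To make the retain-vs-transfer ask testable, here is a sketch that prices the retained portion of simulated losses under different retention/limit structures. The loss parameters are illustrative; a real analysis would add premiums, exclusions, and correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=12.0, sigma=1.5, size=50_000)  # per-event, illustrative

def expected_retained(losses, retention, limit):
    """Insurer pays the layer between retention and retention+limit;
    the firm keeps the deductible and anything above the limit."""
    insured = np.clip(losses - retention, 0, limit)
    return (losses - insured).mean()

for retention, limit in [(100_000, 5_000_000), (250_000, 5_000_000),
                         (250_000, 10_000_000)]:
    print(f"retention={retention:>9,} limit={limit:>11,} "
          f"expected retained loss = {expected_retained(losses, retention, limit):,.0f}")
```

Comparing expected retained loss plus premium across structures is the economically rational way to select retentions and limits.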

13) Risk Intelligence (Foresight Beats Hindsight)

  • Why it matters. External signals widen the field of view.
  • What good looks like. Feeds for geopolitical, regulatory, macroeconomic, ESG/reputation, threat intel, and supplier signals. Signal ingestion → enrichment → triage → linkage to twins, objectives, and scenarios.
  • Podcast tie‑in. Our episode on reputation underscored the gap between narrative risk and operational metrics. Intelligence connects the two.
  • Ask vendors. Demonstrate how a negative media surge or sanction change flows into scenarios, KRIs, and decision options.

14) Integration (Data Fabric & Ontology, Not Spaghetti ETL)

  • Why it matters. Risk sits at the seams.
  • What good looks like. Open APIs, event streams, and connectors; a semantic layer so data lands meaningfully; identity integration for least‑effort adoption; low‑code mapping; lineage and quality checks.
  • Ask vendors. Show the canonical ontology and how ERP incidents, SIEM alerts, vendor ratings, and HR data map to it—live.

15) Artificial Intelligence (Useful, Governed, and Agentic)

  • Why it matters. AI amplifies sensing, analysis, and orchestration—if governed.
  • What good looks like. ML for anomaly detection; NLP for unstructured evidence; copilots for authorship and decision support; agentic AI to run simulations, propose mitigations, and draft playbooks—with guardrails: model cards, bias/robustness testing, audit trails, human‑in‑the‑loop, and a clear RAIL/AI governance framework.
  • Ask vendors. Explain how your AI is validated, how humans supervise it, and how you prevent model drift and hallucination from entering decisions.

What I Will Not Buy

  • A static risk register with pretty heat maps.
  • “Compliance‑first” risk that never touches objectives or decisions.
  • Black‑box quantification with no model risk discipline.
  • Dashboards that report but never re‑plan.
  • AI without governance, provenance, or explainability.
  • Integration that means CSVs and weekend heroes.

The GRC 7.0 – GRC Orchestrate Blueprint

Sense → Model → Decide → Act → Learn is the feedback loop (a skeleton of one pass follows the list). The platform should:

  • Sense. Ingest internal telemetry and external intelligence.
  • Model. Maintain the semantic graph and digital twins; keep them current.
  • Decide. Run scenarios, quantify, compare options; document choices and rationale.
  • Act. Launch initiatives, controls, transfers; assign owner/control owner/payer; fund and track benefits.
  • Learn. Update models from outcomes, near‑misses, and after‑action reviews.
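
As flagged above, one pass of the loop can be sketched as a pluggable pipeline. The stubs are placeholders; the point is the shape of the feedback, not any product’s API.

```python
def orchestration_cycle(sense, model, decide, act, learn, state):
    """One pass of the Sense -> Model -> Decide -> Act -> Learn loop.
    Each argument is a pluggable function; this is a shape, not a product."""
    signals  = sense()                      # internal telemetry + external intel
    state    = model(state, signals)        # refresh the twin / semantic graph
    decision = decide(state)                # scenarios, quantification, options
    outcome  = act(decision)                # initiatives, controls, transfers
    return learn(state, decision, outcome)  # update priors from what happened

# Minimal stubs so the loop runs end to end:
state = {"risk_posture": "baseline"}
state = orchestration_cycle(
    sense=lambda: ["supplier downgrade"],
    model=lambda s, sig: {**s, "signals": sig},
    decide=lambda s: "dual-source",
    act=lambda d: f"executed: {d}",
    learn=lambda s, d, o: {**s, "last_outcome": o},
    state=state,
)
print(state)
```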

This is the bridge of the Enterprise—not a back‑office inbox.


A Concrete Walkthrough: Third‑Party Disruption to a Key Service

  1. Signal. A high‑risk supplier’s financial health deteriorates; sanction chatter emerges.
  2. Twin. The service twin shows a concentration risk to two geographies and a single alternate.
  3. Objective link. Customer churn and revenue objectives flag increased variance.
  4. Scenario. Branching: replace supplier (12–18 weeks), dual‑source (8–10 weeks), or stockpile (4 weeks) with cost/benefit quantified (a toy comparison follows this list).
  5. Visualization. Bow‑tie surfaces control gaps (QA on alternate supplier, logistics reroute).
  6. Quantification. Monte Carlo + expert priors estimate loss exceedance; sensitivity highlights logistics lead time.
  7. Decision. Executive review selects dual‑source + temp stockpile; payer funds expedited onboarding; insurance team evaluates trade‑credit cover.
  8. Act & learn. Playbooks executed; KRIs monitored; post‑mortem updates priors and the twin.
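
To make step 4’s cost/benefit concrete, here is a toy comparison of the three branches. Only the lead times come from the walkthrough; the weekly exposure and direct costs are invented, and the stockpile option’s residual risk is deliberately left unpriced.

```python
# Hypothetical numbers: only the lead times (12-18, 8-10, 4 weeks)
# come from the scenario above.
weekly_exposure = 250_000   # assumed expected loss per week of disruption

options = {
    "replace supplier": {"weeks": (12 + 18) / 2, "direct_cost": 1_500_000},
    "dual-source":      {"weeks": (8 + 10) / 2,  "direct_cost": 2_200_000},
    "stockpile":        {"weeks": 4,             "direct_cost": 900_000},
}

for name, o in options.items():
    total = o["direct_cost"] + o["weeks"] * weekly_exposure
    print(f"{name:<17} expected total cost = {total:>12,.0f}")
```

Note that the decision in step 7 (dual-source plus a temporary stockpile) blends two branches, trading higher direct cost for a shorter exposure window.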

Metrics That Matter (Beyond the Usual)

  • Risk‑adjusted performance at the objective level.
  • Loss exceedance probability at board‑relevant thresholds.
  • Near‑miss capture and conversion to learning actions.
  • Control effectiveness trajectory (not just pass/fail).
  • Scenario coverage & currency (last run, last calibrated).
  • Decision cycle time from signal to funded action.
  • Reputation/experience indicators (customer & employee)—yes, Renee’s drumbeat.
  • Insurance ROI (retained vs. transferred vs. mitigated).

RFP Prompts I Actually Use

  • Modeling. Show me your semantic graph. What are the first‑class objects? How do relationships version over time?
  • Objectives first. Create an objective, link KPIs/KRIs, attach scenarios—and quantify residual risk.
  • Quant. Demonstrate parameter calibration from internal/external data and expert judgment with credibility weighting.
  • Digital twin. Knock out a supplier in the twin; recompute service risk and objective variance.
  • Decision log. Where do decisions live? How are assumptions captured and reviewed?
  • AI governance. Provide model cards, validation evidence, and human‑in‑the‑loop controls.
  • Integration. Map ERP incidents, SIEM alerts, and vendor ratings to your ontology—live.
  • Accountability. Assign owner/control owner/payer; route approvals; show funding/budget links.

Final Word (and an Invitation)

Producing heat maps and generic lists to fulfill a reporting requirement is not risk management. The modern platform must help leaders make and fund better decisions—with context, quantification, accountability, and learning. That is the spirit behind GRC 7.0 – GRC Orchestrate, and the consistent theme on Risk Is Our Business.

If you’re wrestling with platform choices or shaping an RFP, I evaluate solutions constantly and carry a deep library of requirements. Reach out—and in the meantime, tune into the podcast for unvarnished conversations with leaders who are moving risk from the boiler room to the bridge.

GPRC for Operational Resilience: Delivering on DORA

The Enterprise Bridge for Digital Trust in the European Union

On the bridge of a starship, everything is connected. Navigation depends on sensors, sensors depend on power, power depends on engineering, and the captain’s decisions depend on the clarity and integrity of the information flowing across the ship. That is the image leaders should carry when they think about the EU Digital Operational Resilience Act (DORA). DORA is not merely another checklist of controls; it is the European Union’s insistence that financial institutions, and the ICT companies that support them, run their digital enterprise like a mission-critical vessel — coordinated from a single command center where governance, performance, risk management, and compliance operate as one.

DORA became applicable in January 2025 with a simple demand that is difficult to execute: prove that your organization can withstand, respond to, and recover from material ICT disruption while maintaining continuity of critical services. Behind that demand is the EU’s recognition that cyber threats, technology failures, concentration in third-party providers, and cross-border interdependencies can destabilize not only a firm but the confidence of markets and citizens.

Fragmented, after-the-fact, paper-driven “resilience” will not suffice. What is required is GPRC — governance, performance, risk management, and compliance — fully orchestrated, not scattered, through a modern architecture. In my GRC 7.0 language, that is GRC Orchestrate: a semantic, data-driven operating model with digital twins, agentic AI, and business-integrated processes that turn regulation into real operational capability.

Why DORA exists – and what it means in practice

The EU did not draft DORA to create busywork . . .

[The rest of this blog can be read on the Corporater blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]

Not Your Father’s Information Security Program: Digital Risk & Resilience by Design

This week I’m back in the United Kingdom—wall-to-wall engagements, packed rooms, and board-level urgency. Two themes are dominating every corridor conversation and every executive session:

  1. Digital risk & resilience management (cyber risk, IT risk, information security): this is not your father’s information security program, and the market has noticed; and
  2. UK Corporate Governance Code Provision 29—the looming attestation requirement that pulls risk and controls from the boiler room to the bridge.

They’re not separate stories. They’re the same plotline: governance must now prove risk, control, and resilience.

Next week I head to Denmark and Sweden with an overbooked schedule and an active waiting list. It’s so busy I’ve booked four business meetings on Sunday in Copenhagen because the workweek is full. Demand is surging because the operating reality has changed.


The UK Context: Incidents That Forced the Issue

Yesterday in London, over 90 professionals registered for my Digital Risk & Resilience Management by Design workshop. We opened with what the UK has actually experienced this year—real events that disrupted operations, damaged trust, and elevated the conversation to the board:

  • Harrods disclosed a new incident after hackers compromised a third party, stealing 430,000 e-commerce customer records — a second major event this year (see the latest from GRC Report: Harrods Suffers New Data Breach Exposing 430,000 Customer Records). This wasn’t “just” a data problem; it was a digital supply-chain failure with reputational consequences.
  • Marks & Spencer acknowledged a significant cyber incident in the spring, with official updates noting personal data exposure. Independent analyses estimate substantial disruption costs.
  • Co-op faced an attack that affected operations and supply, with press reporting on material revenue impact.
  • Jaguar Land Rover (JLR) suffered a major cyberattack that halted production and cascaded across suppliers, leading to government action to stabilize the supply chain and a phased restart. This is cyber risk turning into industrial and financial risk overnight.
  • Airports across Europe (including the UK) experienced disruptions tied to a third-party check-in provider—collateral damage when an ecosystem vendor falters.
  • Looking back to 2024, the Synnovis ransomware event reminded everyone that cyber incidents can spill into clinical operations—in this case, impacting NHS pathology services across London.

Add to that the UK’s Cyber Security Breaches Survey 2025 and public warnings from officials about rising hostile activity, and the trendline is clear: frequency, materiality, and interdependence are all up.


Provision 29: When Governance Must Prove Resilience

The updated UK Corporate Governance Code 2024 applies from 1 January 2025, with Provision 29 (the board’s declaration over the effectiveness of material internal controls, including those over reporting) applying to financial years beginning on or after 1 January 2026. Translation: boards must step beyond narrative disclosure to assert control effectiveness—and evidence it.

Practical guidance circulating in the market rightly pushes companies to identify risks to objectives, define material controls, stand up testing and monitoring cycles, and remediate weaknesses well ahead of the first reporting year. If you wait until year-end, you won’t have the audit trail, telemetry, or confidence to sign. I am teaching a full-day workshop on this on November 6th in London: UK Corporate Governance Code by Design.

Provision 29 makes cyber and digital resilience a governance obligation, as both sit within broader risk and internal control management. It’s no longer sufficient for security leaders to say “we’re doing our best.” Boards must demonstrate that controls over risk, operations, and reporting are effective—continuously, not sporadically.


“Not Your Father’s Information Security Program”: What Keeps Leaders Up at Night

In yesterday’s workshop opening breakouts, attendees shared the nightmares that wake them at 2 a.m. Below I expand on each—because every one is valid, and together they define the new scope of digital resilience.

  1. Digital dependence. When every process is digitized, digital is business risk. Capture business-service twins (see below) that tie technology to outcomes so investment and trade-off decisions are made in business units, not technical silos.
  2. Ransomware (mentioned repeatedly). Assume data theft + encryption + extortion. Emphasize identity (MFA, phishing-resistant auth), immutable backups, segmentation, EDR containment, and exfil detection. Align with cyber insurance obligations before an event.
  3. Data breaches. Move beyond perimeter thinking to data-centric controls: classification, encryption, retention/rationalization, and continuous DLP tuned to business context. Reduce toxic data stores—what you don’t keep can’t be stolen.
  4. Third-party & digital supply chain. Most incidents now arrive through someone else’s API, SSO, or managed service. Build tiered criticality, continuous assurance (evidence feeds, attack-surface monitoring), and kill-switch playbooks (token revocation, traffic shaping, failover).
  5. Complexity of environment. Hybrid/Multi-cloud, SaaS sprawl, legacy on-prem, OT/ICS—complexity is the attack surface. Rationalize platforms, impose architectural guardrails (identity first, least privilege, service isolation), and automate hardening at the pipeline.
  6. Pace of technology, business, risk, & regulatory change. Static frameworks fail in dynamic environments. Shift from annual cycles to continuous risk assessment, streaming indicators (threat intel, misconfig drift), and regulatory horizon scanning tied to policy updates and training.
  7. Real-time insight into digital risk & resilience. Dashboards must reflect material risk now, not last quarter. Integrate attack surface, identity risk, vuln posture, and control status into one place, with drill-downs that show evidence, not just colors.
  8. Social engineering. Human-centric attacks (phishing, pretexting, MFA fatigue) bypass hardened perimeters. Resilience demands behavioral control design, adaptive training, and active monitoring of anomalous requests—especially in finance, HR, and privileged IT channels.
  9. Behavior. Policies don’t move mice; people do. Incentives, consequences, nudges, and leadership example-setting are necessary to turn rules into reflexes. Measure cultural indicators (reporting rates, near-misses, phishing test performance) as rigorously as technical KPIs.
  10. AI risk. AI expands both attack surface (prompt injection, data leakage, model theft) and attacker capabilities (automation, deepfakes). Establish an AI risk register, model validation, and guardrails (content filters, retrieval hardening, data minimization), and treat AI vendors as high-risk third parties.
  11. Employee practices on social media. Oversharing enables social engineering, doxxing, and physical risk. Provide clear, practical guidance, red-team your own open-source footprint, and monitor for impersonation and brand misuse.
  12. Silos of oversight. Security, risk, audit, privacy, and compliance often operate on parallel tracks. Converge on a common risk ontology, unified control library, and shared telemetry to eliminate duplicative testing and blind spots.
  13. Lack of assurance. Assurance is not a PDF; it’s a signal backed by evidence. Operationalize continuous control monitoring (CCM), link tests to controls, and maintain an immutable evidence ledger for internal audit and Provision 29 support (a hash-chained sketch follows this list).
  14. Critical system availability. “Data protected” is not “business up.” Map business services to dependencies (apps, data, vendors, facilities), define impact tolerances, test recovery to realistic RTO/RPO, and engineer graceful degradation.
  15. Corporate culture. A culture of speed and shadow IT without guardrails breeds loss events. Bake controls into the developer and product experience (policy-as-code, paved roads) so doing the right thing is the fastest path.
  16. Interconnected nature of digital risk on other risks. Cyber incidents cascade to operational, financial, legal, and reputational risk. Quantify causal chains: “one auth outage ⇒ order backlog ⇒ revenue dip ⇒ covenant risk.” This is the language of the board.
  17. Cyber incidents. Treat incident response as business continuity with forensics. Pre-negotiate counsel, crisis comms, and law enforcement engagement. Rehearse board-level tabletop exercises to align decisions under pressure.
  18. Extended enterprise. Partners, affiliates, franchisees, integrators—risk propagates through contracts. Expand scope beyond “vendors” to all external relationships; standardize onboarding, evidence exchange, and offboarding data destruction.
  19. Constant data breaches. Frequency has normalized, but tolerance hasn’t. Move toward event-ready posture: pre-built comms templates, regulator playbooks, customer remediation workflows, and materiality decision criteria.
  20. Cyber insurance. Policies are tighter; exclusions matter. Map controls to underwriting requirements (MFA, backups, EDR, patching SLAs), maintain attestable evidence, and simulate loss scenarios to set economically rational limits.
  21. PCN attacks on refineries (OT/ICS). Process Control Networks in energy and petrochemicals raise safety, environmental, and macro-economic stakes. The UK energy sector remains a prime target; bring OT and IT risk under a single governance model, with strict network isolation, asset discovery, and incident drills that include safety.
  22. Access control. Identity is the perimeter. Enforce least privilege, JIT/JEA for admins, continuous access review, and session recording for high-risk functions. Kill standing privileges.
  23. Out-of-date systems. Technical debt is breach bait. Build a decommission cadence, isolate what you can’t patch, and make “end-of-life” a board metric with remediation funding.
  24. Lack of segmentation. Flat networks turn local issues into enterprise outages. Segment by trust zone, blast radius, and business service; verify with purple-team exercises.
  25. Regulations. Requirements are multiplying (DORA, NIS2, CER, UK Code, UK Operational Resilience). Normalize obligations to controls and tests; avoid duplicate evidence generation by centralizing control mapping across frameworks.
  26. Support streams such as power. Cyber resilience depends on physical resilience (power, cooling, connectivity). Model these dependencies explicitly and test alternative sites, UPS run-times, and failover contracts.
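
As promised in item 13, here is a minimal sketch of an “immutable” evidence ledger: each entry commits to the hash of the previous one, so any tampering breaks verification. A production ledger would add cryptographic signatures and durable storage; the control IDs below are hypothetical.

```python
import hashlib, json, time

class EvidenceLedger:
    """Append-only, hash-chained evidence log: each entry commits to the
    previous one, so tampering anywhere in the chain is detectable."""
    def __init__(self):
        self.entries = []

    def append(self, control_id: str, result: str, detail: dict):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"control": control_id, "result": result,
                "detail": detail, "ts": time.time(), "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

ledger = EvidenceLedger()
ledger.append("CTRL-MFA-001", "pass", {"coverage_pct": 99.2})
ledger.append("CTRL-SEG-142", "fail", {"zone": "C"})
print(ledger.verify())  # True; flipping any field breaks the chain
```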

Why Provision 29 and Digital Resilience Are the Same Conversation

Provision 29 isn’t a paperwork exercise; it’s a capability: governance that can see material risk, control it, and prove it. Yes, Provision 29 is much broader than digital risk and resilience, but it certainly is a critical part of it. The declaration forces boards to ask:

  • Which controls are material to our business services and reporting?
  • Do we have evidence, not assertions?
  • Can we detect control failure quickly and respond before outcomes degrade?
  • Are third-party and AI-driven risks within the same scope of control and testing?

The new standard of care is continuous, assurable, and board-readable.


Digital Risk & Resilience in the Age of GRC 7.0 – GRC Orchestrate

This is where the next evolution—what I call GRC 7.0 – GRC Orchestrate—earns its keep. Think of it as a business-integrated command center underpinned by digital twins, agentic AI, and continuous assurance:

  1. Digital twins of business services. Map each critical service (e.g., “E-commerce checkout”, “Claims adjudication”) to its applications, data, identities, vendors, facilities, and support streams (power, network). Now you can analyze materiality, simulate impact, and target investment where it moves the needle.
  2. Unified risk ontology & control library. Collapse silos by adopting one language for risk, control, and obligation across security, resilience, privacy, and compliance. Provision 29 depends on a single source of control truth feeding testing, evidence, and reporting.
  3. Continuous control monitoring (CCM) & evidence ledger. Automate tests (config drift, MFA coverage, backup immutability, EDR health, segmentation rules), bind the results to the control, and store signed evidence with lineage. Assurance moves from “annual binders” to streaming signals (a toy check follows this list).
  4. Agentic AI for detection, triage, and mapping. Use AI to reconcile findings to controls and obligations, summarize deviations for executives, draft remediation plans, and keep policies aligned to changing regs (DORA, NIS2, UK Code) without manual re-keying. Humans decide; AI does the grunt work.
  5. Third-party & AI vendor orchestration. Ingest SOC2/ISO attestations, penetration reports, SBOMs, and attack-surface telemetry. Maintain live risk tiers, enforce contractual controls, and keep “pull-to-revoke” playbooks (SSO tokens, API keys) ready.
  6. Identity-first architecture. Make identity and authorization the enforcement plane: phishing-resistant MFA, least privilege, continuous verification, high-risk session recording, and automated removal of stale access.
  7. OT/ICS governance alongside IT. Treat PCN assets with their own twin, zoning, and procedure sets. Drill scenarios that integrate cyber response with safety and environmental controls.
  8. Resilience analytics & impact tolerances. Tie recovery objectives to business outcomes (orders processed, beds filled, flights dispatched). Visualize tolerances and variance in real time; rehearse failovers using your twins, not guesswork.
  9. Board-ready reporting. Replace red/amber/green with narratives grounded in evidence: “3 of 3 material access-controls for E-commerce are in tolerance; segmentation test #142 failed in Zone C; compensating control is active; remediation ETA 72 hours.” That’s a Provision 29-grade update.
  10. Assured compliance. Map control signals to obligations and make audit a bystander effect: when evidence is baked into operations, audits consume it—not create it.
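
As referenced in item 3, a continuous control monitoring test can be as small as a scheduled check that computes a metric, compares it to tolerance, and emits a control-bound result with its evidence. The identity-export schema, control ID, and threshold here are assumptions, not any vendor’s format.

```python
# Toy CCM check: evaluate MFA coverage from an identity export and emit
# a result bound to its control, with the gap list as evidence.
users = [
    {"id": "u1", "mfa": True},  {"id": "u2", "mfa": True},
    {"id": "u3", "mfa": False}, {"id": "u4", "mfa": True},
]

def check_mfa_coverage(users, control_id="CTRL-MFA-001", tolerance_pct=95.0):
    coverage = 100.0 * sum(u["mfa"] for u in users) / len(users)
    return {
        "control": control_id,
        "metric": round(coverage, 1),
        "result": "pass" if coverage >= tolerance_pct else "fail",
        "evidence": [u["id"] for u in users if not u["mfa"]],  # the gap list
    }

print(check_mfa_coverage(users))
# {'control': 'CTRL-MFA-001', 'metric': 75.0, 'result': 'fail', 'evidence': ['u3']}
```

Results like this can feed an evidence ledger such as the one sketched earlier, so assurance accumulates as streaming signals rather than annual binders.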

This is not a tool swap. It’s an operating model that treats digital risk as a system-of-systems problem, orchestrated across people, process, technology, and partners—with verifiable assurance as the output.


Closing the Loop

The UK incidents of 2025 — Harrods, M&S, Co-op, JLR, airport disruptions — show how quickly “IT issues” become business crises and governance tests. The only durable answer is a modern resilience architecture with continuous assurance that a board can attest to with confidence.

Now, I’m off to a string of meetings today and tomorrow in London—then wheels up for Denmark and Sweden. If you’re in Copenhagen this Sunday, you already know my schedule is spilling into the weekend. The message from every boardroom is the same: orchestrate resilience, or risk orchestrating your own headlines.

Policy Management and RegTech: Orchestrating Governance in an Age of Regulatory Uncertainty

The week began with two very different conversations that echoed the same theme. One was with a major U.S. healthcare organization grappling with how to stay ahead of regulatory change. The other was with a European financial services firm confronting the tsunami of new regulations washing over their business. Both organizations wanted to understand how regulatory change management integrates with policy management and the broader GRC architecture.

Those discussions flowed directly into my Policy Management by Design Workshop in New York City yesterday (hosted by COMPLY), where 42 participants from financial services joined me for a half-day of interactive discussion. The workshop confirmed what those initial calls signaled: policies are the nervous system of governance, risk management, and compliance, but too often they are fragmented, outdated, and ill-equipped to keep up with regulatory and business change.


What Keeps Risk and Compliance Leaders Awake at Night

Financial services attendees were candid about the challenges they face in policy governance amid regulatory volatility. Among the most pressing concerns raised:

  • Mapping policies directly to regulations and keeping them synchronized.
  • Sheer volume and velocity of regulatory change.
  • Ensuring stakeholders and employees actually see and understand policies.
  • Conflicting or duplicative policies across different regions and business units.
  • The frequency of updates required to keep policies relevant.
  • Documentation that satisfies both board oversight and regulatory examiners.
  • Multinational conflicts in language, jurisdiction, and enforcement.
  • Enforcement across the extended enterprise — including third parties.
  • Horizon scanning to anticipate change and prepare policies in advance.
  • Policy fatigue, apathy, and the danger of checkbox attestations.
  • Inconsistent governance and scattered ownership across silos.
  • Quality control, clarity, and conciseness in policy drafting.
  • Training, awareness, and testing of policy effectiveness.
  • The operational implications and implementation of policies — moving from words on paper to behaviors in practice.
  • Version control, access management, and audit trails to demonstrate accountability.
  • The looming question of how AI will reshape policy management itself — from drafting to monitoring compliance.

These are not isolated pain points; they are systemic fractures that demand a federated, structured, and technology-enabled approach.


The Blueprint for Policy Management by Design

At the workshop, I shared my Blueprint for an Effective, Efficient, and Agile Policy Management Program. The premise is simple but urgent: policy mismanagement is no longer a back-office nuisance — it is a GRC failure waiting to happen.

The blueprint calls for a structured, strategic, and scalable approach to policy governance:

  • Define a complete lifecycle for policy creation, approval, communication, training, monitoring, and retirement.
  • Establish governance, ownership, and accountability for policies, supported by a Policy Committee and a “meta-policy” (the policy on policies).
  • Standardize policy format, language, and metadata to eliminate confusion and inconsistency.
  • Communicate and embed policies across business units and third parties, supported by targeted training and attestations.
  • Link policies to objectives, risks, controls, obligations, and incidents within the broader GRC information architecture.
  • Measure effectiveness and compliance with clear KPIs/KRIs and test policies in practice.
  • Leverage technology for automation, distribution, and traceability, including integration with regulatory change management and horizon scanning tools.

The objective is not more policies. It is better policies: concise, relevant, realistic, and enforceable. Policies should guide decisions, reduce liability, and build trust — not gather dust on a shelf or clutter intranet pages.


RegTech: The Engine of Policy Agility

This is where RegTech enters the stage. Organizations cannot manually keep pace with the scale and speed of today’s regulatory change. Automated regulatory change management and horizon scanning feed into structured policy management so that:

  • New regulations are quickly mapped to affected policies (a toy mapping sketch follows this list).
  • Impact analyses identify gaps and conflicts.
  • Updates and attestations are triggered across the enterprise.
  • Boards and regulators see a clear, defensible audit trail.
  • Multinational organizations can harmonize global frameworks while respecting local nuances.
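
As noted in the first bullet, the mapping step can start as simply as a shared taxonomy: tag policies and incoming obligations, then intersect the tags. The policy titles, tags, and citation below are hypothetical, and a real RegTech feed is far richer than a tag set.

```python
# Toy taxonomy-based mapping from a new obligation to affected policies.
policies = {
    "POL-017 Data Retention":      {"tags": {"privacy", "records"}},
    "POL-042 Third-Party Conduct": {"tags": {"third-party", "ethics"}},
    "POL-063 Incident Reporting":  {"tags": {"cyber", "disclosure"}},
}

new_obligation = {
    "source": "EU DORA Art. 19 (illustrative citation)",
    "tags": {"cyber", "disclosure", "third-party"},
}

impacted = {name: sorted(meta["tags"] & new_obligation["tags"])
            for name, meta in policies.items()
            if meta["tags"] & new_obligation["tags"]}

for name, overlap in impacted.items():
    print(f"{name}: review for {', '.join(overlap)}")
```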

The convergence of RegTech with policy management is not optional. It is the only way organizations can remain agile in the face of regulatory velocity, while embedding integrity into their culture and operations.


From New York Workshops to the Global RegTech Summit

This week’s conversations and workshop set the stage for my role today at the Global RegTech Summit USA 2025 in New York, where I am moderating two panels.

  • In Stream B, we will explore RegTech and the Regulators: Striking the Balance Between Innovation and Risk, featuring voices from compliance leadership, investment management, and technology providers.
  • In Stream A, I’ll moderate Reg Change in the Financial Sector: Navigating the Evolving Regulatory Landscape, where we will dive into shifting compliance strategies, risk management frameworks, and how RegTech and AI are shaping the future.

The message I will carry into both discussions is the same: policy management is where regulatory change becomes real. Without effective policies — clear, current, and enforced — all the investment in regulatory intelligence and RegTech falls short.


Closing Reflections

Policy management is at the crossroads of governance and RegTech. It is where regulatory complexity meets organizational behavior. The organizations that succeed will be those that design policy governance as a strategic capability: federated across silos, automated with technology, and aligned to values and objectives.

In this era of constant change, policies are no longer static documents. They are living instruments of governance. And when managed by design, they empower organizations to achieve objectives, navigate uncertainty, and act with integrity.

Policy Management by Design: From Chaos to Culture

Policies are more than documents on a shelf. They are the DNA of organizational integrity, the framework that defines culture, directs behavior, and provides accountability in times of scrutiny. When done well, policies guide decisions, reduce liability, and build trust across the enterprise. When they are fragmented, inconsistent, or outdated, they create exposure rather than protection. 

Unfortunately, many organizations still operate in that fragmented state. Policies live across file shares, emails, intranet sites, and even printed binders. Multiple versions circulate at the same time, and employees are never quite sure which is the right one. New policies are sometimes authored without legal review, creating unintended liabilities. Attestations are tracked poorly, if at all, leaving leadership uncertain whether employees even know what standards apply. In this environment, policy management is not a back-office nuisance — it is a governance, risk, and compliance failure waiting to happen. 

This confusion undermines culture as well as compliance. Every policy is, at its heart, a risk document. It exists because a risk was identified and needed to be addressed. Policies . . .

[The rest of this blog can be read on the Comply blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]

Digital Risk and Resilience: Orchestrating for Digital Trust

Inevitability of Failure: the Digital Ecosystem of Business

Every organization today is defined by the digital fabric and architecture on which its operations rely. This fabric is sprawling, complex, and interdependent. The systems, processes, and relationships that sustain modern business are increasingly digital, and increasingly fragile. It reminds me of the U.S. National Security Agency (NSA) paper from the 1990s, The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments, which was foundational in my early career. The reality is that this is no longer just about the IT department, the data center, or even the historic CISO role. The digital architecture of the enterprise is now the architecture of the business itself.

We have seen in stark terms how this fabric can unravel . . .

  • CrowdStrike. In 2024, a CrowdStrike update spiraled into global disruption. This was not a hacker, virus, or worm — it was a trusted vendor’s software failure, rippling across industries and bringing down organizations worldwide.
  • U.K. Retail Attacks. Earlier this year, the United Kingdom retail giants Marks and Spencer, Harrods, and the Co-Op faced devastating cyberattacks and ransomware that crippled operations and shook customer trust.
  • Ascension Ransomware. In healthcare, Ascension Hospital’s ransomware crisis last year was a chilling reminder that digital failure does not just stop business; it can endanger lives.
  • Southwest Airlines Digital Meltdown. Southwest Airlines’ holiday meltdown was driven by outdated crew scheduling and IT systems that failed to track and reassign staff during winter storms, turning a weather disruption into a full-scale operational collapse.

Each of these events underscores a reality we can no longer ignore: digital risk is systemic, enterprise-wide, and existential.

What makes digital risk so challenging is not just the sophistication of threats but the convergence of multiple risk factors. Human error continues to cause outages and breaches through simple missteps. Malicious behavior — whether from insiders or external adversaries — adapts constantly. The relentless pace of change across infrastructure, applications, and cloud transformation adds new exposures by the day. And perhaps most precariously, organizations now operate in vast digital supply chains where one weak link can send shockwaves across thousands of entities. In practice, disruption often emerges from a combination of these elements, such as:

  • A misconfiguration in a cloud environment paired with a rushed change window.
  • A ransomware attack on a supplier that cascades into dependent operations.
  • An insider error or action that intersects with a system update or third-party service.

This intricate web means digital risk management cannot be siloed into compliance checklists or narrowly scoped security controls. It must be orchestrated, decision-driven, and tied directly to business objectives.

Rearchitecting to Digital Risk & Resilience for Digital Trust

Too many organizations still treat digital risk as a matter of regulatory compliance or a set of prescribed controls. But compliance alone is not resilience, and certainly is not risk management. Controls alone cannot deliver digital trust. True resilience begins with clarity of objectives — understanding what the business is trying to achieve and how digital capabilities support those goals.

From there, organizations must build foresight into their approach: anticipating disruption, simulating scenarios, and preparing adaptive responses. And it requires integration — weaving governance, risk management, and compliance into the very design of digital business operations rather than layering them on afterward. This is digital risk and resilience management to deliver digital trust.

The digital supply chain highlights why this is so urgent. Organizations depend on ecosystems of cloud providers, SaaS vendors, outsourcers, and digital partners. These relationships provide value but also amplify fragility. A single failed software update, as with CrowdStrike, can cause cascading outages. A ransomware-hit partner can expose data far beyond their own network. Even a brief supplier outage can paralyze entire business units. Managing this requires more than vendor scorecards or compliance attestations. It requires the ability to map dependencies, monitor signals, simulate breakdowns, and design resilience into interconnected digital ecosystems.

GRC 7.0 – GRC Orchestration of Digital Trust

This is where the future of GRC comes into play. GRC 7.0 — GRC Orchestrate provides the architecture to meet this challenge (as long as strategy and process are in place). It is not about defense alone but about foresight and trust. This does not eliminate risk to objectives but enables resilience so they can be achieved:

  • Agentic AI. With agentic AI, organizations can sense risk in real time, analyze context, and support decision-making at scale.
  • Digital Twins. With digital twins, they can model supply chains, business processes, and systems, simulate disruptions, and evaluate recovery strategies before crises strike.
  • Orchestration. With orchestration, resilience becomes embedded into governance, objectives, performance, and compliance, ensuring trust is designed into digital operations rather than left to chance.

The organizations that will thrive are those that embed resilience into their DNA. This is not a technical initiative but a business imperative. Digital trust is earned not through slogans but through deliberate strategy, careful design, and continuous execution.

On October 1st in London, I will be leading the Digital Risk & Resilience Management by Design workshop — a full-day session delivering a blueprint for building agile, integrated, and context-aware digital resilience programs. We will explore how to align digital risk with enterprise objectives, shift from reactive continuity to proactive resilience, and use emerging technologies like agentic AI and digital twins to orchestrate trust across complex ecosystems.

Digital risk is the business risk of our time. The question is no longer whether disruption will occur, but how ready your organization will be to anticipate, absorb, and adapt. The future belongs to those who design resilience into their digital architecture — and orchestrate digital trust.


Why GRC is NOW or Never For Aspirational Organizations

There comes a point in every organization’s journey when it must choose whether it is going to lead or follow — whether it will proactively shape its future or continually react to disruption.

For organizations with ambition — those seeking to scale responsibly, innovate with confidence, and uphold their commitments to stakeholders — that moment is now. Governance, Risk Management, and Compliance (GRC) has become the fulcrum on which that decision rests. 

The GRC conversation is no longer about avoiding penalties or surviving audits. It is about enabling the organization to reliably achieve objectives (governance), address uncertainty (risk management), and act with integrity (compliance). This is not a compliance slogan; it is the operational imperative of our time. And for aspirational organizations, it is a now-or-never decision. The complexity, speed, and interconnectedness of today’s risk and regulatory environment will not wait, and those who hesitate risk losing both control and credibility.

Risk Is Moving Faster Than You Can Track with Spreadsheets 

The pace of risk has changed. Yesterday’s risk landscape was linear and episodic; today’s is complex, systemic, and real-time. The very nature of risk has evolved from being internal and controllable to external, interconnected, and constantly shifting. And nowhere is this more evident than in . . .

[The rest of this blog can be read on the GRCxperts blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]

GPRC for Third-Party and Supply Chain Risk Management

Command and Control on the Bridge of the Enterprise with GRC 7.0 – GRC Orchestrate

“Captain, sensors are detecting increased fluctuations in the warp field. I recommend we adjust our alignment.” — Commander Spock

In the expansive landscape of modern business, the ability to manage risk and performance across an extended enterprise of third parties and suppliers is not simply important, it is mission-critical. Just as the bridge of the USS Enterprise coordinates navigation, operations, security, and engineering to sustain its mission, organizations today require a unified command center to orchestrate third-party governance, risk management, and compliance (GRC) with performance added in (GPRC).

In this first article of our series exploring G[P]RC, we examine how organizations must move beyond fragmented checklists, static workflows, and reactive monitoring. Instead, the new paradigm — powered by GRC 7.0 – GRC Orchestrate — emphasizes enterprise architecture, business process modeling, digital twins, agentic AI, analytics, and intelligent systems that align governance and performance with proactive risk management and compliance.

Because the extended enterprise is no longer simply managed—it must be orchestrated.

The Legacy Problem: Navigating Without Sensors

Traditional third-party and supply chain risk management often looks like a . . .

[The rest of this blog can be read on the Corporater blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]

GRC Engineering: From After-the-Fact Verification to Engineered Assurance

Featuring my collected insights combined with thoughts from the most recent Risk Is Our Business Podcast with Ayoub Fandi, Security Assurance Automation Team Lead at GitLab and founder of the GRC Engineer Podcast & Newsletter

In the most recent transmission of the Risk Is Our Business Podcast, I beam aboard Ayoub Fandi — Security Assurance Automation Team Lead at GitLab and the founder of the GRC Engineer Podcast and newsletter — to explore the next frontier for governance, risk management, and compliance: GRC engineering and how it relates to GRC 7.0 – GRC Orchestrate. Our conversation ranged from first principles to hard-won lessons in automation and architecture, and from the current cyber-heavy use of the term to a broader, enterprise-wide discipline that touches objectives, risk, integrity, and assurance across the business.

Ayoub’s professional arc mirrors the transformation underway in the field. He moved from Big Four consulting in France to roles in high-growth technology environments at Salesforce and GitLab, where the cadence of change is measured not in quarters but in deployments per hour. That pace renders traditional GRC practices — annual control checks, screenshots, manual evidence packs, after-the-fact testing — increasingly unfit for purpose. As Ayoub put it, the gap between how fast the business operates and how slowly GRC verifies has become untenable. The solution, he argues, is not more checklists; it’s a structural shift: treat risk, compliance, and assurance as engineered capabilities built directly into systems, processes, and workflows.

Ayoub Fandi: “Some companies push a hundred thousand deployments a year. You can’t meet that speed with yearly tests and screenshots. GRC has to move earlier into design and become machine-readable in how we test, monitor, and gather evidence.”

What follows is a detailed exploration — narrative and pragmatic — of what GRC engineering is, why it matters, and how to make it real.


First Principles: Defining GRC Engineering against OCEG’s Core

We ground this in the OCEG definition of GRC as a capability to reliably achieve objectives (governance), address uncertainty (risk), and act with integrity (compliance). Against that backdrop, GRC engineering is not a new “flavor” of risk, compliance, or control; it is the technical discipline that embeds those principles into the fabric of the organization.

Here’s the definition we refined together:

Michael Rasmussen (validated by Ayoub): “GRC engineering is the discipline of embedding governance, risk management, and compliance into the technical fabric of the organization through systems architecture, automation, and data engineering — so that GRC is not just a policy function, but an operationalized, engineered capability.”
Ayoub’s verdict: “9.999 out of 10 — the data engineering part is critical. Get the data wrong and everything else collapses.”

That last sentence is a constant refrain in our discussion. The most sophisticated automation fails without coherent data models, clean pipelines, and consistent semantics. In other words, data engineering is table stakes.


The Manifesto Mindset: Shift-Left, Treat GRC as a Product, and Be Practitioner-Led

Ayoub has articulated a simple, powerful set of principles in what he calls the GRC Engineer Manifesto. Three ideas stand out:

  • Shift Left: Move risk management and compliance considerations into the design phase so that GRC influences how systems are built, not just how they’re verified. Ayoub: “We want GRC present when the product manager makes trade-offs, not arriving at the end asking for screenshots.”
  • Treat GRC as a Product: Manage GRC iteratively, with a roadmap, telemetry, and a user experience orientation toward control owners and contributors. Reduce toil by sourcing data from native systems rather than forcing duplicate entry into GRC tools.
  • Practitioner-Led: Ensure those who live the problems shape the solutions. Partner with vendors, yes, but be clear-eyed about what good looks like, and build lightweight internal capabilities where necessary to bridge gaps.

Together, these principles convert GRC from a project (checklist, deadline, binder) into a product (ongoing capability, measured outcomes, engineered UX).


Architect vs. Engineer: Two Sides of the Capability

We also distinguished GRC architects from GRC engineers. The engineer writes the scripts, wires the webhooks, builds the workflows, and automates the evidence gathering. The architect designs the overarching decision and data architecture: how risk, control, obligation, and assurance flows traverse systems; where the source of truth sits; how to align GRC telemetry with business objectives and reporting.

Ayoub: “Software engineering skills may increasingly be commoditized with AI, but architecture endures — orchestrating systems, data, and stakeholders so the whole actually works.”

I shared an example of a Nordic telecom whose first GRC implementation faltered; their second succeeded only after they restarted with data models and enterprise architecture first. Ayoub agreed: many failures stem from starting with vendor feature lists instead of a clear picture of inputs, outputs, and flows (e.g., how a third-party risk assessment creates obligations, exceptions, and control tests downstream).


Beyond Cyber: Expanding the Scope to Enterprise GRC

Today, most visible “GRC engineering” examples sit inside digital and cyber programs — policy-as-code, cloud configuration monitoring, continuous compliance for SOC 2 / ISO 27001 / PCI / FedRAMP. That’s understandable; the technical acumen and tooling maturity in security are ahead of many business functions. But both of us argue that the same engineering principles must extend beyond IT to the full enterprise:

  • Performance & Objectives: connecting KPIs/KRIs/KCIs to objectives so that performance is always viewed with its uncertainty and control posture.
  • Enterprise & Operational Risk: scenario modeling, risk quantification, and control telemetry tied to processes, people, and assets.
  • Compliance & Ethics: obligation parsing and mapping, policy lifecycle automation, training triggers, and case management that integrates HR, Legal, and Compliance.
  • Internal Control & Audit: continuous controls testing, automated evidence pipelines, exception governance, and analytics-driven assurance.

Ayoub: “We started in tech because that’s where the engineers were, but the benefits are even greater as you move into functions with legacy process debt. The evangelism and some pre-built patterns just need to catch up.”


What GRC Engineering Looks Like in Practice (Across GRC 7.0 – GRC Orchestrate)

To make this concrete, here is a cross-section of engineered capabilities aligned to the GRC 7.0 – Orchestrate domains. Note how they move from workflow lists to data-centric, automation-ready architectures.

Strategy & Decision Management

  • Problem: Strategy reviews lack risk-adjusted intelligence.
  • Build: A decision architecture that links objectives to risk and control telemetry; simulation models surface trade-offs (“If we accelerate Region X, supply risk and ESG non-compliance increase by Y”).
  • Result: Decisions show the path to target and the cost of uncertainty.

Performance & Objective Management

  • Problem: KPIs are blind to risk and control efficacy.
  • Build: Data models that bind KPIs ↔ KRIs ↔ KCIs with lineage back to source systems (ERP, CRM, HRIS, cloud); a minimal data-model sketch follows this block.
  • Result: Performance dashboards that surface early warning signals and control degradation.
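As a minimal illustration of what such a binding might look like, here is a Python sketch; the indicator names, thresholds, and source systems are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    value: float
    threshold: float       # breach when the value exceeds this bound
    source_system: str     # lineage back to the system of record

    @property
    def breached(self) -> bool:
        return self.value > self.threshold

@dataclass
class Objective:
    name: str
    kpi: Indicator
    kris: list[Indicator] = field(default_factory=list)  # risk indicators
    kcis: list[Indicator] = field(default_factory=list)  # control indicators

    def early_warnings(self) -> list[str]:
        """Surface risk and control signals even while the KPI still looks healthy."""
        return [f"{i.name} breached (source: {i.source_system})"
                for i in self.kris + self.kcis if i.breached]

objective = Objective(
    name="On-time order fulfillment",
    kpi=Indicator("order_backlog_days", 1.5, 3.0, "ERP"),           # still healthy
    kris=[Indicator("supplier_late_shipments", 0.12, 0.10, "SCM")],
    kcis=[Indicator("failed_control_tests", 3, 2, "GRC platform")],
)
print(objective.early_warnings())  # two warnings before the KPI ever moves
```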

Enterprise & Operational Risk & Resilience

  • Problem: Paper scenarios and tabletop exercises don’t translate into action.
  • Build: Digital twins of critical processes and assets; stress-test scenarios (workforce, vendor, facility, cyber, regulatory); a minimal simulation sketch follows this block.
  • Result: Playbooks driven by telemetry, not static documents; alignment to DORA, CPS 230, UK Operational Resilience.
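Here is a deliberately small Python sketch of the digital-twin idea: a toy model of a fulfillment process whose daily throughput depends on which resources are up, used to compare a baseline against a hypothetical backup-vendor strategy. All capacities and outage probabilities are invented for illustration.

```python
import random

# Toy process twin: daily throughput depends on which resources are up.
CAPACITY = {"warehouse_a": 600, "warehouse_b": 400, "logistics_vendor": 1000}
OUTAGE_PROB = {"warehouse_a": 0.02, "warehouse_b": 0.02, "logistics_vendor": 0.05}
DEMAND_PER_DAY = 900

def service_level(days: int = 3650, seed: int = 42) -> float:
    """Fraction of simulated days on which demand was fully met."""
    rng = random.Random(seed)
    met = 0
    for _ in range(days):
        up = {r for r, p in OUTAGE_PROB.items() if rng.random() > p}
        # Picking capacity comes from whichever warehouses are up;
        # shipping requires the logistics vendor.
        pick = sum(CAPACITY[w] for w in ("warehouse_a", "warehouse_b") if w in up)
        ship = CAPACITY["logistics_vendor"] if "logistics_vendor" in up else 0
        if min(pick, ship) >= DEMAND_PER_DAY:
            met += 1
    return met / days

print(f"Baseline service level: {service_level():.1%}")
OUTAGE_PROB["logistics_vendor"] = 0.01  # assumed effect of a backup contract
print(f"With backup logistics:  {service_level():.1%}")
```

The value is not in the toy numbers but in the habit: evaluating a recovery strategy before the crisis, not during it.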

Digital Risk & Resilience

  • Problem: Cyber posture is siloed from business risk.
  • Build: Continuous configuration and vulnerability telemetry mapped to business services and obligations (NIST CSF, PCI, GDPR).
  • Result: Cyber metrics contextualized by business impact and regulatory exposure.

Compliance, Ethics & Obligation Management

  • Problem: Obligations live in PDFs and spreadsheets.
  • Build: Obligation parsing (human + AI), normalized into a graph that links to processes, policies, controls, owners, and evidence sources; a minimal graph sketch follows this block.
  • Result: Machine-actionable compliance with automated attestations and evidence collection.
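As a minimal sketch of such a graph in Python: typed nodes connected by relationship edges, queried by walking a chain of relations. The regulation, policy, and control identifiers are hypothetical placeholders.

```python
# Minimal obligation graph: (source, relationship, target) triples.
edges = [
    ("obligation:GDPR-32",        "mapped_to",      "policy:encryption-at-rest"),
    ("policy:encryption-at-rest", "implemented_by", "control:kms-enforced"),
    ("control:kms-enforced",      "owned_by",       "owner:platform-team"),
    ("control:kms-enforced",      "evidenced_by",   "source:cloud-config-api"),
]

def traverse(start: str, path: list[str]) -> list[str]:
    """Follow a chain of relationship types from a starting node."""
    frontier = [start]
    for relation in path:
        frontier = [dst for src, rel, dst in edges
                    if rel == relation and src in frontier]
    return frontier

# From one obligation, resolve the implementing control and its evidence source.
print(traverse("obligation:GDPR-32", ["mapped_to", "implemented_by"]))
print(traverse("obligation:GDPR-32",
               ["mapped_to", "implemented_by", "evidenced_by"]))
```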

Third-Party GRC

  • Problem: Onboarding is front-loaded; monitoring and offboarding are weak.
  • Build: End-to-end orchestration — intake → segmentation → KYC/AML/sanctions → ESG → contract → performance/risk telemetry → offboarding controls.
  • Result: Governance of the entire third-party lifecycle, not just initial risk scoring.

Policy & Training

  • Problem: Policies aren’t adopted or understood at the point of work.
  • Build: Version-controlled policies linked to obligations and roles; contextual policy guidance APIs and Q&A assistants embedded where employees work.
  • Result: Reduced policy-toil and higher adherence.

Internal Control Management

  • Problem: Point-in-time testing misses drift.
  • Build: Continuous control monitoring (CCM) via APIs, event streams, and rules engines; exception management with risk-based SLAs; a minimal rules sketch follows this block.
  • Result: Early detection, lower audit fatigue, clearer lines of accountability.
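To show the shape of a rules-engine approach, here is a minimal Python sketch: configuration records, as they might arrive from an API or event stream, are evaluated against rules, and each failure opens an exception with a risk-based remediation SLA. The field names and rules are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical configuration telemetry as it might arrive from an API.
records = [
    {"asset": "storage-finance",   "public": False, "encrypted": True},
    {"asset": "storage-marketing", "public": True,  "encrypted": True},
]

# Rules as data: a predicate that must hold, a finding, and a risk-based SLA.
RULES = [
    (lambda r: not r["public"], "publicly exposed storage", timedelta(days=1)),
    (lambda r: r["encrypted"],  "unencrypted storage",      timedelta(days=7)),
]

def evaluate(records: list[dict]) -> list[dict]:
    """Run every rule against every record; open an exception per failure."""
    now = datetime.now(timezone.utc)
    exceptions = []
    for record in records:
        for predicate, finding, sla in RULES:
            if not predicate(record):
                exceptions.append({"asset": record["asset"],
                                   "finding": finding,
                                   "due": (now + sla).isoformat()})
    return exceptions

for exc in evaluate(records):  # one exception: storage-marketing, exposed
    print(exc)
```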

Issue & Case Management

  • Problem: Fragmented hotlines and incident trackers.
  • Build: A unified case platform with routing, confidentiality tiers, evidence management, and disclosure workflows.
  • Result: Integrity becomes operationalized and reportable.

Audit & Assurance

  • Problem: Audits recreate the past instead of validating the present.
  • Build: Evidence pipelines and data lineage, enabling continuous auditing; risk-based sampling and automated test scripts; a minimal pipeline sketch follows this block.
  • Result: Assurance at the speed of change.
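Here is a minimal Python sketch of an evidence record with lineage and a tamper-evident hash chain; in a real pipeline the payload would be pulled from the system of record via API and the records persisted to immutable storage. The control ID and fields are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def capture_evidence(control_id: str, source_system: str, payload: dict,
                     prev_hash: str = "") -> dict:
    """Package raw telemetry as audit evidence with lineage and a hash chain."""
    body = {
        "control_id": control_id,
        "source_system": source_system,  # lineage: where the data came from
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,              # the raw observation itself
        "prev_hash": prev_hash,          # chains records so gaps are detectable
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

first = capture_evidence("AC-2", "identity-provider", {"dormant_accounts": 0})
second = capture_evidence("AC-2", "identity-provider", {"dormant_accounts": 2},
                          prev_hash=first["hash"])
print(second["prev_hash"] == first["hash"])  # True: the report traces to source
```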

ESG & Sustainability

  • Problem: CSRD/ESG data is manually wrangled and error-prone.
  • Build: Instrumentation and vendor data feeds (energy, scope data, workforce) normalized to reporting taxonomies with provable lineage.
  • Result: Timely, defensible disclosures tied to objectives and risk.

Integrated Reporting & Analytics

  • Problem: Reports are static and backward-looking.
  • Build: A GRC command center that unifies objectives, risks, controls, obligations, third-party and ESG metrics; layered with digital twins and agentic AI to surface weak signals and recommend actions.
  • Result: A living system of governance, not a stack of PDFs.

Agentic AI: Promise, Pragmatism, and the Data Imperative

Both of us see agentic AI as a transformative accelerant — but only when the data substrate is ready.

Ayoub: “A lot of ‘agentic’ workflows today are still step-by-step automations. Without coherent, consistent data, the agent will just go faster in the wrong direction. Fix the data, and even a modest tool delivers outsized value.”

The horizon he sketches is compelling: AI that becomes technology-agnostic, generating custom integrations and workflows to meet business objectives regardless of the underlying cloud or tool stack. In that world, engineering gives way to architecture as the enduring discipline — because the agent writes scripts, but humans still design the goals, constraints, and governance.


Pathways into GRC Engineering (From Both Sides of the Aisle)

One of the most practical sections of our conversation was Ayoub’s guidance on how to enter and grow in the discipline:

  • If you’re a GRC practitioner:
    • Learn the basics of Python or similar.
    • Pick a single, painful, repetitive task — e.g., quarterly evidence collection from a handful of systems — and automate it end-to-end (even with AI-assisted coding); a starter sketch follows this list.
    • Measure toil reduction and error rate improvements; socialize the win and repeat.
  • If you’re a software engineer:
    • Study GRC objectives and frameworks (OCEG, ISO 31000, internal control principles, sector regulations).
    • Shadow a control owner or an auditor for a sprint.
    • Apply your skills to build reliable evidence pipelines, clean data models, and simple but robust automations that survive audits.
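As a concrete version of that first automation, here is a minimal Python sketch of a quarterly access-review evidence collector. The fetch functions are stubs standing in for real HRIS and IAM API calls; everything here is illustrative rather than a reference implementation.

```python
import csv
from datetime import date

# Stubs standing in for real system APIs (HRIS, identity provider, etc.).
def fetch_hris_terminations() -> list[dict]:
    return [{"user": "jdoe", "terminated": "2025-08-30"}]

def fetch_iam_active_users() -> list[dict]:
    return [{"user": "jdoe"}, {"user": "asmith"}]

def quarterly_access_review(outfile: str = "evidence_q3.csv") -> int:
    """Flag terminated employees with active accounts, saving timestamped
    evidence instead of hand-built screenshots."""
    terminated = {row["user"] for row in fetch_hris_terminations()}
    findings = [u["user"] for u in fetch_iam_active_users()
                if u["user"] in terminated]
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["collected_on", "user", "finding"])
        for user in findings:
            writer.writerow([date.today().isoformat(), user,
                             "active account after termination"])
    return len(findings)

print(f"Findings: {quarterly_access_review()}")  # Findings: 1
```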

For ongoing learning, Ayoub points to the GRC Engineer Manifesto and his newsletter and podcast — where he features practitioners from Netflix, Zoom, IKEA, and beyond. The pattern across episodes is the same: start where the data already lives, automate one real bottleneck, and focus on fit-for-purpose outcomes rather than flashy demos.


From Workflow to Architecture: The Operating Model Changes

A recurring theme in our exchange is that GRC engineering is not merely “doing workflows in a tool.” It is adopting an architectural operating model:

  • From forms to pipelines: Inputs flow from source systems; validations and transformations are explicit.
  • From controls to telemetry: Tests run continuously; drift is detected early.
  • From evidence packs to lineage: Data is traceable from report back to system of record.
  • From one-off projects to product roadmaps: Backlogs, usage metrics, SLAs, and success criteria exist.
  • From isolated teams to orchestration: Risk, compliance, audit, security, and the business share a common data model and glossary.

This is precisely where GRC architects and engineers collaborate: decide what must be true in the data and the flows, then implement it with the right blend of vendor capabilities and custom glue.


Why Now: Regulation, Complexity, and Tooling Maturity

The timing is not accidental. Three forces converge:

  • Regulatory pressure (e.g., UK Corporate Governance Code Provision 29, EU DORA, CSRD, NIS2) demands not just policies but evidence of effectiveness and ongoing assurance.
  • Business complexity — global supply chains, hybrid work, digitized operations — creates a volume and velocity of change that manual GRC cannot handle.
  • Technology maturity — APIs everywhere, event streams, cloud data platforms, rules engines, LLMs, and early digital twin practices — makes engineering the practical path to sustainable GRC.

Making It Real: A Practical Starter Blueprint

If you’re ready to move from concept to capability, here’s a pragmatic starter plan that works in organizations large and small:

  1. Choose one value stream (e.g., third-party onboarding, change management, or financial close).
  2. Map the GRC flows: objectives → risks → obligations/policies → controls → telemetry/evidence → exceptions → attestations → reporting.
  3. Define the minimum data model (entities, relationships, owners, sources of truth, lineage requirements).
  4. Automate one control test end-to-end (trigger → gather → evaluate → log → notify → escalate); a skeleton of this flow follows the list.
  5. Stand up a tiny “command center” view for that stream — objectives, risk indicators, control status, exceptions — in a single page.
  6. Measure toil removed and assurance gained; capture lessons; expand by one adjacent control or obligation each sprint.
  7. Institutionalize the operating model: backlog, product ownership, SLAs, data standards, change management, and documentation that auditors can love.
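Here is a minimal Python skeleton of the flow in step 4, under stated assumptions: the gather stub stands in for a real API call, and notify/escalate would post to chat or ticketing systems in practice.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def gather() -> dict:
    """Stub for pulling telemetry from the system of record."""
    return {"control": "change-approval", "unapproved_changes": 2}

def evaluate(observation: dict) -> bool:
    return observation["unapproved_changes"] == 0

def notify(message: str) -> None:
    logging.warning("notify owner: %s", message)    # e.g., a chat message

def escalate(message: str) -> None:
    logging.error("escalate: %s", message)          # e.g., ticket to risk committee

def run_control_test() -> None:
    """One pass of trigger → gather → evaluate → log → notify → escalate."""
    observation = gather()                          # gather telemetry
    passed = evaluate(observation)                  # evaluate the rule
    logging.info("%s at %s: passed=%s", observation["control"],
                 datetime.now(timezone.utc).isoformat(), passed)  # log outcome
    if not passed:
        msg = f"{observation['unapproved_changes']} unapproved changes detected"
        notify(msg)                                 # notify the control owner
        if observation["unapproved_changes"] > 1:   # risk-based threshold
            escalate(msg)                           # escalate higher severity

run_control_test()  # the "trigger" would be a schedule or event in production
```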

The Road Ahead: GRC 7.0 – Orchestrate

We closed by situating GRC engineering inside the broader evolution I call GRC 7.0 – GRC Orchestrate. This next era blends agentic AI with digital twins and business-integrated architectures so organizations can reliably achieve objectives, address uncertainty, and act with integrity — continuously, and at scale. GRC engineering is how we get there: by making assurance native to the way the enterprise plans, builds, buys, changes, and learns.

Ayoub: “Fix the data, build the flows, and the rest follows. Start small, automate what hurts, and keep the human judgment where it matters.”

Risk isn’t the enemy; it’s the mission. GRC engineering gives us the instrumentation, the telemetry, and the control surfaces to navigate that mission with speed and integrity — not just in cyber, but across the entire enterprise. If you want to dive deeper into practitioner stories and the manifesto, check out Ayoub’s GRC Engineer newsletter and podcast — and expect to hear more from both of us as this discipline matures from pockets of automation into a coherent, engineered operating model for GRC.