A View Earned Over Time

I do not come to this perspective lightly, nor is it driven by the latest technology trend or marketing cycle. I have been immersed in GRC technology for more than twenty-six years. I defined the GRC acronym in 2002. I authored the first Forrester GRC Waves when the market was still trying to understand itself. Since then, I have continuously tracked, analyzed, briefed, and advised on this space across industries, geographies, and regulatory regimes.

That longitudinal view matters. When you watch a market evolve over decades rather than quarters, patterns become unmistakable. You see which design decisions age well and which quietly become liabilities. You see which vendors innovate from conviction and which survive by layering complexity on top of aging assumptions, or by weaving marketing fictions that are not (or not yet) reality. You also learn that technology markets do not fail loudly at first: they fail slowly, then suddenly.

The GRC market is now approaching that kind of inflection point.

What concerns me is not that legacy platforms are imperfect. No platform ever is. What concerns me is that many of the architectural foundations underpinning today’s dominant GRC solutions were designed for a world that no longer exists and, more importantly, cannot stretch far enough to support the world we are rapidly entering.

How We Got Here: The Long Shadow of Early GRC Architecture

Many GRC platforms in use today trace their lineage back ten, fifteen, even twenty years or more. They were conceived in an era when the primary problem organizations were trying to solve was visibility and documentation. Regulators wanted proof. Boards wanted reports. Audit committees wanted assurance artifacts. The dominant paradigm was periodic, retrospective, and largely siloed.

Those early architectures reflected that reality. Data models were designed around assessments, issues, controls, and documents. Workflows were linear and role-based. Risk, compliance, audit, and policy management were treated as adjacent — but fundamentally separate — domains.

Over time, pressures mounted. Regulations multiplied. Risk categories expanded. Third-party ecosystems exploded. Cyber risk grew into an existential threat. Vendors responded the only way most enterprise software vendors know how: they added.

They added modules. They added configuration layers. They added analytics engines. They added integrations. When organic innovation slowed, they acquired competitors and complementary tools. Each step made sense in isolation. Collectively, they created platforms that look powerful on the surface but are increasingly brittle and cumbersome under the hood. They refreshed the user experience, but underneath, the architecture grew archaic.

Architecture, unlike marketing, has memory. Every bolt-on capability inherits the constraints of the foundation beneath it. Eventually, those constraints begin to define what is no longer possible.

Why This Moment Is Fundamentally Different

The GRC market has lived through many cycles of change: Sarbanes-Oxley, the financial crisis, GDPR, operational resilience, ESG. Each wave increased complexity, but none fundamentally changed the nature of the system itself.

  • AI does.
  • Agentic AI does.
  • Digital twins do.

These are not new features to be slotted into an existing roadmap. They represent a shift from systems that record and report to systems that sense, reason, and act. That shift exposes architectural weaknesses that were previously manageable, even invisible.

In my conversations with vendors, I increasingly hear phrases like “AI-powered,” “embedded intelligence,” and “next-generation analytics.” In many cases, what that actually means is that an AI capability sits adjacent to the core platform, drawing from exported data, operating under tight constraints, and returning insights that must still be interpreted and acted upon by humans.

That is assistive technology. It is not transformational technology.

The future of GRC is not about helping humans work faster inside broken models. It is about enabling organizations to operate as adaptive systems under continuous uncertainty.

GRC 7.0 and the Emergence of Homeostatic GRC

When I describe GRC 7.0 – GRC Orchestrate, I am describing a fundamental reframing of what GRC is meant to do. At its core, GRC has always been about three things: achieving objectives, addressing uncertainty, and acting with integrity. What has been missing is the ability to do this continuously, dynamically, and at scale.

This is where the concept of homeostasis becomes essential.

In biology, homeostasis refers to the ability of a living organism to maintain internal stability while external conditions change. This is not achieved through constant conscious oversight. It is achieved through deeply integrated systems of sensors, controls, and effectors that operate automatically, continuously, and proportionally.

Most organizations today operate their GRC programs like a patient in intensive care: monitored constantly, intervened upon manually, and perpetually one incident away from escalation. This is inefficient, exhausting, and ultimately unsustainable.

A homeostatic GRC system (built on GRC 7.0 – GRC Orchestrate) is different. It is self-aware. It detects weak signals before they become failures. It adjusts behavior within defined tolerances. It escalates only when necessary. Most importantly, it frees leadership to focus on strategic objectives rather than perpetual fire-fighting.

This is not a cultural aspiration alone. It is an architectural requirement.

Why Digital Twins Change Everything — and Why Star Trek: Strange New Worlds Gets It Right

I want to be explicit about the illustration I am using here, because it matters. This is not a vague science‑fiction reference. It is a precise example of systems thinking applied under extreme uncertainty.

In Star Trek: Strange New Worlds, Season 3, Captain Batel is infected by a Gorn parasite. This is not a routine medical problem. Standard diagnostic models fail. Traditional treatment protocols are ineffective. Time is a hard constraint. The situation is complex, non‑linear, and existential.

At this point, the medical team — anchored by Nurse Chapel and Spock — does something fundamentally different. They turn to an advanced AI system that constructs a digital twin of Captain Batel. This twin is not a static replica. It is a living, adaptive simulation of her physiological state, capable of modeling millions of potential interventions across biological, chemical, and environmental dimensions.

The digital twin becomes the locus of decision‑making.

Spock and Chapel do not simply ask the system questions. They work with it. They iterate through scenarios. They test interventions that would be impossible — or unethical — to test directly on a human. The system evaluates outcomes, refines probabilities, and narrows the solution space. Most importantly, it does this at machine speed, far beyond human cognitive limits.

The digital twin is not just a model. It is:

  • A sensing mechanism, continuously incorporating new data
  • A reasoning engine, evaluating trade‑offs and constraints
  • An orchestration layer, coordinating potential actions
  • An ethical compass, helping determine what should be done, not just what could be done

This distinction is critical.

From Simulation to Action: The Role of Agentic AI

What makes this Strange New Worlds episode such a powerful metaphor for the future of GRC is that the digital twin does not exist in isolation. It is paired with intelligence that can act, not just analyze.

This is where agentic AI enters the picture.

Agentic AI is not simply predictive analytics or generative text. It is goal‑driven intelligence that can:

  • Monitor conditions continuously
  • Reason about objectives, constraints, and risk appetite
  • Propose and sequence actions
  • Execute within defined authorities
  • Learn from outcomes and adjust behavior
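
To make that loop concrete, here is a minimal sketch of one agentic cycle. It is purely illustrative: the signal shape, the playbook, and the $50,000 authority limit are invented for this example, not any vendor's actual API. The point is the pattern of reasoning within a risk appetite and executing only within delegated authority.

```python
# Illustrative sketch only: one agentic cycle with bounded authority.
# The playbook, signal fields, and authority limit are invented.

AUTHORITY_LIMIT = 50_000  # max remediation cost the agent may commit on its own

def agent_cycle(signal, playbook):
    """One pass of monitor -> reason -> propose -> execute-or-escalate."""
    options = playbook.get(signal["type"], [])           # reason over known responses
    viable = [o for o in options if o["residual_risk"] <= signal["appetite"]]
    if not viable:
        return {"decision": "escalate", "reason": "no option within risk appetite"}
    best = min(viable, key=lambda o: o["cost"])          # propose cheapest adequate fix
    if best["cost"] <= AUTHORITY_LIMIT:
        return {"decision": "execute", "action": best["name"]}
    return {"decision": "escalate", "action": best["name"],
            "reason": "cost exceeds delegated authority"}

playbook = {"vendor-breach": [
    {"name": "rotate-credentials", "cost": 2_000, "residual_risk": 0.2},
    {"name": "terminate-contract", "cost": 120_000, "residual_risk": 0.05},
]}
print(agent_cycle({"type": "vendor-breach", "appetite": 0.25}, playbook))
# {'decision': 'execute', 'action': 'rotate-credentials'}
```

A real agent would also record the outcome and adjust the playbook over time; the learning step is omitted here for brevity.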

In the episode, the AI system does not merely present Spock with a report. It actively participates in the diagnostic and treatment process. It compresses decision cycles. It orchestrates complexity. It enables human experts to operate at a higher level of judgment rather than drowning in data.

This is exactly what GRC must become, and what it will have become by 2030.

Digital Twins as the Foundation of Homeostatic GRC

Homeostasis depends on three things: sensing, control, and effectors. In biological systems, these functions are deeply integrated. They do not operate in silos. They do not wait for quarterly reviews. They do not require constant executive oversight.

In GRC 7.0 – GRC Orchestrate, the digital twin of the enterprise becomes the core mechanism that enables this integration.

A true GRC digital twin models:

  • Strategic options and decisions
  • Enterprise objectives and performance
  • Business processes and assets
  • Risks, uncertainties, and dependencies
  • Controls, obligations, and tolerances
  • Third‑party ecosystems and external signals
  • Cultural and behavioral drivers

But modeling alone is insufficient.

Without agentic AI, a digital twin is a sophisticated dashboard. With agentic AI, it becomes a homeostatic system.

Agentic AI continuously senses deviations from tolerance, evaluates impact across interconnected domains, and initiates corrective action: automatically where appropriate, escalated where necessary. This is not about removing humans from the loop; it is about removing humans from tasks that do not require conscious oversight and action.

Just as the human body regulates temperature or glucose without executive intervention, a homeostatic GRC system regulates risk exposure, compliance posture, and resilience dynamically.
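
The body analogy can be sketched in a few lines. The tolerance band, gain, and thresholds below are invented for illustration; what matters is only the shape of the loop: no action inside tolerance, proportional correction for small deviations, escalation when the deviation exceeds delegated authority.

```python
# Hypothetical sketch of a homeostatic control loop for one risk metric.
# The target, tolerance band, and gain are illustrative assumptions.

def regulate(reading, target=0.30, tolerance=0.05, gain=0.5):
    """Return (action, adjustment) for one sensed risk-exposure reading."""
    deviation = reading - target
    if abs(deviation) <= tolerance:
        return ("steady", 0.0)                        # homeostasis: no intervention
    if abs(deviation) <= 3 * tolerance:
        return ("auto-correct", round(-gain * deviation, 4))  # proportional response
    return ("escalate", 0.0)                          # beyond authority: human decides

print(regulate(0.32))  # ('steady', 0.0)
print(regulate(0.42))  # ('auto-correct', -0.06)
print(regulate(0.60))  # ('escalate', 0.0)
```

Just as the body does not consult the brain for every heartbeat, the loop consumes no human attention until the deviation is genuinely exceptional.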

Why Legacy GRC Architectures Cannot Support This Vision

This is where the architectural fault lines become impossible to ignore.

Digital twins and agentic AI require a unified, semantically rich understanding of the enterprise. They require continuous data flows, consistent taxonomies, and explicit representation of objectives, constraints, and cause‑and‑effect relationships.

Most legacy GRC platforms simply do not have this.

What I see instead are fragmented representations of reality: risk in one model, compliance in another, third‑party data somewhere else entirely. Performance and objectives, if they exist in the system at all (and in most they do not), are loosely connected at best. These architectures were never designed to support living simulations or autonomous orchestration.

As a result, AI initiatives in these platforms are constrained to the edges. They summarize documents. They answer questions. They accelerate existing workflows. They do not run the system.

By 2030, this distinction will define survival or obsolescence of GRC platforms.

Here is the uncomfortable truth: most GRC platforms today cannot support true digital twins or agentic AI, not because vendors lack talent or intent, but because the core architecture was never designed for this purpose.

When I evaluate platforms, I consistently see fragmented representations of reality. Risk lives in one model. Compliance obligations live in another. Third parties live in yet another. Performance, objectives, and outcomes are often afterthoughts, if they exist at all.

Agentic AI requires context. Digital twins require coherence. You cannot simulate what you do not truly understand.

Bolting AI onto fragmented architectures results in narrow, brittle use cases. It produces insights that describe the past rather than shape the future. It creates the illusion of intelligence without delivering autonomy or orchestration.

By 2030, that gap will be fatal.

The Hidden Cost of Growth by Acquisition

Market consolidation has accelerated these problems. Acquisitions create breadth quickly, but they also import architectural debt, and even competing architectures within a single solution provider. Each acquired product brings its own code base, assumptions, data structures, and logic. Integration layers mask inconsistency, but they do not eliminate it.

Over time, innovation slows. Changes become risky. AI initiatives stall because data cannot be reliably correlated or reasoned over.

From the outside, these platforms look comprehensive. From the inside, they struggle to evolve.

This is not a criticism of individual vendors . . . it is a structural reality.

What Re-Architecting Really Means

Re-architecting is not modernization theater. It is not cloud migration. It is not refactoring.

It means rebuilding from first principles around:

  • A unified enterprise data model that connects objectives, performance, risk, compliance, assets, processes, and third parties
  • Event-driven architectures that support continuous sensing and response
  • Intelligence as a native service, not an add-on
  • Orchestration of humans, systems, and agents rather than rigid workflows

This is what makes homeostatic GRC with GRC 7.0 – GRC Orchestrate possible.
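
Of these principles, the event-driven one is the easiest to sketch. The snippet below is a toy with invented event types and handlers; it shows only the shape of continuous sensing and response: signals dispatched to handlers the moment they arrive, rather than batched into periodic reviews.

```python
# Illustrative only: an event-driven core, where signals are dispatched to
# handlers as they arrive. Event types and handlers are invented.

from collections import defaultdict

handlers = defaultdict(list)

def on(event_type):
    """Register a handler function for an event type."""
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

def emit(event_type, payload):
    """Dispatch an event to every registered handler immediately."""
    return [fn(payload) for fn in handlers[event_type]]

@on("control-drift")
def reassess(payload):
    return f"re-test control {payload['control']}"

@on("control-drift")
def notify(payload):
    return f"notify owner of {payload['control']}"

print(emit("control-drift", {"control": "access-review"}))
# ['re-test control access-review', 'notify owner of access-review']
```

In a rebuilt platform, those handlers would be humans, systems, and agents orchestrated together; the essential change is that nothing waits for a quarterly cycle.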

A Direct Message to Buyers

If you are writing an RFP today, understand this: you are not just selecting a tool. You are selecting an architectural future.

Many platforms will meet today’s requirements. Far fewer will meet tomorrow’s.

Ask questions that go beyond features. Ask about data models. Ask about architectural coherence. Ask how AI actually operates inside the system. Ask how digital twins are supported, not theoretically, but practically.

The vendors that feel safest today may be the most constrained tomorrow.

A Call to Action: From Workflow-Centric GRC to Data-, Knowledge-, and Reasoning-Centric GRC

After nearly three decades of living inside this market, what concerns me most is not that many GRC platforms are aging. It is that much of the market is still asking the wrong foundational question.

The question is not whether workflows are configurable enough, dashboards are modern enough, or AI features are impressive enough. The question is whether the data, knowledge, and reasoning architecture beneath the platform is capable of supporting homeostatic control at enterprise scale.

Most GRC platforms were built as systems of record and systems of workflow. They excel at documenting decisions after the fact, coordinating tasks, and producing reports that satisfy auditors and regulators. That architecture made sense when GRC was driven by compliance proof and oversight. Note that this is not how GRC has been defined for more than twenty years, but it is how many have implemented it.

The world organizations now operate in is no longer episodic. Risk is continuous. Compliance is dynamic. Resilience is tested in real time. And leadership is increasingly accountable not for whether controls existed, but whether they worked when it mattered.

This is where database architecture — and the philosophy behind it — becomes decisive.

Relational databases, which form the foundation of the majority of GRC platforms on the market, are optimized for transactions, forms, and records, and they struggle to represent context, causality, and complex interdependencies. They are excellent at answering questions like “What was assessed?” and “Who approved this?” They are far less capable of answering “Why did this risk emerge?”, “How does it propagate across the enterprise?”, or “What action will most effectively change the outcome?”

By contrast, architectures built on ontologies, knowledge graphs, and reasoning layers are explicitly designed to model relationships, inheritance, dependency, and cause-and-effect. They allow controls to be represented not as documents or checklist items, but as measurable states that can be continuously validated. They allow obligations to propagate downward into controls and evidence, and assurance to propagate upward into confidence and trust.
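
A toy example shows the difference in shape. The nodes and edges below are a hypothetical fragment, not a real ontology: an obligation is satisfied by controls, controls by continuously measured evidence, and assurance is simply a traversal of those relationships rather than a document on a shelf.

```python
# Toy knowledge-graph fragment, illustrative only: obligations propagate
# down into controls and evidence; assurance propagates back up.

edges = {  # parent -> children ("is satisfied by")
    "GDPR Art.32": ["encryption-at-rest", "access-review"],
    "encryption-at-rest": ["kms-config-check"],
    "access-review": ["quarterly-review-log"],
}
state = {  # leaf evidence: continuously measured, True = passing
    "kms-config-check": True,
    "quarterly-review-log": False,
}

def assured(node):
    """A node is assured only if every child beneath it is assured."""
    children = edges.get(node)
    if children is None:
        return state[node]          # leaf: live evidence
    return all(assured(c) for c in children)

print(assured("encryption-at-rest"))  # True
print(assured("GDPR Art.32"))         # False: the review log is failing
```

The same traversal, run the other way, answers the causal question: which failing piece of evidence is eroding confidence in which obligation.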

This distinction is not academic. It determines whether AI can simply summarize what has already happened—or whether it can reason about what should happen next and act accordingly.

Agentic AI fundamentally changes the role of the platform. When intelligence is embedded at the core — working directly against a coherent data and knowledge model — it can continuously sense deviations, reason about objectives and tolerances, simulate outcomes through digital twins, and execute corrective actions within defined authority. That is homeostasis in practice.

When AI is bolted on — operating outside the core data model, constrained by integrations, and limited to correlation — it can assist humans, but it cannot run the system. It cannot orchestrate complexity. And it cannot scale to the velocity of modern risk.

Looking out to 2030, this is where long-term viability will be decided.

Legacy providers will argue — correctly — that customers value stability, that re-architecting is disruptive, and that AI should remain advisory. Those arguments hold in the short term. But they collapse when risk signals become continuous, regulatory expectations shift toward anticipation, and boards demand decision-grade explanations rather than retrospective dashboards.

Stability that cannot adapt becomes fragility. Oversight that cannot operate at machine speed becomes theater. And GRC platforms that cannot internalize intelligence become dependent on external systems they do not control.

For buyers, this is the moment to rethink how RFPs are written. The most important questions are no longer about modules and workflows, but about:

  • How data is modeled and normalized
  • Whether the platform can explain causality, not just correlation
  • Whether controls are continuously measurable and auditable by design
  • Whether digital twins and agentic reasoning are native capabilities or future aspirations

For solution providers, this is a moment of strategic honesty. Incremental modernization will not close the gap. Rebranding AI will not change architectural reality. The platforms that endure will be those willing to rebuild around GRC data engineering, knowledge representation, and reasoning . . . placing orchestration, not workflow, at the center.

This is not about replacing existing GRC systems overnight. It is about recognizing that by 2030, GRC will be an intelligent, adaptive, and self-correcting system — a true command center for decisions, objectives, uncertainty, and integrity.

Those who embrace this shift now will shape the next generation of the GRC market. Those who delay will find themselves managing yesterday’s risks with yesterday’s tools.

A Philosophical Close: GRC, Entropy, and the Fight for Organizational Integrity

At its deepest level, this conversation is not really about technology. It is about entropy.

In physics and biology, entropy is the natural tendency of systems to drift toward disorder. Living systems survive not by resisting change, but by continuously counteracting entropy through structure, feedback, and adaptation. Left unattended, even the most sophisticated organism degrades.

Organizations are no different.

Risk accumulates quietly. Controls decay. Incentives drift. Complexity compounds. What once worked begins to fail; not catastrophically at first, but subtly. A missed signal here. A delayed response there. Over time, integrity erodes, resilience weakens, and leaders find themselves managing crises they no longer understand.

Traditional GRC approaches attempt to fight entropy through oversight, documentation, and periodic intervention. This is like asking the brain to consciously regulate every heartbeat. It does not scale, and it was never meant to.

Homeostatic GRC represents a different philosophy. It acknowledges uncertainty as a permanent condition. It assumes complexity as a given. And it designs systems that can sense deviation, evaluate impact, and correct course continuously; without exhausting human attention or organizational capacity.

This is why digital twins and agentic AI are not optional enhancements. They are the mechanisms by which modern organizations can maintain coherence in the face of relentless change. They allow enterprises to model reality as it is, not as last quarter’s report described it. They enable decisions to be tested before they are executed. And they ensure that action aligns with objectives, risk appetite, and ethical boundaries.

GRC 7.0 – GRC Orchestrate is ultimately about managing uncertainty and preserving integrity at scale. Not as a slogan, but as an operating condition: where governance guides purpose, risk management manages uncertainty, and compliance reinforces trust, all in dynamic balance.

The organizations that thrive in the coming decade will not be those with the most controls, the most reports, or the most dashboards. They will be those that have built living systems, capable of learning, adapting, and acting with precision under pressure.

That is the future of GRC.

And like all living systems, it must be designed from the inside out, or it will not survive at all.
