Or, How to Avoid Booking a Table to Watch Your GRC RFP Decision Explode

Somewhere in the vast and bewildering expanse of governance, risk management, and compliance, there is a restaurant with a truly spectacular view. It is not located at the end of the universe in distance, as that would be far too simple and would merely require a travel policy exception, three approvals from procurement, and a debate about whether the journey should be categorized as operational risk, third-party risk, or a strategic resilience exercise. No, this restaurant is located at the end of the GRC universe in time.

It is the place where one sits down, orders something suitably expensive, and watches a poorly conceived GRC technology decision finally explode.

Fans of Douglas Adams's The Hitchhiker's Guide to the Galaxy know what I am talking about, and do not forget my Hitchhiker's Guide to the GRC Technology Galaxy podcast in this context.

I have been to this restaurant many times. Not by choice, exactly. I do not seek it out. I do not maintain a preferred table by the window. I certainly do not enjoy watching organizations spend significant time, money, political capital, and executive goodwill on a decision that was visibly doomed before the ink dried on the contract. But in my work advising organizations around the world on GRC-related RFPs, I have seen enough of these outcomes to recognize the cosmic tremors long before the final detonation.

I get involved in a lot of RFPs. Enterprise GRC platforms. Regulatory change management. Third-party risk management. Digital risk and resilience. Policy management. Internal control. Operational risk and resilience. Compliance management. ESG/sustainability. Internal audit. Cyber risk. You name the part of the GRC galaxy, and I have likely seen an RFP drift through it with great ambition, a spreadsheet of requirements, and a procurement portal that appears to have been designed by a committee of Vogons who were told to make risk management more painful.

Organizations engage me as a sounding board. They ask me to advise, challenge, validate, pressure-test, and help them understand the vendor landscape. Sometimes I am brought in early, when the organization is still defining its requirements and market approach. Sometimes I am brought in midway, when the longlist has become a shortlist and everyone is beginning to suspect that the demos all somehow look impressive in exactly the same way. Sometimes I am brought in at the very end, when the organization wants a final independent perspective before making a decision that will shape its GRC architecture for years.

At its best, this work is deeply rewarding. A well-run RFP can be an excellent discipline. It can clarify the organization’s needs, expose assumptions, align stakeholders, and lead to a solution that genuinely fits the business. I have worked with many organizations that did the hard work, asked the right questions, listened carefully, challenged the marketing, validated the references, and selected solutions that were right for their objectives, maturity, operating model, and future direction.

But I have also seen the other kind . . . The kind where I can see the explosion coming.

The View from the Edge of the GRC Universe

The central joke of Douglas Adams's The Restaurant at the End of the Universe is not merely that diners watch the universe end over dinner. It is that the end is known, scheduled, and presented as entertainment. The catastrophe is not a surprise. It has been commercialized. It is on the menu.

That is what some GRC RFPs feel like to me. I am sitting in the process, listening to the discussion, reviewing the requirements, watching the vendor demonstrations, and I can already see where the decision is headed. I can see the gravitational pull of the wrong choice. I can see the organization being seduced by market noise, analyst positioning, mock-ups, artificial intelligence claims, and a demo environment so perfectly staged that one half expects the data model to offer a polite bow at the end.

The problem is rarely that people are careless. In most cases, the people involved are intelligent, dedicated, and genuinely trying to make the right decision. The problem is that the RFP process often rewards the wrong things . . .

  • It rewards the vendor that answers “yes” to the most requirements, even when those yeses conceal a small universe of configuration, services, custom work, roadmap promises, and interpretive dance.
  • It rewards the best presentation, not necessarily the best fit.
  • It rewards brand recognition and analyst placement, even when those signals are only loosely connected to the organization’s actual needs.

I have seen organizations follow advice from large analyst firms that were entirely wrong for their situation. Large analyst firms may provide broad market visibility, but broad market visibility is not the same as contextual fit. A solution that appears strong in a quadrant, wave, market guide, or other cartographic artifact of analyst civilization may still be the wrong solution for a specific organization, use case, maturity level, regulatory environment, operating model, or technology architecture.

There have been times when I have advised an organization against a direction and watched them proceed anyway. Two years later, the outcome is what I expected: the implementation is stalled, the solution is not adopted, there are zero users, the cost has multiplied, the internal team is frustrated, the business has gone back to spreadsheets, and the executive sponsor has quietly moved to another role (or often another organization) where the word “transformation” is no longer spoken in their presence. In some cases, people have lost their jobs because the chosen solution never should have been selected.

That is not a pleasant thing to witness. There is no joy in being right about avoidable failure. I would rather be wrong about doom and see the organization succeed than be correct in my prediction and watch a program collapse. My goal in advising on RFPs is not to be negative. It is to prevent organizations from booking a reservation at the end of their own GRC universe.

The Raccoon and the Shiny Object

This is where an entirely different literary image becomes useful: the raccoon trap in Where the Red Fern Grows. A shiny object is placed in a hole bored into a log, nails are angled around the opening, and the raccoon reaches in to grab the prize. Once its paw is closed around the object, it cannot pull its fist back through the opening. The raccoon could escape by letting go, but it refuses. It is not trapped only by the mechanics of the device. It is trapped by desire.

GRC buyers can behave the same way.

The shiny object may be a stunning demo. It may be an artificial intelligence capability that works beautifully in a controlled video but has not yet survived contact with actual regulatory complexity, messy control libraries, inconsistent taxonomies, or business units with strong opinions. It may be a vendor’s claim that the solution is “fully integrated,” which often means something very different in marketing than it does in implementation. It may be an elegant dashboard that shows exactly what the board wants to see, provided the organization first solves every underlying data, process, ownership, and accountability problem that the dashboard quietly assumes has already been solved.

I have seen buyers reach into the log and grab the shiny object. Then the warning signs appear. The client references are weak or not comparable. The implementation examples are shallow. The vendor has strong capability in one domain but is overextended in another. The roadmap is being asked to do the work of the current product. The workflow is attractive, but the data model is fragile. The solution can technically do what is required, but only after significant services work that was not fully understood during selection.

At that moment, the organization has a choice. It can let go and reconsider. Or it can tighten its grip.

Too often, it tightens its grip.

This is one of the great tragedies of GRC technology selection. Organizations sometimes know, at some level, that something is not right. The doubts are there. The questions are there. Someone in the room has asked why the reference was not similar. Someone has noticed that the vendor avoided a live configuration question. Someone has pointed out that the implementation timeline seems optimistic in the same way that building a bypass through one’s house might be described as “minor civic improvement.” But momentum takes over. The shortlist has been approved. The executive team likes the story. Procurement wants closure. The vendor has promised partnership. The shiny object remains firmly clenched in the organizational paw.

And somewhere, very far away in time, a waiter confirms the reservation.

When RFPs Work Well

It is important to say clearly that not all RFPs end in cosmic fire. Many are successful, and those are the engagements I enjoy most (and the organizations that listen to and act on my advice). In a strong RFP, the organization does not treat technology selection as a beauty contest. It treats it as a disciplined exercise in alignment. The objective is not to find the vendor with the most features, the loudest claims, or the most fashionable terminology. The objective is to find the solution that best fits the organization's purpose, processes, risk profile, compliance obligations, maturity, culture, and future direction, and the vendor that is easy to work with and engage. People matter as much as technology for success.

One of my favorite RFP experiences happened when I was brought in near the very end of the process. The organization told me they had narrowed the field to two finalists. Before they revealed the names, they asked me which three solutions I believed they should have considered based on their requirements and operating model. I named three. Two of the three were their final two. I have been doing this for 26 years as an analyst, and I can often tell from the get-go which solutions should be the finalists, if not the specific solution an organization should ultimately choose.

That was a good sign. It told me that their process had led them to a credible destination. My role from that point was not to overturn the process or impose my own preference. It was to help them pressure-test the final decision, understand the trade-offs, and move forward with confidence. That is what good advice should do. It should sharpen the decision, not replace the organization’s judgment.

The best RFPs have several characteristics in common:

  • They begin with business clarity, not technology fascination. The organization understands what it is trying to achieve before it starts asking vendors what they can do. It defines the outcomes it needs, the decisions it wants to improve, the processes it must connect, and the pain points it must resolve. This prevents the RFP from becoming a fishing expedition in which every impressive capability becomes a potential requirement.
  • They distinguish between current need and future ambition. A good GRC technology decision should support the organization’s future direction, but it should not be built entirely on fantasy architecture. There is a difference between selecting a solution that can grow with the organization and selecting one that requires the organization to become a completely different species before value is realized.
  • They demand evidence, not performance. Strong RFP teams look beyond the demo. They ask for comparable references, realistic scenarios, live configuration examples, implementation details, and proof that the vendor has delivered similar outcomes in similar environments. They know that a beautiful demo is useful, but only as one piece of evidence.
  • They understand that fit is multidimensional. The right solution must fit the use case, maturity level, operating model, geography, regulatory complexity, data architecture, internal skills, and change capacity of the organization. It is possible for a solution to be excellent and still be wrong for a specific buyer.

This is where my work adds the most value. I know the GRC technology market. I know the vendors, the categories, the use cases, the overlaps, the strengths, and the gaps. I know where marketing claims tend to outrun operational reality. I know which solutions are truly strong in specific domains and which ones are borrowing credibility from adjacent capabilities. I know when vendors are demoing capabilities that do not exist in their product and are nothing more than fictitious mock-ups. I also know that most vendors are not simply “good” or “bad.” They are strong or weak relative to a particular need. The art of the RFP is understanding that difference.

The Demo Is Not the Implementation

One of the most dangerous assumptions in GRC technology selection is that the demo represents reality. It does not. A demo is a staged performance. That does not make it dishonest, but it does make it incomplete. The demo environment has clean data, obedient workflows, cooperative users, tidy organizational structures, and just enough complexity to appear credible without becoming inconvenient.

Reality has none of these manners.

Reality has overlapping regulatory obligations, inconsistent naming conventions, inherited control libraries, unresolved ownership questions, business units that insist their process is unique, third parties that refuse to respond on time, policies that have not been reviewed since the age of steam, and executives who want a dashboard that answers questions no one designed the data model to support. Reality also has resource constraints, competing initiatives, implementation fatigue, integration complexity, and change management issues that no vendor demo can fully capture.

This is why RFPs must move from demonstration to validation. I want vendors to show how their solution behaves under pressure. Show me how it handles exceptions. Show me how obligations map to policies, controls, risks, issues, and business units. Show me what happens when the same third party supports multiple critical services across multiple jurisdictions. Show me how the platform manages regulatory change when one rule affects five policies, twelve controls, three products, two regions, and one executive who has just asked why this was not escalated sooner.

A good demo answers the question, “What can the solution do?” A strong RFP goes further and asks, “Can the solution do what we need, in the way we need it done, with the resources and maturity we actually have?”

Those are very different questions.

The Client Reference Problem

Client references are one of the most important and most underused disciplines in GRC technology selection. Too many organizations accept references that are not sufficiently comparable. A vendor may provide a happy client, but happiness is not the same as relevance. The reference may be in the same industry but using the solution for a different purpose. It may have a similar use case but far less complexity. It may be a strong implementation but under a completely different operating model. Some vendors refuse to provide a comparable client reference at all, and that is a huge warning signal that is too often ignored.

For references to be meaningful, they must be tested against the organization’s reality. A global financial services organization evaluating regulatory change management should not rely on a reference from a smaller domestic firm using the platform primarily for policy attestations. A manufacturer evaluating third-party risk and supply chain resilience should not be satisfied with a reference that covers basic vendor onboarding but not ongoing monitoring, performance, concentration risk, geopolitical exposure, or offboarding. An enterprise seeking integrated GRC should not confuse success in one department with proof of enterprise-wide capability.

The reference conversation should be specific, practical, and candid. I want to know what worked, what did not, what took longer than expected, what required services, what the vendor handled well, where the client had to compromise, and whether the organization would make the same decision again. I want to know how much internal capacity was required. I want to know whether the business adopted the process or worked around it. I want to know whether the promised reporting is actually being used by management and the board.

The most useful reference is not the one that says everything was perfect. The most useful reference is the one that tells the truth.

Where RFPs Most Often Go Wrong

Most RFP problems are not caused by one dramatic mistake. They are caused by a series of small distortions that accumulate until the organization finds itself committed to a path that no longer reflects its original needs. The process starts with good intentions, but the gravitational pull of market noise, internal politics, and vendor performance changes the trajectory.

The most common failure patterns include:

  • Confusing breadth with depth. Many GRC solutions can claim coverage across a wide range of functions. The more important question is how deeply and effectively they support the specific use cases that matter most. A platform may have a third-party risk module, but that does not mean it can support complex third-party governance across onboarding, due diligence, ongoing monitoring, issue management, performance, resilience, concentration risk, and offboarding. A solution may claim regulatory change management, but that does not mean it provides the intelligence, relevance, obligation mapping, workflow, accountability, and evidence needed by a global organization.
  • Buying the roadmap instead of the product. Roadmaps matter, but they are not the same as current capability. If a critical requirement depends on a future release, the organization should treat that as a risk, not as a solved problem. There is nothing wrong with selecting an innovative vendor with a strong direction, but there is great danger in pretending that a future promise is equivalent to a present capability.
  • Letting analyst positioning substitute for judgment. Analyst research may be helpful (and I would say that my research is), but it should not become a decision-making crutch. Rankings and market graphics are not a substitute for understanding fit. They often reflect broad market presence, vendor strategy, and general capability, not the precise needs of a specific organization.
  • Underestimating implementation and adoption. The software selection is only the beginning. GRC technology succeeds or fails in implementation, governance, data design, process alignment, ownership, and change management. A solution that looks powerful in selection can fail if the organization does not have the capacity, discipline, or clarity to implement it well. This is where so many ServiceNow GRC/IRM implementations fail.
  • Failing to challenge the shiny object. The newest capability is not always the most important capability. Artificial intelligence, automation, digital twins, analytics, and integrated intelligence all have tremendous potential in GRC, but they must be evaluated against operational reality. The raccoon’s problem was not curiosity. The problem was refusing to let go when the trap became obvious.

These patterns are avoidable. That is the point. The Restaurant at the End of the GRC Universe may have excellent views, but no organization should want to appear on the reservation list.

What Organizations Should Do Instead

A better RFP begins with the organization looking inward before it looks outward. Before asking vendors what they offer, the organization must understand what it needs. That requires more than collecting requirements from every stakeholder and turning them into a spreadsheet large enough to have its own weather system. It requires prioritization. It requires clarity on what matters most. It requires distinguishing essential capabilities from preferences, future ambitions, and decorative features that look impressive but do not materially improve outcomes.

The organization should define the business problem in operational terms.

  • What processes are broken?
  • What decisions are poorly supported?
  • What risks are not visible?
  • What obligations are not mapped?
  • What evidence is hard to produce?
  • What third-party relationships are not governed consistently?
  • What reports require manual heroics?
  • What accountability is unclear?
  • What regulatory, operational, or strategic pressures make change necessary now?

Only after that should the organization turn to the market.

When it does, the RFP should be designed to test fit, not collect affirmations. Vendors should be asked to demonstrate the organization’s scenarios, not generic ones. They should be asked to explain implementation realistically, including services, timelines, internal resource needs, configuration ownership, integration requirements, and common challenges. They should be asked where they are strong and where they are not. A vendor that cannot explain its limitations is either inexperienced, overconfident, or selling fiction.

A disciplined GRC RFP should include several practical tests:

  • Scenario-based demonstrations using real use cases. Do not let the demo remain abstract. Provide vendors with specific scenarios that reflect the organization’s complexity. For regulatory change, test how the solution identifies relevance, maps obligations, assigns accountability, tracks implementation, and produces evidence within your jurisdictions of interest (a solution built for regulatory change in the USA does not always perform well in Europe, as regulatory approaches vary). For third-party risk, test the lifecycle from intake and onboarding through monitoring, issue management, performance, resilience, and offboarding. For enterprise GRC, test how risks, controls, policies, obligations, issues, incidents, metrics, and objectives connect.
  • Comparable client references. Require references that align with the organization’s industry, size, geography, complexity, and use case. Ask practical questions about implementation, adoption, reporting, support, and lessons learned. Listen carefully for what is not said.
  • Data and architecture review. Understand the underlying data model. GRC is not just workflow. It is the relationship between objectives, risks, controls, obligations, policies, assets, processes, third parties, incidents, issues, and decisions. If those relationships are weak, the solution will struggle to deliver integrated insight.
  • Implementation realism. Require clarity on what is configured by the client, what requires the vendor, what requires professional services, what is included, what costs extra, and what assumptions are embedded in the timeline. Many failures begin with an implementation plan that was written with more optimism than evidence.
  • Governance and ownership alignment. Determine who will own the platform, who will govern data standards, who will maintain taxonomies, who will manage change, and how business units will be engaged. A GRC platform without governance quickly becomes a very expensive filing cabinet with workflow aspirations.

These disciplines are not bureaucratic obstacles. They are protections against expensive regret.

Fit: The Real Answer to Life, the Universe, and GRC

In Douglas Adams's universe, the answer to life, the universe, and everything is famously simple (42), though not especially useful without understanding the question. GRC technology has a similar problem. Many organizations want the answer to be a product name. They want the answer to be the market leader, the best platform, the top-ranked vendor, the most innovative solution, or the one with the most compelling artificial intelligence story.

But the real answer is fit.

Fit to purpose. Fit to maturity. Fit to architecture. Fit to process. Fit to culture. Fit to regulatory complexity. Fit to the operating model. Fit to the organization’s internal capabilities. Fit to the decisions that need to be made. Fit to the future the organization is actually capable of building.

This is why I resist simplistic vendor rankings. I do not do quadrants or waves. There is no universal best GRC solution. There are excellent solutions for specific contexts. There are strong vendors that fit certain use cases very well and others poorly. There are emerging vendors that are innovative and worth serious consideration, but not necessarily ready for every enterprise requirement. There are established platforms with deep capability, but also complexity that may overwhelm an organization that is not prepared for it.

The right question is never simply, “Who is best?” . . . The better question is, “Who is best for us, for this purpose, at this stage of maturity, given where we need to go?”

That question changes the entire RFP.

My Role as a Guide Through the GRC Galaxy

When I advise organizations, I am not there to replace their decision-making. I am there to improve it. I help them understand the market, challenge assumptions, ask better questions, and recognize patterns that may not be visible to a team that only runs a major GRC RFP every several years. Vendors live in the market every day. Analysts and advisors live in the market every day. Buyers often do not. That imbalance matters.

My role is to bring context. I know where solutions are strong. I know where claims need to be tested. I know which capabilities are mature and which are mostly presentation. I know where a vendor’s center of gravity is and where it may be stretching beyond its proven strength. I know the difference between a platform that can truly support integrated GRC and one that has assembled a set of modules under a common brand. I know where regulatory content matters, where workflow matters, where intelligence matters, where configurability matters, and where implementation discipline matters most.

Most importantly, I know that GRC technology decisions are not really about technology. They are about enabling the organization to reliably achieve objectives, address uncertainty, and act with integrity. Technology supports that capability, but it cannot substitute for clarity, governance, accountability, and sound judgment.

That is why I push organizations to slow down at the right moments. Not to delay the process unnecessarily, but to prevent preventable mistakes. There is a great difference between urgency and haste. Urgency is appropriate when the organization faces regulatory pressure, operational risk, resilience gaps, or fragmented processes that need attention. Haste is what happens when the organization mistakes motion for progress and signs a contract because everyone is tired of the RFP.

The universe is full of tired decision-makers. Some of them are now reviewing dessert menus at the end of time.

Avoiding the Restaurant

The Restaurant at the End of the GRC Universe will always be there. There will always be bad RFPs, shiny objects, overconfident vendors, vague requirements, weak references, inflated claims, and decisions made because the room wanted closure more than confidence. There will always be organizations that discover too late that the solution they selected was not wrong in general, but wrong for them.

But the table does not have to be booked.

A well-run RFP can be one of the most valuable exercises an organization undertakes. It can align stakeholders, clarify priorities, expose weak assumptions, and establish a foundation for better GRC outcomes. It can help the organization move beyond fragmented processes and toward an integrated approach to governance, risk management, and compliance. It can connect objectives, risks, obligations, controls, policies, issues, third parties, and performance in a way that supports better decisions.

The key is discipline. Know what you need before you ask the market what it sells. Demand evidence. Test the real use cases. Validate the references. Understand the data model. Be honest about maturity. Challenge the roadmap. Beware the mock-up. Respect the demo, but do not worship it. Above all, be willing to let go of the shiny object when the trap becomes clear.

Because somewhere, at the far end of the GRC universe, there is always a table available.

The wise organization never makes the reservation.
