The compliance matrix you wish existed (Part 2 of 4)
The compliance matrix is the most consequential artifact in proposal work, and the one most teams build too late. Part 2 of Reading an RFP — what it is, how to build one in 30 minutes, and why it usually doesn't get built.
A compliance matrix is a row-by-row map from every requirement in the RFP to the section of your response that addresses it. It is, by a wide margin, the single most consequential artifact in proposal work. A team that builds one in the first two days of the response cycle is going to ship a different proposal than a team that builds one as a post-hoc audit on the day before submission.
This is part two of the Reading an RFP series. Part one named the six passes that “reading an RFP” actually breaks into. Part two is about the artifact that comes out of pass two — the compliance language pass. Part three covers procurement signals; part four covers deal quality.
What a compliance matrix actually is
A compliance matrix is a spreadsheet (or a database, or a structured document — the format is less important than the discipline) with one row per requirement extracted from the RFP. Each row carries the same set of columns:
- Requirement ID. A stable identifier, usually drawn from the RFP’s own paragraph numbering (e.g., L.4.2.1.b).
- Requirement text. The verbatim sentence from the RFP, or a tight paraphrase that preserves the operative verb.
- Modal verb. Shall, must, will, should, may, describe, demonstrate, provide. The verb determines the obligation level.
- Source. Page and paragraph reference, so a reviewer can find it in the RFP without re-reading the whole document.
- Evaluation weight, if exposed. From the RFP’s own scoring rubric. Empty if not exposed.
- Response section pointer. Where in your draft this requirement is addressed. Initially blank; filled in as drafting proceeds.
- Owner. The named human responsible for the section that addresses this row.
- Status. Not started, drafted, reviewed, approved, deferred-with-rationale.
- Source block. If the response cites a KB content block, the block ID and version.
- Notes. Anything else the team needs to remember about this row.
Ten columns. None of them complicated. The discipline is in the row count, not the column richness — a real RFP produces 80 to 400 rows, and missing any of them is the failure mode.
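If it helps to see the shape concretely, the ten columns above can be sketched as a single record type. This is an illustration, not a standard schema; the field names are mine.

```python
from dataclasses import dataclass
from typing import Optional

# A sketch of one compliance-matrix row, mirroring the ten columns
# described above. Field names are illustrative, not a standard.
@dataclass
class MatrixRow:
    req_id: str                  # stable ID from the RFP's numbering, e.g. "L.4.2.1.b"
    text: str                    # verbatim requirement or tight paraphrase
    modal_verb: str              # shall / must / will / should / may / describe / ...
    source: str                  # page and paragraph reference, e.g. "p. 18, ¶ L.4.2.1"
    weight: Optional[str] = None       # evaluation weight, if the RFP exposes one
    section: Optional[str] = None      # response section pointer; blank until drafted
    owner: Optional[str] = None        # named human responsible for this row
    status: str = "not started"        # not started / drafted / reviewed / approved / deferred
    source_block: Optional[str] = None # KB block ID and version, if cited
    notes: str = ""
```

The defaults encode the day-two reality: section pointer, owner, and weight start empty, and every row starts at "not started."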
Why teams don’t build it, and what changes when they do
VisibleThread has been writing for years that rushing into writing without fully understanding the requirements is the leading cause of proposal failure. The compliance matrix is the embodiment of “fully understanding the requirements.” Every requirement, listed, traceable. Without it, you don’t know what you don’t know.
Three reasons teams build it late or not at all:
It looks like documentation work. Ten columns of structured rows feels like overhead. It looks like the kind of thing a junior analyst should do, not the work of a proposal manager. The first time it gets built late, it gets blamed for being slow rather than for being late.
It can’t be built directly from the PDF. Extracting requirements from a 60-page document is grindy work. Every “shall” in the document is a row. Every embedded table is a hidden source of additional rows. A team that’s never automated this part of the work treats it as a multi-day undertaking.
The team that needs it the most is also the team most under deadline pressure. Lohfeld Consulting has named this pattern — proposal managers spend more time chasing SME responses than building strategy, and the matrix work is the work that gets cut when the schedule slips. The work that gets cut is the work that would have prevented the schedule from slipping. Self-inflicted.
When a team starts building the matrix in the first two days, three things change. Drafting structure follows the matrix instead of following the team’s preferences, which means the response is organized to be easy to score against the rubric. Compliance gaps are surfaced before they become last-week emergencies. And the matrix itself becomes the review artifact — instead of asking reviewers to read a 50-page draft, you ask them to spot-check the matrix and pull on rows that look thin.
Build it in 30 minutes
Here is the version of the build I do when I’m joining a proposal mid-cycle and the matrix doesn’t exist yet. It is quick and dirty. It produces a v1 that is good enough to drive the rest of the response and that gets refined as drafting proceeds.
Step 1 — Pull every modal-verb sentence (10 minutes). Search the RFP PDF for “shall,” “must,” “will,” “should,” “may,” “describe,” “demonstrate,” “provide,” “submit.” Each match is a candidate row. Don’t filter; over-include now and prune later.
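Step 1 is mechanical enough to sketch in a few lines. This is a deliberately naive version, splitting sentences on punctuation and matching whole words; it over-includes by design, which is exactly what the step calls for.

```python
import re

# Sketch of step 1: pull every sentence containing a modal or
# directive verb. Over-include now, prune later.
MODAL_VERBS = ("shall", "must", "will", "should", "may",
               "describe", "demonstrate", "provide", "submit")
PATTERN = re.compile(r"\b(" + "|".join(MODAL_VERBS) + r")\b", re.IGNORECASE)

def candidate_rows(rfp_text: str) -> list[str]:
    # Naive sentence split is good enough for a v1 matrix.
    sentences = re.split(r"(?<=[.!?])\s+", rfp_text)
    return [s.strip() for s in sentences if PATTERN.search(s)]

passage = ("The Offeror shall describe its approach to data encryption. "
           "Background information is provided for context only.")
print(candidate_rows(passage))
```

Note the word boundaries: "provided" in the second sentence does not match "provide," so background prose stays out while every candidate requirement stays in.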
Step 2 — Categorize the verbs (5 minutes). Bucket each row as Mandatory (shall, must, will), Soft (should, is expected to), or Descriptive (describe, demonstrate, provide). Mandatory rows are non-negotiable. Soft rows are scored. Descriptive rows are content prompts.
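The bucketing in step 2 can be sketched as a precedence check. One assumption worth flagging: I put Mandatory first, so a mixed sentence like "shall describe" surfaces as Mandatory here; the worked example later in this piece tags that pattern as a mandatory description, which is a refinement you make by hand.

```python
import re

# Sketch of step 2: bucket a candidate row by its strongest verb.
# Precedence is Mandatory > Soft > Descriptive, so mixed rows
# ("shall describe") land in the strictest bucket and get refined later.
def categorize(sentence: str) -> str:
    s = sentence.lower()
    if any(re.search(rf"\b{v}\b", s) for v in ("shall", "must", "will")):
        return "Mandatory"      # non-negotiable
    if re.search(r"\bshould\b", s) or "is expected to" in s:
        return "Soft"           # scored, not strictly required
    if any(re.search(rf"\b{v}\b", s) for v in ("describe", "demonstrate", "provide")):
        return "Descriptive"    # content prompt
    return "Uncategorized"
```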
Step 3 — Add page and section references (5 minutes). Every row gets a pointer back to the RFP. This is the part you’ll thank yourself for during reviews.
Step 4 — Map to evaluation weights, if exposed (5 minutes). If the RFP exposes a scoring rubric, each row gets a weight column populated. If not, leave it empty — you’ll fill it in later from procurement signals.
Step 5 — Draft section structure from the matrix (5 minutes). With 80 to 400 rows in front of you, the response section structure becomes obvious. Group rows by topic. The groupings are your sections. Resist the urge to structure the response around your team’s writing preferences.
That is the v1. It will have errors. It will have rows that should be merged and rows that should be split. It will get refined every day for the next two weeks. The point is to exist on day two, not to be perfect on day two.
A worked example, three rows
Imagine the RFP has the following passage in section L.4.2.1:
> The Offeror shall describe its approach to data encryption at rest and in transit. The proposed solution must support FIPS 140-3 validated cryptographic modules. The Offeror should provide a description of key management practices, including support for customer-managed keys.
Three sentences. Three rows.
| ID | Requirement | Verb category | Source | Section pointer |
|---|---|---|---|---|
| L.4.2.1.a | Offeror shall describe approach to data encryption at rest and in transit. | Descriptive (mandatory description) | p. 18, ¶ L.4.2.1 | §3.2 Information Security |
| L.4.2.1.b | Proposed solution must support FIPS 140-3 validated cryptographic modules. | Mandatory | p. 18, ¶ L.4.2.1 | §3.2.1 Cryptographic Standards |
| L.4.2.1.c | Offeror should provide description of key management practices, including customer-managed keys. | Soft | p. 18, ¶ L.4.2.1 | §3.2.2 Key Management |
Three observations.
The verb categories carry different weights at evaluation. L.4.2.1.b is a binary disqualifier — if your solution doesn’t support FIPS 140-3, you cannot meet this row, period. L.4.2.1.a and L.4.2.1.c are graded on the quality of the description.
The section pointers follow the requirement structure, not the writer’s preferred outline. If your team’s instinct is to put encryption under a generic “Security” heading, that is your team’s instinct overriding the buyer’s expressed organization. The buyer’s organization usually wins more bids.
The third row uses “should.” Some teams treat “should” as truly optional. The buyer’s evaluation panel does not. A response that addresses every “shall” but skips the “shoulds” loses to a response that handles both, even though it is technically compliant on the strict reading.
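The first observation can be expressed as a go/no-go check over the mandatory rows. The capability flags here are hypothetical inputs; in practice they come from your solution team answering "can we actually meet this?"

```python
# Hypothetical go/no-go check: any Mandatory row the solution cannot
# meet is a binary disqualifier, however well the rest would score.
def disqualifiers(rows: list[dict], can_meet: dict[str, bool]) -> list[str]:
    return [r["id"] for r in rows
            if r["category"] == "Mandatory" and not can_meet.get(r["id"], False)]

rows = [
    {"id": "L.4.2.1.a", "category": "Descriptive"},
    {"id": "L.4.2.1.b", "category": "Mandatory"},
]
```

A non-empty result on day two is a bid/no-bid conversation; the same result in the last week is a crisis.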
Where the matrix becomes a living document
A first-pass matrix is built from the original RFP. The matrix has to update as addenda land. State and federal RFPs typically post at least one Q&A modification before the response is due — sometimes adding requirements, sometimes softening them, sometimes clarifying ambiguous language.
A matrix that doesn’t track addenda becomes a matrix you can’t trust. We’ve seen teams build a beautiful v1 and then forget to update it through the response cycle. The submission goes out compliant against the original, non-compliant against the revised. The disqualification reason on the rejection notice is “non-responsive to addendum 2.”
The fix is the same fix as for everything else in proposal work: a named human owns the matrix and updates it on every addendum. Shipley’s proposal guide treats the matrix as the proposal manager’s primary tool, and Shipley is right.
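The addendum update reduces to a diff between requirement sets. A minimal sketch, assuming each matrix version is held as a mapping from requirement ID to requirement text:

```python
# Sketch of an addendum check: diff two versions of the requirement
# set. Keys are requirement IDs, values are requirement text.
def addendum_diff(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    return {
        "added":   sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }
```

Every ID in "added" is a new row, every ID in "changed" is a row whose section needs re-review, and every ID in "removed" is a row to close out with a rationale rather than silently delete.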
The compliance matrix as a review artifact
The matrix’s secondary role — and arguably its more consequential role — is in color-team reviews. Pink team and red team reviewers can be handed the matrix and asked to scan for thin rows. A row whose section pointer is “TBD” two weeks before submission is a red flag. A row whose status is “deferred” with a one-line rationale is a deliberate trade-off; a row whose status is “deferred” with no rationale is an oversight.
A reviewer who is asked to read a 50-page draft from cold has to reconstruct the requirement set from the document. A reviewer who is handed the matrix has the requirement set in front of them and can ask the right questions. The same reviewer produces materially better feedback in the second case, with less time investment.
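The pre-review scan for thin rows is simple enough to automate. A sketch, using the status and section-pointer columns described earlier; the field names are illustrative:

```python
# Sketch of a pre-review scan: flag rows a color-team reviewer
# should pull on. Field names follow the columns described earlier.
def thin_rows(rows: list[dict]) -> list[str]:
    flagged = []
    for r in rows:
        if r.get("section") in (None, "", "TBD"):
            flagged.append(f"{r['id']}: no section pointer")
        elif r.get("status") == "deferred" and not r.get("rationale"):
            flagged.append(f"{r['id']}: deferred without rationale")
    return flagged

rows = [
    {"id": "L.4.2.1.a", "section": "§3.2", "status": "drafted"},
    {"id": "L.4.2.1.b", "section": "TBD", "status": "not started"},
    {"id": "L.4.2.1.c", "section": "§3.2.2", "status": "deferred"},
]
```

Handing a reviewer this list plus the matrix is a better use of their first hour than handing them the draft.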
The honest closing
The compliance matrix is, in the abstract, the most boring artifact in proposal work. It is also, by direct observation across many proposals, the artifact whose presence or absence correlates most reliably with the quality of the response. Treat it accordingly.
Next in the series: pass three, procurement signals — the unstated priorities that an RFP leaks if you read for them.