Field notes

The compliance matrix revisited, one year in

What we wrote about compliance matrices in May 2025, what we've learned since, and the five corrections that change the recommendation. A one-year retrospective on a foundational RFP-mechanics piece.

Sarah Smith · 5 min read · RFP Mechanics

A year ago we published a piece on building a compliance matrix in thirty minutes, and a follow-up on the common mistakes teams make with them. Both posts held up reasonably well. A few things in them did not. This is the year-one revisit — five corrections to the original advice, each derived from running the pattern across hundreds of bids.

Correction one: the matrix is not 30 minutes of work

The original post’s headline was a deliberate provocation — the point was that teams were spending days on matrices that should take an hour, not that the matrix was a casual effort. A year later the provocation has caused more confusion than clarity. The matrix is not 30 minutes of work, and claiming it was led teams to undervalue it.

The matrix is the scaffold for the entire response. It is the document that decides which volumes exist, which sections exist, which reviewers look at which requirements, and which items block submission. Treat it as a 30-minute job and the scaffold will be sloppy, and a sloppy scaffold produces a response that is structurally misaligned with the buyer’s rubric. No amount of polish on the writing will fix structural misalignment.

The correction: an autogenerated first pass is 30 minutes. The human review that takes the first pass to a load-bearing scaffold is usually another two to four hours, and an experienced proposal manager should be one of the eyes on it. The tool we shipped in June 2025 does the first pass; it does not do the review. We have been clearer about that in the product since, but the blog post still carries the original framing.

Correction two: “shall” versus “will” versus “must” is not as reliable as we said

The original post leaned heavily on the tradition of extracting requirements by keyword — shall, must, will provide, describe. That works for federal solicitations. It works for well-drafted commercial RFPs. It does not work for the 30%+ of commercial RFPs that are written in natural prose without ceremony, where a requirement looks like “we expect the vendor to handle…” or “the vendor’s response should address…” without any of the traditional keywords.

A year of running the extractor across commercial RFPs taught us the keyword approach misses 15-25% of requirements in loosely-drafted documents. The correction: we now use a hybrid approach — keyword match for the strict cases, semantic classification for the loose cases, and a reviewer pass that catches the rest. The semantic pass costs LLM time but prevents the failure mode where a requirement gets missed entirely. VisibleThread’s writing on requirement extraction across federal-style and commercial-style documents lines up with this observation.
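For readers who want the shape of the hybrid pass, here is a minimal sketch in Python. The function name and the cue list are illustrative, not the extractor’s internals; in the real pipeline the loose-case pass is an LLM or embedding classifier rather than a phrase list, and the final pass stays human.

```python
import re

# Strict pass: the keyword tradition. Works for federal-style drafting.
KEYWORD_PATTERN = re.compile(
    r"\b(shall|must|will provide|describe)\b", re.IGNORECASE
)

# Loose pass: prose-style phrasings that carry a requirement without
# the traditional keywords. A phrase list stands in here for what is,
# in practice, a semantic classifier.
LOOSE_CUES = (
    "we expect the vendor to",
    "the vendor's response should address",
)

def extract_requirements(sentences: list[str]) -> list[dict]:
    """Two-pass extraction: keyword match first, semantic-style cues second.
    Anything both passes miss is left for the human reviewer pass."""
    requirements = []
    for sentence in sentences:
        if KEYWORD_PATTERN.search(sentence):
            requirements.append({"text": sentence, "source": "keyword"})
        elif any(cue in sentence.lower() for cue in LOOSE_CUES):
            requirements.append({"text": sentence, "source": "semantic"})
    return requirements
```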

Correction three: the matrix has to handle cross-references

The original post treated each requirement as self-contained. That was wrong for roughly 40% of federal RFPs and about 20% of commercial ones. Requirements routinely reference other parts of the document — “as specified in Section 3.2” or “meeting the criteria in Table 4” — and the matrix that ignores cross-references loses the dependency graph.

Teams that built matrices without cross-references hit a specific failure mode at red team: a reviewer would find a requirement that had been answered correctly on its face but incorrectly once you followed the reference. The correction: the matrix now has a cross-reference column, and the extractor emits the reference targets as structured data. Reviewers can filter by “answered, with unresolved cross-reference” and catch the dependency failures before gold team.
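A sketch of what “emits the reference targets as structured data” looks like. The regex covers only the two patterns quoted above, where real solicitations use many more variants, and the field names are placeholders rather than the product’s schema.

```python
import re

# Matches in-document references like "Section 3.2" or "Table 4".
CROSS_REF = re.compile(r"\b(Section|Table|Appendix|Attachment)\s+(\d+(?:\.\d+)*)")

def attach_cross_references(row: dict) -> dict:
    """Emit reference targets as structured data on the matrix row."""
    refs = [f"{kind} {num}" for kind, num in CROSS_REF.findall(row["text"])]
    # A reference starts unresolved; a reviewer clears the flag once the
    # answer has been checked against the referenced section or table.
    return {**row, "cross_refs": refs, "unresolved_refs": bool(refs)}

row = {"id": "R-017", "text": "Pricing shall meet the criteria in Table 4."}
print(attach_cross_references(row))
# {'id': 'R-017', 'text': '...', 'cross_refs': ['Table 4'], 'unresolved_refs': True}
```

The “answered, with unresolved cross-reference” filter is then just a query over rows where an answer exists but unresolved_refs is still true.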

Correction four: “answered” is not a binary

The original matrix treated each requirement as answered or unanswered. Reality is messier. We now use four states: answered and verified, answered but unverified, acknowledged with caveat, explicitly declined. The four-state model changed the gold-team review rhythm: reviewers now look at the “unverified” column specifically, and any response row that is still unverified 48 hours before submission gets escalated to the proposal manager by default.

The Shipley tradition uses slightly different terminology but argues for the same distinction. A binary “answered / not answered” column makes the matrix look complete when it is actually incomplete in ways that matter for scoring. The four-state model surfaces the gap.
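In code, the four-state model is a small enum plus one filter. A minimal sketch of the 48-hour escalation rule described above, with names of our own invention:

```python
from datetime import datetime, timedelta
from enum import Enum

class Status(Enum):
    ANSWERED_VERIFIED = "answered and verified"
    ANSWERED_UNVERIFIED = "answered but unverified"
    ACKNOWLEDGED_WITH_CAVEAT = "acknowledged with caveat"
    EXPLICITLY_DECLINED = "explicitly declined"

def rows_to_escalate(rows: list[dict], deadline: datetime,
                     now: datetime | None = None) -> list[dict]:
    """Inside the 48-hour window, every still-unverified row goes to the
    proposal manager by default."""
    now = now or datetime.now()
    if deadline - now > timedelta(hours=48):
        return []
    return [r for r in rows if r["status"] is Status.ANSWERED_UNVERIFIED]
```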

Correction five: the matrix outlives the response

The original post treated the matrix as a response artifact — build it for the bid, submit, done. A year of running bids taught us the matrix is one of the highest-value artifacts a proposal function produces, across bids. A matrix from a won bid is a template for the next bid from the same buyer. A matrix from a lost bid, paired with the debrief, is a teaching artifact about what the buyer actually evaluated.

The correction: we now archive matrices with the bid and surface them in the capture tool when a new RFP arrives from the same buyer or the same buyer category. The APMP body of knowledge treats this as standard practice in mature shops; we underemphasized it a year ago because we were focused on the initial-build case. The ongoing-use case is at least as important.
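The archive-and-resurface pattern reduces to a lookup keyed by buyer. A sketch under assumed names; the capture tool also matches on buyer category, which is left out here for brevity.

```python
# In-memory stand-in for the archive; the real store persists across bids.
archive: dict[str, list[dict]] = {}

def archive_matrix(buyer: str, matrix: dict, outcome: str) -> None:
    """File the matrix with the bid outcome ("won" or "lost")."""
    archive.setdefault(buyer, []).append({"matrix": matrix, "outcome": outcome})

def prior_matrices(buyer: str) -> list[dict]:
    """On a new RFP from a known buyer: won-bid matrices are templates,
    lost-bid matrices (with the debrief) are teaching artifacts."""
    return archive.get(buyer, [])
```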

The shape of the correction

None of the five corrections invalidate the original advice. They sharpen it. The compliance matrix remains the single highest-leverage artifact in stages one through four of the eight-stage RFP pipeline, and the original recommendation to build it immediately after intake is unchanged. What has changed is how we think about the scope, the extraction method, the dependency graph, the state model, and the lifecycle.

The pattern that produced the corrections is one worth naming independently. A year of running a proposal function against hundreds of bids is long enough to see where the advice from month one falls short. The discipline of publishing a correction — rather than leaving the original post in place as if it were right — is one we are trying to make standard. The next year will produce a new set of corrections; we will publish those too.

The shortest version of the revised advice: build the matrix first, review it carefully, handle cross-references, track four states of completion, and keep the matrix around after the bid. Every one of those five was a correction that cost us a bid somewhere along the way. We got them wrong first; you can get them right now.

Sarah Smith is the house pen for PursuitAgent’s proposal-craft posts. It’s a composite voice, not a single person. Views reflect PursuitAgent’s position; war stories are drawn from real experience in the proposal industry without being tied to a specific employer or engagement.

Sources

  1. Shipley Proposal Guide (7th ed.), Shipley Associates
  2. VisibleThread — Government proposal writing: key steps, challenges, and tips for success
  3. APMP Body of Knowledge