An anonymized winning proposal, torn down
A public-sector award with public artifacts. What won, what the scoring rubric rewarded, and three things a typical proposal team would have gotten wrong.
Public-sector awards in the United States come with public artifacts. The RFP is public. The award notice is public. The evaluation summary, for many state and local procurements, is public on request. For federal awards, the debrief is a statutory right when the vendor asks.
This teardown is based on a recent state-level award in the public-works domain. Specific names and numbers have been generalized to protect the vendor — the conclusions we draw from the scoring rubric and the winning response structure do not require identifying the parties. We are not quoting from the proposal text. Where public artifacts were available, we read them directly; where they weren’t, we note the gap.
The opportunity, in shape
The RFP asked for a multi-year services contract in a category where three incumbent vendors had been serving the state for several cycles. Four vendors responded. The scoring rubric, as published in the RFP’s Section L, weighted:
- Technical approach — 40 points.
- Past performance — 25 points.
- Key personnel — 15 points.
- Management plan — 10 points.
- Price — 10 points.
The RFP was 140 pages. The page limit on the technical response was 60. The evaluation panel, per the procurement notice, was five named evaluators drawn from the agency’s program office and one external subject-matter advisor.
Who won and why
The winner was not the lowest-priced bidder. The winner’s price was the second-highest of the four. The winning response scored notably above the others on technical approach and past performance; on personnel and management plan it was roughly tied with the top two competitors. On price it scored in the middle.
The delta — what actually moved the award — was on technical approach. The evaluation summary (obtained via public-records request) names three specific elements the panel flagged as differentiators. We’ll walk each one.
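To make the weighted arithmetic concrete, here is a minimal sketch in Python with invented raw scores (the actual score sheets were generalized in our source, so nothing below reproduces the real evaluation). It shows why a bid with a weak price score can still take the award when technical approach carries 40 of the 100 points.

```python
# Hypothetical illustration only: these raw scores are invented, not the
# actual evaluation results, which were generalized in the public records.
WEIGHTS = {
    "technical": 40,
    "past_performance": 25,
    "key_personnel": 15,
    "management_plan": 10,
    "price": 10,
}

# Raw scores on a 0-1 scale for each criterion (invented for illustration).
bids = {
    "winner": {
        "technical": 0.95, "past_performance": 0.90,
        "key_personnel": 0.80, "management_plan": 0.80, "price": 0.60,
    },
    "low bidder": {
        "technical": 0.70, "past_performance": 0.70,
        "key_personnel": 0.80, "management_plan": 0.80, "price": 1.00,
    },
}

for name, scores in bids.items():
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    print(f"{name}: {total:.1f} / 100")
# winner: 86.5 / 100  (wins despite the weakest price score)
# low bidder: 75.5 / 100  (a perfect price score only moves 10 points)
```

The math isn't exotic; the point is that a 10-point price criterion cannot outweigh a wide gap on a 40-point technical criterion.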
Differentiator 1 — The technical approach mapped explicitly to the scoring rubric
The winner’s Section 3 mirrored the rubric. Every scored sub-criterion in the RFP’s technical-approach section got its own sub-heading in the response, in the same order, using the RFP’s language. An evaluator reading for “innovation in service delivery” — a named sub-criterion worth 8 of the 40 technical points — found a heading called “innovation in service delivery” and an answer directly under it.
This sounds like table stakes. It isn’t. Two of the four bidders structured their technical sections by their own services taxonomy, requiring the evaluator to scroll back and forth between their response and the rubric to match one to the other. VisibleThread’s government-proposal research names this as the most common avoidable scoring loss — the proposal is there, the content addresses the requirement, but the evaluator has to work to find it and the score suffers.
Differentiator 2 — Past performance was mapped, not listed
The winner’s past-performance section didn’t just list six prior engagements. Each engagement was mapped to a specific element of the technical approach. An evaluator reading Section 3.2 on “phased implementation planning” saw a cross-reference to Past Performance #4, which described a prior phased implementation. The mapping was explicit and labeled.
The competing responses we obtained listed past performance in a standalone section that read like a CV — roles filled, dates served, outcomes achieved — without the cross-references. The evaluator had to assemble the relevance themselves, which, under time pressure with four proposals to score, often doesn't happen.
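As a sketch of what that mapping looks like when it is built as an explicit artifact rather than left to the evaluator, here is a minimal cross-reference map in Python. The pairing of Section 3.2 with Past Performance #4 comes from the evaluation discussion above; every other section number and entry identifier is invented for illustration.

```python
# Hypothetical cross-reference map: technical sub-sections mapped to the
# past-performance entries that substantiate them. Except for the 3.2 / PP-4
# pairing described in the teardown, the identifiers below are invented.
cross_refs = {
    "3.1 Service transition approach":     ["PP-2"],
    "3.2 Phased implementation planning":  ["PP-4"],
    "3.3 Innovation in service delivery":  ["PP-1", "PP-5"],
    "3.4 Quality assurance and reporting": ["PP-3"],
}

# Emit the callout line that sits under each technical sub-heading, so the
# evaluator never has to assemble the relevance themselves.
for section, entries in cross_refs.items():
    print(f"{section}: see Past Performance {', '.join(entries)}")
```

However the team stores it, the artifact matters more than the tooling; a table in the proposal plan does the same job.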
Differentiator 3 — Win themes were differentiators, not features
The winning response led with three win themes, each framed as a discriminator — something the incumbent competitors could not credibly claim — rather than as a generic capability. One of the three, for example, wasn’t “we deliver quality service” (which every bidder claimed). It was a specific operational posture the vendor had built around a recent regulatory change, which the three competing bidders had not yet built. The theme was concrete, verifiable (the vendor had documented the operational posture in their published service specs), and defensible against the competitive set.
PropLibrary’s swap test asks whether a theme still reads as true when you swap a competitor’s name in for yours; if it does, the theme is too generic. This winning theme cleared that test, and the competitors’ themes did not. For more on the discipline of building themes that clear the swap test, see win-themes-the-swap-test.
Three things a typical proposal team would have gotten wrong
Reading the winning response alongside the competing responses we could obtain (one of the three competitors declined to share) produces three specific lessons. None of them are novel. All of them are skipped by teams under deadline.
1. The technical section gets structured around the team’s internal taxonomy
Most teams have an internal services taxonomy — the way they describe what they do, refined over years of marketing and sales conversations. When a new RFP lands, the writers default to that taxonomy because it’s what they know. The result is a technical section that makes sense to the writers and not to the evaluator.
The fix is operationally small: structure the technical section as a mirror of the RFP’s Section L rubric. Even when the rubric’s sub-criteria don’t map cleanly to the team’s services, force the structure. The evaluator will find the content; without that structure, they are left to score the bid against a rubric the response never signposted.
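A minimal sketch of forcing that structure, assuming the rubric has been extracted into a simple ordered list. Only the “innovation in service delivery” sub-criterion and its 8 points come from this RFP; the other entries and their point values are invented placeholders.

```python
# Build the technical-section outline directly from the rubric, in rubric
# order, using the RFP's own language. Only "Innovation in service delivery"
# (8 pts) is from the actual rubric; the other rows are invented placeholders.
rubric = [
    ("Innovation in service delivery", 8),
    ("Phased implementation planning", 12),
    ("Service transition approach", 10),
    ("Quality assurance and reporting", 10),
]

for i, (criterion, points) in enumerate(rubric, start=1):
    # Sub-heading text mirrors the rubric verbatim so an evaluator can match
    # the response to the score sheet without hunting.
    print(f"3.{i} {criterion} ({points} pts)")
```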
2. The past-performance library is treated as a standalone section
The library exists. The entries are written. The team pastes the three most relevant entries into a past-performance appendix and considers the section done. The cross-referencing — “entry #4 supports our phased-implementation claim in Section 3.2” — is what turns the library from a CV into a scoring tool. It takes a senior writer about 20 minutes per entry, and it often isn’t done because the senior writer is elsewhere on a deadline.
See past-performance-that-actually-maps for the discipline; the teardown here is the empirical case for it.
3. Win themes are assertions, not differentiators
The competing responses we read all stated win themes. None of the themes passed the swap test. “Committed to service excellence,” “proven track record,” “trusted partner” — the evaluation summary didn’t name a single one of these as a factor in the decision. The winner’s themes were named. The difference was operational, not rhetorical.
This is the point of every post we’ve published on win themes. The teardown here is the version of the argument that includes a specific public artifact — the evaluation summary — in the evidence chain.
What this teardown isn’t
It isn’t a guarantee that mirroring the rubric, mapping past performance, and writing discriminator themes will win a comparable bid. Public-sector evaluations include elements the published rubric doesn’t capture — existing relationships, prior performance with the agency, political context. What we can say is: all three of the discriminating moves in this teardown were controllable moves by the winning team. The three losing teams could have made the same moves. None of the three did.
For the pipeline view that frames where these moves fit — Stage 3 (capture), Stage 4 (compliance), Stage 5 (draft) — see the 8-stage RFP response pipeline.
Sources and method
The winning response was obtained through a public-records request. The competing responses were requested through the same channel; one vendor declined to release their document and is excluded from the comparison. The evaluation summary was obtained through a separate public-records request to the contracting office. All vendor-identifying details have been generalized; the scoring shape, rubric structure, and evaluation language have been preserved.