Win themes, one year after we published the field guide
The field guide held up on most counts and was wrong on two. What I'd rewrite today, what I wouldn't, and the one test that keeps earning its place.
We published the win themes field guide one year ago this week. Hundreds of bids have run through teams I’ve coached since, and I’ve revisited the guide more times than I’d like to admit. Most of it held up. Two parts of it didn’t. This is what I’d change.
Composite essay, see footer.
What held up
The swap test. Still the single most useful diagnostic in the guide. If I paste your win theme into your competitor's proposal and it still reads as a theme, your theme isn't a theme, it's a platitude. I wrote the longer version up in the swap test post because enough people asked for it. The test earned its place; nothing I've seen in the last year has made me think it needs revision.
Verbs, not adjectives. Also held up. The verb-not-adjective post is the second-most-referenced piece in the follow-up series. “Reduces audit time by 40% by pre-filling SOC 2 evidence from live control data” is a theme. “Excellent audit support” is not a theme, it’s a placeholder where a theme should have gone.
Three themes, not seven. I argued for a hard cap at three. I still would. Teams that try to write seven themes end up with seven bullet points that compete with each other and confuse the evaluator. Three is the number that fits on a one-page exec summary and survives a twenty-minute read.
What I got wrong
The “themes emerge from capture” framing. The field guide described win themes as downstream of the capture plan — you do capture work, and the themes emerge from what you learned. That’s half right. What it misses: for teams mature enough to maintain a corpus of prior winning proposals, themes also emerge from pattern recognition across past wins. Specifically, if the same theme language has won three bids in the same segment, it’s probably a theme shape that works for that segment, even before capture is complete.
I understated this because I was worried about the failure mode of teams copy-pasting themes across bids. That failure mode is real. The correct response to it is not to skip the pattern-recognition move; it’s to run the swap test against the recycled theme every time. A recycled theme that passes the swap test against this specific RFP is still a good theme. A recycled theme that fails the swap test is exactly the problem I was worried about.
I’d rewrite the guide’s chapter on “where themes come from” to include both paths — emergent from capture, and inherited from past wins with fresh swap-test discipline — instead of just the first.
The “one theme per evaluator” mapping. The guide suggested that each of your three themes should map to a different buyer-side persona. The technical evaluator cares about theme A, the compliance lead cares about theme B, the economic buyer cares about theme C. That’s a clean model. It’s also often wrong.
In a well-run procurement, all three evaluators care about all three themes — they just weight them differently. A theme that only lands with the technical evaluator is a theme that your compliance lead and economic buyer will read as tangential, and they will grade accordingly. What I’d write instead: every theme should matter to every evaluator; the difference is how much of each evaluator’s attention it earns.
This is a meaningful shift. The old framing made theme selection feel like a casting exercise. The new framing makes it feel like a consensus-hunt. The consensus-hunt is closer to the truth of how evaluation committees actually work.
What I’d add
Two new sections if I were rewriting from scratch.
A chapter on theme rot. A theme that worked for your bids in 2024 won’t work in 2026, for reasons that have nothing to do with your product. The competitive landscape moved. The buyer’s problems moved. The evaluator population turned over. A theme needs to be re-tested at least annually against your current book of won and lost bids. Theme rot is an extension of the KB feedback loop work — the loop closes on specific block edits; theme rot closes on the meta-level question of “is the theme shape still working?”
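The bookkeeping side of that annual re-test can be mechanized. Here is a minimal sketch that tallies a win rate per theme across a book of past bids and flags the ones that have slipped; everything in it is illustrative (the function names, the record shape, and the one-in-three floor are my assumptions, not a real tool):

```python
from collections import defaultdict

def theme_win_rates(bids):
    """Win rate per theme label across a book of past bids.

    `bids` is a list of (themes, won) pairs: `themes` is the set of
    theme labels used on that bid, `won` is a bool. Illustrative shape,
    not a real API.
    """
    used = defaultdict(int)
    wins = defaultdict(int)
    for themes, did_win in bids:
        for theme in themes:
            used[theme] += 1
            if did_win:
                wins[theme] += 1
    # Every key in `used` appeared at least once, so no division by zero.
    return {theme: wins[theme] / used[theme] for theme in used}

def rotted(bids, floor=1 / 3):
    """Themes whose win rate fell below the floor -- candidates for re-testing."""
    return sorted(t for t, rate in theme_win_rates(bids).items() if rate < floor)
```

A flagged theme isn't dead; it's a theme whose shape is due for the swap test again before it goes on another bid.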
A chapter on theme laddering. The field guide treated themes as flat. In practice the best themes have a ladder: a headline verb (“reduces audit time by 40%”), a proof point (“our pre-fill pulls from live control data”), and a contrast (“unlike vendors who require the auditor to re-verify every control manually”). The contrast is the part most teams skip, and it’s the part that makes the theme readable to an evaluator who is comparing your response to two others on their desk. I’ve started calling this a three-tier theme. It’s the pattern I see working in the bids that are winning now.
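The three tiers are small enough to hold in a record type, which also makes the most common failure visible: a "theme" with an empty contrast slot is flat. A minimal sketch using the audit-time example above (the class and method names are hypothetical, not from any tool):

```python
from dataclasses import dataclass

@dataclass
class ThemeLadder:
    """Hypothetical three-tier theme: headline verb, proof point, contrast."""
    headline: str  # what the buyer gets, led by a verb
    proof: str     # the evidence that the headline is true
    contrast: str  # what the competing vendors on the desk can't say

    def is_complete(self) -> bool:
        # The contrast is the tier teams most often leave empty.
        return all([self.headline, self.proof, self.contrast])

    def render(self) -> str:
        # One evaluator-facing line, tiers in reading order.
        return f"{self.headline}. {self.proof}. {self.contrast}."
```

Used on the example from the text, `ThemeLadder("Reduces audit time by 40%", "our pre-fill pulls from live control data", "unlike vendors who require the auditor to re-verify every control manually")` is complete; `ThemeLadder("Excellent audit support", "", "")` is the placeholder the guide warns about.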
The one test that keeps earning its place
I’ll end with this because it’s the test I reach for most often. Read your theme to someone who has no context on the bid. Ask them, “who wrote this — the winning vendor, the incumbent, or the second-place vendor?” If they can’t tell, your theme isn’t discriminating. If they guess the incumbent, your theme is claiming status you haven’t earned. If they guess the winning vendor, you have a theme.
This is the swap test, restated for a different audience. The swap test is the one piece of the field guide I’d put on the first page if I rewrote it.