Field notes

A field guide to win themes that actually win

The canonical pillar on win themes. What they are, what they aren't, the swap-name test applied across six worked examples, and the discipline of constructing themes from capture and retiring themes that didn't earn their score bump.

Sarah Smith · 16 min read · Craft

Win themes are the most-talked-about and least-well-done part of proposal craft. Every kickoff has a slot for them. Every executive summary opens with them. Every debrief blames or credits them. And in most responses, they are slogans that say nothing — interchangeable phrases like “superior service,” “trusted partner,” and “innovative solutions” that survived a brainstorming session because the proposal manager ran out of time before pushing back.

PropLibrary’s swap-name test is a single sentence that exposes nine out of ten win themes I see in the wild: if you can swap your company name with another and the win theme still makes sense, the theme is too generic. It does not differentiate, it does not anchor proof, and it does no work for the response.

This post is the long version of the argument I have with proposal teams I work with. It covers what a win theme actually is, the swap-name test applied across six worked examples, the constructive process of building themes from capture instead of brainstorming, where themes have to live in the response, and the post-mortem discipline that retires themes that didn’t earn their score bump. The argument is opinionated. It is also, in my experience, what separates the responses that win from the responses that look fine and lose.

What a win theme actually is

The category confusion starts here. Win themes get conflated with three other things they are not.

A win theme is not a value proposition. A value prop is a market-level statement about who you serve and what they get. Win themes are bid-level statements about why this buyer should pick you over the named competitors on this shortlist. A value prop is a slide in the corporate deck. A win theme is a thread through a specific response. They live at different altitudes and they answer different questions.

A win theme is not a message pillar. Message pillars are the three-to-five-frame story your marketing team tells across channels. They are designed to repeat across audiences and stay consistent across time. Win themes are the opposite — they are bespoke to the bid. The same buyer next year, in a different RFP, may pull a different set of themes from your capture intelligence. A theme that worked in 2025 against incumbent A is not the theme that works in 2026 against incumbent B.

A win theme is not a slogan. This is the failure mode the swap-name test catches. “Trusted partner,” “innovative solutions,” “best-in-class capability” — these are slogans that read like themes because they sound confident. They are not themes because they make no claim that connects to the buyer’s evaluation criteria. An evaluator filters them as fluff before reading the next paragraph.

What a win theme is: a specific, proof-weighted thread that answers “why this vendor” in a way that connects to one of the buyer’s explicit or implicit evaluation criteria. It has three components, each load-bearing.

The first component is the buyer’s criterion — the explicit factor in the scoring rubric, or the implicit priority in the buyer’s strategic context, that the theme addresses. The second is the proof point — the specific, citable evidence that the vendor uniquely satisfies the criterion. The third is the structural thread — the way the theme runs through the executive summary, the technical volume, the management volume, and the cover letter, with each section reinforcing the same claim with different evidence.

A theme without a buyer criterion is a slogan. A theme without a proof point is marketing copy. A theme without a structural thread is one paragraph in an executive summary that nothing else in the response remembers.

The swap-name test, applied

Six examples, taken from real RFPs and anonymized. Four fail the swap-name test. Two pass it. I’ll walk through each with the failure or success reasoning explicit.

Example 1 — fail

Vendor X delivers superior cybersecurity outcomes through best-in-class threat intelligence and a customer-first approach.

Swap “Vendor X” for any of the three named competitors on the shortlist. The sentence is identical and equally plausible. There is no criterion (“superior outcomes” is not a buyer-defined factor), no proof point (“best-in-class” is the proof-point version of saying “trust me”), and no structural thread (nothing in this sentence will recur in a technical volume).

This is the failure mode of the kickoff brainstorm. Three people in a room agree this sounds good, the proposal manager writes it on a whiteboard, and it ends up as theme one. It does no work in the response.

Example 2 — fail

Vendor X is committed to delivering a seamless, end-to-end transformation that empowers your team to achieve operational excellence.

This is worse than Example 1. It is composed entirely of words the swap-name test catches, not because they are wrong but because they are weightless. “Seamless,” “transformation,” “empowers” — these are decorative verbs. Strip them and the sentence becomes “Vendor X is committed to operational excellence,” which is also a slogan.

The diagnostic question: what does this theme commit Vendor X to do that Vendor Y wouldn’t also commit to? Nothing. Both vendors will write a response that uses the same words. The evaluator reads neither.

Example 3 — fail

Vendor X has 25 years of experience delivering mission-critical IT services to government clients.

This one is more interesting because it tries to sound specific. It names a number (25 years), a domain (IT services), and a buyer category (government clients). The swap-name test still kills it: replace “Vendor X” with any of the three other large incumbents in this market and the sentence is true of all of them.

The failure here is that the proof point is not differentiating. Twenty-five years of government IT experience is the floor, not the ceiling, for vendors on this shortlist. The theme reads as a credential check, not as a reason to choose.

Example 4 — fail

Vendor X’s proven methodology ensures on-time, on-budget delivery for every engagement.

The failure mode here is the unexamined claim. “On-time, on-budget for every engagement” — for every engagement? In 25 years? The reader’s first instinct is doubt, and the theme has no proof point that resolves the doubt. The theme also fails the swap-name test cleanly: every vendor claims this, no vendor demonstrates it in a single sentence, and the evaluator reads it as boilerplate.

A theme that opens with a claim too strong to back up creates a credibility deficit the rest of the response has to spend effort recovering from. The first impression is “this is bid copy.” Hard to come back from that in the next 49 pages.

Example 5 — pass

Of the three vendors on this shortlist, only Vendor X has operated a 24/7 security operations center for a customer in your sector at your scale for the past five years — and our retention rate on customers in that sector is 100%.

The swap test: replace “Vendor X” with any of the other named vendors, and the claim becomes either false or a claim that vendor would have to defend with their own proof. The theme is anchored in three specifics: the shortlist (the competitive frame is acknowledged), the sector-and-scale match (the buyer criterion this addresses is sector-specific operational experience at the buyer’s volume), and the retention number (a verifiable proof point).

This theme has work to do across the response. The technical volume gets to detail the SOC’s operational maturity. The management volume gets to detail the customers the SOC has served. The past-performance volume gets to enumerate references in the buyer’s sector. The cover letter compresses the theme into one sentence the executive sponsor can repeat in the down-select meeting. Every section reinforces. That’s the structural thread.

Example 6 — pass

Vendor X is the only vendor on this shortlist that built its compliance evidence directly into its product, which means our customers pass their FedRAMP audits 40% faster than the industry average — verified by our last three customer audit timelines.

This theme is more pointed than Example 5. It names a specific feature (“compliance evidence built into the product”), a specific outcome (“audits 40% faster”), and a specific verification path (“our last three customer audit timelines”). Swap in any other vendor on the shortlist and the claim becomes false, because no other vendor has the feature. That is exactly what passing the swap-name test looks like.

The risk in this theme is the 40% number. If the proposal manager has not verified that number against the actual customer timelines, the theme is a credibility liability instead of an asset. This is the discipline themes require — a number in a theme is a contractual claim, and an evaluator who finds it inflated will discount everything else in the response.

The diagnostic on Examples 5 and 6 versus 1–4 is the same diagnostic the eight-stage RFP pipeline post names for the response overall: every claim in a winning theme is traceable to a source a reviewer can verify. Themes that pass the swap-name test pass it because their proof points are specific enough to be falsifiable. Themes that fail are not falsifiable because they do not actually claim anything.

Constructing win themes from capture

The single biggest reason most win themes are slogans is that they are produced in a kickoff brainstorm, not derived from capture. A brainstorm asks the team, “what should we say about ourselves?” The answer to that question is always a slogan. Capture asks a different question: “what does this buyer reward, and what about us is uniquely strong on that dimension?”

The constructive process I use with teams I work with has four steps. The order matters; teams that brainstorm first and then back-fill capture end up with the same slogans dressed up.

Step one: list the buyer’s evaluation criteria, weighted. Most RFPs publish their scoring rubric, in whole or in part. Where they don’t, capture intelligence fills the gap — the procurement officer’s stated priorities in pre-bid Q&A, the strategic-initiative language in the buyer’s parent organization’s annual report, the operational pain points named in the SOW. The output of this step is a list, ranked by weight: technical capability — 35%, past performance — 25%, management approach — 20%, price — 20%, with sector-specific operational experience as a stated tiebreaker.

Step two: for each weighted criterion, list your top three proof points. A proof point is a specific, citable artifact. Past customers in the relevant sector. A measurable performance number from a prior engagement. A certification or accreditation other vendors don’t hold. A named team member with sector-specific experience. The output is a 3x3 or 4x3 grid: for each criterion, three potential proof points.

Step three: assess each proof point for differentiation. This is where the swap-name test gets applied early, before any theme is drafted. For each proof point, ask: “would any other vendor on the shortlist also have this?” If yes, the proof point is a credential, not a differentiator. Demote it. If no, the proof point is a candidate for a win theme.

Step four: select three to five themes from the surviving differentiated proof points. The themes are the proof points that survived step three, framed as one-sentence threads that connect a buyer criterion to your specific evidence. You will rarely have more than five. You should rarely have fewer than three. Three to five is the band where each theme can be reinforced through every major section without crowding the response.

The output of capture-derived theme construction looks like a table:

Buyer criterion | Differentiated proof | Theme
Sector-specific operational experience (tiebreaker) | 5-year, 100% retention SOC engagement in buyer’s sector | Theme 1 (Example 5 above)
Technical capability — 35% (compliance subsection) | Audit-evidence integration; 40% faster audits per last three customers | Theme 2 (Example 6 above)
Management approach — 20% | Named program manager with 12 years on adjacent buyer engagements | Theme 3 (a candidate)

This is mechanical work. It is also work that the brainstorm will not do for you.
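Because the work is mechanical, the four steps can be sketched in a few lines. Every value below is illustrative (the criteria and proof points loosely echo the table above); the point is the order of operations — weight, enumerate, filter by differentiation, then cap:

```python
# Step 1: buyer criteria, ranked by published or inferred weight.
criteria = [
    ("technical capability", 0.35),
    ("past performance", 0.25),
    ("management approach", 0.20),
    ("price", 0.20),
]

# Step 2: candidate proof points per criterion. The boolean answers step
# three's question: "would any other vendor on the shortlist also have this?"
proof_points = {
    "technical capability": [
        ("audit evidence built into the product", False),  # unique -> differentiator
        ("ISO 27001 certification", True),                 # shared -> credential, demote
    ],
    "past performance": [
        ("100% retention, 5-year SOC in buyer's sector", False),
        ("25 years of government IT", True),               # the floor, not the ceiling
    ],
}

# Step 3: keep only the differentiated proof points, in criterion-weight order.
candidates = [
    (criterion, proof)
    for criterion, _weight in criteria
    for proof, shared_by_competitors in proof_points.get(criterion, [])
    if not shared_by_competitors
]

# Step 4: cap at the three-to-five band so each theme can be reinforced
# through every major section without crowding the response.
themes = candidates[:5]
```

With the sample data, two proof points survive the filter — which is the point of step three: most of what a brainstorm produces gets demoted to credential before a theme is ever drafted.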

The capture-derived process is what the APMP Body of Knowledge describes when it talks about strategy as the precursor to themes — themes are not the strategy, they are the expression of the strategy in the response. Without strategy, there is nothing to express. Lohfeld Consulting makes the same point: proposal teams that spend more time chasing SMEs than building strategy produce responses that have neither — and the win themes are the most visible casualty.

Where win themes live in the response

A theme drafted but not threaded is the most common failure mode I see. The team produces three crisp themes in capture, writes them into the executive summary, and then does not reinforce them anywhere else in the response. The evaluator reads the executive summary, registers the themes, and then reads the technical volume — which makes no reference to them — and concludes the executive summary was marketing.

Themes have to live in four places. Each placement does different work.

The executive summary — framed. This is the place themes get named. The executive summary opens with two or three sentences that establish the bid context, then states the themes plainly, with the proof point compressed into the same sentence. Not as a numbered list — as a paragraph. The evaluator sees the themes here for the first time and forms the mental frame they will read the rest of the response with.

Each major section — reinforced. Every major section of the response (technical, management, past performance, sometimes pricing narrative) opens with a one-sentence reference back to the theme it is most strongly tied to. Not the slogan version of the theme, the application of the theme to that section’s evidence. The technical volume does not say “Vendor X delivers superior outcomes”; it says “Vendor X’s SOC, the only one in this competition with five years of operational continuity in your sector, is staffed at the following levels…” and then describes the staffing. The theme is the rhetorical hook; the section is the proof.

The cover letter — distilled. The cover letter is signed by an executive sponsor and read by a contracting officer or evaluation lead before the rest of the response. It is short — usually one page. It distills the themes into one or two sentences each, in language the executive sponsor would actually use in a phone call with the buyer’s decision-maker. The cover letter is the version of the themes that will get repeated in the down-select meeting after the evaluators have read everything.

The Q&A or pre-bid response — confirmed. When the buyer issues pre-bid questions or amendments, the responses to those questions are an opportunity to plant the same themes early, in the buyer’s record, before the formal response is even submitted. Most teams treat pre-bid Q&A as a transactional exchange. It is a placement opportunity for the same themes that will appear in the formal response three weeks later.

The anti-pattern is themes-as-slogan: a numbered list at the top of the executive summary, repeated nowhere else, treated as a checkbox. Evaluators have read 50 of those this fiscal year. They are not the differentiator the team thinks they are.

Retiring themes that didn’t work

This is the discipline most teams skip. Themes that worked in a winning bid get reused in the next bid without re-derivation. Themes that didn’t work get reused too, because nobody runs the post-mortem that would retire them.

The post-mortem question is mechanical: did this theme earn a score bump or didn’t it? The data sources are the buyer’s debrief (where available), the scoring sheet (where shared), the buyer’s down-select rationale (sometimes documented, sometimes inferred from feedback), and the team’s own honest read of the response.

A theme that earned a score bump shows up in the buyer’s feedback. The contracting officer mentions the proof point. The down-select rationale references the differentiator. The technical evaluator’s notes (when shared) cite the theme by name. That theme is durable. Promote it — refine the language, refresh the proof point, and reuse on the next bid where the buyer criterion still applies.

A theme that did not earn a score bump shows up either as silence in the feedback or as explicit critique. The buyer says “we didn’t see meaningful differentiation on operational experience.” The down-select rationale picks the competitor on a dimension this theme was supposed to address. The theme failed. Retire it — flag it in the knowledge base, mark the proof point as not load-bearing for this kind of bid, and force the next capture cycle to derive a new theme rather than reach for the failed one.

The discipline takes 20 minutes per bid. Most teams will not do it because the next bid is already in flight. The teams that do it produce a theme inventory that gets sharper every quarter — themes that have proven track records on specific buyer types, themes that have failed quietly and been removed, themes that are candidates and need a real test bid before they get promoted. The inventory is a knowledge asset. The teams that don’t keep one start every bid from zero.
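The promote/retire rule described above is simple enough to state as code. A minimal sketch, with illustrative theme names and fields — note that silence in the debrief counts as a retire verdict, per the paragraph above:

```python
def post_mortem(mentioned_in_debrief: bool, explicitly_critiqued: bool) -> str:
    """Promote a theme the buyer's feedback cites; retire one met with
    silence or explicit critique. Silence is also a verdict."""
    if explicitly_critiqued or not mentioned_in_debrief:
        return "retire"
    return "promote"

# Illustrative inventory entries after one bid's 20-minute post-mortem.
inventory = {
    "sector SOC retention": post_mortem(mentioned_in_debrief=True, explicitly_critiqued=False),
    "operational experience": post_mortem(mentioned_in_debrief=False, explicitly_critiqued=True),
}
# inventory -> {"sector SOC retention": "promote", "operational experience": "retire"}
```

The inventory dict is the seed of the knowledge asset the paragraph describes: run it after every bid and it accumulates the promoted, retired, and candidate themes the next capture cycle starts from.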

This is the same compounding argument the eight-stage RFP pipeline post makes about the response as a whole. Without a post-mortem that writes intelligence back, every bid starts at zero. With one, every bid starts where the last bid left off. Win themes are the most visible expression of that compounding, because themes are what evaluators read and remember.

A note on theme count and length

Three to five themes per bid. Fewer than three is a thin response — the answer to “what’s distinctive about this vendor” is not strong enough to carry the weight of a 50-page response. More than five is dilution — no theme gets reinforced enough to register, and the evaluator’s mental frame goes unsharpened.

Each theme should fit in one sentence in the executive summary. If it takes two sentences, the proof point is too compound and the evaluator will not parse it. The Shipley Proposal Guide makes this point about most proposal copy: every sentence should do work, and a sentence that takes a clause to set up the proof point is a sentence that should be split into the theme (one sentence) and the evidence (the next paragraph, or the next section).

Closing — the discipline, distilled

A win theme is a specific, proof-weighted thread that connects a buyer’s criterion to your evidence and runs through every major section of the response. It is not a slogan, not a value prop, not a message pillar.

Build themes from capture, not from brainstorm. Apply the swap-name test before you draft. Place themes in the executive summary, every section, the cover letter, and the pre-bid Q&A. Retire themes that didn’t earn their score bump.

The teams that do this win bids the teams that don’t can’t explain losing. The discipline is not exotic and the techniques are not new — Shipley and APMP have documented all of this in some form for decades. What’s new is how often it gets ignored in the rush to start drafting, and how visible the consequences are once you start running the swap-name test on themes you already have on the page.


Sources

  1. PropLibrary — Proposal win themes: the good, the bad, and six examples
  2. Shipley Proposal Guide (7th ed.), Shipley Associates
  3. APMP Body of Knowledge — Strategy & Win Themes
  4. Lohfeld Consulting — How to fix the proposal processes holding you back

See grounded retrieval in the product.

Start a trial workspace and watch PursuitAgent draft cited answers from the documents you provide.