Win-loss intelligence, Part 1 of 5: what to capture and when
The 18 fields we capture on every bid, why each one matters, and why most teams skip twelve of them. Part 1 of a five-part series on running win-loss as a habit, not a quarterly ritual.
Every proposal team I’ve worked with collects some version of win-loss data. Most of them collect about six useful fields. We capture eighteen. The gap between six and eighteen is why some teams compound and most teams don’t.
This is Part 1 of five. The series walks through what to capture, how to run the debrief, the anti-patterns that kill the practice, how findings become KB edits, and the eighteen-month view after living with the process. If you only read one part, make it this one — the capture is where everything else fails or doesn’t.
The eighteen fields
I’ll list them first, then explain the ones that get skipped.
At intake (5):
1. Bid/no-bid score + one-paragraph rationale
2. Named evaluation panel, if known
3. Incumbent and known renewal posture
4. Buyer’s strategic initiative this RFP supports
5. Known disqualifiers

During capture (4):

6. The three win themes, in plain language
7. The three probable counter-themes the competition will run
8. The specific past-performance references we plan to cite
9. The SMEs required, with the specific sections they’ll own

During draft (3):

10. KB blocks used per section, with version hashes
11. Blocks edited in-response (not just reused)
12. New blocks created for this bid

During review (3):

13. Red-team comments not acted on, with written rationale
14. Gold-team comments not acted on, with written rationale
15. Compliance-matrix rows answered by inference rather than direct evidence

At submit and after (3):

16. Delta between “draft final” and “submit final”
17. Outcome source (buyer debrief, email, internal inference, public record)
18. Stated reason from buyer, verbatim where possible
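For teams that keep this in a tracker rather than in prose, here is a minimal sketch of what one capture record could look like. The `BidRecord` shape and every field name below are my own illustration of the eighteen fields, not a standard or any existing tool’s schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BidRecord:
    """One bid's capture record. Comments map each group to the fields above."""
    # At intake (fields 1-5)
    bid_no_bid_score: float
    bid_rationale: str                    # the one-paragraph rationale
    evaluation_panel: list[str]           # named evaluators, if known
    incumbent_posture: str | None         # e.g. "renewing happily", "vulnerable"
    buyer_initiative: str
    disqualifiers: list[str]

    # During capture (fields 6-9)
    win_themes: list[str]                 # three, in plain language
    counter_themes: list[str]             # the competition's probable arguments
    past_performance_refs: list[str]
    smes: dict[str, str]                  # SME name -> section they own

    # During draft (fields 10-12)
    kb_blocks_used: dict[str, str]        # section -> block id @ version hash
    blocks_edited_in_response: list[str]
    blocks_created: list[str]

    # During review (fields 13-15)
    red_team_deferred: list[dict]         # comment + written rationale
    gold_team_deferred: list[dict]
    inferred_compliance_rows: list[str]   # rows argued rather than evidenced

    # At submit and after (fields 16-18)
    draft_final_at: datetime
    submitted_at: datetime
    outcome_source: str                   # "buyer debrief" | "email" | "inference" | "public record"
    buyer_stated_reason: str | None       # verbatim where possible
```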
The twelve most teams skip
In my experience, teams reliably capture fields 1, 6, 8, 10, 17, and 18. Those are the fields that naturally land in the proposal file or the CRM. The other twelve — the ones that matter most — require a deliberate habit.
Field 2 — the named evaluation panel
Most teams know “there’s an evaluation committee.” Few teams know the three to five people on it, their titles, their history. This field is the single highest-leverage piece of intelligence in a bid. The reason it gets skipped is that capturing it feels like “sales work” that the proposal team shouldn’t do. It’s capture work, and the proposal function owns capture in any team I’d want to join.
Field 3 — incumbent posture
If the incumbent is happy, the RFP is a compliance exercise and you’re probably bidding for second place. If the incumbent is vulnerable, the RFP is a real competition. The posture is discoverable — industry events, mutual customers, public filings, the incumbent’s layoffs or reorganizations. Most teams don’t write it down because they don’t know it. Not knowing it is the problem.
Field 7 — counter-themes
This is the field teams skip because it feels defeatist. “What will the competition claim against us?” Every proposal is a debate; writing down the other side’s argument before the draft starts is table stakes in every other debate format. In proposal work, most teams write their own themes and never name the competition’s.
Field 11 — blocks edited in-response
This one is boring and it matters more than most. When a writer edits a KB block during a bid — changing a number, softening a commitment, refreshing a statistic — that edit either belongs back in the KB or it doesn’t. If it belongs back, the next bid benefits. If it doesn’t belong back, it’s a signal the block was wrong before. Either way, the in-response edit is a flag that a block needs attention. Teams that don’t track these find the same block getting edited by three different writers across three different bids, none of them updating the source.
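The flag doesn’t need a human to raise it. Here’s a sketch of one way to detect in-response edits automatically, assuming blocks are stored with their text at the version used (the function names are hypothetical):

```python
import hashlib

def content_hash(text: str) -> str:
    """Stable short hash of a block's text, so edits are detectable."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

def flag_in_response_edits(kb_blocks: dict[str, str],
                           bid_blocks: dict[str, str]) -> list[str]:
    """Return block ids whose in-bid text no longer matches the KB source.

    kb_blocks:  block id -> canonical KB text at the version used
    bid_blocks: block id -> text as it appeared in the submitted draft
    """
    return [
        block_id
        for block_id, bid_text in bid_blocks.items()
        if block_id in kb_blocks
        and content_hash(bid_text) != content_hash(kb_blocks[block_id])
    ]
```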
Fields 13 and 14 — comments not acted on
This is the most underrated pair in the list. Color-team reviewers flag problems. Proposal managers triage which ones get fixed. Shipley’s guide to color-team reviews notes how reviews slow teams down when there’s no discipline about which comments get folded into the next draft. The deferred comments are the learning signal. A comment that was deferred because “there isn’t time” and then showed up in the debrief as the reason you lost is a specific, fixable process failure. Without writing down the deferral, nobody connects the dots.
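Connecting those dots can be mechanical once the deferrals are written down. A naive sketch, assuming each deferred comment carries a topic tag (the record shape and the keyword match are my own illustration):

```python
from dataclasses import dataclass

@dataclass
class DeferredComment:
    review: str       # "red" or "gold"
    comment: str
    rationale: str    # the written reason it was not acted on
    topic: str        # e.g. "pricing", "past performance", "compliance"

def deferrals_matching_loss(deferred: list[DeferredComment],
                            stated_loss_reason: str) -> list[DeferredComment]:
    """Surface deferred comments whose topic appears in the buyer's stated
    reason for the loss (field 18). A keyword match is crude, but it is
    enough to put the right deferrals on the debrief agenda."""
    reason = stated_loss_reason.lower()
    return [d for d in deferred if d.topic.lower() in reason]
```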
Field 15 — inferred compliance
A compliance matrix can be answered two ways: by pointing at a direct piece of evidence (a signed SOC report, a named customer case study), or by inference (a paragraph of prose that argues the case). Inferred rows are where bids get unwound in debrief. Teams that capture which rows were answered by inference know exactly where to shore up the KB before the next bid in the same vertical.
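In practice this is one extra column on the compliance matrix. A sketch, assuming each row records how it was answered (the `answered_by` values are illustrative):

```python
def inferred_rows(matrix: list[dict]) -> list[str]:
    """Return requirement ids answered by argument rather than direct evidence.

    Each row is expected to carry an 'answered_by' value of either
    'evidence' (a signed SOC report, a named case study) or 'inference'
    (a paragraph of prose arguing the case).
    """
    return [row["id"] for row in matrix if row["answered_by"] == "inference"]

matrix = [
    {"id": "C-14", "answered_by": "evidence"},
    {"id": "C-22", "answered_by": "inference"},
]
print(inferred_rows(matrix))  # ['C-22']
```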
Field 16 — the draft-to-submit delta
Short delta, bad bid. Long delta, healthy bid. This is a leading indicator, not a lagging one. A bid where “draft final” happened two hours before submit is a bid that skipped half the review. I’ve never seen a team that tracked this field and didn’t change its submission calendar within a quarter.
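The field itself is a single subtraction. A sketch, assuming both timestamps are captured (the eight-hour threshold is illustrative, not a benchmark from our process):

```python
from datetime import datetime, timedelta

def draft_to_submit_delta(draft_final_at: datetime,
                          submitted_at: datetime) -> timedelta:
    """Field 16: elapsed time between 'draft final' and 'submit final'."""
    return submitted_at - draft_final_at

# Illustrative check, not a benchmark: flag bids that left less than a
# working day between the final draft and submission.
delta = draft_to_submit_delta(
    datetime(2024, 3, 4, 16, 0),   # draft declared final
    datetime(2024, 3, 4, 18, 0),   # submitted two hours later
)
if delta < timedelta(hours=8):
    print(f"short delta ({delta}): the review window was likely squeezed")
```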
When to capture each field
Capture happens in five passes, not one. Fields 1-5 at intake, by the capture lead. Fields 6-9 after the bid/no-bid, by the proposal manager with the capture lead. Fields 10-12 by writers as they draft, ideally in the product that surfaces the block and its version. Fields 13-15 by the proposal manager during and after each color-team review. Fields 16-18 by the proposal manager at submit and after the outcome lands.
Nothing in the list is hard to capture. What’s hard is capturing it without treating each field as a separate project. The way we solved that is by making every field a side-effect of the work already happening — the writer picking a block is field 10; the reviewer flagging a comment is field 13; the proposal manager clicking submit is field 16. If the capture is a separate task, it doesn’t happen. If it’s a byproduct, it does.
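Here’s a sketch of that byproduct pattern, assuming the drafting tool can emit workflow events (the event names and record shape are hypothetical):

```python
from datetime import datetime

# Capture as a byproduct: each workflow event writes its field into the bid
# record with no extra task for the person doing the work.
record: dict = {"kb_blocks_used": {}, "deferred_comments": [], "submitted_at": None}

def on_block_inserted(section: str, block_id: str, version_hash: str) -> None:
    """Writer picks a KB block -> field 10 lands automatically."""
    record["kb_blocks_used"][section] = f"{block_id}@{version_hash}"

def on_comment_deferred(review: str, comment: str, rationale: str) -> None:
    """Proposal manager triages a comment out -> fields 13/14 land."""
    record["deferred_comments"].append(
        {"review": review, "comment": comment, "rationale": rationale}
    )

def on_submit(submitted_at: datetime) -> None:
    """Submit click stamps the endpoint of field 16's delta."""
    record["submitted_at"] = submitted_at
```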
What this enables
With all eighteen, a debrief is a 30-minute review of structured data. Without them, a debrief is two hours of memory. Part 2 — next Wednesday — walks through the 30-minute format. Part 3 covers the anti-patterns that even good teams fall into. Part 4 is how findings become KB edits. Part 5 closes out with what we’d change after running this for ourselves for eighteen months.
The one takeaway: the difference between a team that learns from bids and a team that doesn’t isn’t the debrief. It’s what the debrief has to work with. Six fields gets you stories. Eighteen gets you evidence.