Field notes

Win-loss, Part 5 of 5: the eighteen-month view

Eighteen months of running a win-loss program for ourselves. What we'd change if we started over today, what we wouldn't, and the one investment that compounded more than we expected.

Sarah Smith · 5 min read · Research

The earlier parts of this series covered capture (part 1), ritual (part 2), anti-patterns (part 3), and the KB feedback loop (part 4). This final part takes the 18-month view: what the program looks like after you’ve actually run it for yourself for more than a year.

The bids that got me to this view are a mix of my own consulting engagements and the internal proposal work we’ve done at PursuitAgent since shipping the win-loss dashboard. I’ll flag where the bias shows up.

Composite essay, see footer.

What I’d change if I started over

Start the gold set smaller and more segmented. Part 1 argued for 18 fields. That’s still right as a target. If I were starting over, I’d ship with 10 fields and aggressive segmentation. The 8 fields I’d leave out of v1 were the ones that seemed obvious and turned out to be hard to populate reliably — things like “buyer-side decision velocity” and “procurement-office prior experience with our category.” Both matter. Both are also hard to fill in unless you have someone close to the buyer. Starting smaller and growing is cheaper than starting too large and having half the fields be blank.
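The post doesn’t enumerate the 10 fields, so the names below are illustrative assumptions, not the actual gold set. As a sketch of what a smaller, more segmentable v1 record might look like:

```python
from dataclasses import dataclass, field

@dataclass
class DebriefRecord:
    """Hypothetical v1 gold-set record: ~10 fields you can
    reliably populate, leaving buyer-side fields for later."""
    bid_id: str
    outcome: str                # "win" or "loss"
    loss_reason: str            # empty string for wins
    segment: str                # enables the aggressive segmentation
    deal_size_band: str
    submitted_on: str           # ISO date
    debrief_date: str           # ISO date
    dri: str                    # the single owner who ran the debrief
    kb_blocks_used: list = field(default_factory=list)
    notes: str = ""
```

The point of keeping it this small is that every field can be filled in by the proposal team alone, with no one close to the buyer required.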

Put the KB feedback loop on day one, not month four. We shipped debrief capture before we shipped block-level edit flags. That order was wrong. For four months, we had a growing pile of debrief findings and no mechanism to route them into the KB. When we finally built the mechanism, we had to catch up on a backlog that didn’t need to exist. The loop (part 4) should exist before the debriefs start.

Don’t try to run the program without a DRI. Part 2 named this and I’ll say it again: a win-loss program without a single owner becomes a program nobody runs. In the first six months we tried to distribute the ownership across the proposal team. It worked less well than a single owner would have. We fixed it in month seven by naming one person. Everything improved.

What I wouldn’t change

The 30-minute debrief format. Four questions, one DRI, 30 minutes. Shorter and the debrief produces nothing durable. Longer and the team stops running it. 30 minutes has held up across segments, team sizes, and team moods.

Debrief every outcome, win or loss. I pushed back on this internally for a few months — the team was skeptical of spending time on wins. The win debriefs turned out to be the higher-value sessions. Part 3 has the long version.

The 45-day quarterly review. Part 4’s “anything more than 45 days old gets resolved or explicitly de-prioritized” rule has been the single most useful backlog discipline in the program. It’s not glamorous. It’s the thing that prevents findings from piling into a stack no one reads.
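The 45-day rule is mechanical enough to express directly. A minimal sketch, assuming findings are plain records with an open date (the field names are mine, not the program’s):

```python
from datetime import date

STALE_AFTER_DAYS = 45  # part 4's rule: older than this gets resolved or de-prioritized

def triage(findings, today):
    """Split open findings into 'fresh' and 'must-decide' piles.

    Each finding is a dict with an 'opened_on' date. Anything older
    than 45 days goes into the pile the quarterly review must clear.
    """
    fresh, must_decide = [], []
    for f in findings:
        age_days = (today - f["opened_on"]).days
        (must_decide if age_days > STALE_AFTER_DAYS else fresh).append(f)
    return fresh, must_decide
```

Running this weekly keeps the "must-decide" pile visible instead of letting it accumulate into the stack no one reads.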

The investment that compounded more than expected

Linking debriefs to the specific KB blocks they implicated.

This was part 4’s core mechanic. When we built it, we thought of it as the way findings turn into KB edits. What we didn’t fully predict is how much it would compound across bids.

The specific effect: after 18 months, a proposal manager preparing for a new bid can query the KB for “blocks that have been edited from debriefs in the last 12 months, filtered to the blocks this bid will probably use.” What they see is a list of recent lessons, attached to the specific content that’s about to ship in their bid, with the original debrief context one click away. That’s a compounding asset. It gets more useful every quarter, not less.
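That query is simple to sketch if you assume the KB keeps a log of debrief-driven edits keyed by block ID (the record shape here is an assumption for illustration):

```python
from datetime import date, timedelta

def recent_lessons(edit_log, bid_block_ids, today, window_days=365):
    """Return debrief-driven KB edits from the last `window_days`
    that touch blocks this bid will probably reuse.

    Each edit is a dict: {'block_id', 'edited_on', 'debrief_id'} —
    the debrief_id is the one-click link back to the original context.
    """
    cutoff = today - timedelta(days=window_days)
    return [
        e for e in edit_log
        if e["block_id"] in bid_block_ids and e["edited_on"] >= cutoff
    ]
```

Without the block-level link there is no `block_id` to join on, which is exactly why the counterfactual below degrades to a log of findings that are hard to re-find.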

The counterfactual I think about: what if we had built the debrief system without the block-level linking? We’d have a log of findings. The findings would be hard to re-find when they mattered. The same lessons would get re-learned. The team would still be running the ritual, and the ritual would be delivering a fraction of the value. This is a version of the failure mode part 3 named as “lessons logged, never applied.”

The honest part

Running this for 18 months moved our own internal win rate. Some of that was the win-loss program. Some of it was better capture discipline, which we worked on in parallel. Some of it was sales qualification improving. I cannot cleanly attribute the win-rate delta to any single input, and I would be suspicious of anyone who said they could.

What I can attribute: the quality of our proposals got more consistent. Reviewer feedback on drafts got more targeted because the drafts were better on the common failure modes the debriefs had surfaced. SME time on bids went down because the KB blocks they drafted six months ago now reflected the feedback from four intervening bids. The compounding showed up in the inputs. The output metric moved; attributing its movement to this one program is harder.

If I had to give one number I’m confident about: our rate of losing-for-the-same-reason-twice dropped substantially. I don’t have a clean measurement for it, and I’m not going to put a percentage next to a thing I can’t cite. The pattern is visible in the debrief log if you read a year of them end to end.

Where to start

If you’re reading the series backwards and wondering where to begin:

  1. Name a DRI. Today.
  2. Pick 10 fields. Smaller than part 1’s 18; big enough to capture outcome, loss reason, and the block assignments.
  3. Schedule the first debrief for the next bid you close, win or lose. 30 minutes, one person running it.
  4. Build the feedback loop before your third debrief. Block-level flags, two-week edit deadlines, weekly backlog review.
  5. Revisit at the 45-day mark. Revisit at 90. Revisit at the anniversary.
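Step 4’s block-level flag with a two-week edit deadline can be sketched as a small record; the shape is assumed, since the post doesn’t specify one:

```python
from datetime import date, timedelta

EDIT_DEADLINE_DAYS = 14  # the two-week edit deadline from step 4

def open_flag(block_id, debrief_id, raised_on):
    """Open a block-level edit flag linking a KB block to the
    debrief that implicated it, with the deadline precomputed
    so the weekly backlog review can sort by 'due_by'."""
    return {
        "block_id": block_id,
        "debrief_id": debrief_id,
        "raised_on": raised_on,
        "due_by": raised_on + timedelta(days=EDIT_DEADLINE_DAYS),
        "status": "open",
    }
```

The only non-negotiable parts are the two IDs: the flag is what makes a finding re-findable from the block it touches.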

This is the last part of the series. It has been good to write. The next series on my list is about SME wrangling — less structured, more opinionated, about what actually gets SME time on a bid that’s three days from submit. That one’s coming in March.


Sources

  1. PursuitAgent — Win-loss intelligence starts on day one
  2. PursuitAgent — Part 1: what to capture and when
  3. PursuitAgent — Part 2: debrief rituals that actually run
  4. PursuitAgent — Part 3: the five anti-patterns
  5. PursuitAgent — Part 4: closing the KB feedback loop