Field notes

Preview: what compounding means for proposal software

The thesis in 900 words. Every RFP you win makes the next one easier — what that actually requires, and why most proposal software is structurally incapable of it.

Bo Bergstrom · 5 min read

Later this month I’m publishing the long version of our product thesis. The full piece lands in 10 days and runs closer to 3,000 words; this 900-word preview is the condensed version.

The claim is simple: every RFP you win should make the next one easier. Not as marketing copy. As a measurable product contract. The edge of a proposal function is not how fast it drafts any individual response. It is how much of what it learns from each response is carried forward to the next.

Most proposal software cannot do this. The reason isn’t effort; it’s shape.

What compounding requires

A system compounds when its outputs feed its inputs. In proposal terms:

  1. Every drafted answer has to be traceable to a source block in a knowledge base.
  2. Every reviewer edit on that answer has to write back — specifically — to the source block.
  3. Every win and every loss has to produce a post-mortem that updates the same source blocks, with reviewer-provided labels on what worked and what didn’t.
  4. The next drafted answer has to draw from the post-edited, post-labeled blocks, not the originals.

Any system that breaks this loop — at any of the four steps — does not compound. It accumulates. Accumulation and compounding look similar after two quarters. They diverge violently after two years.
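The four steps can be sketched as a minimal data model in which outputs feed inputs. This is an illustrative sketch, not PursuitAgent's actual architecture; every name in it (`KnowledgeBlock`, `draft_answer`, and so on) is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBlock:
    """A single answer block in the knowledge base."""
    block_id: str
    text: str
    labels: list = field(default_factory=list)  # post-mortem labels accumulate here

kb = {"security-faq-01": KnowledgeBlock("security-faq-01", "We encrypt data at rest.")}

# Step 1: a drafted answer stays traceable to its source block.
def draft_answer(block_id):
    return {"text": kb[block_id].text, "source": block_id}

# Step 2: a reviewer edit writes back to the source block, not just the draft.
def apply_edit(draft, new_text):
    draft["text"] = new_text
    kb[draft["source"]].text = new_text  # edit-back is what closes the loop

# Step 3: the post-mortem attaches outcome labels to the same block.
def post_mortem(draft, won, note):
    kb[draft["source"]].labels.append({"won": won, "note": note})

# Step 4: the next draft draws from the post-edited, post-labeled block.
d1 = draft_answer("security-faq-01")
apply_edit(d1, "We encrypt data at rest with AES-256.")
post_mortem(d1, won=True, note="Naming the cipher reassured the buyer.")
d2 = draft_answer("security-faq-01")
assert d2["text"] == "We encrypt data at rest with AES-256."
```

Break the write-back in step 2 or the label in step 3 and the sketch degenerates into exactly the accumulating system described above: `d2` starts from the same stale text `d1` did.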

Where the incumbent systems break

The dominant proposal tools on the market today are, functionally, content libraries with search on top. Loopio, Responsive, QorusDocs, Qvidian, and the rest are variations on the same shape: a library of answer blocks, a search interface, a response editor, and a loose coupling between the library and the editor. Customer reviews on G2 and Capterra have been consistent for years: the library rots. The AI on top rots with it. The expensive tool becomes, in one G2 reviewer’s phrase, “an overpriced document repository.”

The reason the library rots is not customer negligence. It is that the tools do not structurally require the loop to close. You can draft an answer, edit it, ship it, win or lose, and never update the source block. The post-mortem — when it happens at all, which Leulu’s research on the proposal retrospective bluntly calls “rare” — produces a Word document that lives in a shared drive. The next bid starts from the same source block the last bid started from, now six months more stale.

The shape that would compound

A proposal system that compounds has to carry four properties the incumbents don’t:

Per-sentence provenance. Every sentence of every drafted answer points to a source block. Not the answer as a whole. The sentence. The reviewer knows which block sourced which claim. So does the system.
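A sketch of what sentence-level provenance implies for the data shape: a drafted answer is a list of (sentence, source block) pairs rather than an opaque string. The block IDs here are made up for illustration.

```python
# An answer carrying per-sentence provenance: each claim names its block.
answer = [
    ("We encrypt data at rest.", "security-faq-01"),
    ("Backups are retained for 30 days.", "backup-policy-07"),
]

def blocks_behind(answer):
    """Every KB block that contributed at least one sentence."""
    return {block_id for _, block_id in answer}

assert blocks_behind(answer) == {"security-faq-01", "backup-policy-07"}
```

With answer-level provenance only, the second line of the structure above collapses to a single ID and the system can no longer say which block sourced which claim.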

Edit-back as a default. When a reviewer edits a sentence, the edit has an option — in the UI, not buried in a workflow — to write back to the source block. The friction on edit-back has to be lower than the friction on editing locally. Most tools reverse this default.

Post-mortem as a data object. A post-mortem is not a document. It is a structured set of block-level assertions: “this block worked in this context,” “this block was the losing section, here is why.” Those assertions attach to blocks in the KB. The block’s usage telemetry carries them.
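One plausible shape for those assertions, sketched as a schema. The field names are assumptions, not a real product's data model; the point is that each assertion carries a block ID it can be joined on.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PostMortemAssertion:
    """One block-level assertion from a win/loss review."""
    block_id: str  # which KB block the assertion attaches to
    outcome: str   # "won" or "lost"
    context: str   # buyer type, document type, etc.
    note: str      # the reviewer's reason, in their own words

pm = [
    PostMortemAssertion("security-faq-01", "won", "enterprise DDQ",
                        "Named the cipher; reviewer flagged it as decisive."),
    PostMortemAssertion("pricing-tier-03", "lost", "enterprise DDQ",
                        "Losing section: tiers read as evasive."),
]

# Because assertions carry block IDs, they can be queried and joined
# against usage telemetry — unlike a narrative in a shared drive.
losing_blocks = {a.block_id for a in pm if a.outcome == "lost"}
assert losing_blocks == {"pricing-tier-03"}
```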

Next-draft retrieval that uses the history. The retrieval system that feeds the next draft weights blocks by their post-mortem outcomes, not only by semantic similarity to the question. A block that won three bids on this buyer type retrieves ahead of a block that has never been used. A block that lost a specific DDQ retrieves with a warning.
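A toy version of outcome-weighted retrieval, under the assumption of a simple linear nudge on top of semantic similarity. The weight and the formula are illustrative, not a real ranking function.

```python
def retrieval_score(similarity: float, wins: int, losses: int, w: float = 0.1) -> float:
    """Semantic similarity nudged by win/loss history: each past win
    pushes the block up, each loss pushes it down."""
    return similarity * (1 + w * (wins - losses))

# A block that won three bids on this buyer type outranks a never-used
# block with slightly higher raw similarity to the question.
assert retrieval_score(0.80, wins=3, losses=0) > retrieval_score(0.84, wins=0, losses=0)

# A block with a recorded loss still retrieves, but carries a warning.
def annotate(block_id: str, losses: int) -> tuple:
    return (block_id, "warn: prior loss" if losses else "ok")

assert annotate("pricing-tier-03", losses=1) == ("pricing-tier-03", "warn: prior loss")
```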

None of this is technically exotic. All of it requires the product to be built around the loop from the beginning. Bolting it onto a content library is the equivalent of retrofitting a building for earthquake safety — possible, but far more expensive than building for it in the first place, and you can tell.

What I am not claiming

I’m not claiming compounding is the only edge worth having. I am not claiming PursuitAgent is the only shape that could deliver it — I’d be surprised if the category didn’t evolve a handful of tools with this architecture over the next three years. I am claiming that a proposal tool that doesn’t compound is running on a ceiling its customers will eventually feel.

The ceiling is about two years. Past that, the KB rots faster than the team can manually refresh it, the drafts get more generic, the win rate stalls, and the tool becomes a thing the team has to work around. Every G2 review that reads “we used it for two years and the magic wore off” is a description of the ceiling being hit.

What the long version covers

The December 17 piece goes deeper on:

  • The four properties in concrete detail, with diagrams for each loop.
  • What “compounding” looks like in year one versus year three, with the benchmark numbers we’re tracking internally.
  • The three architectural decisions we made in the first six months of PursuitAgent that only make sense if you take compounding as the contract.
  • What I’m genuinely unsure about — mostly, whether the post-mortem loop is practically achievable in teams below five people, and what the smaller-team version looks like.

The eight-stage pipeline post from May — the canonical walk-through of what a proposal function does — is the operational scaffolding this thesis sits on top of. If you haven’t read that one, read it first. This thesis is what the eighth stage, the post-mortem, is for.

The full piece in 10 days.

Sources

  1. PursuitAgent — The 8-stage RFP response pipeline
  2. Leulu — The proposal post-mortem