Past-performance library hygiene for year-end
Three questions that decide whether a past-performance entry stays in the library, gets rewritten, or gets retired. A December sweep that pays off in the Q1 pursuit wave.
Past-performance citations are the single most-reused asset in a proposal library and the single most-commonly-stale asset in that library. Year-end is the right moment to sweep them. Q1 is when new buyers ask for them, and a library full of 2023-vintage citations loses bids in January that could have been won.
I run the same sweep every year. Three questions per entry. If an entry can’t clear all three, it doesn’t ship in Q1 pursuits.
Question 1 — Is the factual substrate still true?
A past-performance entry is a claim about something a customer bought from you, what you delivered, and what the outcome was. The facts rot faster than most teams acknowledge.
What changes:
- the customer’s name (mergers, rebrands, divestitures)
- the product version shipped
- the scope of work after amendments
- the team members who led the engagement (if they’ve left the company, a reviewer can’t verify)
- the metric itself, if the customer reports it differently now
I go through every entry and check the factual anchors: customer legal name today, product/service as it was scoped at delivery, metric as last reported by the customer, named contact. If the customer is unreachable or the metric source is lost, the entry is a liability — it cannot be used for a reference check and a reviewer will sense the gap.
Entries with broken facts either get rewritten with updated anchors (where the customer is still reachable and willing) or retired. There is no third option. Carrying a stale entry because “it used to be good” is how content libraries become document repositories that evaluators smell from a mile away.
Question 2 — Does it map to an RFP we are actually pursuing?
The Shipley method emphasizes that past-performance citations must be relevant, recent, and similar in scope. Relevance is the first check. An entry for a 2021 implementation at a mid-size healthcare provider doesn’t map to a 2026 federal pursuit — same industry alone doesn’t make it relevant.
I go through the pursuit pipeline for the upcoming quarter and tag each past-performance entry against the pursuits it maps to. Entries that don’t map to any pursuit in the next two quarters go to the archive tier. They’re not deleted; they’re demoted. If a pursuit comes in later that needs them, we can pull them back. But they don’t clutter the active library.
An aside on “recent”: federal pursuits usually specify a recency window (often three to five years). Commercial buyers are more forgiving but also more attentive to whether you can point to similar-scale work. An entry older than five years should be either archived or refreshed with a proof point that shows continued delivery.
Question 3 — Can a reviewer verify it in one click?
This is the question most libraries fail. An entry that makes a claim about an outcome (“we reduced their cycle time by 47%”) needs a source. The source might be a contract deliverable, a signed case study, a customer-provided metric in an email, or a public filing. Somewhere, something in writing has to back the claim up.
The test: if a pink-team or gold-team reviewer clicks on the claim, do they land on the source within one or two clicks? If the answer is “the source is in a Dropbox folder from 2022 that nobody can find,” the claim is operationally unverifiable and cannot ship. Evaluators on the buyer side are increasingly suspicious of metric claims that don’t resolve to a source. AutogenAI’s post on proposal hallucinations names this failure mode specifically — confident-sounding claims without sources are a liability even when they happen to be true.
Where we run this check in PursuitAgent: the past-performance block carries a source pointer, and the pre-submit validator refuses to clear a response that cites a block with a null or broken pointer. A reviewer can click through from the response section to the source document in one hop. Blocks that fail this check get a freshness-alert state; see the freshness-alerts changelog.
The sweep, as a two-hour exercise
I block two hours in mid-December for this. Pull the full past-performance library into a spreadsheet. Three columns: “facts still true?”, “maps to Q1 or Q2 pursuit?”, “source verifiable in one click?” Mark yes/no/needs-rewrite on each. The counts usually look like:
- 60–70% of entries clear all three. They stay active.
- 15–20% need a rewrite — the facts drifted, the scope changed, the metric needs an update. These go to the rewrite queue, assigned to the account owner for the customer.
- 10–15% get archived. Not deleted. Moved to the “retired” tier so the active library stays clean.
- 5–10% get deleted outright — customers who refused reference permission, engagements that ended badly, metrics that can’t be sourced.
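The triage logic behind those buckets can be sketched directly from the three spreadsheet columns. This is a hypothetical illustration: the row shape, the `blocked` flag for the delete-outright cases, and the tier names are assumptions for the example:

```python
from dataclasses import dataclass

# Hypothetical sweep row mirroring the three spreadsheet columns,
# plus a flag for the delete-outright cases (refused reference
# permission, engagement ended badly, unsalvageable metric source).

@dataclass
class SweepRow:
    entry_id: str
    facts_true: bool        # Question 1
    maps_to_pursuit: bool   # Question 2
    source_one_click: bool  # Question 3
    blocked: bool = False

def triage(row: SweepRow) -> str:
    if row.blocked:
        return "delete"
    if not row.maps_to_pursuit:
        return "archive"    # demoted, not deleted; recoverable later
    if row.facts_true and row.source_one_click:
        return "active"
    return "rewrite"        # facts drifted or the source needs refreshing
```

Mapping each row through `triage` reproduces the four tiers above, with the rewrite queue handed to the account owner for each customer.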
The Q1 payoff
The teams I’ve watched do this consistently start Q1 with a library they trust. When a new pursuit lands, the library surfaces five or six past-performance candidates that are actually relevant and actually current. The capture lead doesn’t spend the first week of the pursuit re-verifying whether the citations are safe to use. The proposal manager doesn’t spend the gold-team review explaining why three entries had to be retroactively retired.
The teams that skip this exercise start Q1 with a library they don’t trust. They override the library on every bid, write past-performance from scratch, and the compounding the library was supposed to deliver never arrives. Shelf’s research on outdated knowledge bases applies directly: stale content doesn’t just fail to help — it actively undermines the trust that makes the library useful.
One more thing
Past-performance hygiene is not a proposal-manager job. It’s a sales-ops or account-management job that the proposal team depends on. The account owners know whether a customer will still give a reference, what the scope drifted to, and what the current metric is. Without their participation, the sweep is the proposal manager guessing.
The most productive version of this sweep I’ve ever run was with the account team in the room and the proposal manager holding the pen. Two hours, seventy entries, and a library the team trusted on January 5th.
For related year-end workflow posts, see the rolling Q4 capacity plan and the post-mortem template preview.