Win rates by RFP format: government, commercial, DDQ
Public data on win rates is sparse and inconsistent. Here's what the GAO bid-protest record, APMP benchmarks, and vendor-published numbers actually let you say — and where the data gaps are big enough to matter.
This post is going to disappoint anyone hoping for a clean table of “here are the win rates by RFP format in 2025.” That table does not exist in public data, and any vendor publishing one is, with high probability, drawing from their own customer base in a way that selects for win rates the broader population does not see.
What does exist is a directional picture, assembled from three categories of source: government bid-protest records (US GAO), industry-association benchmarks (APMP, with the caveat that the reports are member-survey-based), and vendor-published numbers (which we treat as advocacy, not data).
This post lays out what those three sources say, what they do not say, and where the gap between them and a real answer is big enough that we will not assert a number across it.
What we wanted to know
We started with four questions.
- Is the average win rate on a federal RFP higher or lower than on a commercial RFP, and by how much?
- Are DDQs and security questionnaires won or lost at meaningfully different rates than full RFPs?
- Has the average win rate moved over the last five years, and in what direction?
- Does win rate correlate with response volume — i.e., do teams that respond to fewer RFPs win a higher percentage of them?
We can give partial directional answers to all four. We cannot give clean numerical answers to any of them. The reason is the same in every case: published win-rate data is either undefined (different sources mean different things by “win”), unrepresentative (vendor-customer samples), or absent (no public data set tracks the population).
Source 1 — GAO bid-protest data
The Government Accountability Office publishes an annual bid-protest report. It is the cleanest publicly available data on US federal procurement outcomes, and it is not a win-rate dataset.
A bid protest is a formal challenge to a procurement decision. The GAO reports the number of protests filed, the number sustained, and the “effectiveness rate” (the percentage of protests where the protester obtained some form of relief, including the agency taking voluntary corrective action). Across the most recent fiscal years for which figures are public, the effectiveness rate has hovered in the fifties, as a percentage. The sustain rate — the rate at which GAO formally agrees the procurement was conducted improperly — runs in the high single digits.
Why this matters for our question: it tells you the procurement system is contested but not chaotic. Most awards stand. Most protests result in some procedural remedy short of overturning the award. It does not tell you what the win rate on a federal RFP is, because the GAO does not publish award-success-rate data and “win rate” in federal procurement is undefined when the same vendor competes for thousands of small awards alongside a handful of large ones.
What we can directionally say from GAO data: federal procurement has higher process visibility than commercial procurement. A vendor that loses a federal bid can read the agency’s evaluation rationale (often available via debrief or, post-protest, in the GAO record). A vendor that loses a commercial RFP usually cannot. This affects how much a losing bid teaches you, not the win rate itself: federal losses are more useful to learn from, even though the absolute win rate is not directly published.
Source 2 — APMP benchmarks
The Association of Proposal Management Professionals (APMP) runs periodic benchmark surveys of its membership. These surveys are the most-cited “industry win rate” numbers in vendor marketing.
Two cautions before quoting them.
The sample is self-selected. APMP members are disproportionately mid-to-large proposal organizations with formal proposal functions. Solo consultants, small-business government contractors without dedicated proposal staff, and the long tail of companies that respond to RFPs occasionally are under-represented. The win rate reported by an APMP survey is the win rate of the population of organizations that have someone on staff who joined APMP — which is plausibly a higher win rate than the broader population.
“Win rate” is not consistently defined across respondents. Some respondents include only RFPs where they submitted a full response. Others include opportunities they declined to bid on. Some count multi-award IDIQs as wins for each task order; others count them once. The reported aggregate is the average of these heterogeneous numerators and denominators.
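To make the definitional spread concrete, here is a minimal sketch over an invented four-opportunity log, showing how three common definitions produce three different “win rates” from the same underlying facts:

```python
# Invented opportunity log, for illustration only. "bid" means a full response
# was submitted; "awards_won" counts contract actions won, so a multi-award
# IDIQ with several task orders can contribute more than one.
opportunities = [
    {"id": "A", "bid": True,  "won": True,  "awards_won": 1},  # single-award win
    {"id": "B", "bid": True,  "won": False, "awards_won": 0},  # straight loss
    {"id": "C", "bid": False, "won": False, "awards_won": 0},  # declined at bid/no-bid
    {"id": "D", "bid": True,  "won": True,  "awards_won": 5},  # IDIQ seat plus five task orders
]

bids = [o for o in opportunities if o["bid"]]

# Definition 1: wins over submitted responses, the IDIQ counted once.
rate_submitted = sum(o["won"] for o in bids) / len(bids)              # 2/3, about 67%

# Definition 2: wins over every opportunity seen, including no-bids.
rate_all = sum(o["won"] for o in opportunities) / len(opportunities)  # 2/4, 50%

# Definition 3: each task order counted as a separate win, denominator unchanged.
rate_per_award = sum(o["awards_won"] for o in bids) / len(bids)       # 6/3, 200%

print(f"{rate_submitted:.0%} / {rate_all:.0%} / {rate_per_award:.0%}")
```

Three defensible-sounding definitions, three numbers between 50 and 200 percent. An average across respondents who each silently picked one of them is not a statistic about any single thing.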
What we can directionally say from APMP-style benchmarks: organizations with formal proposal functions report median win rates in the 30 to 50 percent range, with the wide range itself being the most honest fact in the dataset. If a vendor tells you “the industry average win rate is 47 percent,” ask which APMP survey, what year, and what they consider a win. The answers will not all match.
Source 3 — Vendor numbers
Vendor blog posts (Loopio, Responsive, Qorus, RFPIO, AutogenAI, and the long tail) publish win-rate statistics frequently. We do not treat these as data sources, for one reason: the published numbers are aggregated from the vendor’s customer base, which selects for companies willing to pay for proposal software, which in turn selects for companies that are already more disciplined than average about proposal work.
This is not an accusation of dishonesty. It is a sampling problem. A win-rate average across “companies that bought Loopio” is structurally different from a win-rate average across “companies that respond to RFPs.” The former is a self-selected, paid-in subset of the latter, and the published number reflects that selection.
When a vendor publishes “our customers’ average win rate is X percent,” the most charitable interpretation is “among customers who reported a win rate to us in the survey we ran, the average was X.” That is a real number about a real, narrow population. It is not the industry average.
Source 4 — DDQs and security questionnaires
DDQ and security-questionnaire data is, if anything, sparser than RFP data. There is no public benchmark we trust on DDQ win rates. There are several reasons.
DDQs and security questionnaires are not always part of a competitive procurement. Many are sent to a vendor who has already been selected, as part of due diligence before contract signature. The “win rate” framing does not apply — the questionnaire is a gate, not a competition.
When DDQs are part of a competitive process (as in some financial-services RFPs, where the DDQ is bundled into the RFP), the win rate is the RFP’s win rate, not the DDQ’s specifically. There is no clean way to attribute the win or loss to the DDQ portion of the response.
Safe Security has reported that enterprise security teams now process 500 or more questionnaires per year — but the framing there is volume of work, not win rate. The closest thing to a win-rate analogue for DDQs is “did the vendor advance to the next stage of the buyer’s evaluation,” and that data is not published anywhere we have found.
Where the data is, where it isn’t
Here is the most honest summary we can give.
Federal RFPs. Process is documented. Win rates are not centrally published. Vendors can compute their own with high confidence (every award is logged). Industry averages are not knowable without a centralized source, and that source does not exist.
Commercial RFPs. Process is opaque. Win rates are not centrally published. Vendors can compute their own with low confidence (lost bids are often unreported, no-decisions are common). APMP benchmarks are the closest available number, with the caveats above.
DDQs and security questionnaires. “Win rate” is often the wrong metric. The right metric is throughput (how many can the team process to a quality bar per quarter) and reuse rate (what percentage of answers come from prior approved content). Both are knowable internally; neither is published publicly at any scale we trust. A short sketch of both computations follows this summary.
Five-year trend. We cannot honestly say whether win rates have moved. We do not have a stable series. The APMP surveys exist across years but the methodology and respondent mix change. Anyone publishing a five-year trend chart is, in our reading, extrapolating beyond what the underlying data supports.
Volume vs win rate. Directional anecdote is consistent — teams that say no to more bids report higher win rates on the bids they accept. The mechanism is plausible (better capture, more SME attention, better-matched bids). The published data is too thin to put a coefficient on it.
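As promised in the DDQ paragraph above, both internal metrics are simple to compute from a completed-questionnaire log. A minimal sketch, with invented field names and numbers:

```python
from datetime import date

# Invented internal log of completed questionnaires; field names are illustrative.
questionnaires = [
    {"completed": date(2025, 1, 20), "answers": 240, "from_library": 190, "met_quality_bar": True},
    {"completed": date(2025, 2, 11), "answers": 310, "from_library": 205, "met_quality_bar": True},
    {"completed": date(2025, 3, 3),  "answers": 180, "from_library": 95,  "met_quality_bar": False},
]

q1 = [q for q in questionnaires if date(2025, 1, 1) <= q["completed"] <= date(2025, 3, 31)]

# Throughput: questionnaires completed to the quality bar in the quarter.
throughput = sum(q["met_quality_bar"] for q in q1)

# Reuse rate: share of answers drawn from prior approved content.
reuse_rate = sum(q["from_library"] for q in q1) / sum(q["answers"] for q in q1)

print(f"Q1 throughput: {throughput} questionnaires, reuse rate: {reuse_rate:.0%}")
```

Neither number tells you whether the vendor won the deal; both tell you what the questionnaire workload actually costs, which is the question the team can act on.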
A note on the win-rate metric itself
Even setting aside the data-availability problem, “win rate” as a single metric is doing more work than it should. Two organizations with identical “win rates” can be in materially different positions.
Consider two teams. Team A submits 100 proposals a year and wins 40 — a 40% win rate. Team B submits 30 proposals and wins 12 — also 40%. The headline number is identical. The underlying business positions are not.
Team A spends less effort per submission and accepts more low-probability bids. Their win rate reflects a high-volume, lower-discrimination posture. The 60 losses are, in many cases, bids the team should have declined at the bid/no-bid stage — they cost SME time, capture effort, and proposal management overhead, and they returned no revenue.
Team B is more selective. They are saying no more often. Their 18 losses are concentrated on bids where the team made a real attempt: capture work was done, the response was high-effort, and the loss is informative. Their cost per submission is higher, but the spend is concentrated on bids the team chose deliberately, and the total proposal budget is spread across far fewer pursuits.
Both teams report a 40% win rate. Whether Team A’s cheap-and-broad posture or Team B’s expensive-and-selective posture is the better business depends on what each submission costs and what each win is worth. The headline percentage contains none of that.
The variant we suggest as an internal metric is cost per win: the fully-loaded cost of the proposal function (people, tools, SME time at internal rates) divided by the number of wins. That number is more useful than win rate alone because it makes the cost of every submission, won or lost, visible in a way the simple percentage does not.
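To make the arithmetic concrete, here is a minimal sketch using the two hypothetical teams above, with made-up per-proposal costs (the dollar figures are illustrative, not benchmarks):

```python
# Hypothetical teams from the example above. Per-proposal costs are invented
# for illustration; substitute your own fully-loaded numbers.
teams = {
    "Team A": {"submitted": 100, "wins": 40, "cost_per_proposal": 15_000},
    "Team B": {"submitted": 30,  "wins": 12, "cost_per_proposal": 35_000},
}

for name, t in teams.items():
    win_rate = t["wins"] / t["submitted"]
    total_cost = t["submitted"] * t["cost_per_proposal"]   # fully loaded spend for the year
    cost_per_win = total_cost / t["wins"]
    print(f"{name}: win rate {win_rate:.0%}, total spend ${total_cost:,}, cost per win ${cost_per_win:,.0f}")

# Team A: win rate 40%, total spend $1,500,000, cost per win $37,500
# Team B: win rate 40%, total spend $1,050,000, cost per win $87,500
```

Same 40% headline, very different cost structures. Which posture is the better business still depends on what the wins are worth, but cost per win at least puts the difference on the table; the percentage alone hides it.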
This is also why we are skeptical of the published cross-vendor benchmarks. A vendor that aggregates their customers’ “win rates” without normalizing for selectivity is producing a number that conflates two different operating models.
A second note on the trend question
We said earlier that we cannot honestly say whether win rates have moved over the last five years. There is one directional claim we can make, however: the volume of RFP-style work has increased meaningfully. Every public source we checked points the same direction.
Safe Security’s data on security questionnaire volume (500+ per year at large vendors). Industry-blog estimates of DDQ volume in financial services (200-350 questions per instrument, 15-40 hours per response, increasing prevalence in vendor due diligence). Quilt’s reporting on sales-engineer time spent on RFPs (100-300 hours per response). These are not win-rate metrics. They are volume metrics, and the volume is rising.
If volume is rising and team headcount is roughly flat, the implied effort per win has increased. That is a structural pressure on win rates — not because teams are worse at proposal work but because the surface area of work being done is larger. Even a constant absolute number of wins represents a declining win-rate-per-hour-of-effort if the denominator grows.
This shows up indirectly in the practitioner sentiment we have been tracking. The recurring theme on industry blogs and review sites is “we are doing more work for the same outcomes.” That is consistent with rising volume and flat headcount, regardless of whether the absolute win rate has moved.
What we will and will not assert
We are publishing this teardown without a headline number because the headline number is what people remember and the published numbers do not survive scrutiny. The blog rule we operate under is: if you cannot link to it or screenshot it, do not assert it as fact. The honest version of “the average win rate is X” is “we do not know.”
What we can assert:
- Process visibility is higher in federal procurement than commercial. That makes federal bids more learnable, regardless of the absolute win rate.
- DDQ and security-questionnaire work is a throughput problem more than a win-rate problem. Measuring it as the latter misses the actual cost driver.
- Self-reported win rates are systematically higher than unobserved reality. Discount any number from any vendor (including future numbers from us) by the selection bias inherent in who reported it.
- The single most useful win-rate metric for an individual proposal team is the one they compute on their own pipeline with their own definition. The published industry numbers are at best a sanity check.
Where we go next
The team is starting work on a longitudinal data set drawn from public federal procurement awards (USAspending, SAM.gov, FPDS) cross-referenced with publicly identified prime and subcontractor relationships. The goal is not an industry win-rate average — that is structurally impossible to compute cleanly. The goal is a public, citable map of which kinds of vendors win which kinds of federal procurements, with enough resolution to be useful for bid/no-bid decisions.
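For a sense of the shape of that work, here is a minimal sketch that assumes a bulk award export from USAspending has already been downloaded locally; the file name and column names are placeholders, not the export’s actual schema:

```python
import csv
from collections import defaultdict

# Minimal sketch: tally which vendors win which kinds of federal awards from a
# downloaded USAspending bulk export. Column names are assumptions for
# illustration; check them against the actual export before relying on this.
AWARDS_CSV = "usaspending_awards.csv"  # hypothetical local file
wins = defaultdict(int)

with open(AWARDS_CSV, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        vendor = row.get("recipient_name", "").strip()
        naics = row.get("naics_code", "").strip()
        if vendor and naics:
            wins[(naics, vendor)] += 1

# Most frequent winners by NAICS code: a crude first cut at "which kinds of
# vendors win which kinds of procurements".
for (naics, vendor), count in sorted(wins.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{naics}  {vendor}: {count} awards")
```

Award records identify winners only. Mapping who competed and lost is the part the public data does not cover cleanly, which is exactly why the target is a map of winners by procurement type rather than a win-rate average.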
If that data set produces something quotable, we will publish it with the methodology open and the limitations stated. If it does not, we will publish that result too. The only conclusion we will not publish is one the data does not support.
Sources
- GAO Annual Bid Protest Report to Congress, most recent published fiscal year. gao.gov/legal/bid-protests
- APMP Body of Knowledge — methodology and benchmark survey notes. apmp.org/bok
- VisibleThread — Government proposal writing: key steps, challenges, and tips for success. visiblethread.com
- Quilt — How to identify bottlenecks in your RFP process. quilt.app
- Safe Security — Vendor security questionnaire best practices. safe.security