Proposal win rates by sector, Q1 2025
A public-data synthesis of proposal win rates across healthcare, SaaS, defense, and state/local procurement for Q1 2025 — what's measurable, what isn't, and where the sector cuts diverge.
The headline number that gets cited most often in proposal-tech marketing is “the average win rate is 47%.” That figure shows up in vendor pitch decks, conference talks, and category-overview posts. It is a sector-blended average that obscures more than it reveals. The win-rate distribution by sector is wide enough that the blended number is misleading for any specific buyer.
This post synthesizes what’s publicly measurable about Q1 2025 proposal win rates, sector by sector. The data sources are public procurement portals (primarily SAM.gov for federal, state procurement portals for state and local), industry-association benchmark reports where they are public, and inferences from publicly traded vendors’ filed materials where applicable. Where we can compute a number, we cite the source. Where we can’t, we say so.
The sectors covered
Four sectors. Each has a different procurement shape, a different evaluation regime, and a different distribution of bidder types.
Federal civilian and DoD. Procurement-portal-driven, FAR-governed, large fields of bidders, multi-stage evaluations including past-performance scoring. The most-studied sector because the data is the most public.
State and local government. Heterogeneous — every state runs its own procurement system, with different posting conventions and different evaluation rubrics. The data is public but fragmented.
SaaS / B2B enterprise. Almost entirely private. Win rates here are reported by vendors and benchmarked by industry associations (APMP, Shipley) but the underlying transactions are not public. We can characterize the distribution from association data.
Healthcare provider purchasing. A mix of public (state-run health systems issuing RFPs through state portals) and private (large IDNs running their own procurement). Mixed-public dataset.
Federal civilian and DoD
The federal data is the cleanest. SAM.gov publishes the open RFP and contract-award data; USAspending.gov correlates awards back to obligated dollars. By looking at the ratio of awarded contracts to the unique-vendor-bidder count for solicitations that reached award in Q1 2025, you can compute a bidder-side win rate at the population level.
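The ratio described above can be sketched in a few lines. The data here is synthetic and the function is illustrative; SAM.gov's actual award exports use their own schema and require joining award notices against bidder lists.

```python
# Population-level, bidder-side win rate: awarded contracts divided by
# total bidder entries across solicitations that reached award.
# Input data is synthetic, for illustration only.

def bidder_side_win_rate(bidder_counts):
    """bidder_counts: one entry per awarded solicitation, giving the
    number of unique bidders on that solicitation. Assumes one award
    per solicitation."""
    awards = len(bidder_counts)        # one award per solicitation
    total_bids = sum(bidder_counts)    # every bidder counted once
    return awards / total_bids

# Three awarded solicitations drawing 5, 6, and 7 bidders:
rate = bidder_side_win_rate([5, 6, 7])
print(f"{rate:.3f}")  # 3 awards / 18 bids ≈ 0.167, i.e. roughly 1-in-6
```

A field of one five-bidder, one six-bidder, and one seven-bidder solicitation lands squarely in the 1-in-5 to 1-in-7 range discussed below.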
The Q1 2025 federal civilian aggregate looks consistent with the 2024–25 baseline we wrote about previously: roughly 1-in-5 to 1-in-7 win-rate range for non-incumbent bidders on competitive (non-set-aside, non-sole-source) procurements. Incumbent win rates on recompetes run materially higher — published academic studies of federal contracting consistently put incumbent advantage in the 60–75% range on recompetes — but the aggregate figure mixes incumbent and non-incumbent and is the more honestly reportable number.
DoD is a different shape. The primes-and-subs structure of major DoD pursuits means win-rate data at the prime level is concentrated among a small number of bidders; at the sub level, the data is much more fragmented and not cleanly recoverable from public records. We don’t have a reportable Q1 2025 number for DoD at the sub level. The prime-level read is consistent with the federal civilian baseline above.
Where the sector wins and loses. Federal proposal teams that win at above-baseline rates do so by capturing intelligence months before the RFP drops — the Reading the RFP series covers the textual signals. Teams that lose at above-baseline rates are typically responding to wired-against-them solicitations because the bid/no-bid was made by a sales team that didn’t read the RFP carefully.
State and local government
State-and-local data is messier. Each state’s procurement portal has its own data conventions; some publish award notices in machine-readable form, some don’t; some publish the full bidder list, most publish only the awardee. The cleanest data we can pull at scale is from the states that operate consolidated procurement portals (California, Texas, New York, Pennsylvania, Florida, and Georgia among them) where the award and bidder-count data are both published.
For these states in Q1 2025, the bidder-side win rate distribution sits roughly between 1-in-4 and 1-in-8, depending on category. State IT modernization solicitations cluster toward the wider field (bigger bidder counts, lower win rate per bidder). State professional-services solicitations cluster narrower (smaller bidder counts, higher win rate per bidder).
The data we don’t have access to: the smaller-state and county-level procurement awards where the portals don’t publish bidder counts. Industry-association estimates put county-and-municipal procurement win rates broadly similar to state-level, but the data underneath those estimates is opaque. We are reporting the state-portal numbers, not the broader state-and-local rollup.
SaaS / B2B enterprise
The SaaS and enterprise B2B data is private. What we have are association-reported benchmark surveys. APMP and Shipley both publish aggregate proposal-shop benchmarks; the figures are useful directionally but should be read with the caveat that the responding pool is self-selected (proposal shops mature enough to participate in benchmark studies are not representative of the universe of B2B vendors).
The reported aggregate for SaaS proposal win rates in Q1 2025, blended across multiple benchmark sources, sits in a range we have seen quoted at 30–50% depending on which dataset and which segmentation. That range is wide enough to flag. It reflects two things: the survey populations differ, and the operational definition of “win rate” varies (some respondents count opportunities, some count submitted proposals, some count only invited RFPs).
For an individual SaaS proposal team, the most defensible benchmark is your own past-12-months win rate cut by deal-size segment, not the industry blended figure. Teams that benchmark against the 47% sector blend and find themselves at 30% conclude they have a problem; the gap may simply reflect that they respond to a different mix of opportunities than the survey's responding pool does.
Healthcare provider purchasing
Healthcare splits into two procurement worlds. Public — state-run health systems, county hospitals, and Medicaid managed-care procurements — runs through state portals and is partially recoverable from the same data sources as state-and-local. Private — large IDN purchasing, GPO-mediated negotiations, individual hospital RFPs for clinical and IT systems — is largely closed.
The public-portal slice for Q1 2025 looks similar to state-and-local generally: 1-in-4 to 1-in-7 range, with significant variance by category. Clinical-IT solicitations (EHR adjacents, population-health platforms, clinical-documentation tools) cluster toward the wider field. Operational solicitations (medical supplies, specific equipment categories) cluster narrower because the bidder pool is smaller and more category-specialized.
The private-side healthcare data is not reportable at the sector level. Vendors selling into IDNs report their win rates internally; the aggregate is not surveyed in a way that produces a publishable number.
Cross-sector: bidder-count distributions
A second cut on the same data is the distribution of bidder counts per solicitation, not the win rate per bidder. The two are related but distinct. A solicitation with 15 bidders produces a different procurement dynamic than a solicitation with 3, even if the bid-to-bid win rate is comparable.
The Q1 2025 federal civilian distribution clusters bimodally. Small-dollar acquisitions (typically under $5M obligated value) draw bidder fields in the 8-to-25 range, with the long tail going to 40+. Large-dollar acquisitions ($25M+) draw narrower fields, often 4 to 8 bidders, because the upfront capture investment to be competitive at the larger scale screens out smaller vendors. The implication for proposal teams: small-dollar federal pursuits face high competition density and typically reward speed-and-volume strategies; large-dollar federal pursuits reward capture investment and incumbent-defending positioning.
State and local distributions are noisier — varying by state, by category, and by procurement vehicle (open RFP vs. cooperative contract vs. emergency procurement). The cleanest pattern: state IT modernization solicitations cluster in the 6-to-12 bidder range; state professional-services solicitations cluster in the 3-to-7 bidder range; commodity-procurement solicitations (where the underlying scope is narrowly defined and the differentiator is price) cluster up to 30+.
For SaaS and B2B enterprise, the bidder-count data is private, but industry-association benchmark studies report median shortlist sizes of 3 to 5 vendors per RFP. That number has been stable across multiple years of benchmark studies. Five vendors competing for one contract implies a 20% per-bidder win rate at the population level; the variance across teams comes from whether your team is consistently in the shortlist or consistently outside it.
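The shortlist arithmetic above, made explicit. The decomposition of a team's effective win rate into shortlist rate times in-shortlist win rate is our framing of the "consistently in the shortlist" point, not a figure from the benchmark studies.

```python
# A median shortlist of k vendors implies a 1/k per-bidder win rate
# at the population level, assuming one award per RFP.

def implied_win_rate(shortlist_size):
    return 1 / shortlist_size

for k in (3, 4, 5):
    print(f"shortlist of {k} -> {implied_win_rate(k):.0%} per-bidder win rate")
# shortlist of 3 -> 33% per-bidder win rate
# shortlist of 4 -> 25% per-bidder win rate
# shortlist of 5 -> 20% per-bidder win rate

def team_win_rate(shortlist_rate, in_shortlist_win_rate):
    """Effective win rate over all pursuits:
    P(shortlisted) x P(win | shortlisted)."""
    return shortlist_rate * in_shortlist_win_rate

# Always shortlisted, average performer in a 5-vendor field:
print(team_win_rate(1.0, 0.20))  # 0.2
# Shortlisted half the time, same in-shortlist performance:
print(team_win_rate(0.5, 0.20))  # 0.1
```

Two teams with identical in-shortlist performance can show a 2x difference in reported win rate purely from how often they make the shortlist, which is one reason the survey ranges are so wide.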
Sector cut: incumbent vs. non-incumbent
Across all four sectors, the most consistent finding is the incumbency premium. Incumbent vendors on recompetes win at materially higher rates than non-incumbent challengers — not because the procurement process is rigged, but because incumbents have lower switching costs to the buyer, deeper capture intelligence, and existing past-performance evidence directly relevant to the work.
Federal recompete win rates for incumbents trend around 60–75% across multiple academic studies of federal contracting. State-and-local recompetes show similar premiums, though the data is noisier. SaaS renewal rates — the closest analog to “recompete” in B2B — are vendor-reported and run materially higher (gross retention rates of 90%+ are common in steady-state SaaS), reflecting the lower switching-procurement-overhead in the SaaS purchase model.
The proposal-team implication: a non-incumbent challenger going against a strong incumbent is fighting against a structural advantage that proposal-quality alone is unlikely to close. The bid/no-bid framework’s probability of win dimension should reflect this. A non-incumbent recompete bid against an entrenched incumbent is a probability-2 or probability-1 unless the team has named evidence of incumbent vulnerability.
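One way the incumbency cap could be encoded in a bid/no-bid rubric is sketched below. The 1-to-5 scale and the specific cap values are illustrative assumptions, not the framework's actual rubric.

```python
# Hypothetical probability-of-win scoring with an incumbency cap,
# assuming a 1-5 scale (higher = more winnable). Illustrative only.

def probability_of_win_score(base_score, strong_incumbent, named_vulnerability):
    """base_score: 1-5 read of pWin before incumbency considerations.
    A recompete against an entrenched incumbent is capped at 2 unless
    the team has named evidence of incumbent vulnerability."""
    if strong_incumbent and not named_vulnerability:
        return min(base_score, 2)
    return base_score

# A pursuit that otherwise reads as a 4 gets capped against an
# entrenched incumbent with no named vulnerability:
print(probability_of_win_score(4, strong_incumbent=True, named_vulnerability=False))  # 2
# With named evidence of incumbent vulnerability, the base read stands:
print(probability_of_win_score(4, strong_incumbent=True, named_vulnerability=True))   # 4
```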
What the sector cuts say together
Three observations from looking across the four sectors.
The 47% blended figure is misleading for almost any specific use case. It is a weighted average of populations with very different shapes — federal contractors, state-local bidders, SaaS sellers, healthcare vendors — and applying it as a benchmark to any specific proposal shop produces wrong conclusions. The right benchmark is your sector, your buyer type, and your deal-size segment.
Win rates are inversely correlated with bidder-pool size. Across all four sectors, the categories that draw the largest bidder fields (open public-sector IT modernizations, broad SaaS RFPs) have the lowest per-bidder win rates. The categories that draw narrow fields (specialized professional services, niche clinical-IT) have higher per-bidder win rates. This is a structural observation about procurement competitiveness, not a comment on proposal quality.
Teams that bid less and bid better outperform teams that bid more and hope. This shows up in every sector for which we have decision-quality data. Quilt’s research on RFP bottlenecks is consistent with the pattern. The sector-cut data here doesn’t prove the causal claim, but it is consistent with it: high-volume bidders in any sector show win rates clustered toward the low end of the sector’s range.
What we’d track next quarter
Two things. First, we would like to have cleaner DoD sub-level data — the prime-and-sub structure of DoD pursuits is opaque enough at the public-record level that the win-rate distribution we report is undercounted on the sub side. Second, we'd like to have the SaaS benchmark population segmented by deal-size band, because the variance within "SaaS proposal win rate" is dominated by deal size, not by industry vertical.
If you operate in any of the sectors covered and want to flag a public data source we missed — particularly state portals with consolidated bidder-count publication, or industry-association benchmarks with public methodology — please point us to it. We will update the post.
The category-data series continues at the end of August with the State of Proposal Tools — Wave 1 2025 annual benchmark, where the sector cuts will combine with vendor-side data on tool adoption.
Sources
1. SAM.gov — Federal Contract Awards (Public)
2. USAspending.gov — Federal Spending Data
3. APMP — Body of Knowledge
4. VisibleThread — Government proposal writing: key steps and challenges
5. Quilt — How to identify bottlenecks in your RFP process
6. PursuitAgent — Federal RFP word counts, 2024–26
7. PursuitAgent — SAM.gov RFP volume, Q1