Field notes

A bid/no-bid scoring rubric we actually use

Five dimensions, a 1–5 score on each, a written floor, and a no-bid decision that is as cheap to defend as a bid decision. The rubric I bring into every kickoff.

Sarah Smith · RFP Mechanics

Most proposal teams have a bid/no-bid process in name only. It is a 15-minute meeting at which the answer is already baked in, the proposal manager nods, and the team goes ahead and writes a bid that nobody made a real decision about. The cost of that pattern is high — often quoted as 100 to 300 hours per response, much of it from senior engineering and sales talent.

The rubric below is not novel. It is a five-dimension scoring framework that has circulated in one variation or another in proposal-shop folklore for at least a decade. What I’ll do here is write down a specific version that works, with the levels defined precisely enough that two evaluators applying it to the same RFP land within one point of each other on each dimension. Cheap to use. Hard to fudge.

The five dimensions

Dimension          | Question it answers                           | Common failure
-------------------|-----------------------------------------------|----------------------------------------------
Strategic fit      | Does this customer move the business forward? | Confusing logo prestige with strategic fit
Probability of win | What are our actual chances of winning?       | Optimism inflation by sales
Cost to produce    | What does this response cost in real hours?   | Underestimating SME and review time
Opportunity cost   | What else would the team do this week?        | Treating opportunity cost as zero
Deal quality       | Will the won deal be a good customer to have? | Ignoring terms, payment, and reference value

Each gets a 1–5 score. There is a written floor. A bid that doesn’t clear the floor is a no-bid. The decision is logged with a one-paragraph rationale. That rationale, more than the score itself, is what makes the rubric a discipline rather than a scorecard.

Strategic fit (1–5)

What it measures: whether winning this customer advances the company’s strategy. Not whether the customer is large or prestigious. Not whether the logo would look good on a slide.

  • 5 — Anchor account. Customer is in our target ICP, in a vertical we are deliberately growing, and would open additional accounts via reference or category proof. Winning this account changes the slope of next year’s pipeline.
  • 4 — Strong fit. Within ICP, expandable to other accounts. Reference value moderate.
  • 3 — Acceptable fit. Within ICP, no specific reference or expansion lever beyond the account itself.
  • 2 — Marginal fit. Outside ICP but adjacent. Would be a special-case implementation. Customer-success cost likely above average.
  • 1 — Poor fit. Outside ICP, niche use case, no reference value, customer-success burden high. Even if won, the deal is a distraction.

The most common failure here is scoring on logo size rather than fit. A Fortune-500 logo in a vertical you don’t sell to is a 2, not a 5. The discipline is to score against your written ICP — and if you don’t have a written ICP, that is a different problem the bid/no-bid is exposing for you.

Probability of win (1–5)

What it measures: the realistic chance of winning the deal, calibrated against the buyer’s procurement reality.

  • 5 — Strong incumbent or insider. We are the incumbent, or a senior person at the buyer has explicitly indicated we are the preferred vendor.
  • 4 — Warm. Active relationship with the evaluation panel. RFP language reflects a discovery conversation we participated in. No clear competitor has the inside track.
  • 3 — Cold-but-fair. No relationship; the RFP appears to be a genuine open competition; our offer maps to the buyer’s stated criteria.
  • 2 — Wired against us. RFP language reflects another vendor’s product (terminology, architecture, specific features). We are likely the second name on the shortlist for procurement-rules cover.
  • 1 — Wired hard against us. Specific certifications, vendor lists, or geographic constraints in the RFP we don’t meet. The RFP looks like it was written for a specific competitor.

The most common failure here is optimism inflation, especially from sales leadership who have seen the buyer once and concluded the deal is “warm.” The defense is to require evidence in the rationale: name the human at the buyer, name the conversation, name the specific RFP language that supports the score. If the rationale is “we have a good feeling,” the score is no higher than 3.

A useful real-world test: when awards are announced, go back and check every probability-of-win score above 3 against the outcome. Teams that average a self-rated 4 across all bids and win fewer than 30% of them are over-rating their pipeline by something like a full point. Recalibrate the next quarter accordingly.
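If you log each awarded pursuit’s self-rated score and its outcome, the check fits in a few lines. A minimal sketch in Python; the record shape and function name are my assumptions, and the thresholds simply restate the rule of thumb above:

```python
# Quarterly calibration check for probability-of-win scores.
# `outcomes` holds one (self_rated_score, won) pair per awarded
# pursuit; names and the print format are illustrative.

def pwin_calibration(outcomes: list[tuple[int, bool]]) -> None:
    if not outcomes:
        print("No awarded pursuits logged yet; nothing to calibrate.")
        return
    avg_score = sum(score for score, _ in outcomes) / len(outcomes)
    win_rate = sum(won for _, won in outcomes) / len(outcomes)
    print(f"Awarded pursuits: {len(outcomes)}")
    print(f"Average self-rated p(win) score: {avg_score:.1f}")
    print(f"Actual win rate: {win_rate:.0%}")
    # Rule of thumb from above: averaging ~4 while winning under 30%
    # means the pipeline is over-rated by roughly a full point.
    if avg_score >= 4 and win_rate < 0.30:
        print("Scores look inflated by ~1 point; recalibrate next quarter.")
```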

Cost to produce (1–5, where higher = more expensive)

What it measures: the realistic person-hour cost to ship the response, including SME time and review time, not just the writer’s time.

  • 1 — Light. Standard response from existing library content. Minimal SME engagement. ~40 person-hours total.
  • 2 — Moderate. Some custom content needed. SME engagement on one or two sections. ~80–120 person-hours.
  • 3 — Heavy. Custom architecture write-up, detailed pricing build, multiple SME interviews. ~150–250 person-hours.
  • 4 — Very heavy. Federal-style compliance matrix with hundreds of rows, multiple specialized SMEs, multiple review cycles. ~300–500 person-hours.
  • 5 — Extreme. Major federal pursuit, multi-week multi-team commitment. 600+ person-hours.

For consistency, write down typical past examples for each level so future evaluators can calibrate. The biggest failure here is invisible SME time. Sales engineers often spend more hours on a single response than the proposal manager does. The estimate on the line should be the full team’s hours, not the writer’s alone.
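One way to keep evaluators consistent is to encode the bands next to those past examples. A minimal Python sketch; the hour bands come from the levels above, while the exact cut points between bands are my own interpolations where the stated ranges leave gaps:

```python
# Map an hour estimate to the 1-5 cost-to-produce score. Bands follow
# the levels above; the cut points (60, 135, 275, 550) are
# interpolations where the ranges leave gaps.

def cost_level(person_hours: float) -> int:
    if person_hours <= 60:
        return 1   # ~40 h: library content, minimal SME time
    if person_hours <= 135:
        return 2   # ~80-120 h: some custom content
    if person_hours <= 275:
        return 3   # ~150-250 h: custom architecture, pricing build
    if person_hours <= 550:
        return 4   # ~300-500 h: compliance matrix, multiple SMEs
    return 5       # 600+ h: major federal pursuit
```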

Opportunity cost (1–5, higher = costlier)

What it measures: what the team would do with the same hours if the bid did not exist.

  • 1 — Slack. Team has bench capacity this week. Bid does not displace billable or pipeline-critical work.
  • 2 — Light displacement. Some shifting but no high-priority work delayed.
  • 3 — Real displacement. A specific named project is delayed by this bid. The delay is acceptable.
  • 4 — Heavy displacement. A pipeline-critical activity (named demo, named POC, named product release) is at risk.
  • 5 — Crippling. The bid would consume capacity needed for an existing committed customer or higher-probability deal.

The discipline here is to name what is being displaced. “Opportunity cost is high” without a specific named alternative is hand-waving. “We would be running the demo for the named-account-X this week, which is a higher-probability deal” is opportunity cost.

Most teams skip this dimension entirely or set it to a default 2. Doing so encodes a false assumption: that the cost of bidding is only the cost of the bid. The cost of bidding is the bid plus everything it displaces.

Deal quality (1–5)

What it measures: whether the won deal would be a good customer to have on the other side of the contract.

  • 5 — Anchor terms, strong economics. ACV in the top quartile of our customer base, gross margin healthy, payment terms standard or better, reference rights granted, multi-year commitment.
  • 4 — Good terms. Above-median ACV, healthy margin, acceptable terms, likely reference value.
  • 3 — Standard. Median deal economics. No concerning terms.
  • 2 — Concerning. Below-median ACV or margin, MFN clause, aggressive SLA penalties, payment terms beyond 60 days.
  • 1 — Bad on multiple dimensions. Low ACV, low margin, hostile terms, no reference rights, single-year with termination-for-convenience. Even if won, the deal is a problem.

Procurement teams sometimes attach terms to an RFP that sharply degrade deal quality before any negotiation begins. The quality score should reflect the terms in the RFP, not the terms the team hopes to negotiate later. If the RFP requires acceptance of unfavorable terms as a condition of bidding, score deal quality accordingly.

The floor

The floor I set varies by context, but a common workable rule is:

  • Composite score floor. Sum of (strategic fit + probability of win + deal quality) minus (cost to produce + opportunity cost). Floor at +5.
  • Hard veto on probability ≤ 1. A 1 on probability is a no-bid regardless of other scores. Bidding into a wired deal is the most expensive thing a proposal team does and the most defensible thing to refuse.
  • Hard veto on deal quality = 1. A bid that wins a 1-quality deal makes the company worse, not better.

The composite floor is a tunable. Some teams operate at +3 because their pipeline is thin. Some operate at +7 because they are proposal-capacity-constrained. The number matters less than the discipline of writing it down and applying it.
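To make the floor and the vetoes concrete, here is a minimal sketch in Python. The arithmetic and both vetoes follow the rules above; the dataclass shape, the names, and the tunable floor parameter are illustrative choices, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class BidScores:
    strategic_fit: int       # 1-5
    probability_of_win: int  # 1-5
    cost_to_produce: int     # 1-5, higher = more expensive
    opportunity_cost: int    # 1-5, higher = costlier
    deal_quality: int        # 1-5

def bid_decision(s: BidScores, floor: int = 5) -> tuple[bool, str]:
    # Hard vetoes come first: no composite score can rescue these.
    if s.probability_of_win <= 1:
        return False, "no-bid: hard veto, probability of win is 1 (wired deal)"
    if s.deal_quality == 1:
        return False, "no-bid: hard veto, deal quality is 1"
    composite = (s.strategic_fit + s.probability_of_win + s.deal_quality
                 - s.cost_to_produce - s.opportunity_cost)
    if composite < floor:
        return False, f"no-bid: composite {composite:+d} is below floor +{floor}"
    return True, f"bid: composite {composite:+d} clears floor +{floor}"
```

For example, `bid_decision(BidScores(4, 3, 3, 2, 4))` computes (4 + 3 + 4) - (3 + 2) = +6 and returns a bid against the default +5 floor; drop probability of win to 1 and the veto fires no matter how the other four dimensions score.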

What the rubric actually changes

Two things, both directly observed in teams that adopt it.

Bid volume goes down. Teams that adopt a real rubric and respect their floor typically respond to fewer bids. The reduction is usually somewhere in the 20–40% range in the first quarter of adoption.

Win rate on the bids they do submit goes up. The hours saved by no-bidding the wired-against-us, low-quality, low-fit bids get reinvested into the bids they actually submit. Capture work is more thorough, drafts get more SME attention, color teams happen on schedule. The win-rate effect is real and noticeable, though I am not going to publish a percentage because the magnitude depends heavily on the team’s starting baseline.

The rationale paragraph

The rubric is incomplete without it. Every score should be written down with a one-paragraph rationale. “Strategic fit: 4. Customer is mid-market healthcare, in our growing vertical. Reference rights would open the named-prospect-y deal that’s been stalled. Not a 5 because the customer’s geography is outside our deployed support footprint and would require a six-month service-readiness investment.” That paragraph is what makes the score auditable. Six months later, if the team is debriefing a loss, the rationale tells you whether the bid/no-bid was a good decision badly executed or a bad decision that got executed cleanly. Both are useful to know. Neither is knowable without the rationale.
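If the log lives in a tracker rather than in prose, here is one hypothetical shape for an entry, reusing the strategic-fit example above; every field name is illustrative:

```python
# Hypothetical shape for one logged bid/no-bid decision. Field names
# are illustrative; the non-negotiable part is that each score
# travels with its one-paragraph rationale.
log_entry = {
    "rfp": "named-account-X platform RFP",  # placeholder identifier
    "scores": {
        "strategic_fit": {
            "score": 4,
            "rationale": (
                "Mid-market healthcare, in our growing vertical. Reference "
                "rights would open the stalled named-prospect-y deal. Not a "
                "5 because the geography is outside our deployed support "
                "footprint and needs a six-month service-readiness investment."
            ),
        },
        # ...one entry per dimension...
    },
    "composite": 6,
    "decision": "bid",
}
```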

This is not a heavy process. It adds maybe 30 minutes to a kickoff. The teams that adopt it complain about that 30 minutes for about a month. Then they stop complaining, because the proposal pipeline starts running on bids that have been thought about, and the team’s hours go where they are most likely to compound. Which is the whole point of having a process at all.
