The complete bid/no-bid scoring framework
The canonical bid/no-bid framework. Five variables scored 1–5, weighting, the rubric template, the bid-decision meeting, override discipline, and where the rubric is honestly wrong.
The bid/no-bid is the single most consequential edit you make on any RFP. Everything that comes after it — capture, compliance, draft, color teams, submit — is downstream of one decision: should we write this bid at all? Teams that say yes to everything run their proposal engines at 100% utilization while losing 90% of the time. The Shipley Proposal Guide names this the “bid decision gate” for a reason. Skip the gate and the proposal function never improves, regardless of how good the writing gets. Qorus’s data shows that SME wrangling has been the top-cited proposal challenge for five consecutive years, and the simplest tool for reducing that wrangling is not better collaboration software — it is fewer bids in the queue.
This post is the canonical version of the rubric I bring into every kickoff. There is a shorter craft essay that previewed it; this is the long-form treatment with weighting, override discipline, and the meeting structure that makes the rubric actually run. Five variables, scored 1–5, weighted to a composite, with a written floor and a senior approver. About 30 minutes of work per opportunity. The cheapest ritual in proposal management when measured by hours invested, and the most valuable when measured by hours saved.
I’ll walk through each variable in turn — what it measures, how to score it 1–5, what signals inform each level, and a worked example. Then the rubric template, then how teams actually run the bid-decision meeting, then where the rubric is honestly wrong and how to log the overrides.
The five variables
1. Strategic fit
What it measures. Whether winning this customer advances the company’s strategy. Not whether the customer is large or prestigious. Not whether the logo would look good in a slide. Strategic fit asks a single question: if we won this account, would the slope of next year’s pipeline change?
The most common failure mode is scoring on logo size rather than fit. A Fortune-500 logo in a vertical you don’t sell to is a 2, not a 5. A 200-person mid-market customer in your fastest-growing vertical is a 4 or a 5. The discipline is to score against your written ICP, not your collected hopes.
How to score:
- 5 — Anchor account. Customer is in target ICP, in a vertical we are deliberately growing, and would open additional accounts via reference or category proof. Reference rights are granted or negotiable.
- 4 — Strong fit. Within ICP, expandable to other accounts. Reference value moderate.
- 3 — Acceptable fit. Within ICP, no specific reference or expansion lever beyond the account itself.
- 2 — Marginal fit. Outside ICP but adjacent. Would be a special-case implementation. Customer-success cost likely above average.
- 1 — Poor fit. Outside ICP, niche use case, no reference value, customer-success burden high. Even if won, the deal is a distraction.
Signals that inform the score. Does the account match the named verticals in your ICP doc? Is the buyer’s size in your target band? Is their geography inside your deployed support footprint? Do they grant reference rights or have a posture against reference disclosure? Is there a follow-on revenue path — adjacent business units, expansion seats, integrations into a category you sell — or is the deal an island?
Worked example. A 3,000-employee regional health system issues an RFP for a clinical-documentation platform. Our ICP is mid-market healthcare, vertical we are deliberately growing. They grant reference rights with a six-month delay. Their geography is inside our deployed support footprint. They have three sister hospitals on the same EHR — winning this account creates a credible path into the other three. Score: 5. A near-identical RFP from a 3,000-person law firm: outside ICP, no expansion path, the deal would be a special-case implementation. Score: 2.
2. Probability of win
What it measures. The realistic chance of winning the deal, calibrated against the buyer’s procurement reality. Not the chance that we could win if everything broke our way. The chance we will, given what is observable about the RFP and the buyer.
This is the variable most prone to optimism inflation. Sales leadership has met the buyer once, has a “great feeling,” and rates the probability a 4. Six months later the deal is awarded to the incumbent and the team is surprised. The defense is to require evidence in the rationale: name the human at the buyer, name the conversation, name the specific RFP language that supports the score. If the rationale is “we have a good feeling,” the score is no higher than 3.
How to score:
- 5 — Strong incumbent or insider. We are the incumbent, or a senior person at the buyer has explicitly indicated we are the preferred vendor.
- 4 — Warm. Active relationship with the evaluation panel. RFP language reflects a discovery conversation we participated in. No clear competitor has the inside track.
- 3 — Cold-but-fair. No relationship; the RFP appears to be a genuine open competition; our offer maps to the buyer’s stated criteria.
- 2 — Wired against us. RFP language reflects another vendor’s product (terminology, architecture, specific features). We are likely the second name on the shortlist for procurement-rules cover.
- 1 — Wired hard against us. Specific certifications, vendor lists, or geographic constraints in the RFP we don’t meet. The RFP looks like it was written for a specific competitor.
Signals that inform the score. Does any evaluation criterion name a specific tool or architectural pattern we don’t use? Is there an “incumbent shall be considered” clause or an equivalent incumbent advantage? Has the buyer published past RFP awards we can read? What does the sales team’s direct contact at the buyer say off the record? Read the reading-the-RFP series for the textual signals to look for.
Worked example. A federal civilian agency issues an RFP for a knowledge-management platform. The eval criteria specify “FedRAMP High.” We are FedRAMP Moderate, with High in process. The criteria specify “supports Splunk integration via [specific connector name].” That connector is the incumbent’s product. Score: 1. Bidding into this is the most expensive thing the team will do this quarter. The honest no-bid is the right call.
A test I recommend to teams: track self-rated probability against actual win rate over a quarter. Teams that self-rate an average 4 and win less than 30% are over-rating by a full point. Recalibrate downward.
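That calibration check is simple enough to automate. Here is a minimal sketch, assuming a plain list of closed pursuits with a self-rated probability-of-win score and an outcome; the field names and the sample data are illustrative, not from any real pipeline.

```python
# Calibration check: compare the average self-rated probability-of-win score
# against the actual win rate over a quarter. Sample data is illustrative.
closed_pursuits = [
    {"self_rated_p_win": 4, "won": False},
    {"self_rated_p_win": 4, "won": True},
    {"self_rated_p_win": 3, "won": False},
    {"self_rated_p_win": 5, "won": True},
    {"self_rated_p_win": 4, "won": False},
]

avg_rating = sum(p["self_rated_p_win"] for p in closed_pursuits) / len(closed_pursuits)
win_rate = sum(p["won"] for p in closed_pursuits) / len(closed_pursuits)

print(f"Average self-rated probability of win: {avg_rating:.1f} / 5")
print(f"Actual win rate this quarter: {win_rate:.0%}")

# Rule of thumb from above: an average self-rating near 4 with a win rate
# under 30% means the team is over-rating by roughly a full point.
if avg_rating >= 4 and win_rate < 0.30:
    print("Recalibrate downward: probability scores are running about a point hot.")
```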
3. Cost to produce
What it measures. The realistic person-hour cost to ship the response, including SME time and review time, not just the writer’s time. Quilt’s data puts the realistic high end at 100 to 300 hours per RFP response, with senior engineering and sales engineering the most heavily loaded talent. The full team’s hours are what go on the line — and on the rubric.
This dimension is scored so that a higher number means more expensive, which inverts the intuition of the other variables; keep that in mind when computing the composite.
How to score:
- 1 — Light. Standard response from existing library content. Minimal SME engagement. ~40 person-hours total.
- 2 — Moderate. Some custom content needed. SME engagement on one or two sections. ~80–120 person-hours.
- 3 — Heavy. Custom architecture write-up, detailed pricing build, multiple SME interviews. ~150–250 person-hours.
- 4 — Very heavy. Federal-style compliance matrix with hundreds of rows, multiple specialized SMEs, multiple review cycles. ~300–500 person-hours.
- 5 — Extreme. Major federal pursuit, multi-week multi-team commitment. 600+ person-hours.
Signals that inform the score. Number of unique requirements in the RFP. Page-count limit on the response (longer caps mean more writing; shorter caps mean more editing). Number of pricing scenarios required. Whether the response demands custom diagrams, custom pricing models, or custom past-performance write-ups. The number of named SMEs who must contribute. The number of review cycles required by the buyer’s evaluation timeline.
Worked example. State government RFP for a constituent-portal platform. 240-row compliance matrix. Pricing scenarios for three deployment topologies. Required architecture diagrams for security, data flow, and disaster recovery. Three rounds of clarification questions per the buyer’s procurement schedule. Two specialized SMEs (security architect, data privacy lead) each estimated at 25 hours. Score: 4.
Lohfeld’s research is consistent: proposal managers spend more time chasing SME responses than building strategy. Score the cost honestly. Underestimating SME time is the most common scoring error in this dimension.
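For teams that want a mechanical check on this score, here is a minimal sketch that maps a bottom-up hour estimate to the bands listed above. The cutoffs between bands are an assumption of mine (the published bands leave gaps, for example between 120 and 150 hours), and the function name is illustrative.

```python
def cost_to_produce_score(estimated_hours: float) -> int:
    """Map a bottom-up person-hour estimate to the cost-to-produce bands above.
    Cutoffs between bands are assumed; the bands themselves come from the rubric."""
    if estimated_hours <= 60:
        return 1   # Light: library content, ~40 hours
    if estimated_hours <= 130:
        return 2   # Moderate: some custom content, ~80-120 hours
    if estimated_hours <= 275:
        return 3   # Heavy: custom architecture and pricing, ~150-250 hours
    if estimated_hours <= 550:
        return 4   # Very heavy: large compliance matrix, ~300-500 hours
    return 5       # Extreme: major federal pursuit, 600+ hours

# The state-portal worked example: writer, review, and two specialized SMEs
# at 25 hours each sum to an estimate that lands in the "very heavy" band.
print(cost_to_produce_score(380))  # 4
```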
4. Opportunity cost
What it measures. What the team would do with the same hours if the bid did not exist. This is the dimension teams skip most often, or set to a default 2, which encodes a false assumption: that the cost of bidding is only the cost of the bid. The cost of bidding is the bid plus everything it displaces.
Higher score = costlier.
How to score:
- 1 — Slack. Team has bench capacity this week. Bid does not displace billable or pipeline-critical work.
- 2 — Light displacement. Some shifting but no high-priority work delayed.
- 3 — Real displacement. A specific named project is delayed by this bid. The delay is acceptable.
- 4 — Heavy displacement. A pipeline-critical activity (named demo, named POC, named product release) is at risk.
- 5 — Crippling. The bid would consume capacity needed for an existing committed customer or higher-probability deal.
Signals that inform the score. What is on the team’s calendar for the next three weeks? Are any other RFPs in progress? Are any customer deliverables (POCs, milestones, integrations) committed in the same window? Is the implementation team about to ship something that requires the same SMEs?
Worked example. Two parallel RFPs landed within a week of each other. The first is for an existing strategic-account expansion (P-of-W: 5). The second is from a new logo, lower probability (P-of-W: 3). Bidding both means the security architect splits time, draft quality drops on both, and the strategic-account renewal is at risk because the architect was supposed to lead a customer security review the same week. The opportunity cost of the second bid is 5 — bidding into it consumes capacity needed for a higher-probability commitment. Score: 5 on opp cost for bid #2. The honest call is no-bid #2 and capture intelligence for the next time this buyer goes to RFP.
The discipline here is to name what is being displaced. “Opportunity cost is high” without a specific named alternative is hand-waving. “We would be running the security review for named-account-X this week, which is renewal-critical” is opportunity cost.
5. Deal quality
What it measures. Whether the won deal would be a good customer to have on the other side of the contract. A bid that wins a bad deal makes the company worse, not better. Procurement teams sometimes attach terms in the RFP — most-favored-nation pricing clauses, aggressive SLA penalties, termination-for-convenience, payment terms beyond 60 days — that sharply degrade deal quality between intake and award. The quality score should reflect the terms in the RFP, not the terms the team hopes to negotiate later.
How to score:
- 5 — Anchor terms, strong economics. ACV in the top quartile of our customer base, gross margin healthy, payment terms standard or better, reference rights granted, multi-year commitment.
- 4 — Good terms. Above-median ACV, healthy margin, acceptable terms, likely reference value.
- 3 — Standard. Median deal economics. No concerning terms.
- 2 — Concerning. Below-median ACV or margin, MFN clause, aggressive SLA penalties, payment terms beyond 60 days.
- 1 — Bad on multiple dimensions. Low ACV, low margin, hostile terms, no reference rights, single-year with termination-for-convenience. Even if won, the deal is a problem.
Signals that inform the score. Read the contract clauses attached to the RFP. Read the master services agreement template if one is referenced. Note the payment terms, indemnification language, SLA-penalty schedules, audit rights, and renewal terms. If the RFP doesn’t include the contract template, score conservatively and flag the unknowns in the rationale.
Worked example. A municipal-government RFP requires net-90 payment terms, includes MFN pricing across all government customers, and grants the buyer the right to audit our financial statements annually. ACV is below our median. The deal is structurally hostile to the business model. Score: 1. Even if won, this customer is a drag on the team for the duration of the contract.
The rubric template
Here is the template I use. Five variables, weighted, with a composite and a floor.
| Variable | Score (1–5) | Weight | Weighted score |
|---|---|---|---|
| Strategic fit | _ | 1.5x | _ |
| Probability of win | _ | 2.0x | _ |
| Cost to produce (inverse) | _ | 1.0x | _ |
| Opportunity cost (inverse) | _ | 1.0x | _ |
| Deal quality | _ | 1.5x | _ |
| Composite | | | _ |
The weights say: probability of win matters most (it is the variable most directly correlated with the only number we care about, which is whether we win); strategic fit and deal quality matter second-most (they determine whether winning is good for the business); cost and opportunity cost matter, but are correctable through capacity decisions.
The composite is computed as:
```
composite = (1.5 × strategic_fit)
          + (2.0 × probability_of_win)
          + (1.5 × deal_quality)
          - (1.0 × cost_to_produce)
          - (1.0 × opportunity_cost)
```
Note the cost and opportunity-cost terms subtract — higher values on those dimensions are bad. The composite is bounded between -5 (worst) and +23 (best). I set the floor at +12 for teams in steady state. Teams whose pipeline is thin sometimes operate at +9. Teams that are proposal-capacity-constrained operate at +16. The floor matters less than the discipline of writing it down and applying it.
Hard vetoes sit on top of the composite:
- Probability of win = 1. Hard no-bid regardless of other scores. Wired-against-us bids are the most expensive thing a proposal team does.
- Deal quality = 1. Hard no-bid regardless of other scores. Winning a 1-quality deal makes the company worse.
- Strategic fit = 1. Hard no-bid regardless of other scores. Even a free win out of category drains customer-success capacity for years.
A bid that fails any veto is a no-bid. The composite floor is the second gate. Both have to clear.
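Putting the composite, the vetoes, and the floor together, here is a minimal sketch of the whole gate in Python. The weights, veto rules, and floor come straight from the rubric above; the function and field names are illustrative, and the example scores beyond strategic fit for the health-system pursuit are assumed for demonstration.

```python
# Weights from the rubric; the two inverse dimensions carry negative weights
# so they subtract from the composite.
WEIGHTS = {
    "strategic_fit": 1.5,
    "probability_of_win": 2.0,
    "deal_quality": 1.5,
    "cost_to_produce": -1.0,
    "opportunity_cost": -1.0,
}

# A score of 1 on any of these dimensions is a hard no-bid.
VETO_DIMENSIONS = ("probability_of_win", "deal_quality", "strategic_fit")

def bid_decision(scores: dict[str, int], floor: float = 12.0) -> tuple[str, float]:
    """Return ("bid" or "no-bid", composite) for a dict of 1-5 scores."""
    composite = sum(weight * scores[dim] for dim, weight in WEIGHTS.items())
    vetoed = any(scores[dim] == 1 for dim in VETO_DIMENSIONS)
    if vetoed or composite < floor:
        return "no-bid", composite
    return "bid", composite

# Health-system worked example (strategic fit 5 from the text; other scores assumed).
decision, composite = bid_decision({
    "strategic_fit": 5,
    "probability_of_win": 4,
    "deal_quality": 4,
    "cost_to_produce": 3,
    "opportunity_cost": 2,
})
print(decision, composite)  # bid 16.5
```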
How teams actually use it
The rubric is useless if it lives in a spreadsheet that someone fills out alone. The discipline runs in a meeting. Here is the structure that works.
Length: 15 minutes. Longer meetings dilute the decision; shorter meetings turn into rubber stamps. The work to populate the scores happens before the meeting — written, by the capture lead — so the meeting is for review and decision, not data gathering.
Attendees: three to four named roles. The capture lead (presents the scores). The proposal manager (sanity-checks cost-to-produce). A senior commercial owner — VP of Sales or Head of Proposals or a founder, whoever is the budget-holder — who has authority to say no. Optionally, a named SME if the cost-to-produce is contested.
Format: written scores, oral discussion. The capture lead posts the populated rubric to the team channel 24 hours before the meeting. The senior approver reads it before the meeting. The meeting is for discussion of contested scores and a final yes/no. If the composite clears the floor and no veto fires, the meeting is short. If a score is contested, the meeting is the place to surface the disagreement and adjudicate.
Output: a written decision with a rationale paragraph. The capture lead logs the decision — bid or no-bid, the composite score, and a one-paragraph rationale — in the proposal record. “No-bid. Composite 7. Probability of win scored 2 because the RFP names the incumbent’s product architecture; the rationale paragraph cites the specific clauses. Strategic fit 4 (within ICP), but probability and cost-to-produce dominate. Capture intelligence collected for next-cycle pursuit; account stays on the named-pursuit list for 2026.”
This last part — the rationale paragraph — is what makes the rubric a discipline rather than a scorecard. Six months later, when the team is debriefing a loss or a near-win, the rationale tells you whether the bid/no-bid was a good decision badly executed or a bad decision that got executed cleanly. Both are useful to know. Neither is knowable without the rationale.
Why the meeting is oral but the scores are written. Oral “we should go for this” conversations are how teams end up bidding into 1-probability deals. The momentum of an enthusiastic sales lead, plus the politeness of a proposal manager who doesn’t want to push back, plus a half-formed rubric in someone’s head — that is how every wired-against-us bid gets approved. Written scores break that pattern. They make disagreement explicit and scorable. They force the senior approver to either accept the scores or argue with a specific number.
A common implementation arc. Teams that adopt this rubric for the first time often spend the first month over-bidding (the discipline isn’t internalized yet) and the second month over-no-bidding (the rubric is being applied with too-fresh skepticism). By month three, the floor stabilizes and the team’s bid volume settles 20–40% below baseline, with a measurable lift in win rate on the bids they do submit. I am not going to publish a percentage on the win-rate lift because the magnitude depends heavily on the team’s starting baseline; the directional effect is consistent across teams I’ve worked with.
When the rubric is wrong
The rubric is a heuristic. There are real situations where the rubric says no-bid and the right answer is bid. Three I’ve seen repeatedly:
Strategic-customer override. The customer is a named anchor account on the company’s three-year plan. Winning this account is part of the company’s strategy at a level above the proposal function. The rubric scores the bid below the floor — typically because cost-to-produce is high and probability of win is medium — but the strategic value of the customer relationship overrides the rubric. The override is allowed, but the override has to be logged.
Reference-value override. The customer is in a vertical the company is breaking into and a reference there would open the next ten conversations. Probability of win is moderate; cost is high; the rubric says no. The override is “we are willing to invest in this bid as a marketing-cost-shaped pursuit even at a probability the rubric would refuse.” Allowed, with a logged override.
Capture-intelligence override. The bid is a learning bid. The team needs to understand how this buyer evaluates, what the procurement office requires, what the eval panel responds to. A losing bid against this buyer feeds the next bid. The override is “we are willing to lose this bid to learn things.” Allowed, with a logged override and a written hypothesis about what we expect to learn.
How to log the override. The override is a separate field on the proposal record. It names the override category (one of the three above, or a new one), the senior approver who authorized it, and a one-paragraph rationale. Most importantly, it ties the override to a post-mortem question. “Override category: reference-value. Approver: VP of Sales. Rationale: First mid-market hospitality vertical pursuit. Reference rights negotiable. Acceptable to lose at composite 8 if we collect three pieces of capture intelligence: the eval panel’s named criteria, the incumbent’s pricing band, the procurement office’s preferred contract template.” The post-mortem then checks whether we got the three pieces of intelligence. If we did, the override was successful regardless of win/loss. If we didn’t, the override was a slip that should not be repeated.
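As a concrete shape for that record, here is a minimal sketch of an override-log entry as a dataclass. The field names mirror the example above but are an assumption, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class OverrideLog:
    """Override entry on the proposal record. Field names are illustrative."""
    category: str          # e.g. "strategic-customer", "reference-value", "capture-intelligence"
    approver: str          # the senior approver who authorized the override
    rationale: str         # one-paragraph rationale
    postmortem_questions: list[str] = field(default_factory=list)

override = OverrideLog(
    category="reference-value",
    approver="VP of Sales",
    rationale=(
        "First mid-market hospitality vertical pursuit. Reference rights negotiable. "
        "Acceptable to lose at composite 8 if we collect the three pieces of "
        "capture intelligence below."
    ),
    postmortem_questions=[
        "Did we learn the eval panel's named criteria?",
        "Did we learn the incumbent's pricing band?",
        "Did we learn the procurement office's preferred contract template?",
    ],
)
# The post-mortem walks the questions: if they were answered, the override was
# successful regardless of win/loss; if not, it was a slip to avoid repeating.
```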
Why log overrides. Because patterns show up in post-mortems. A team that overrides three times in a quarter and loses all three should be looking at whether the rubric is too strict (in which case adjust the floor) or whether the team is using the override category as a permission slip to keep bidding into deals that shouldn’t be bid (in which case the override discipline itself has eroded). The override log is the thing that makes the rubric a learning tool rather than a static rule.
VisibleThread’s research identifies the leading cause of proposal failure as rushing into writing without fully understanding the requirements. The bid/no-bid is where the team makes the deliberate choice not to rush — the choice to spend 15 minutes deciding whether to spend 200 hours writing. It is the cheapest gate in the eight-stage pipeline and the one with the highest downstream effect.
Closing
If you take one thing from this post, it is that the bid/no-bid decision is a written artifact, not a meeting. The rubric is written. The scores are written. The rationale is written. The override, if one is invoked, is written. The post-mortem references the rubric. Six months later, anyone on the team can read the proposal record and reconstruct why the team chose to bid or not bid this opportunity. That reconstructibility is what makes the proposal function compoundable. Without it, the team is rolling dice on the same kinds of opportunities forever.
The bid/no-bid sits at Stage 2 of the eight-stage RFP response pipeline. Stage 1 is Intake. Stage 2 is the gate this post is about. Everything after Stage 2 is downstream of the decision the rubric helps you make.
One honest question to leave with: if your team adopted this rubric tomorrow and applied the floor strictly, how many of the bids in your current quarter would have been no-bids? If the answer is “more than 20%,” the rubric is doing the work it is supposed to do.
Sources
1. Shipley Proposal Guide (7th ed.) — Bid Decision Gate
2. APMP Body of Knowledge — Opportunity Qualification
3. Qorus — Winning proposals: how to stop wrangling SMEs
4. Quilt — How to identify bottlenecks in your RFP process
5. Lohfeld Consulting — How to fix the proposal processes holding you back
6. VisibleThread — Government proposal writing: key steps and challenges
7. PursuitAgent — Eight-stage RFP response pipeline
8. PursuitAgent — A bid/no-bid scoring rubric we actually use
9. PursuitAgent — Reading the RFP like a procurement lead
See grounded retrieval in the product.
Start a trial workspace and watch PursuitAgent draft cited answers from the documents you provide.