Field notes

The RFP section priority matrix

Evaluator weight times effort hours equals where to spend the draft budget. A simple matrix that tells you which sections deserve gold-team review and which sections deserve a paragraph and a citation. With three worked examples.

Sarah Smith · 6 min read · RFP Mechanics

Every RFP has sections that decide the bid and sections that do not. A team that drafts both kinds with equal effort runs out of time on the sections that decide the bid. A simple matrix, run at kickoff, prevents that.

The matrix has two axes: evaluator weight (how many points is this section worth) and drafting effort (how many SME-hours does it take to draft well). A section that is high weight and high effort is where you spend your week. A section that is low weight and high effort is where you cut to a paragraph, cite existing material, and move on. The mistake most teams make is treating drafting effort as a constant.

The two axes

Evaluator weight. Most federal RFPs and many commercial RFPs publish their evaluation criteria. Section L (instructions) and Section M (evaluation) in federal solicitations spell out what the technical proposal will be scored on. Even when the buyer does not publish weights, you can usually infer them from the structure: a section with five sub-questions in the technical volume is heavier than a section with one question in the management volume.

If the buyer publishes percentages, use them. If they do not, score each section 1-5 against three proxies: page-count allocation, sub-question count, and the language of the evaluation rubric (a section graded “pass/fail” is binary; a section graded “exceeds/meets/fails” has three levels of differentiation).
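When the weights must be inferred, the three proxies can be combined mechanically. The function below is a hypothetical sketch: the thresholds and the equal averaging of the three proxies are assumptions, not part of any published rubric.

```python
def infer_weight(page_share: float, sub_questions: int, rubric_levels: int) -> int:
    """Infer a 1-5 evaluator weight for a section when the buyer
    publishes no percentages. All scoring thresholds are assumptions."""
    # Proxy 1: share of the page limit allocated to this section.
    page_score = min(5, max(1, round(page_share * 10)))
    # Proxy 2: one point per sub-question, capped at 5.
    question_score = min(5, max(1, sub_questions))
    # Proxy 3: rubric granularity (pass/fail is binary; more levels,
    # more room for differentiation).
    rubric_score = 1 if rubric_levels <= 2 else 3 if rubric_levels == 3 else 5
    return round((page_score + question_score + rubric_score) / 3)
```

A section taking 30% of the page limit, with five sub-questions and an exceeds/meets/fails rubric, scores 4 out of 5 under these assumptions.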

Drafting effort. This is the variable most teams underestimate. Effort is not “how long is the answer.” It is “how much SME time does this require.” A 200-word past-performance summary that the proposal writer can pull from an existing reference takes 15 minutes. A 200-word security architecture description for a buyer with novel data-residency requirements might take 4 hours of CISO time plus 2 hours of writer time.

Score effort 1-5 in SME-hours: 1 = under an hour, 5 = a full day or more.

The matrix

A 5-by-5 grid. Rows are weight; columns are effort. Each section gets a coordinate. The four quadrants:

High weight, low effort. Easy wins. Pull from KB, polish hard, run the citation check, ship. Examples: standard past-performance citations for a buyer’s mission-aligned reference, boilerplate compliance attestations the team has already responded to many times.

High weight, high effort. The week’s work. Win themes go here. Differentiators go here. Gold-team review goes here. Most of the SME time goes here. If the matrix shows you have three of these and a 14-day deadline, your bid forecast was wrong; renegotiate or drop something.

Low weight, low effort. Boilerplate. Pull from KB, sanity-check, ship. Do not over-invest. The trap here is “well, it’s only 30 minutes” — yes, and 30 minutes times 25 sections is the day you do not have.

Low weight, high effort. The dangerous quadrant. A section that takes 6 hours of SME time and is worth 3% of the score. Three options: cut it down to a paragraph, satisfy compliance with a citation to existing material, or — if the section is in the compliance matrix as required — answer minimally and move on. Do not let a 3-percent section consume a senior engineer’s day.
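The four quadrants reduce to a small classifier. Since both axes are scored 1-5, treating a 4 or 5 as "high" is a natural cutoff, but it is an assumption; a sketch:

```python
def quadrant(weight: int, effort: int) -> str:
    """Map a section's 1-5 weight and effort scores to a drafting treatment.
    The >= 4 cutoff for 'high' is an assumed convention."""
    high_weight, high_effort = weight >= 4, effort >= 4
    if high_weight and high_effort:
        return "gold-team: win themes, senior SME calendar"
    if high_weight:
        return "easy win: KB pull, polish hard, citation check"
    if high_effort:
        return "dangerous: cut to a paragraph or cite existing material"
    return "boilerplate: KB pull, sanity-check, ship"
```

Running every section of the compliance matrix through this at kickoff produces the assignment list in a few minutes.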

Three worked examples

Example 1: a federal IT modernization RFP. Evaluator weights from Section M: technical approach 40%, management 25%, past performance 20%, price 15%.

The technical approach has six sub-sections: architecture, data migration, integration, security, testing, transition. We score effort on each. Architecture is high-effort (custom for this buyer); data migration is medium-effort (some KB content); integration is high-effort (the buyer’s existing systems are unusual); security is medium (KB is mature); testing is low (KB-driven); transition is low.

Output: architecture and integration get the gold-team review and the senior architect’s calendar. Data migration gets a writer plus a 2-hour SME review. Security gets a KB pull plus a freshness check. Testing and transition get the writer drafting from KB.
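The triage above can be reproduced with assumed 1-5 effort scores matching the high/medium/low labels in the text; the specific numbers are illustrative:

```python
# Assumed effort scores for the six technical sub-sections
# (1 = under an hour of SME time, 5 = a full day or more).
technical_effort = {
    "architecture": 5,    # high: custom for this buyer
    "integration": 5,     # high: the buyer's existing systems are unusual
    "data migration": 3,  # medium: some KB content
    "security": 3,        # medium: mature KB
    "testing": 1,         # low: KB-driven
    "transition": 1,      # low: KB-driven
}
gold_team = [s for s, e in technical_effort.items() if e >= 4]
kb_draft = [s for s, e in technical_effort.items() if e <= 2]
print(gold_team)  # ['architecture', 'integration']
print(kb_draft)   # ['testing', 'transition']
```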

The matrix does not tell us what to write. It tells us where to spend the budget.

Example 2: a state government professional-services RFP. Evaluator weights: technical approach 50%, key personnel 20%, past performance 20%, price 10%.

Key personnel is high-weight (20%) and surprisingly high-effort. Why? Because the buyer required custom resumes formatted to their template, with one-paragraph relevance narratives per person, plus letters of commitment. That is six named staff times two hours of personnel-department coordination per person, plus a writing pass for the relevance narratives.

The team’s instinct was to treat this as a low-effort administrative section. The matrix forced them to budget 16 hours into key personnel that would otherwise have gone to the technical narrative. They lost 4 of those hours from the lowest-weight technical sub-section. The bid won. The CIO mentioned in the debrief that the personnel package was the single most-discussed part of the response. The matrix called it.
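The 16-hour budget decomposes as simple arithmetic. The 4-hour writing pass is an assumption made here to reconcile the totals; the text only gives the per-person coordination figure:

```python
named_staff = 6
coordination_hours_each = 2  # from the text: personnel-department coordination per person
writing_pass_hours = 4       # assumed: relevance-narrative writing pass to reach 16 hours
personnel_budget = named_staff * coordination_hours_each + writing_pass_hours
print(personnel_budget)  # 16
```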

Example 3: a security DDQ as RFP attachment. A 60-page RFP with a 200-question DDQ as an evaluation attachment. The DDQ was scored 15% of total. Drafting effort, naively, was 200 questions times 8 minutes = 26 hours. Catastrophic.

The matrix forced a sub-decomposition. Of the 200 questions, 140 mapped directly to a CAIQ identifier the team had answered before — those went through the DDQ response playbook at 30 seconds per question. Forty were custom and required SME review at 6 minutes each. Twenty were genuinely new and required escalation at 30+ minutes each.

Total effort dropped from 26 hours to about 15. The matrix made the sub-decomposition obvious: the DDQ classification bucketed the questions by reuse potential before the writers touched them.
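The bucket arithmetic, using the per-question figures from the text and taking the 30-minute escalation floor at face value:

```python
# (question count, minutes per question) for each bucket
buckets = {
    "CAIQ-mapped, playbook": (140, 0.5),  # 30 seconds each
    "custom, SME review": (40, 6),
    "new, escalation": (20, 30),          # lower bound; the text says 30+ minutes
}
naive_hours = 200 * 8 / 60  # the flat per-question estimate
decomposed_hours = sum(n * m for n, m in buckets.values()) / 60
print(f"naive: {naive_hours:.1f} h, decomposed: {decomposed_hours:.1f} h")
# naive: 26.7 h, decomposed: 15.2 h
```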

When to run it

Kickoff. Day one of the response. Before the writers are assigned. The matrix outputs an effort budget, and the budget tells the proposal manager whether the team’s available SME-hours match the work. If they do not, the bid/no-bid decision has to be reopened — or the team negotiates scope (declining to bid one optional volume is sometimes the right move).

The matrix is also a defense. When a senior engineer says “I can’t get to that technical section until Friday,” you point at the matrix and either re-prioritize or escalate to the executive sponsor. “It’s high weight and high effort, and we lose the bid if we deprioritize it” is a sentence that gets attention. “Bob says he’s too busy” is a sentence that does not.

What the matrix is not

It is not a substitute for a capture plan. It is a way to spend the budget the capture plan defines. It does not tell you what your win themes are. It tells you which sections will carry them.

It is also not perfectly objective. The effort scores are estimates. The weight scores are sometimes inferences. But a rough matrix at kickoff is better than no matrix at all, and the discipline of running it forces the team to talk about effort before the team is mid-draft and out of time.

Sources

  1. Shipley Proposal Guide — Section weighting and bid strategy
  2. VisibleThread — Government proposal writing