Reading an RFP like the procurement lead who wrote it
RFPs are procurement documents written by named humans with known constraints, drafted from templates reused for fifteen bids. Read them that way and the response writes itself differently. This is the canonical long version of that argument.
Procurement leads write RFPs in a specific way. They have templates they have reused for fifteen or twenty bids. They have evaluators who grade against a rubric the procurement lead drafted before the requirements were written. They have policy constraints, precedent constraints, and accountability constraints that the document reflects on every page.
If you read the RFP the way it was written — by a named human, in a constrained role, drafting from a familiar template against a rubric that already exists — you respond to it 40 to 60 percent better. That number is not from a published study. It is from the difference I have repeatedly seen between teams that read RFPs as procurement documents and teams that read them as sales documents. Anecdote, calibrated against fifteen years of debriefs and four hundred bids. Take it as directional, not a statistic.
This is the long-form version of an argument I previewed last week. Eight sections. The reframe, the constraints the procurement lead is operating under, the templates the document inherits, the scoring paragraph (the most important paragraph in the RFP), the red flags hiding in the requirements, the pre-bid Q&A process, the debrief, and the closing reframe. Read it once, then read your next RFP with it in mind. The work changes shape.
1. The reframe
There are two ways to read an RFP. One is the way most sales teams scan it: find the requirements, identify the deadline, hand it off to proposal. The other is the way the procurement lead who wrote it reads it: as the manufactured output of a constrained role, drafted from a template they have used for many bids, designed to compare a small set of vendors against a rubric they wrote first.
Most proposal teams default to the first reading. Most win rates reflect that.
The reframe sounds pedantic. It is the most useful thing I know about reading RFPs. An RFP is not a sales document — a document written to you with the goal of getting you to do something. It is a procurement document — a document written for an evaluation panel, by a procurement officer who is themselves accountable to a chain of policy and precedent. The structure is comparative, not persuasive. The questions are designed to elicit comparable answers across vendors so a rubric can be applied. The boilerplate is inherited from precedent and from policy, not invented for this buy.
Read it that way and the questions change. Instead of “what do they want from us,” the questions become “what is the rubric, what are the constraints on the procurement lead, who are the evaluators, what is the inherited template, and what does the buyer’s actual decision shape look like underneath the surface text.” Each of those questions has an answer in the RFP itself, often in sections most proposal teams skim. The rest of this post walks through where to look.
VisibleThread has written that rushing into writing without fully understanding the requirements is the leading cause of proposal failure. The version of “understanding the requirements” I am proposing here is the version that takes the document seriously as a procurement artifact, not as a fact-pattern to extract requirements from.
2. The procurement lead’s constraints
The person who wrote your RFP is a named human with a job title, a budget code, and a chain of accountability. Most public-sector RFPs identify them by name in the contact section. Commercial RFPs often do not name them, but the role exists — usually a category manager, a strategic sourcing lead, or a vendor management lead in IT or procurement. Their constraints are knowable. There are four, and they show up in the document if you know where to look.
Agency or organizational policy. In federal procurement, the FAR (Federal Acquisition Regulation) governs most of what the procurement officer can ask, how they can score it, and how they can communicate with vendors. In state and local procurement, the relevant procurement code does the same job. In commercial procurement, internal procurement policy plus a legal review constrains the document at every step. The procurement lead does not get to ask for whatever they want. They get to ask for what their policy permits, formatted in the way their policy requires.
When you see clauses that feel oddly specific or oddly absent — a question about subcontractor flow-down requirements that seems out of place, a complete absence of any reference to small-business preferences in a federal bid — those are usually policy artifacts, not signals of buyer priority. Read them as policy compliance and move on.
Precedent. The RFP you are reading is, with very high probability, version 4 or 7 or 12 of a template the procurement office has reused for similar buys. The structural choices, the section ordering, the boilerplate language, the standard attachments — these are not original to this RFP. They are inherited. If you have responded to this buyer before, the inherited structure is your map: the new sections (or the modified ones) are the procurement-lead’s actual editorial decisions for this buy. Everything else is the inherited frame.
The way to find the inherited template, if you have not bid to this buyer before, is to look at recent past awards. Public-sector procurement portals (USAspending, SAM.gov, state procurement portals) often have past RFPs and award notices searchable. Pull three. Read them side by side with the current document. The sections that match across all three are the template. The sections that vary are the editorial choices.
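One way to operationalize that side-by-side reading: extract the section headings from each past RFP and intersect them. Headings that appear in every past document are the template; headings unique to the current document are the editorial choices. A minimal sketch, assuming you have already converted the documents to plain text; the heading regex is a rough heuristic, and the function names are mine, not any tool's API:

```python
import re

def extract_headings(text):
    """Pull lines that look like numbered section headings,
    e.g. 'Section 3 - Vendor Qualifications' or '3.1 Eligibility'.
    The pattern is a heuristic, not a parser."""
    pattern = re.compile(
        r"^\s*(?:SECTION\s+)?\d+(?:\.\d+)*[\s.\-:]+(.{3,80})$",
        re.IGNORECASE | re.MULTILINE,
    )
    return {m.group(1).strip().lower() for m in pattern.finditer(text)}

def split_template_vs_editorial(past_texts, current_text):
    """Headings present in every past RFP are the inherited template;
    headings unique to the current RFP are the editorial choices."""
    past_sets = [extract_headings(t) for t in past_texts]
    template = set.intersection(*past_sets) if past_sets else set()
    current = extract_headings(current_text)
    return {
        "template": sorted(current & template),
        "editorial": sorted(current - template),
    }
```

The output is a reading map, not an answer: the "editorial" list tells you which sections deserve a close first read, because those are the ones a human chose to add or change for this buy.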
The need for comparability. An RFP that lets four vendors answer in four different ways is an RFP whose rubric does not work and whose award is challengeable. The procurement lead is constrained, structurally, to write questions that elicit comparable responses. That is why questions get tighter, formats get prescriptive, and page limits get enforced. It is also why answers that drift outside the comparison frame get scored low — even when they are true and impressive — because they cannot be compared.
When a question feels constrained or shallow, the constraint is doing work. The procurement lead is not avoiding the deeper question because they do not care about the deeper answer. They are avoiding it because the deeper answer is incomparable across vendors. Your response should answer the constrained question first and only add depth where the format permits.
The rubric that was written first. In most disciplined procurement shops, the scoring rubric is drafted before the requirements section. The requirements then get written to the rubric. This is why some RFPs feel oddly weighted toward dimensions that seem secondary — the dimension is on the rubric, so the question has to be there. It is also why the highest-payoff section of the RFP to read first is the scoring methodology, not the requirements list. We will return to this in section 4.
Read the RFP knowing those four constraints exist and the document stops being a wall of asks. It becomes a set of choices a human made, in a job they have done before, against pressures you can name. Lohfeld Consulting has argued that proposal managers spend more time chasing SME responses than building strategy. The strategy work that does happen should start at the procurement-lead level, not at the requirements level.
3. What the templates look like
If you read enough RFPs across enough buyers, you start to see the same sections, in roughly the same order, across nearly all of them. There are a few standard structures. Knowing them lets you skim the document for the editorial decisions instead of getting lost in the boilerplate.
The standard public-sector template runs roughly:
Section 1 — Introduction and Background. The buyer’s organization, the program this RFP supports, the high-level outcome being procured. Read for two things: the program name (which is usually searchable in past procurement records, telling you the history of the buy) and the stated outcome (which is what the rubric will, in part, score against).
Section 2 — Scope of Work or Statement of Work. What the vendor is being asked to do. Read carefully. The scope is the part of the document that varies most from RFP to RFP. It is the procurement lead’s actual editorial position on what the work is. Mismatches between scope and your delivery model are bid-shape decisions you make in the bid/no-bid stage, not in the writing stage.
Section 3 — Vendor Qualifications and Eligibility. Pass/fail criteria. Certifications required, business size or status (small business, woman-owned, veteran-owned for federal), past performance thresholds. If you do not meet the eligibility criteria, the rest of the document does not matter — the response is non-responsive on its face. Read this section first.
Section 4 — Submission Format and Instructions. Page limits, font requirements, file format, naming conventions, submission portal mechanics. The boring section that disqualifies more bids than any other. We covered this in Stage 7 of the eight-stage pipeline — submit is mechanical, and mechanical failures are surprisingly common.
Section 5 — Evaluation Methodology and Scoring. The most important paragraph in the document, addressed in section 4 below.
Section 6 — Questions and Answers Process. The pre-bid Q&A window, the buyer’s mechanism for answering vendor questions, and the timeline. Addressed in section 6 below.
Section 7 — Contract Terms and Conditions. Boilerplate legal language attached as a model contract. Read for non-standard clauses — anything that requires a redline. Most of this section is policy-driven; some of it is the procurement lead’s specific protective drafting. Distinguish the two.
Attachments. The compliance matrix template, the technical requirements appendix, the price worksheet, the certifications package, the past-performance form. Attachments often contain the most editorially distinct content of the RFP — they are where the procurement lead does the work the main document’s templating does not allow.
Some sections appear redundantly. There may be eligibility language in the introduction, in the qualifications section, and in an attachment. This is usually policy redundancy — the procurement lead is required by policy to state the criteria in multiple places. Do not assume redundancy means emphasis.
Other sections are absent. A sophisticated commercial RFP may omit any reference to evaluation methodology entirely. This is not a procurement lead being sloppy — it is a deliberate choice to keep the rubric internal. When the rubric is not published, you have to infer it from the question structure and the weight of editorial attention. Section 4 is harder, but not impossible, when the methodology is unstated.
4. The scoring paragraph
Find the paragraph that describes the scoring methodology. Read it before anything else.
In most public-sector RFPs, this paragraph lives in section 4 or 5, often under a heading like “Evaluation Criteria,” “Scoring Methodology,” “Award Selection Criteria,” or “Source Selection Plan.” Sometimes it is a standalone document — Attachment B or an “Evaluation Plan” — referenced from the main RFP. Sometimes it is one paragraph; sometimes it is three pages.
The paragraph tells you, typically, three things.
The factors being scored. Technical approach, past performance, management approach, key personnel, cost, sometimes oral presentation, sometimes a separate “small business participation” or “socioeconomic” factor. The list of factors is the list of dimensions the evaluation panel is graded against. Every section of your response should map to at least one factor.
The relative weights. Numerical weights (40 / 20 / 10 / 30) or qualitative descriptors (“technical approach is significantly more important than cost”). Where the weights are qualitative, the procurement lead is signaling the priority order without committing to specific numerics — read carefully. “Significantly more important than” generally means the trade space allows higher cost for higher technical merit; “approximately equal to” means cost matters more than the team often assumes.
The evaluation method. This is the question that determines the entire shape of your response. Two main methods:
- Best-value tradeoff. The evaluator buys higher quality at higher cost when the quality differential justifies it. The technical narrative is the whole game. Cost matters but is one input among several. Most large federal procurements use this method.
- LPTA — Lowest Price, Technically Acceptable. The evaluator awards to the cheapest vendor whose technical response meets a defined threshold. The technical narrative is a checkbox exercise — meet the threshold, do not over-write, win on price. Common in commodity procurements and in some cost-constrained federal buys.
A team that conflates the two writes best-value-style narratives into LPTA bids and loses to a cheaper compliant vendor. Or it writes LPTA-grade minimal narratives into best-value bids and loses to a more thorough competitor. The mismatch is invisible if you have not read the scoring paragraph.
A worked example. Suppose the scoring paragraph reads: “Technical approach (40%), past performance (20%), management approach (10%), cost (30%). Award will be made on a best-value tradeoff basis.” Now read the requirements section against that. The technical-approach section of your response gets 40% of the weight, so it deserves 40% of your writing budget — not 40% of your page count, which is a different question (page count is constrained by the submission format), but 40% of the editorial attention, the SME time, the win-theme integration. The cost-volume section gets 30% but it is mechanical work; the editorial attention there is in the cost narrative, not the spreadsheet. Past performance gets 20% — your selection of which prior contracts to cite matters more than the volume.
If the scoring paragraph instead read “LPTA, technical threshold defined in Section 3, lowest compliant price wins” — the editorial allocation flips. You write the technical section to meet the threshold, no more. You spend the saved capacity on driving cost down and on bulletproofing the cost narrative. Over-investing in technical brilliance in an LPTA bid is wasted effort. The evaluator does not get to reward it.
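The difference between the two methods can be made concrete with a small calculation. This is a hedged sketch of the arithmetic, not any agency's actual scoring algorithm; the vendor names, scores, and the technical threshold are invented for illustration. The weights match the worked example above:

```python
# Weights from the worked example: technical 40%, past performance 20%,
# management 10%, cost 30%. Scores are hypothetical evaluator ratings
# (0-100); the cost score is inverse to price (cheapest bid scores highest).
WEIGHTS = {"technical": 0.40, "past_performance": 0.20,
           "management": 0.10, "cost": 0.30}

def best_value_score(scores):
    """Best-value tradeoff: a weighted sum across all factors."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

def lpta_winner(bids, threshold=70):
    """LPTA: among bids whose technical score meets the threshold,
    the lowest price wins. Returns the winning vendor's name."""
    compliant = [b for b in bids if b["scores"]["technical"] >= threshold]
    return min(compliant, key=lambda b: b["price"])["vendor"]

bids = [
    {"vendor": "A", "price": 900_000,
     "scores": {"technical": 95, "past_performance": 90,
                "management": 85, "cost": 70}},
    {"vendor": "B", "price": 700_000,
     "scores": {"technical": 72, "past_performance": 80,
                "management": 80, "cost": 95}},
]
# Under best-value, vendor A's technical edge outweighs its weaker cost
# score (85.5 vs 81.3). Under LPTA, vendor B clears the threshold and
# wins on price. Same two bids, opposite outcomes.
```

The point of the sketch is the last comment: identical proposals produce opposite awards under the two methods, which is why the evaluation-method sentence in the scoring paragraph determines the shape of the response.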
The scoring paragraph is the procurement lead telling you, in plain language, how the rubric works. Most teams do not read it before they start writing. Your team’s win rate moves measurably when it does.
5. The red flags in the RFP itself
Some RFPs are not worth bidding. The procurement lead has, often inadvertently, signaled in the document itself that the procurement is wired, captured, or politically rather than commercially driven. There are five patterns I have learned to recognize. No single pattern is conclusive on its own, but two or more in one document is a strong bid/no-bid signal.
Pattern 1 — Hyper-specific requirements that match exactly one product. “The system shall include a feature that does X, in interface mode Y, with vendor-Z’s specific protocol.” If the requirements read like the spec sheet of a single product, the RFP is wired for that product’s vendor. The procurement lead may not have written it that way intentionally — the requirements may have been drafted by the buyer’s technical staff, who based them on the incumbent’s existing product. Either way, you are bidding against an inside track.
Pattern 2 — A short response window for a complex scope. Federal scopes that would normally allow 60 to 90 days for response are sometimes posted with 14- or 21-day windows. A short window favors the incumbent (who has the institutional knowledge to respond fast) or a vendor with prior coordination (who has been shadow-writing the response since before the RFP posted). If the window is short relative to the scope, ask why.
Pattern 3 — Past performance criteria the incumbent uniquely meets. “Vendor must have at least three years of experience supporting [specific named program] for [specific named buyer].” That is a past-performance bar exactly one vendor clears. If the past-performance language reads as if it was written from the incumbent’s resume, it probably was.
Pattern 4 — Pre-bid Q&A that produces no substantive answers. A buyer that uses the Q&A process to deflect rather than to clarify is a buyer that has decided the decision before the RFP was posted. If you submit five substantive questions in the Q&A window and get back five non-answers (“see RFP Section 3.2”), the procurement is either wired or the buyer does not value vendor input.
Pattern 5 — Award criteria that mention “preferred local presence,” “established relationships,” or “demonstrated commitment to [the buyer’s region or community].” These are sometimes legitimate procurement preferences. They are also sometimes the language used when the buyer has a relationship with a specific vendor and wants to bias the evaluation toward them without saying so directly.
Two or more of these patterns in a single RFP is your bid/no-bid signal. The opportunity is still biddable in the technical sense, but the probability of winning is materially lower than the document's surface text suggests. Allocate accordingly.
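If you want the two-or-more rule to be explicit in your bid/no-bid process rather than a gut call, the five patterns reduce to a simple tally. A sketch with hypothetical field names; each boolean is still a human reviewer's judgment, not something extracted automatically from the document:

```python
# The five red-flag patterns described above, as reviewer checklist keys.
RED_FLAGS = [
    "requirements_match_single_product",
    "short_window_for_complex_scope",
    "past_performance_only_incumbent_meets",
    "qa_produced_no_substantive_answers",
    "relationship_coded_award_criteria",
]

def bid_no_bid_signal(assessment):
    """Count how many patterns a reviewer flagged. Two or more is the
    strong signal to no-bid or heavily discount the win probability."""
    hits = [flag for flag in RED_FLAGS if assessment.get(flag)]
    return {"flags": hits, "count": len(hits),
            "strong_signal": len(hits) >= 2}

example = {
    "requirements_match_single_product": True,
    "short_window_for_complex_scope": True,
    "past_performance_only_incumbent_meets": False,
}
result = bid_no_bid_signal(example)
# Two patterns flagged, so result["strong_signal"] is True.
```

Recording the tally per opportunity also gives you calibration data over time: after twenty bids you can check whether two-flag opportunities actually converted worse, and adjust the threshold to your own pipeline.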
6. The Q&A process
The pre-bid question-and-answer process is the single most under-used intelligence tool in proposal work. Most teams use it to ask clarifying questions about ambiguous requirements. That is fine. It is also a small fraction of what the process can do.
The Q&A is governed by procurement policy. The procurement lead must answer every submitted question in writing, and the answers are typically published to all participating vendors. (Sometimes anonymized to remove the asking vendor’s identity; sometimes not.) That structure makes the Q&A a public, on-the-record statement of what the procurement lead’s office considers important.
What you can learn from your own questions: when you ask “Section 3.4 says ‘preferred,’ does that mean preferred or required?” — the answer the procurement lead gives you constrains how they will evaluate. If they say “preferred means strongly weighted but not pass-fail,” you now know the trade space. If they say “preferred is a synonym for required in this context,” you know the bar. Either answer is more useful than guessing.
What you can learn from other vendors’ questions when they are published: the questions other vendors ask reveal where they are spending their attention. If three of the published questions are about the cost-narrative requirements, the other vendors are bidding aggressively on cost. If two are about the security architecture, you have a sense of where the technical scrutiny is going to fall. The published Q&A is a free intelligence feed, and it is one of the highest-density sources of information about the competitive shape of the bid.
How to use it: submit your own questions in two waves. The first wave clarifies anything genuinely ambiguous in the requirements — these are the questions you would submit anyway. The second wave probes the rubric and the trade space. “Will the evaluation panel score the management-approach section against any specific PMO methodology, or is the methodology choice the vendor’s?” — that is a question whose answer materially shapes how you write the section. The procurement lead may decline to answer, or may answer in a way that surprises you. Either response is useful.
Caveat: do not submit questions that reveal your win-theme strategy. Some teams ask questions like “would the evaluator value a vendor with [specific differentiator we are about to claim]?” That kind of question telegraphs your approach and gives competitors time to mirror it. Keep your strategic questions confidential and ask the structural ones publicly.
7. The debrief
When the award is announced, you have either won or lost. In either case, request a debrief.
For federal procurement, the debrief is typically a right, not a courtesy. Under the FAR, unsuccessful offerors are entitled to a written or oral debrief explaining the basis for the award decision, the strengths and weaknesses of their proposal, and how the award was scored against the published criteria. Successful offerors can also request a debrief, though the rights are slightly different.
For state and local, debrief rights vary by jurisdiction. Many states grant them on request; some require them. In commercial procurement, debriefs are at the buyer’s discretion and are sometimes offered as a relationship-management gesture rather than a formal process.
Request the debrief regardless. The information density is high.
What you learn from a debrief on a loss: the specific weaknesses the evaluation panel identified in your proposal. The specific strengths they identified in the winner’s. The relative scoring across factors (you may not get raw scores, but you will get qualitative ranking — “your past performance was rated weaker than the awardee’s”). The procurement lead’s editorial assessment of where the bid was strongest and weakest.
What you learn from a debrief on a win: what almost went wrong. The factors where you were closest to losing. The specific strengths the evaluator weighted heavily. This is calibration data for the next bid — you now know which parts of your response actually moved the rubric, and you can double down on them.
Procurement leads will, in my experience, often tell you exactly why you lost if asked correctly. The phrasing matters. “Why did we lose” gets you a polite nothing. “We are committed to improving for the next opportunity — could you help us understand the specific weaknesses the panel identified in our technical approach, and the strengths they saw in the winner’s, so we can calibrate our own?” gets you, surprisingly often, a substantive answer.
The trick is to ask in the procurement lead’s frame. They are constrained on what they can disclose (no proprietary information from the winner, no scoring details that violate procedural fairness), but within those constraints they have wide discretion to be helpful. Asking in a way that respects the constraint is what opens the discretion.
The debrief is the post-mortem stage of the proposal pipeline. We covered it in Stage 8 of the pipeline post — a post-mortem that does not write its output back into the corpus is a post-mortem that has not happened. The debrief is the input to that post-mortem. Without it, every RFP starts at zero.
8. The reframe, again
Here is the closing shape of the argument.
Stop reading RFPs as sales documents. Start reading them as procurement documents written by named humans with known constraints, drafted from templates that have been reused for many bids, designed to compare a small set of vendors against a rubric the procurement lead wrote first.
When you read the document that way, six things change.
You read the scoring paragraph first, not last. You allocate writing effort against the scoring weights, not against the requirements list. You identify the procurement lead by name and pull their past awards to learn their template. You read the questions other vendors submit during the Q&A window as competitive intelligence. You watch for the five red-flag patterns and bid/no-bid against them. You request a debrief on every bid, win or loss, and you ask the question in the procurement lead’s frame.
None of this is novel. Shipley and APMP have written about most of it for decades. Lohfeld and VisibleThread and Fairmarkit have written about the operational pieces. The reframe — read the document as procurement, not as sales — is what ties it together. Once you have it, the rest of the work changes shape.
The next post in this series — running in early August — covers what changes inside your own proposal process when the team adopts this reading discipline. Capture plans get tighter. Compliance matrices get scored against the rubric. Win themes get written for the evaluator the procurement lead is selecting for, not for the buyer’s organization in the abstract. The downstream effects are large, and they all flow from the upstream change in how the document is read.
Until then: pick up the next RFP that lands in your inbox. Find the scoring paragraph before you read anything else. Read it twice. Now read the document around it. The whole RFP looks different. The work it asks of you is different. The response you write — when you write it from inside that frame — will be different in ways the evaluator will recognize.
That is the entire claim. Forty to sixty percent better, in my experience. Read the document the procurement lead actually wrote.
Sources
- Shipley Proposal Guide (7th ed.), Shipley Associates
- APMP Body of Knowledge
- VisibleThread — Government proposal writing: key steps, challenges, and tips for success
- Lohfeld Consulting — How to fix the proposal processes holding you back
- Fairmarkit — Four RFP pain points and how to overcome them