Field notes

The unwritten rules inside every RFP (Part 3 of 4)

Procurement leads write RFPs in a particular dialect. Once you can read it, the scoring rubric, the disqualifiers, and the actual priorities surface within the first 20 pages.

Sarah Smith · 8 min read · RFP Mechanics

There is a dialect that procurement leads write RFPs in. It is consistent across industries — federal, state, healthcare, enterprise IT, finance — because the people who write RFPs learn the dialect from the people who taught them, who learned it from the ones before. It is not in any style guide. It does not show up in the RFP’s table of contents. But it runs through every page, and once you can read it, an RFP that looked like a 70-page maze of requirements collapses into about a dozen things that actually matter.

Parts 1 and 2 of this series covered Intake (how the document arrives, how to fingerprint it) and Bid/No-Bid (the decision before the decision). Part 3 is the dialect itself. Part 4, next week, returns to bid/no-bid with the gut-check signals that tie it all together.

The “shall” census

Open the RFP. Search for the word “shall.” Count the hits.

That count is your compliance burden in its purest form. Every “offeror shall” is a discrete requirement that has to be addressed in the response, mapped to a section of your draft, and signed off by a reviewer at the gold-team stage. A 42-page state RFP I reviewed last month had 187 hits on “shall.” A federal IT modernization RFP from a year prior had 412. A mid-market enterprise security questionnaire that came in as a Word document had 38.

The number tells you the size of the compliance matrix you need to build. It also tells you something about the buyer. A high “shall” count from a buyer who has run this procurement before usually means the requirements were inherited from a prior RFP and copy-pasted; some of them won’t apply to your category and the buyer will be receptive to “acknowledged, no response required” lines. A high “shall” count from a buyer running their first procurement of this type means they have over-specified out of caution, and every single requirement will be read literally.

The dialect: “shall” is a hard requirement. “Should” is a soft one. “May” is permissive. “Will” is a description of the buyer’s behavior, not yours. “Offeror should describe” is gentler than “offeror shall provide.” VisibleThread’s compliance tooling makes this explicit — they classify clauses by modal verb because the modal verb is the load-bearing word. Treat them all as “shall” and you’ll over-respond. Treat them all as “should” and you’ll fail compliance.
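A minimal sketch of the census, assuming the RFP has already been extracted to plain text. The file name, verb list, and sentence splitting are illustrative simplifications, not a production parser:

```python
import re
from collections import Counter

# Modal verbs in descending order of obligation. "will" usually describes
# the buyer's behavior, not the offeror's.
MODALS = ("shall", "should", "may", "will")

def modal_census(rfp_text: str) -> Counter:
    """Count whole-word, case-insensitive hits for each modal verb."""
    return Counter({
        modal: len(re.findall(rf"\b{modal}\b", rfp_text, re.IGNORECASE))
        for modal in MODALS
    })

def hard_requirements(rfp_text: str) -> list[str]:
    """Pull every sentence containing 'shall': the raw rows of the compliance matrix."""
    sentences = re.split(r"(?<=[.;])\s+", rfp_text)
    return [s.strip() for s in sentences
            if re.search(r"\bshall\b", s, re.IGNORECASE)]

text = open("rfp.txt").read()   # hypothetical extracted RFP text
print(modal_census(text))       # e.g. Counter({'shall': 187, 'will': 94, ...})
print(len(hard_requirements(text)), "compliance-matrix rows to build")
```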

The scoring paragraph hidden in section 4.2

Every RFP has a section that explains how the response will be evaluated. In federal procurement it is usually called “Evaluation Criteria” or “Basis of Award.” In commercial RFPs it is sometimes called “Evaluation Methodology” or buried inside a section called “Submission Requirements.” It is almost never in the table of contents at the top level. It is usually a sub-section of a sub-section, three or four pages of text, with a numbered list of factors and a paragraph for each.

This is the most important section in the document. It is also the section that first-time RFP responders skim because it doesn’t ask them to do anything. They look for the questions and miss the rubric.

A worked example. In a state RFP I read last quarter — a procurement for a case management system for a public agency — the evaluation section listed five factors:

  1. Technical approach — 35 points.
  2. Past performance — 25 points.
  3. Project staffing — 20 points.
  4. Cost — 15 points.
  5. Small business participation — 5 points.

That weighting is the entire shape of the response. Technical approach at 35 points means the technical section needs to be the longest, the most evidence-heavy, and the one that gets the most senior writer time. Past performance at 25 points means the case studies need to be crisp, recent, and specifically aligned to the buyer’s domain. Project staffing at 20 points is the section most teams write last and most under-resource — and it carries a fifth of the score.

Cost at 15 points is the most commonly misread number in the dialect. A vendor sees “only 15 points for cost” and concludes that price isn’t load-bearing. The actual reading is: cost is graded on a curve against the other respondents, and depending on the formula, a vendor priced 30% above the lowest bid can lose most or all of those 15 points outright. A 15-point deficit on a 100-point scale is one that no other section can reliably make back. Cost is always load-bearing. The points distribution only tells you whether it is the deciding factor or a tie-breaker.
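What “graded on a curve” means depends on the formula, which the RFP may or may not publish. A hypothetical sketch of two common shapes, a ratio formula that degrades gracefully and a linear band that zeroes out past a threshold, shows why the same 30% overage can cost a few points or all of them:

```python
def cost_score_ratio(price: float, lowest: float, max_points: float = 15.0) -> float:
    """Ratio formula: full points to the lowest bid, proportional after that."""
    return max_points * lowest / price

def cost_score_band(price: float, lowest: float, max_points: float = 15.0,
                    band: float = 0.30) -> float:
    """Linear band: points fall to zero once the bid is `band` (here 30%)
    above the lowest. Both formulas are illustrative; use the RFP's own."""
    overage = (price - lowest) / lowest
    return max(0.0, max_points * (1 - overage / band))

# A bid 30% above the lowest keeps ~11.5 of 15 points under the ratio
# formula, but zero under a 30% band:
print(round(cost_score_ratio(1.30, 1.00), 1))   # 11.5
print(cost_score_band(1.30, 1.00))              # 0.0
```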

The “or equivalent” clause

This is one of the more interesting tells in the dialect.

A specification that names a brand or model — “the system shall be hosted on AWS GovCloud” or “the platform shall use Microsoft Power BI for analytics” — sometimes carries an “or equivalent” qualifier and sometimes does not. The presence of “or equivalent” is the buyer leaving a door open for vendors who use different infrastructure. The absence of “or equivalent” is the buyer signaling that the named brand is mandatory.

The trick: many RFPs include named brands without “or equivalent” because the original draft was written by someone on the buyer side who wasn’t thinking about competitive procurement law, and the procurement office didn’t catch it. The named brand gets written in, the procurement officer reviews, and “or equivalent” gets added to most but not all of the spec lines. The ones that didn’t get the qualifier are sometimes oversights.
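This tell is scannable. A sketch, with a hypothetical watch list of brand names; in practice you build the list from the RFP's own spec section:

```python
import re

# Hypothetical watch list; populate it from the RFP's own spec section.
BRANDS = ["AWS GovCloud", "Microsoft Power BI", "Salesforce", "ServiceNow"]

def locked_specs(spec_lines: list[str]) -> list[str]:
    """Return lines that name a brand but omit an 'or equivalent' escape hatch.
    Each one is a candidate Q&A question."""
    flagged = []
    for line in spec_lines:
        names_brand = any(b.lower() in line.lower() for b in BRANDS)
        has_escape = re.search(r"or\s+equivalent", line, re.IGNORECASE)
        if names_brand and not has_escape:
            flagged.append(line.strip())
    return flagged
```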

Asking the question in the Q&A period — “Is the AWS GovCloud requirement strict, or will an equivalent FedRAMP High environment be considered?” — is how you find out. The answer in the addendum tells you whether the absence was deliberate. If the answer is “strict,” you know whether you can bid at all. If the answer is “or equivalent acceptable, please describe,” you know the buyer is open and the spec was an oversight.

What procurement actually scans for

Procurement leads read your response in a specific way. They are not reading it the way the technical evaluators read it. They are scanning for compliance.

What they scan for, in order:

  • The cover letter, signed by an officer of the company, with the right entity name on it.
  • The statement of confidentiality / non-disclosure acknowledgment.
  • The pricing section, in the format and table layout the RFP specified.
  • The certifications — small business status, security clearances, bonding, insurance limits.
  • The technical section, but only at the section-header level — does it match the structure the RFP asked for?
  • The compliance matrix, if the RFP asked for one (and increasingly they do).

They do not, in this first scan, read the technical content. The technical content gets read by a separate set of evaluators. Procurement’s job is to confirm that the response is responsive — that it has the required pieces, in the required formats, signed by the required people. Responses that fail this scan get rejected before any technical evaluator sees them.

The implication: section structure, formatting, and the procurement-facing artifacts (cover letter, pricing tables, certifications) deserve as much attention as the technical content. They are the gate. A brilliant technical proposal in the wrong format gets rejected at the gate.
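The section-header pass is mechanical enough to automate before a human gatekeeper sees the document. A sketch, assuming a hypothetical required structure transcribed from the RFP's submission instructions:

```python
# Hypothetical required structure, transcribed from the RFP's submission section.
REQUIRED_SECTIONS = ["Cover Letter", "Executive Summary", "Technical Approach",
                     "Past Performance", "Staffing Plan", "Cost Proposal"]

def missing_sections(draft_headings: list[str]) -> list[str]:
    """Return required sections absent from the draft, in the RFP's order."""
    present = {h.strip().lower() for h in draft_headings}
    return [r for r in REQUIRED_SECTIONS if r.lower() not in present]
```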

The “describe” verb

When an RFP says “describe your approach to X” or “describe the experience of your proposed staff with Y,” it is asking for prose. Not a yes/no, not a checklist — prose with concrete claims and supporting evidence.

Most responses to “describe” prompts are too short. They give two paragraphs of generic capability statements. The evaluator reads them, finds nothing scoreable, and gives a middle grade by default.

What the dialect actually asks for: half a page to a page of specific, evidence-weighted prose with named projects, named tools, and named outcomes. “Our team of 14 cloud engineers has migrated 22 state agencies to FedRAMP High environments since 2021, including the Texas Department of Information Resources (2022, 18-month engagement), the Ohio Department of Administrative Services (2023, 12-month engagement)…” Three sentences of that beat two paragraphs of “our team has extensive experience with cloud migrations.”

The evaluator can read evidence-weighted prose and assign a grade. They cannot grade generic capability statements without arbitrary judgment. The dialect-aware response makes the evaluator’s job easy. The dialect-naive response makes the evaluator’s job hard. Easy-to-grade wins.

The disqualifiers buried in the boilerplate

Section 1 of every RFP is boilerplate. It says who the buyer is, when the procurement opens, when the response is due, where to send questions. Most responders skim it.

The disqualifiers live in the boilerplate. A typical state RFP has, somewhere in the first five pages:

  • A page-count limit on the technical narrative (“not to exceed 50 pages, single-spaced, 11-point font”).
  • A file-format requirement (“submit as a single PDF, not exceeding 25 MB”).
  • A submission portal requirement (“responses received outside the eVA portal will not be considered”).
  • A debriefing eligibility window (“debriefings will be conducted within 30 days of award; requests after 30 days will not be honored”).

Miss any one of these and the response gets disqualified before it is read. Page-count overruns are the most common. A 50-page limit feels generous when you start; by red-team, you have 73 pages of content and four people arguing about which 23 to cut. By gold team, you are still over and someone is told to “tighten the technical section” — which usually means dropping evidence that earns points.
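The mechanical disqualifiers are also checkable in code. A pre-submission sketch, assuming the pypdf package and the example limits from the list above; note that some RFPs apply the page limit to the technical narrative only, not the whole file:

```python
import os
from pypdf import PdfReader  # assumes pypdf is installed

# Example limits, mirroring the sample disqualifiers above.
MAX_PAGES = 50
MAX_BYTES = 25 * 1024 * 1024  # 25 MB

def presubmission_check(pdf_path: str) -> list[str]:
    """Check the mechanical disqualifiers everyone forgets at 11 p.m."""
    failures = []
    if not pdf_path.lower().endswith(".pdf"):
        failures.append("not submitted as a PDF")
    elif len(PdfReader(pdf_path).pages) > MAX_PAGES:
        failures.append(f"page count exceeds {MAX_PAGES}")
    if os.path.getsize(pdf_path) > MAX_BYTES:
        failures.append("file exceeds 25 MB")
    return failures
```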

The fix is simple and structural: put the disqualifiers in the kickoff email. Put them on the first page of the proposal-team Notion page. Print them. Re-read them at every color-team review. The dialect rewards teams who read the boilerplate.

Closing — Part 4 next week

Part 4 of this series, landing next week, is the bid/no-bid gut check. The five signals that an RFP is a wired bid for an incumbent, a wish-list draft from a buyer who isn’t ready to procure, or a request that won’t be funded. Reading the dialect is necessary; deciding which RFPs to read it on is what protects your team’s hours.

Sources

  1. VisibleThread — Government proposal writing: key steps and challenges
  2. Shipley Wins — Color team reviews