Anatomy of a 40-ish-page state RFP: a composite teardown
A structural walk through a typical mid-sized state RFP — modal-verb density, scoring rubric, buried disqualifiers. Composite teardown built from public state procurement patterns, not one specific document.
This is a composite teardown. We are not naming a specific solicitation number, and we are not walking through one real document line by line. The shape described below is drawn from the patterns that recur across public state-level RFPs — typically 30 to 50 pages, issued by a state agency under the state’s own procurement code, published on a state procurement portal under standard open-records terms. Where we cite specific structural features, they are features we see repeatedly across documents of this size; where we cite specific language, it is the standard modal-verb register (“shall,” “must”), not lifted from any one solicitation.
The point of the teardown is structural literacy. If you understand how a mid-sized state RFP is typically organized, you can read a new one end-to-end in an afternoon instead of spreading it across a week, and you can find the editorial weight early enough to make a real bid/no-bid decision.
The shape of the document
A representative mid-sized state RFP — call it 40-ish pages — tends to break into seven sections in roughly this sequence:
- Pages 1–4: cover, table of contents, schedule, points of contact.
- Pages 5–9: background on the agency and the program being procured.
- Pages 10–14: scope of work — the substantive description of what the contractor will do.
- Pages 15–28: requirements. The longest section. This is where most “shall” clauses live.
- Pages 29–32: evaluation criteria. The scoring rubric.
- Pages 33–36: submission instructions. Where the disqualifiers hide.
- Pages 37–42: appendices — sample contract, certifications, debriefing process.
The shape is typical for state-level procurements. Federal RFPs are usually longer (a comparable scope at the federal level would run 100+ pages). Mid-market commercial RFPs are usually shorter (20–30 pages) but compress a similar internal structure.
Modal-verb density — where the “shalls” cluster
A 40-ish-page state RFP typically carries well over a hundred modal-verb obligations. The distribution across sections is informative regardless of the exact count:
- A small cluster in the scope-of-work section.
- The bulk of obligations in the requirements section.
- A meaningful handful in the submission instructions — and those are the most dangerous ones, because they govern whether the response is even read.
- A few in the evaluation criteria section.
- A few in appendices and incorporated documents.
A team that builds a compliance matrix from the requirements section alone will miss the submission-side disqualifiers. Reading for modal verbs across every section — not just “requirements” — is the unglamorous discipline that separates compliant responses from rejected ones.
A caveat worth keeping. Not every “shall” generates a compliance-matrix row. Some are introductory (“the offeror shall meet all requirements set forth in this section”). Some are cross-references to other sections. Some are references to external statutes. The naive count from a plain text search overstates the drafting workload. The discrete-requirement count — the number of rows a compliance lead actually owns — is usually about half the raw modal-verb count.
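The gap between the raw modal-verb count and the matrix-row count can be sketched in a few lines. This is an illustration, not our production extractor; the regex and the filter phrases are assumed heuristics, and a real pass would need a much richer set of them.

```python
import re

MODAL_RE = re.compile(r"\b(shall|must)\b", re.IGNORECASE)

# Assumed heuristic markers for clauses that usually don't become matrix rows:
# introductory restatements, cross-references, external-statute pointers.
NON_ROW_HINTS = (
    "set forth in this section",
    "as described in section",
    "in accordance with",
)

def modal_clauses(text: str) -> list[str]:
    """Split text into rough sentences and keep those carrying a modal obligation."""
    sentences = re.split(r"(?<=[.;])\s+", text)
    return [s for s in sentences if MODAL_RE.search(s)]

def discrete_requirements(clauses: list[str]) -> list[str]:
    """Drop clauses matching the non-row heuristics; what remains is the row estimate."""
    return [c for c in clauses if not any(h in c.lower() for h in NON_ROW_HINTS)]

sample = (
    "The offeror shall meet all requirements set forth in this section. "
    "The contractor shall provide monthly status reports. "
    "Work must comply with applicable standards in accordance with state statute."
)
clauses = modal_clauses(sample)
rows = discrete_requirements(clauses)
print(len(clauses), len(rows))  # → 3 1: raw modal count vs. matrix-row estimate
```

Even on this toy input, two of three “shall/must” clauses fall away — roughly the one-half ratio described above.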
The scoring rubric
The evaluation-criteria section is the most important section of the document. If a vendor reads only one section, it is this one. A typical mid-sized state rubric allocates points across five factors:
- Technical approach — typically the largest share, around a third of points.
- Past performance — a quarter or so.
- Project staffing / key personnel — roughly a fifth.
- Cost — often a smaller share than vendors expect, frequently 15–25%.
- Small business / DBE / local participation — often a single-digit share but sometimes pass/fail.
Each factor typically has a paragraph describing how it will be scored, with sub-factors for the larger buckets (solution architecture, implementation methodology, risk management, innovation on the technical side; relevance, recency, customer references on past performance).
The weighting tells you the proposal’s investment plan: writer-hours should track the rubric. A team that drafts a proposal where the staffing section is treated as boilerplate is leaving a fifth of graded points on the table.
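The weights-to-hours mapping is simple arithmetic. A sketch, using the composite point shares above as assumed inputs; the 200-hour budget is an arbitrary example figure.

```python
# Illustrative rubric weights (composite point shares, not any real RFP's).
RUBRIC = {
    "technical_approach": 33,
    "past_performance": 25,
    "staffing": 20,
    "cost": 15,
    "small_business": 7,
}

def hours_by_factor(total_hours: float, rubric: dict[str, int]) -> dict[str, float]:
    """Allocate writer-hours in proportion to each factor's share of points."""
    total_points = sum(rubric.values())
    return {k: round(total_hours * v / total_points, 1) for k, v in rubric.items()}

print(hours_by_factor(200, RUBRIC))
# A 200-hour budget puts 66 hours on technical approach and 40 on staffing —
# far more staffing time than a boilerplate treatment would spend.
```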
An artifact we see in this kind of rubric consistently: an “innovation” sub-factor (or a synonym — “creative approach,” “value-added services”) that is almost never well-defined. The paragraph usually says something close to: “innovation will be evaluated based on the offeror’s proposal of approaches not currently in use by the Agency.” That gives evaluators almost nothing to grade against: innovation can be claimed for almost any feature. A skilled responder uses this kind of vague sub-factor as a place to claim differentiation that wouldn’t fit elsewhere.
The disqualifiers
A mid-sized state RFP typically hides 15–25 clauses whose violation would cause the response to be rejected before evaluation. They cluster in three categories.
Format and length.
- Technical narrative page-count, font, margin, and spacing limits.
- Separate-file requirements for cost vs. technical.
- Pricing template requirements (offerors must submit on the provided spreadsheet).
Submission mechanics.
- Portal-specific submission (eVA, state-specific eProcurement portal, etc.).
- Cutoff time enforcement to the minute on the due date.
- Signed cover letter from an officer with authority to bind the offeror.
Certifications.
- Current Certificate of Good Standing from the state of incorporation.
- Proof of liability insurance at the specified limits.
- Conflict-of-interest disclosure forms.
None of these clauses is hard to satisfy. All of them are easy to forget. The pattern across documents of this size: teams build solid technical responses and lose at the gate because nobody owned the disqualifier checklist.
What automated extraction does well — and where it stops
We run this kind of document through our own RFP Analyzer extraction stage as part of the teardown workflow. Qualitatively:
- Modal-verb extraction is reliable on the main body, weaker on clauses cited inside external statutes.
- Matrix-row classification — distinguishing introductory “shall” clauses from discrete requirements — is stronger than a naive search but not perfect.
- Scoring-rubric extraction is the easiest part of the work and the most useful output. Rubrics are structurally consistent (numbered factors, point weights, paragraph descriptions) and the parser handles them cleanly.
- Disqualifier extraction is the hardest part. Disqualifiers are not always in the submission section. Some hide in footnotes. Some hide in appendices. Some are referenced as “in accordance with [external state statute].”
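The structural consistency that makes rubric extraction the easy case can be seen in a toy parser. The line formats below are assumptions about common layouts (numbered factors with parenthesized or dash-separated point values), not a description of our production parser.

```python
import re

# Assumed rubric line formats: "1. Technical Approach (35 points)" or
# "Factor 3: Key Personnel - 20 points". Real documents vary more widely.
FACTOR_RE = re.compile(
    r"^(?:Factor\s+)?\d+[.:]\s*(?P<name>[A-Za-z /-]+?)\s*[-(]\s*(?P<points>\d+)\s*points\)?",
    re.MULTILINE,
)

def extract_rubric(text: str) -> dict[str, int]:
    """Map each scoring factor to its point weight."""
    return {m["name"].strip(): int(m["points"]) for m in FACTOR_RE.finditer(text)}

sample = """\
1. Technical Approach (35 points)
2. Past Performance (25 points)
Factor 3: Key Personnel - 20 points
"""
print(extract_rubric(sample))
# → {'Technical Approach': 35, 'Past Performance': 25, 'Key Personnel': 20}
```

Disqualifiers have no equivalent regularity — which is exactly why they are the hard case.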
A compliance lead reviewing the extracted rubric in our UI can build the proposal’s structure directly from it. The disqualifier surface is the place we have the most work to do — every missed disqualifier is a potential rejection.
Patterns across documents
A few qualitative patterns we see hold up across state-level RFPs of this size:
- Compliance-matrix yield for mid-sized state procurements typically runs 80–200 rows. Federal yield runs 200–1000+. Commercial yield runs 30–80.
- Scoring rubrics on state RFPs reliably allocate the majority of points to a combination of technical approach and past performance. Cost is rarely above 25%.
- Disqualifier counts run 15–25 on state-level documents. Most are formatting and submission mechanics; a smaller number are certifications.
- The “innovation” sub-factor (or its synonyms) appears in a clear majority of state RFPs. It is rarely well-defined. It is consistently a place where skilled responders claim differentiation.
What this teardown can’t tell you
Three honest limits.
This teardown cannot tell you which vendor will win a specific procurement. That depends on capture intelligence — incumbent knowledge, relationship history, debrief data — which no structural read of a document can supply. The structural analysis above is necessary but nowhere near sufficient for a bid decision.
This teardown cannot give you year-over-year consistency analysis for a specific agency. A compliance lead bidding on work from one agency would benefit from a year-by-year comparison of that agency’s RFPs to spot drift in disqualifier strictness or shifts in scoring weights. That work needs specific documents named. This is a composite.
This teardown cannot tell you what the buyer’s unstated priorities are. The procurement officer who drafted any specific RFP operated under constraints and politics that aren’t in the document. Capture intelligence is gathered through phone calls, prior bid debriefs, and the long memory of senior account executives. A teardown reads what’s on the page; capture reads what’s around it.
What this teardown can do is give a proposal lead a template for how to read a document of this shape. The modal-verb pass, the rubric extraction, the disqualifier inventory — these are mechanical first steps that the proposal team can do in a couple of hours and that compound across every bid the team responds to. The first hour of any RFP read should be these passes. The next 40 hours of work depend on them.
Why we write this as a composite
We considered writing this as a walkthrough of one anonymized RFP. Two reasons against. First, state RFPs often have identifying surface features — agency names, program area language, specific statute references — that survive anonymization and let a careful reader back-trace to the source. Second, the structural claims in the post are general patterns; tying them to one document would imply a level of single-source specificity that the analysis doesn’t rest on. Composite framing is the honest framing for a structural teardown.
The California and DoD composites we’ve published use the same pattern. If you have a specific state RFP open right now and you’re using this post as a checklist, the right next step is to put this post down and finish reading the actual RFP. The patterns will be there. The specifics that will decide whether you win are in the document, not in this post.