The federal task-order RFP wave of 2026
IDIQ task-order patterns from the first half of 2026. How response windows, page caps, and evaluation schemes have shifted — and how teams are staffing against them.
A task-order RFP is an IDIQ’s real-world face. The master vehicle is the umbrella; the task orders are the work. Anyone who has watched GSA OASIS and its newer OASIS+ successor, CIO-SP4, or Alliant 2 knows that task orders are where the calendar actually lives, and where win-rate math actually happens.
We’ve been tracking task-order RFP metadata since early 2025. This is the spring 2026 read: what changed, what didn’t, and how the teams we talk to are staffing against the shift.
The short read
- Response windows compressed again. The median response window on the task orders we indexed in Q1 2026 was 14 calendar days, down from 17 in Q1 2025. The 10th-percentile window is now five business days.
- Page caps are flattening. After three years of shrinking page limits, the 2026 median technical volume cap is 25 pages — roughly where 2025 landed. The compression has stopped.
- Evaluation schemes are simpler. More task orders are “LPTA with acceptable-rated technical” or “trade-off with weighted technical” rather than the heavier evaluated-proposal schemes of 2022–2024. Fewer oral presentations.
- Past-performance asks are tighter. Three references, same-agency preferred, last three years. Less “five references of any similar size in the last seven years” — more specificity.
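The medians and percentiles above are ordinary order statistics. A minimal sketch of how they fall out of indexed window data, using Python's standard library; the sample values here are invented, not our dataset:

```python
from statistics import median, quantiles

# Invented response windows (calendar days) for a batch of indexed
# task-order RFPs; stands in for the real Q1 2026 dataset.
windows_2026 = [5, 7, 9, 10, 12, 13, 14, 14, 15, 17, 19, 21, 25, 30]

print(median(windows_2026))              # median window for this sample
print(quantiles(windows_2026, n=10)[0])  # 10th-percentile (fast-tail) window
```

`statistics.quantiles` with `n=10` returns nine cut points; index 0 is the 10th percentile, the fast tail the post tracks.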
Response windows: five days is now normal at the tail
Fourteen-day windows make task-order work functionally impossible to do well with a normal proposal process. The teams responding at that clock aren’t running a Shipley color-team review on every pursuit — they’re running a compressed variant. Two-stage review, no pink team, a single combined red/gold. Submit is a separate discipline run by a named owner who is not the writer.
The five-business-day windows at the tail change the math further. A team that bids at that window on something they haven’t captured is losing. A team that bids at that window on something they’ve been capturing for six months has a real shot. Capture leads are the deciding role.
This tracks with what VisibleThread has written about compressed federal cycles — rushing into writing without understanding the requirements is the leading failure mode, and the compressed window makes that failure mode the default.
Page caps: the compression stopped
From 2022 to 2025 we watched federal page limits shrink every year. Forty-page technical volumes became 30, then 25, then in some cases 15. The theory: evaluation panels wanted to read less.
In 2026 the curve flattened. The median technical volume page cap on the task orders we indexed is 25, same as 2025. The modal number is 20. A few agencies — GSA in particular — pushed back to 30 on complex integration task orders.
Our read: evaluators hit a floor. Below 20 pages it’s very hard to describe a complex integration, a 10-year past-performance record, and a key-personnel roster without cutting something load-bearing. The push to shrink further appears to have stalled.
Evaluation schemes: the trade-off drift
Two schemes are dominating the 2026 task-order stack:
- Acceptable-rated technical + price (LPTA in practice if not in name): the technical volume must clear a pass/fail bar; after that, lowest evaluated price wins. Roughly 45% of the task orders we indexed in Q1.
- Trade-off with weighted technical (best-value): technical and past performance are scored; price is evaluated separately; the combined score drives award. Roughly 40%.
The remaining 15% splits between LPTA (pure), highest-technically-rated with price realism, and a handful of unusual schemes that don’t fit a named pattern.
What changed from 2025: oral-presentation schemes are rarer — down to about 8% from 15% the year before. The theory isn’t that agencies dislike orals; it’s that the contracting officers drafting these task orders have less time and written evaluations require less scheduling.
Past performance: three references, same-agency preferred
This is the most consistent shift of the year. A year ago, the modal past-performance ask was “up to five references of similar size and scope, last seven years.” This year’s modal ask is “three references, same-agency preferred, last three years.”
The operational implication: a contractor without recent same-agency past performance is effectively locked out of the task orders where their technical capability is strongest. We see this most clearly in HHS and DoD health agencies — repeat contractors are winning at rates well above their objective technical scores because the past-performance scoring is carrying them.
For non-incumbents, this raises the bar on what counts as a capture strategy. It’s no longer enough to have a good solution — there has to be a bridge past-performance narrative, often through subcontracting or teaming, that addresses the same-agency preference.
How teams are staffing
We asked six federal contractors — a mix of small-business primes and mid-tier integrators — what they changed about proposal staffing between Q1 2025 and Q1 2026. The patterns:
- Dedicated task-order capture leads. Five of six now carry at least one capture lead whose sole job is task-order pursuits against a named IDIQ. In 2025 most of these roles were shared with broader capture duties. The rationale: a capture lead who splits attention across base-contract pursuit and task-order pursuit will default to whichever has a closer deadline, and task orders always do. Separating the roles removes the trade-off.
- Proposal managers doing triple duty. Two of six described their proposal managers running three concurrent task orders in the Q1 push. This is unsustainable and the teams know it. The delta between 2025 and 2026 isn’t that managers can do more — it’s that they can’t, and something is breaking. Burnout showed up in the exit conversations of two senior proposal managers at these firms; both cited the cadence as the reason.
- SME rotations. Four of six have moved to a rotation where a core SME pool is on-deck for task orders one week in four, with billable projects shielded the other three. This is the most promising staffing change we’ve seen — it reduces the “steal the best engineer from a real project” failure mode Quilt has documented across the industry. The two firms that haven’t adopted a rotation reported the highest SME dissatisfaction scores in their internal surveys.
- Outsourced color-team reviewers. Three of six now contract a red-team or gold-team reviewer from outside the firm. The inside view: freshness beats familiarity. An outside reviewer catches the tired assumptions an internal team has stopped seeing. The outside view: it’s expensive and the scheduling is painful. Both are true.
- Proposal-writer pools shared across task orders. Two of six have moved from “each pursuit gets a dedicated writer” to “a small pool of writers rotates across active pursuits on weekly slots.” The argument is that pooled writers build better pattern recognition across task orders from the same agency — the same evaluator language shows up, the same compliance gotchas recur, and a writer who has seen three of them reads the fourth faster.
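The one-week-in-four SME rotation in the third bullet reduces to modular arithmetic over weeks. A minimal sketch, with hypothetical group names and an arbitrarily chosen epoch Monday:

```python
from datetime import date

# Hypothetical four-group SME rotation: each group is on deck for
# task-order work one week in four and shielded the other three.
SME_GROUPS = ["group-a", "group-b", "group-c", "group-d"]
ROTATION_EPOCH = date(2026, 1, 5)  # an arbitrary Monday chosen as week zero

def on_deck(day: date) -> str:
    """Return which SME group covers task orders during the week of `day`."""
    weeks_elapsed = (day - ROTATION_EPOCH).days // 7
    return SME_GROUPS[weeks_elapsed % len(SME_GROUPS)]

print(on_deck(date(2026, 1, 5)))  # week zero: group-a
print(on_deck(date(2026, 2, 2)))  # four weeks later: group-a is back on deck
```

The point of anchoring to a fixed epoch is that the schedule is derivable from the calendar alone, so billable-project managers can see shielded weeks months ahead.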
The sub-pattern: single-award vs. multi-award task orders
Not all task orders are shaped the same. A single-award task order — where the agency selects one vendor from the IDIQ holder pool — runs a heavier evaluation. A multi-award task order — where several vendors each get a piece of the work — runs lighter.
Q1 2026 data on our indexed set: roughly 70% of task-order RFPs we saw were single-award, 25% multi-award, 5% unclear. The single-award fraction has been flat year over year. What changed is the evaluation depth on single-award task orders — same technical volume page cap as 2025, but more detail expected per page. Evaluators are reading for specificity.
The multi-award posture rewarded different behavior. Several of the multi-award task orders we reviewed let vendors respond with a base proposal that could be re-used across similar orders. Teams that had standardized their response templates — particularly for past-performance and key-personnel sections — bid three or four multi-award task orders in the time it took to bid one single-award. This is where response-reuse infrastructure earns its keep.
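In its simplest form, that reuse infrastructure is standardized sections stored once and assembled per order. A sketch; the section contents and task-order id are illustrative, not any firm's actual library:

```python
# Illustrative reusable sections: the parts of a multi-award response
# that change little from order to order within the same vehicle.
STANDARD_SECTIONS = {
    "past_performance": "Three same-agency references, refreshed quarterly.",
    "key_personnel": "Bios and availability matrix for the core SME pool.",
}

def assemble_base_proposal(task_order_id: str, custom_technical: str) -> str:
    """Combine the reusable sections with the one volume written fresh."""
    parts = [f"Task order {task_order_id}", custom_technical]
    parts.extend(STANDARD_SECTIONS.values())
    return "\n\n".join(parts)

draft = assemble_base_proposal("TO-0042", "Technical approach, written fresh.")
print(draft.count("\n\n"))  # three separators joining the four parts
```

Only the technical volume is written per pursuit; everything else is pulled, which is what lets a team bid three or four multi-award orders in one single-award's calendar.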
Bid/no-bid at 14 days
Compressed windows force the bid/no-bid decision to be fast and honest. Several teams we talked to reported that their bid/no-bid meeting has moved from a 45-minute review at the start of the pursuit to a 15-minute scored decision made within 24 hours of RFP drop. Past 24 hours, the team has already begun drafting and the sunk-cost argument takes over.
The 15-minute decision is only possible with a pre-built framework. The teams running it walk into the meeting with the capture lead’s one-page capture memo already drafted, a capture-plan summary, and a scoring framework they apply consistently. No ad hoc deliberation on a five-business-day window.
This is the highest-leverage change a team can make against a compressed-window task-order wave: kill the pursuits you shouldn’t be on fast enough that the team’s real capacity is spent on the ones they should. The bid/no-bid framework post from month one has the scoring pattern.
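A scored framework of this kind can be as small as a weighted checklist. This sketch is illustrative; the criteria, weights, and threshold are our assumptions, not the framework post's actual rubric:

```python
# Illustrative bid/no-bid rubric: each criterion is pass/fail and
# contributes its weight when satisfied. Weights and threshold are invented.
CRITERIA = {
    "capture_depth":    3,  # months of pre-RFP capture work on this order
    "same_agency_pp":   3,  # recent same-agency past performance on hand
    "incumbent_or_sub": 2,  # incumbency, or a teaming bridge to it
    "staff_available":  2,  # SME and writer capacity inside this window
}
BID_THRESHOLD = 7  # out of a maximum score of 10

def bid_decision(scores: dict[str, bool]) -> tuple[int, str]:
    """Score a pursuit and return (total, 'bid' | 'no-bid')."""
    total = sum(w for name, w in CRITERIA.items() if scores.get(name))
    return total, ("bid" if total >= BID_THRESHOLD else "no-bid")

# A pursuit with capture depth and capacity but no agency track record.
print(bid_decision({"capture_depth": True, "staff_available": True}))
# -> (5, 'no-bid')
```

Setting the threshold so that capture depth alone cannot clear it is one way to encode the post's point: at a five-day window, uncaptured pursuits lose.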
Why the shift is happening
We’re not reading the contracting officer’s mind, but the patterns are consistent with a few structural pressures:
Federal fiscal cadence. More task orders are being awarded off expiring funds at the end of the fiscal quarter and fiscal year. That creates concentrated waves of RFPs with short windows because the contracting shop is trying to obligate before the funds expire. This has always been true; the concentration appears to have intensified.
Contracting officer workload. Contracting shops are thinly staffed. A 14-day window reflects the writing rate the contracting officer can sustain — they’d rather post a shorter-window RFP that they finalize today than a longer-window RFP that sits in their queue for two more weeks.
Evaluation simplification. Trade-off evaluations with fewer criteria are faster to run. Agencies moving toward acceptable-rated-plus-price are responding to the same time pressure on the evaluation side that produced the window compression on the RFP side.
None of these are reasons to celebrate; they’re reasons the environment is what it is.
What good looks like
A task-order response practice that works in 2026:
- A live bid/no-bid framework (see our framework post) that kills at least 40% of inbound task orders before a single page is drafted.
- A compliance matrix built from intake, not as a post-hoc audit.
- A past-performance library that’s indexed by agency, not just by size-and-scope — because the evaluation panels are reading for same-agency first.
- An SME rotation that treats task-order weeks as capacity, not as overtime.
- A post-mortem that writes back to the library within one week of award or debrief. The compounding edge only works if the learning loops close.
Where this goes
Next quarter’s read: we expect the response-window compression to continue but flatten, page caps to stay at 25, and same-agency past-performance preference to become more explicit (possibly formalized in evaluation criteria rather than just “preferred”). If oral presentations spike back — say, because a new administration or a policy memo pushes for them — we’ll catch it in the next quarterly cut.
The wave-2 state-of-tools post lands next week and includes a section on which proposal-management platforms have caught up with task-order cadence and which are still built for the 60-day federal pursuit. See the preview from April 4.