Healthcare RFP trends for 2026, early read
Seven clauses that were absent from the 2025 baseline of healthcare RFPs and are now showing up repeatedly in the Q4 2025 sample. What they tell us about buyer priorities for the year ahead, with sourcing where it is public.
The research team reviewed a sample of 38 healthcare-sector RFPs and DDQs issued in Q4 2025. The sample covers acute-care systems, payer organizations, and health-tech vendors purchasing from sub-tier suppliers. The question we were answering: what clauses and requirement patterns are appearing in the Q4 2025 sample that were not present, or were materially different, in the 2025 baseline we have on file.
Seven clauses came up repeatedly. Below is the early read. The full methodology note is at the bottom.
One — AI model governance, specifically scoped
Where RFPs in the 2025 baseline asked for general AI governance policy documents, the Q4 2025 RFPs ask for specific model-level artifacts: the list of models in use (by name and version), the training-data provenance statement for each, the human-in-the-loop policy for each use case, and the audit trail retention period. The questions are structured, not open-ended.
The regulatory context is the tightening of HHS and state-level AI transparency expectations, which predate specific rules but are shaping buyer-side procurement expectations. Vendors whose AI governance program is still at the policy-document level will fail this section. Vendors with model-level artifacts pass it.
Two — Training-data exclusion clauses
A new pattern: RFPs explicitly require that the buyer’s data cannot be used to train or fine-tune any model, without the possibility of opt-in. Not “opt-out by default.” Not “opt-in available.” Explicit contractual exclusion with an enforcement mechanism.
About 60% of the healthcare sample now carries this clause in some form. The 2025 baseline had it in under 20% of responses. The shift is driven by buyer counsel’s read of HIPAA obligations in an AI context and is unlikely to reverse.
The downstream effect on vendor architecture is real. A vendor whose AI capability depended on aggregate training on customer content will need to either rearchitect the capability to avoid that path or explicitly carve out the buyer from the training pool. Both options are live; neither is cheap. For proposal teams, the answer to this clause cannot be a generic privacy statement. It needs to name the specific mechanism that prevents the buyer’s data from entering the training loop, including the controls on that mechanism and how the buyer can verify it.
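To make the difference between a policy statement and an enforcement mechanism concrete, here is a minimal sketch of a guard at the training-pipeline boundary. Everything here is hypothetical: the tenant IDs, the record shape, and the hardcoded exclusion set all stand in for what would, in practice, be driven by the contract system and would also need to cover fine-tuning and evaluation paths.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    tenant_id: str
    payload: str

# Tenants whose contracts carry an explicit training-exclusion clause.
# Hypothetical values; a real system would source this from contract data.
EXCLUDED_TENANTS = {"example-health-system", "example-payer"}

def filter_training_batch(batch):
    """Drop records from excluded tenants and return what was dropped,
    so the exclusion is auditable rather than silent."""
    kept, dropped = [], []
    for rec in batch:
        (dropped if rec.tenant_id in EXCLUDED_TENANTS else kept).append(rec)
    return kept, dropped
```

The dropped list matters as much as the kept list: it is what lets the vendor answer the clause's verification question with evidence rather than assertion.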
Three — Data residency at the model-inference layer
Data residency is not new, but the scope has changed. RFPs in the 2025 baseline asked about data residency at the storage layer (where is the database). Q4 2025 RFPs ask about data residency at the model-inference layer (where does the prompt get sent and processed, including transit to a model provider’s infrastructure).
For vendors using third-party foundation models, this is a harder question. The answer usually requires stating which model providers are in the inference path, which regions those providers serve from, and whether the inference traffic crosses jurisdictional boundaries. Some buyers are requiring a dedicated inference region; others are requiring on-premises inference for specified data categories.
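One way to operationalize a residency answer is a routing guard that refuses to send inference traffic outside the buyer's allowed jurisdictions. This is a hedged sketch only; the endpoint names and region codes are invented for illustration and do not correspond to any real provider's API.

```python
# Hypothetical endpoint-to-region map; in practice this would come from
# the deployment inventory, not a hardcoded dict.
ENDPOINT_REGIONS = {
    "inference-us-east": "us",
    "inference-eu-west": "eu",
}

def select_endpoint(allowed_jurisdictions, preferred):
    """Return the preferred endpoint if its region is allowed, otherwise
    fall back to the first compliant endpoint, otherwise refuse."""
    if ENDPOINT_REGIONS.get(preferred) in allowed_jurisdictions:
        return preferred
    for name, region in ENDPOINT_REGIONS.items():
        if region in allowed_jurisdictions:
            return name
    raise RuntimeError("no inference endpoint satisfies residency constraints")
```

The hard-fail branch is the point: a residency clause with an enforcement mechanism fails closed rather than silently routing across a jurisdictional boundary.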
Four — PHI-in-prompt handling
A specific question now appearing in about 40% of the sample: what happens when Protected Health Information (PHI) is included in a prompt to an AI feature? The clause is asking whether the system detects PHI, how it redacts, what the audit trail captures, and what notifications are generated.
The pattern is worth explaining in context. Before mid-2025, most AI features in healthcare products sat behind de-identification boundaries — the product would strip or tokenize PHI before anything hit a model. Through 2025, as customers pushed for AI features that operate on clinical or near-clinical content, more products began accepting PHI into the model path under BAA coverage. Buyers are now catching up to that shift by asking detailed questions about how PHI is handled in the prompt path.
A “we de-identify at the input layer” answer is now insufficient. The question is asking for the specific detection technique, the false-negative rate, and the recovery procedure when detection fails.
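The shape of the control the clause is probing (detect, redact, audit) can be sketched as follows. To be clear about the hedge: these regexes are illustrative stand-ins, not the detection technique itself; a production system would use a trained de-identification model with a measured false-negative rate, which is exactly the number the buyer is asking for.

```python
import re

# Illustrative patterns only; real PHI detection uses trained models,
# not regexes. The pattern names are hypothetical.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def screen_prompt(prompt, audit_log):
    """Redact recognized PHI from the prompt and append one audit entry
    per finding, so the audit trail captures what was caught."""
    redacted = prompt
    for label, pattern in PHI_PATTERNS.items():
        for _match in pattern.findall(redacted):
            audit_log.append({"category": label, "action": "redacted"})
        redacted = pattern.sub(f"[REDACTED-{label.upper()}]", redacted)
    return redacted
```

Note what this sketch cannot answer on its own: the false-negative rate and the recovery procedure when detection fails, which are separate artifacts the clause also demands.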
Five — Business Associate Agreement addenda for AI
The standard Business Associate Agreement (BAA) template is getting addenda that specifically address AI processing. The addenda cover model-use disclosure, training-data restrictions, AI-incident response procedures, and termination rights if the vendor’s AI-use profile changes during the contract.
This is not a single template — every buyer’s counsel is drafting their own version — but the pattern is consistent enough that vendors should prepare for negotiating an AI BAA addendum as a standard step in the contracting phase.
Six — Evaluation-framework transparency
Several RFPs in the sample are asking vendors to publish the evaluation framework used internally to assess the AI system’s performance on healthcare tasks. The ask is unusually specific: benchmark datasets, failure-mode taxonomy, false-negative and false-positive rates on defined test sets, and a schedule for re-evaluation when the model changes.
This is a meaningful ask. Most vendors do not have a documented evaluation framework in the form the buyer wants, and the ones that do are reluctant to publish it because the framework itself is competitive information. Expect this to be one of the contested points in Q1 2026 negotiations.
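For teams building the framework from scratch, the per-test-set metrics the clause names are mechanically simple; the work is in the datasets and the taxonomy, not the arithmetic. A minimal sketch for a binary task, using toy labels:

```python
def confusion_rates(y_true, y_pred):
    """False-negative and false-positive rates for a binary task on a
    defined test set (1 = positive, 0 = negative)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return {
        "false_negative_rate": fn / positives if positives else 0.0,
        "false_positive_rate": fp / negatives if negatives else 0.0,
    }
```

The re-evaluation schedule the clause asks for amounts to re-running this computation on the same frozen test sets whenever the model changes, and publishing the deltas.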
Seven — Incident response for AI-specific failures
Incident-response clauses have always been in healthcare RFPs. The new pattern is AI-specific incident categories: hallucination producing a clinical recommendation, model drift affecting a reported outcome, model-provider outage affecting an automated workflow, and prompt-injection resulting in unauthorized access to PHI.
For each category, the buyer is asking for the detection mechanism, the escalation threshold, the notification timeline, and the remediation procedure. The incident-response runbook needs explicit sections for these. Generic incident-response procedures that do not name AI-specific failure modes will not satisfy the requirement.
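One way to keep the runbook honest against the four fields the buyer asks for is to represent it as structured data and check completeness mechanically. The category names, field values, and helper below are hypothetical placeholders, not a recommended taxonomy.

```python
REQUIRED_FIELDS = {"detection", "escalation_threshold",
                   "notification_timeline", "remediation"}

# Hypothetical runbook entries; values are placeholders for illustration.
AI_INCIDENT_RUNBOOK = {
    "clinical_hallucination": {
        "detection": "output-grounding checks on clinical recommendations",
        "escalation_threshold": "any confirmed occurrence",
        "notification_timeline": "within 24 hours of confirmation",
        "remediation": "disable feature flag; human review of affected outputs",
    },
    "model_drift": {
        "detection": "scheduled eval-suite regression against baseline",
        "escalation_threshold": "metric drop beyond agreed tolerance",
        "notification_timeline": "within 72 hours",
        "remediation": "pin prior model version; re-evaluate before restore",
    },
}

def missing_fields(runbook):
    """Return, per category, any required fields the entry lacks."""
    return {cat: REQUIRED_FIELDS - entry.keys()
            for cat, entry in runbook.items()
            if REQUIRED_FIELDS - entry.keys()}
```

A completeness check like this is cheap insurance before a response goes out: an entry missing a notification timeline is exactly the gap a buyer's reviewer will flag.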
Two related patterns that did not make the top seven but are worth naming. A growing minority of RFPs are asking for the vendor’s definition of an AI-specific incident — acknowledging that the category is unsettled and that the buyer wants to understand the vendor’s threshold before an incident happens rather than after.
And a smaller but nonzero set of RFPs are asking for the vendor’s history of AI-specific incidents over the past 24 months, at the same level of detail a traditional security-incident history would be requested. That second ask is sharp; vendors with no incident history to speak of will want to be careful not to answer in a way that sounds like “nothing ever went wrong,” which reads as unserious. The right framing is to describe the evaluation practice that surfaces issues, the frequency with which it does, and the categories of findings that resulted — without overclaiming stability.
What this implies
Three operational implications for proposal teams covering healthcare in 2026.
The past-performance record needs refreshing. A 2025 past-performance writeup that describes a healthcare engagement without reference to any of the above clauses will read as stale. Go back to the 2024 and early-2025 engagements and rewrite the past-performance entries to reflect what the current buyer is asking about.
The KB needs model-level content. The KB probably has policy-level content on AI. It probably does not have model-level content — the specific list of models, the per-model governance, the per-model evaluation. Building that content is a multi-week project and should start this quarter.
The SME roster needs a named owner for AI governance. Healthcare RFP responses in 2026 will require SME sign-off on AI-governance sections that did not exist in the 2025 baseline. Identify the internal owner, ideally with a background in clinical informatics or regulatory affairs, and formalize the role.
The evidence packet deserves a template. DDQs in the healthcare sector increasingly ask for evidence attachments — SOC 2 Type II reports, HITRUST certifications, penetration-test summaries, model-evaluation artifacts — with structured references to specific sections of the questionnaire. Building a template for the evidence packet, with clear section labels and a cover sheet that maps evidence to questionnaire sections, saves hours per response and reduces the reviewer’s friction on the buyer side. Teams that ship a polished evidence packet score noticeably better on the “completeness” axis in the evaluations we have seen.
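The cover-sheet map described above is simple enough to generate rather than hand-maintain. A minimal sketch, with invented file names and section IDs, assuming each evidence artifact is tagged with the questionnaire sections it supports:

```python
# Hypothetical evidence inventory; file names and section IDs are invented.
EVIDENCE = [
    {"file": "soc2_type2_2025.pdf", "sections": ["3.1", "3.4"]},
    {"file": "hitrust_cert.pdf", "sections": ["3.2"]},
    {"file": "model_eval_summary.pdf", "sections": ["5.1", "5.3"]},
]

def cover_sheet(evidence):
    """Render a section-to-evidence index, one line per questionnaire
    section, so the buyer's reviewer can navigate the packet directly."""
    by_section = {}
    for item in evidence:
        for sec in item["sections"]:
            by_section.setdefault(sec, []).append(item["file"])
    return [f"{sec}: {', '.join(files)}"
            for sec, files in sorted(by_section.items())]
```

Inverting the map (section first, evidence second) is the design choice that matters: the reviewer works through the questionnaire in section order, not through the vendor's file list.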
The renewal motion needs its own playbook. A new-customer healthcare RFP has a different shape than a renewal with an existing healthcare customer. Renewals are increasingly being run as formal procurement rounds with the buyer treating the incumbent the way they treat a challenger — full DDQ, full evaluation committee, full pricing review. For the incumbent, the trap is responding to a renewal RFP the way you would respond to a new-customer pursuit, which reads as if the relationship history is an afterthought. The right motion leads with the relationship and the delta since the last renewal, then addresses the questionnaire. Several of the renewals in our sample that went to a challenger lost on this first-read test.
Methodology note
The sample of 38 RFPs and DDQs was drawn from Q4 2025 responses our customers have shared with us in anonymized form for research purposes. It is not a random sample — it is weighted toward the mid-market and large-provider segments where our customer base concentrates. Small-provider and purely-payer RFPs are under-represented. The 2025 baseline is a similarly-weighted sample of 29 responses from Q4 2024 through Q2 2025.
The comparisons above should be read as directional rather than statistically representative of the full healthcare procurement universe. We will revisit in April with a Q1 2026 sample and update the percentages.
A second methodological note worth naming: some of the clauses above are new in their current form but are descendants of earlier requirements. The training-data exclusion clause is a direct descendant of the broader “no use of PHI for unrelated purposes” clause that has been in healthcare contracts for over a decade. The incident-response AI categories descend from the general incident-response expectations that HIPAA Security Rule updates have been shaping for years. Treating the 2026 clauses as entirely novel misses that context; the better read is that they are the specific operational form of long-running obligations, now made explicit because buyers have AI-specific concerns that the general language did not adequately cover.
What to watch through Q1
Two signals that will tell us whether the Q4 2025 pattern holds or shifts.
The first is the consistency of the AI-governance language across buyers. Q4 2025 showed meaningful variance in how different buyers wrote the same underlying clause. If Q1 sees convergence — more buyers adopting the same phrasing from the same source (likely a shared template circulating among healthcare buyers’ counsel) — vendors can invest in a single canonical answer. If Q1 sees continued variance, vendors will need to maintain a library of answers tuned to each buyer’s framing.
The second is whether the evidence-packet expectations formalize further. The direction of travel is toward more-structured evidence packets with more-explicit maps to the questionnaire. If a standard emerges — something like the SIG questionnaire has become on the broader-IT side — the sector will stabilize quickly. If no standard emerges, vendors will continue to build bespoke packets per buyer, which is expensive.
We will update this post in April with a Q1 sample, a recount of the percentages, and specific notes on which of the seven clauses strengthened, weakened, or resolved.