Field notes

Vendor onboarding DDQs across four industries

Finance, healthcare, SaaS, and defense. The same 200 questions in four different rephrasings. A teardown of how the category-specific framing changes what the buyer expects to see in the answer — and what stays the same underneath.

The PursuitAgent research team · 10 min read · Procurement

A due-diligence questionnaire (DDQ) for a finance customer, a healthcare customer, a SaaS customer, and a defense customer reads like four different documents. Underneath, they are mostly the same document. The same 200 questions appear in all four — about access controls, vendor management, incident response, business continuity, financial stability, regulatory posture, employee security awareness, and a long tail of operational specifics. What changes is the framing, the citation expectations, and the standards each industry expects the answer to be aligned to.

This is a research-team teardown of the four industry variants, side by side. We pulled four representative public DDQ templates from public procurement portals and vendor-management materials and read them comparatively. The conclusion: writing a DDQ answer once and recycling it across industries is the failure mode the DDQ anatomy series flagged. The same answer needs to be re-anchored to the standards the industry’s evaluator looks for, even when the underlying capability is identical.

Methodology

We took four publicly available DDQ templates as representative anchors:

  • Finance: a vendor-management questionnaire used by US-regulated banks and reflecting OCC and FFIEC examination expectations.
  • Healthcare: a HIPAA-aligned vendor security questionnaire of the kind hospital systems and payers send to subprocessors.
  • SaaS: a SOC-2-aligned questionnaire of the kind enterprise SaaS buyers send to their downstream vendors.
  • Defense: a CMMC- and DFARS-aligned questionnaire of the kind US Department of Defense primes send to their subcontractors.

We mapped the questions across all four and identified the overlap and the divergence. Sample: roughly 200 questions per template, with about 140 that appear in all four (the operational core) and about 60 that are industry-specific.
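The mapping step above can be sketched as a small script: normalize each question's text to a crude canonical form, then take the intersection across the four templates. This is an illustrative sketch only; the actual mapping was done comparatively by hand, and the sample questions and the `normalize` heuristic here are assumptions, not the team's tooling.

```python
# Hypothetical sketch of the cross-template mapping step. The real analysis was
# a manual comparative read; normalization and sample questions are illustrative.

def normalize(question: str) -> str:
    """Crude canonical form: lowercase, keep only letters, digits, and spaces."""
    return "".join(ch for ch in question.lower() if ch.isalnum() or ch == " ").strip()

def operational_core(templates: dict[str, list[str]]) -> set[str]:
    """Normalized questions that appear in every template (the shared core)."""
    sets = [{normalize(q) for q in qs} for qs in templates.values()]
    core = sets[0]
    for s in sets[1:]:
        core &= s
    return core

templates = {
    "finance":    ["Do you encrypt data at rest?", "Describe your SOC 1 audit history."],
    "healthcare": ["Do you encrypt data at rest?", "Describe your BAA terms."],
    "saas":       ["Do you encrypt data at rest?", "List your subprocessors."],
    "defense":    ["Do you encrypt data at rest?", "State your CMMC level."],
}
print(operational_core(templates))  # only the shared encryption question survives
```

Scaled to ~200 questions per template, the same intersection yields the ~140-question operational core; the per-template remainders are the industry surfaces discussed below.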

The 140-question operational core

The shared core covers categories that every enterprise buyer asks about regardless of industry. The categories and their typical question counts:

  • Access controls (MFA, privileged access, password policy) — ~25 questions.
  • Data handling (encryption, retention, classification) — ~20 questions.
  • Incident response (notification timelines, runbooks, tabletop exercises) — ~15 questions.
  • Business continuity and DR (RTO/RPO, backup testing) — ~15 questions.
  • Vendor management (subprocessors, fourth-party assessments) — ~15 questions.
  • Employee security (background checks, awareness training, offboarding) — ~15 questions.
  • Financial stability (audited financials, parent guarantees) — ~15 questions.
  • Operational continuity (SLAs, support coverage) — ~20 questions.

The same questions, rephrased per industry. “Do you encrypt data at rest?” appears in every template. The follow-up questions differ.

The 60-question industry surface

Where the templates diverge:

Finance

The finance variant draws heavily from FFIEC examination expectations. Questions cluster on:

  • Audit history (prior SOC 1 Type 2 reports, not just SOC 2).
  • Concentration risk (what percentage of your revenue comes from this customer’s industry).
  • Bank Secrecy Act / AML alignment for any vendor that touches transaction flow.
  • Regulatory action history (FINRA, FDIC, state regulators) for any vendor with a related entity in financial services.
  • Specific OCC bulletins (2013-29, 2020-10) referenced by name, expecting the vendor to know which apply.

A SaaS-flavored answer to “describe your audit history” that lists SOC 2 Type 2 and ISO 27001 will be technically responsive in a finance DDQ but will miss the SOC 1 Type 2 expectation. The answer needs re-framing per industry.

Healthcare

The healthcare variant lives under HIPAA and HITECH. Questions cluster on:

  • Business associate agreement (BAA) terms, including subprocessor flow-down.
  • PHI handling specifics — minimum necessary standard, breach notification timelines (60 days under HIPAA, often shorter contractually).
  • Specific HIPAA Security Rule citations expected in answers (45 CFR 164.308, 164.310, 164.312).
  • HITRUST certification status (often implied as a near-requirement for hospital systems even when not contractually required).
  • State-specific privacy law alignment (CMIA in California, similar elsewhere).

A finance-flavored answer to “describe your data handling controls” needs to be re-cast in HIPAA-Security-Rule language. The underlying control is the same (encryption, access logging, role-based access). The citations are different.

SaaS

The SaaS variant reads more like a SOC-2 self-attestation. Questions cluster on:

  • Trust services criteria alignment (security, availability, processing integrity, confidentiality, privacy).
  • Subprocessor disclosure and notification timelines.
  • API security, authentication mechanisms, OAuth scopes, key rotation.
  • Customer data isolation in multi-tenant deployments.
  • Standard SaaS contractual terms — DPA, SLA credits, termination data return.

The SaaS DDQ is the lightest of the four in terms of citation expectation. A SOC 2 Type 2 attestation usually closes most of the questions; the buyer expects the report itself, not a 60-page narrative that re-states the report. Vendors that over-write SaaS DDQ answers signal naiveté.

Defense

The defense variant sits under CMMC, DFARS 252.204-7012, and (for some categories) ITAR. Questions cluster on:

  • CMMC certification level (1, 2, or 3) and corresponding NIST SP 800-171 control alignment.
  • Controlled Unclassified Information (CUI) handling.
  • Foreign ownership, control, or influence (FOCI) disclosures.
  • Cyber incident reporting timelines (72 hours under DFARS 252.204-7012).
  • Specific NIST 800-171 controls referenced by ID (3.1.1 through 3.14.7).
  • US persons requirements for any role with access to certain data classes.

A SaaS-flavored answer to a defense DDQ that doesn’t cite NIST 800-171 control IDs will be marked as non-responsive. The defense variant has the highest citation density expectation of the four.

Citation expectations vary in density too

Beyond framing, the four variants differ meaningfully in expected citation density. We measured this across the four templates by counting explicit citation hooks: places where the question asks the vendor to “reference,” “identify,” or “cite” something, or names a control framework explicitly.

  • Defense: ~1.4 citation hooks per question on average — the highest density. Defense evaluators expect citations to NIST 800-171, CMMC, DFARS, FIPS, and (for some questions) ITAR or EAR. An answer without citations fails.
  • Healthcare: ~0.8 citation hooks per question. HIPAA Security Rule citations expected on most security-section answers; HITRUST citations expected when applicable; FIPS 140 validation expected on cryptographic claims.
  • Finance: ~0.7 citation hooks per question. SOC 1 / SOC 2 citations expected; FFIEC handbook references expected on operational controls; OCC bulletin references expected on third-party-risk-management questions.
  • SaaS: ~0.4 citation hooks per question. SOC 2 attestation typically closes the security section without per-question citation requirements.
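The hook count above can be approximated with a simple keyword scan. The cue words come from the methodology description; the framework-name list and the scoring are assumptions for illustration, not the team's actual measurement tooling.

```python
import re

# Illustrative sketch of the citation-hook count: one hook per cue word,
# one per framework name. The exact lists the research team used are not
# published; treat these as assumptions.
CUE_WORDS = ("reference", "identify", "cite")
FRAMEWORKS = ("NIST", "CMMC", "DFARS", "FIPS", "HIPAA", "HITRUST",
              "SOC", "FFIEC", "OCC", "ITAR", "EAR")

def citation_hooks(question: str) -> int:
    """Count explicit citation hooks in a single question."""
    hooks = sum(1 for w in CUE_WORDS
                if re.search(rf"\b{w}\b", question, re.IGNORECASE))
    hooks += sum(1 for f in FRAMEWORKS if f in question)
    return hooks

def density(questions: list[str]) -> float:
    """Average citation hooks per question for one template."""
    return sum(citation_hooks(q) for q in questions) / len(questions)

defense_q = ("Describe encryption controls for CUI per NIST SP 800-171 "
             "3.13.11. Identify FIPS validation status.")
print(citation_hooks(defense_q))  # → 3 (identify, NIST, FIPS)
```

Run over a full template, the per-question average is the density figure reported above; the defense template's ~1.4 versus the SaaS template's ~0.4 falls out of how often the questions name a framework at all.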

The density variance is large enough that the same knowledge-base (KB) block, populated for SaaS framing, is structurally insufficient for defense framing. The block carries the underlying claim but not the citation density the defense template requires.

What stays the same

Underneath the framing, four things are constant.

The underlying control surface is the same. A vendor that does MFA, encryption, role-based access, incident response, and quarterly access reviews has the same posture across industries. The DDQ is asking about the same posture in different vocabularies.

The certification artifacts overlap. SOC 2 Type 2, ISO 27001, and (for vendors that pursue them) HITRUST and CMMC are the four most common attestations, and most enterprise vendors hold at least the first two. The same artifacts answer most operational-core questions in any of the four DDQs.

The operational evidence is the same. Access logs, encryption configurations, incident runbooks, backup test reports — the artifacts are the same documents stored in the same systems. The DDQ asks for them in different forms.

The failure mode is the same. Vendors that recycle stale answers signal that they have not engaged with the buyer’s specific industry framing. Buyers — across all four industries — flag recycled answers and downgrade the vendor in evaluation. Safe Security’s research on the security-questionnaire surface called this out specifically: vendors recycle outdated answers, organizations collect reassuring “yes” responses that don’t reflect real posture, and trust degrades on both sides.

A worked example — the same control across four templates

Take a single underlying capability — encryption at rest with AES-256, keys managed by the cloud provider’s KMS, key rotation every 90 days, attested in the most recent SOC 2 Type 2 report. One control, four DDQ variants asking about it.

Finance variant. “Describe the encryption controls applied to customer data at rest, including key management and rotation cadence. Reference any FFIEC IT Examination Handbook guidance your controls align with, and identify the relevant SOC 1 Type 2 or SOC 2 Type 2 control with the most recent test date.” The answer needs the control description, the FFIEC handbook reference (Information Security booklet, page references), the SOC 2 control ID, and the audit period. A finance evaluator reads an answer without the FFIEC reference as incomplete.

Healthcare variant. “Describe the encryption mechanisms applied to ePHI at rest. Identify the HIPAA Security Rule technical safeguard (45 CFR 164.312) the controls satisfy, the encryption standard (FIPS 140-2 / 140-3 validated module status if applicable), and any HITRUST Common Security Framework (CSF) controls that map to this capability.” The same underlying capability, framed in HIPAA Security Rule language, with FIPS validation as a discriminator some healthcare buyers weight heavily.

SaaS variant. “Describe encryption at rest. Reference the SOC 2 Trust Services Criterion control. Subprocessor encryption status if customer data is shared with subprocessors.” Shorter. The buyer expects the SOC 2 attestation and the DPA to cover the rest. Over-writing this answer signals naiveté.

Defense variant. “Describe encryption controls protecting CUI at rest in accordance with NIST SP 800-171 control 3.13.11. Identify FIPS 140-2 / 140-3 validation status of the cryptographic module. Confirm CMMC certification level and the corresponding control family score.” The defense variant expects the NIST 800-171 control ID, the FIPS validation, and the CMMC score. Without all three, the answer is non-responsive.

The capability — AES-256 at rest, KMS-managed, 90-day rotation — is constant across all four. The framing language, the citation expectations, and the supplementary evidence requirements are different in every one. A vendor that maintains four parallel framings of the same underlying KB block scores well across all four. A vendor that recycles a single framing scores well in one and poorly in three.

Implications for KB design

Three implications for proposal teams that respond to DDQs across multiple industries.

KB blocks need industry tagging. A single “encryption at rest” block is the wrong shape. Four variants — finance-framed, healthcare-framed, SaaS-framed, defense-framed — with shared underlying evidence and industry-specific framing layers are the right shape. The block surface is larger. The retrieval is more accurate per industry.
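One possible shape for such a block, sketched as a plain data structure. The field names and the `render` method are hypothetical, not a real PursuitAgent schema; the point is the split between one shared evidence core and per-industry framing layers.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an industry-tagged KB block: one shared claim and
# evidence core, multiple framing layers. Field names are illustrative.

@dataclass
class Framing:
    language: str          # the claim re-stated in this industry's vocabulary
    citations: list[str]   # the citations this industry's evaluator expects

@dataclass
class KBBlock:
    claim: str                                  # the underlying capability
    evidence: list[str]                         # shared artifacts (reports, configs)
    framings: dict[str, Framing] = field(default_factory=dict)

    def render(self, industry: str) -> str:
        f = self.framings[industry]
        return f"{f.language} Citations: {', '.join(f.citations)}."

block = KBBlock(
    claim="Encryption at rest: AES-256, KMS-managed keys, 90-day rotation",
    evidence=["SOC 2 Type 2 report (most recent period)", "KMS rotation config"],
    framings={
        "defense": Framing(
            language="CUI at rest is encrypted with AES-256 per NIST SP 800-171 3.13.11.",
            citations=["NIST SP 800-171 3.13.11", "FIPS 140-3 module status", "CMMC level"],
        ),
        "saas": Framing(
            language="Customer data is encrypted at rest with AES-256.",
            citations=["SOC 2 Type 2 attestation"],
        ),
    },
)
print(block.render("defense"))
```

Retrieval then keys on the industry tag, so the defense framing (with its control IDs) surfaces for a defense DDQ and the lighter SaaS framing surfaces for a SaaS one, while both point at the same evidence artifacts.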

Citation expectations need to be encoded in the response structure. A defense DDQ block that doesn’t cite NIST 800-171 controls is an incomplete block. The block template per industry should require the citations the industry expects. We covered the related per-section citation density target on the engineering blog earlier this week.

Cross-industry recycling is a tell. A team that submits a SaaS-flavored answer to a healthcare DDQ is signaling either inattention or cost-cutting. Either signal is bad. Buyers notice. The KB needs to make the right framing the path of least resistance, or vendors will default to the framing they have at hand and lose evaluation weight to the friction.

Closing

Four industries, 200 questions each, 140 in common, 60 industry-specific. The vendor that responds to all four with one set of answers is responding to one customer well and three poorly. The vendor that maintains four parallel framings — same evidence, different citations and language — is responding to all four well at modest extra cost.

Arphie’s piece on AI-assisted DDQ response flagged that the time lag between questionnaire completion and implementation changes can outpace the response cycle. The same observation applies across industries: a healthcare DDQ answered today against last year’s HITRUST framework is already partially stale. Industry-specific framing is also a freshness problem, not just a wording problem.

For the per-section anatomy of finance, legal, security, and operations DDQs, see the DDQ anatomy series, Parts 1 through 4.

Sources

  1. Loopio — Best DDQ software
  2. Safe Security — Vendor security questionnaire best practices
  3. Arphie — How AI is transforming security questionnaire processes
  4. PursuitAgent — DDQ Anatomy series