DDQ Anatomy, Part 4 of 4: operations and vendor management
The closing section of a vendor DDQ. Incident response from the operational side, business continuity, vendor risk management, and the questions that decide whether you're a vendor that procurement will renew.
This is the last post in the DDQ Anatomy series. Part 1 framed the four sections; Part 2 covered legal and privacy; Part 3 covered the security section. Part 4 closes with operations and vendor management — the section evaluators read last and care about more than vendors think.
What the section is for
The operations and vendor management section answers a practical question: if we sign with you, can you actually deliver, and can you keep delivering when something breaks? The earlier sections probed legal posture and security posture. This section probes operational maturity. It is the section that decides whether a buyer’s risk function recommends renewal eighteen months from now.
The questions cluster.
- Incident response, operationally. Detection capability, runbook discipline, named on-call, public status page, post-incident communications.
- Business continuity. Continuity plan documented, tested, named recovery owner.
- Disaster recovery. RTO, RPO, failover capability, regional posture.
- Vendor risk management. Your own vendor due diligence on your sub-processors and critical suppliers.
- Change management. Release cadence, change approval, customer notification on breaking changes.
- Customer success and support. Support tiering, response times, escalation paths, named CSM.
- Contract and lifecycle. Onboarding, offboarding, data return, contract amendment process.
About 35 questions in a typical mid-market DDQ. Fewer than the security section by question count, more than legal and privacy by buyer attention.
The 10 questions you will see
Public status page. “What’s the URL of your customer-facing status page?” If the answer is “we don’t have one,” the evaluator marks the response down. Public status pages are now table stakes; vendors without one read as operationally immature.
Incident notification commitment, operationally. “On a confirmed P1 incident affecting customer data, what’s your customer notification timeline, by what channel, and who is the named accountable signer?” Buyers ask this in operational terms because they’ve been burned by sales-side commitments that operationally don’t hold. Have a runbook number, a named role, and a notification channel that isn’t email-only.
Mean time to detect, mean time to resolve. “What are your MTTD and MTTR averages over the past 12 months?” If you have monitoring, you have these numbers. If you don’t, the question is going to surface that you don’t.
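For teams that have the raw incident timestamps but have never rolled them up, here is a minimal sketch of how the two averages fall out of incident records. The field names and sample data are illustrative, not tied to any particular monitoring tool, and some teams measure MTTR from fault start rather than detection — state which definition you use in the answer.

```python
from datetime import datetime, timedelta

# Illustrative incident records; field names are hypothetical, not a product schema.
incidents = [
    {
        "started_at": datetime(2024, 3, 1, 9, 0),     # when the fault began
        "detected_at": datetime(2024, 3, 1, 9, 12),   # when monitoring alerted
        "resolved_at": datetime(2024, 3, 1, 10, 30),  # when service was restored
    },
    {
        "started_at": datetime(2024, 6, 14, 22, 5),
        "detected_at": datetime(2024, 6, 14, 22, 9),
        "resolved_at": datetime(2024, 6, 15, 0, 40),
    },
]

def mean_minutes(deltas: list[timedelta]) -> float:
    """Average a list of durations and express the result in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# Mean time to detect: fault start -> detection.
mttd = mean_minutes([i["detected_at"] - i["started_at"] for i in incidents])
# Mean time to resolve: detection -> resolution (one common definition).
mttr = mean_minutes([i["resolved_at"] - i["detected_at"] for i in incidents])

print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min over the trailing period")
```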
Post-incident review process. “Describe your post-incident review process, including customer-facing post-mortem cadence.” Mature vendors run blameless post-mortems within five business days, publish customer-facing post-mortems for P1 incidents, and track action items to closure. Less mature vendors hand-wave.
RTO and RPO, with recent test evidence. “State your recovery time objective and recovery point objective. When was the last full DR drill, and what was the result?” The RTO/RPO numbers are not enough. Buyers want evidence of testing.
Sub-processor list and risk reviews. “List your critical sub-processors. For each, describe the risk review you conduct annually.” Buyers want to know that the sub-processor list is governed, not just published. Have a process; describe it; cite when the last review ran.
Change management for breaking changes. “What’s your customer notification window for breaking API changes, schema migrations, or material UX changes?” 90 days is the modern minimum for breaking changes; 180 days is what enterprise buyers ask for.
Support tiering and SLAs. “List your support tiers, response time SLAs per tier, and your escalation path.” Fill-in-the-blank if your KB has it. The escalation path is the part vendors most often forget — the buyer wants to know who they can call when the standard support tier hits its third dead end.
Onboarding plan. “Describe a typical onboarding plan from signature to production use.” A named project manager, a written timeline, a kickoff agenda, a definition of “live.” Generic answers (“we have a robust onboarding process”) lose points.
Data return and offboarding. “On contract termination, what’s your data return and deletion process, and what’s the timeline?” This is the question vendors most often answer incompletely. Buyers want format (export schema, file format, transit method), timeline (within how many days of termination), and certification (a named individual confirms deletion).
That’s 10. The remaining questions are variants — different industries probe DR harder, different buyers probe sub-processor risk harder, different sectors probe support tiering harder. The patterns hold.
Where the section breaks for vendors
Three failure modes recur.
No tested BCP/DR. A vendor lists RTO and RPO numbers but has not run a real failover test in the past 12 months. The DDQ asks for the most recent test date and result; the vendor has nothing to cite. Tested BCP/DR is meaningfully harder than untested BCP/DR, and the gap is what evaluators are probing for.
Vendor risk management is unwritten. The vendor has a sub-processor list. They do not have a written process for risk-reviewing those sub-processors. The DDQ asks for the process; the answer is improvised on the spot. The improvisation reads as improvisation.
Customer-facing post-mortems are missing. P1 incidents happen. The vendor handles them internally. Customers learn about the incident through their own monitoring or a CSM call, not through a public post-mortem. The DDQ asks for the cadence of customer-facing post-mortems; the answer is “we communicate with affected customers directly,” which evaluators read as “we don’t publish post-mortems.”
What good looks like
A vendor that takes operations seriously has, on the public site:
- A status page at a stable URL.
- A trust page that names BCP/DR posture, RTO/RPO, sub-processor governance.
- A public post-mortem archive for past P1 incidents.
- A change management page that names the breaking-change notification window.
Internally, the same vendor has:
- A KB block per question pattern from the 10 above.
- A 90-day review cadence on operational evidence (MTTD/MTTR numbers, last DR drill, last sub-processor review).
- A named operational owner for the DDQ section, distinct from the security or legal owner.
- A runbook for incident communications that names roles, timelines, and the customer-facing post-mortem template.
The 35 questions become a 60-minute pass instead of a half-day grind, and the time saved goes into the operational discipline that makes the questions easy to answer the next time.
The compounding part
A theme across all four parts of this series. The DDQ as an instrument is repetitive. The 45 legal-and-privacy questions, the 60 security questions, the 35 operations questions — most of them are variants of patterns that recur across every DDQ a vendor will ever respond to. Building the KB once, owning each block, reviewing on cadence, and treating the DDQ response as a retrieval-and-edit task instead of a from-scratch authoring task is the only way to make the work sustainable as the volume grows.
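What a KB block with a named owner and a review cadence looks like in practice varies by tooling. As a rough sketch under that assumption, it can be as small as a structured record per question pattern with an owner and a last-reviewed date, so the 90-day cadence becomes a query rather than a calendar reminder someone forgets. The structure and values below are hypothetical, not a reference to any specific product.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class KBBlock:
    """One canonical answer to one recurring DDQ question pattern."""
    pattern: str               # e.g. "RTO/RPO with last DR drill evidence"
    answer: str                # the reusable response text
    owner: str                 # named role accountable for keeping it current
    last_reviewed: date
    review_cadence_days: int = 90

    def is_stale(self, today: date) -> bool:
        # Overdue when the last review is older than the cadence window.
        return today - self.last_reviewed > timedelta(days=self.review_cadence_days)

# Hypothetical blocks; URLs and figures are placeholders.
blocks = [
    KBBlock("Public status page URL", "https://status.example.com", "Ops lead", date(2024, 1, 10)),
    KBBlock("RTO/RPO with last DR drill", "RTO 4h / RPO 15min; drill passed 2024-02-03", "Ops lead", date(2024, 2, 5)),
]

stale = [b.pattern for b in blocks if b.is_stale(date.today())]
print("Blocks overdue for review:", stale or "none")
```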
Safe Security’s reporting on a single engineer processing 250-plus questionnaires per year is not a stable equilibrium. The volume is growing. The teams responsible for the responses are not. Either the responses get worse — which the buyer side is increasingly catching — or the response process compounds, which is what the well-built KB and the named-owner-per-block discipline are for.
The four-part DDQ Anatomy series ends here. The next thread, starting next week, is the buyer-side perspective: what evaluators actually read, what they discount, and how a vendor’s response looks from the other side of the table.