Field notes

DDQ answer voice: why consistency beats polish

Buyers forgive plain writing. They do not forgive a questionnaire that reads like it was stitched together by eight different people. This is how to keep 300 DDQ answers sounding like one voice.

Sarah Smith · 5 min read · Procurement

A buyer reading your 300-question due diligence questionnaire (DDQ) notices voice drift before they notice anything else. The first 20 answers are terse and first-person plural. Answers 40 through 90 are ornate and written from the product’s perspective. Answers 100 through 140 are bullet-heavy with no connecting prose. Somewhere around answer 200 the tense shifts to passive and stays there.

The buyer does not call out the drift. They do mark the vendor down on “professionalism” or “quality of response” — vague categories that collect exactly this kind of stitched-together feel. Consistent voice beats polished voice. A plain, uniform questionnaire outperforms a dazzling-but-inconsistent one.

I have been on both sides of this. I have read hundreds of competitor DDQs as a buyer-side reviewer and written or edited thousands as a proposal lead. The pattern is robust. This is how to avoid it.

Pick three things and enforce them

A house voice document for DDQs does not need to be long. It needs to be specific. Three categories of choice account for 90% of the drift a buyer will notice.

Subject. Pick one. “We,” the vendor’s name, “the platform,” or “the product.” Each works. Mixing does not. I recommend “we” for most vendors — it is the most natural and the least likely to read as robotic. The DDQ is from the vendor to the buyer; first-person plural matches that relationship.

Yes-pattern. Pick one shape for affirmative answers. “Yes. [One-sentence qualifier.] [Evidence reference.]” is my default. Every yes answer opens with the word yes. Every yes answer follows the yes with a qualifier sentence that narrows the scope where needed. Every yes answer cites the evidence at the end. When the pattern is consistent, a buyer reading fast knows exactly where to look for the substantive information in each answer.

Numeric-lead pattern. For any question with a numeric answer, lead with the number. “Four hours.” “Since 2021.” “Ninety days.” Not “Our RTO is approximately four hours based on our current infrastructure as described in section 4.2 of our SOC 2.” The number goes first. The context goes after.

That is it. Three choices. Everything else — paragraph length, bulleted vs. prose, which section cites which artifact — can be decided question-by-question without the buyer noticing. These three cannot.
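Because all three choices are about the opening words of an answer, they are mechanical enough to lint before a human ever reads the draft. A minimal sketch in Python, assuming the “we” subject form; `voice_violations` is a hypothetical helper, and the heuristics are deliberately crude (only digits are detected, so spelled-out numbers pass and trailing citations like “section 4.2” will false-positive):

```python
import re

SUBJECT = "We"  # assumption: the house subject form is first-person plural

def voice_violations(answer: str) -> list[str]:
    """Return the house-voice rules a single DDQ answer appears to break."""
    issues = []

    # Choice 1, subject: open with the chosen subject, with "Yes",
    # or with a leading number (the other two rules' territory).
    if not (answer.startswith(SUBJECT + " ")
            or answer.startswith("Yes")
            or answer[:1].isdigit()):
        issues.append(f"does not open with '{SUBJECT}', 'Yes', or a number")

    # Choice 2, yes-pattern: affirmatives use "Yes. [qualifier] [evidence]".
    if answer.startswith("Yes") and not answer.startswith("Yes."):
        issues.append("yes-answer deviates from the 'Yes. ...' shape")

    # Choice 3, numeric lead: if the answer contains a figure, lead with it.
    # Crude: only sees digits, so citations like "SOC 2" can false-positive.
    if (re.search(r"\d", answer)
            and not answer[:1].isdigit()
            and not answer.startswith("Yes.")):
        issues.append("contains a number but does not lead with it")

    return issues
```

Run over every drafted answer, this hands the reviewer a flag list instead of a blank page; the false positives are cheap to dismiss by eye.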

What breaks consistency

Four real patterns that produce drift, all of them fixable.

Multiple authors without a final pass. Four subject-matter experts (SMEs) each answer the 20 questions assigned to them. The package ships. The four voices are legible in the final document because nobody did the read-through pass to flatten them. Fix: one reviewer reads every answer in sequence before ship. They are not rewriting; they are making one-sentence voice edits to bring outlier answers into line.

Copy-paste from prior submissions without voice edits. A block of text that was written for a 2023 financial-services DDQ gets pasted into a 2025 healthcare DDQ. The 2023 text uses a more formal voice than the rest of the 2025 document. The paste reads as transplanted. Fix: every pasted answer gets a voice pass before approval — not a content pass, a voice pass specifically looking for outlier sentence structure.

Auto-drafted answers that inherit the block’s original voice. If the knowledge base (KB) block was written by a particularly ornate engineer, every answer auto-drafted from that block carries the ornateness. Fix: normalize voice in the KB itself. The block is the base text; every auto-answer inherits it. One editing pass on the KB saves 40 editing passes across questionnaires.

Different handling of the word “yes.” Some analysts write “Yes.” Others write “Yes, we do.” Others write “Confirmed.” Others write “Yes — [qualifier].” Four answers into the questionnaire, the buyer has seen four different shapes. Fix: the yes-pattern from above, enforced.

The quality-pass protocol

On the teams I work with, the pre-ship quality pass is a single analyst reading every answer in the response in sequence. Not editing for content — that happened during drafting. Editing for voice.

The checklist is short:

  • Does every answer start with the same subject form? (Flag any that switch to third person.)
  • Does every yes answer use the same shape? (Flag any that start with a different word.)
  • Does every numeric answer lead with the number?
  • Is there any paragraph longer than five sentences? (Flag for review — DDQ answers should be tight.)
  • Is there any answer that drops from first-person-plural into passive voice mid-paragraph?
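The mechanical items on this checklist can be pre-screened before the human pass. A minimal sketch, assuming the answers arrive as a plain list of strings; `quality_pass` and its regexes are hypothetical, and the passive-voice item is left to the reviewer because no cheap heuristic catches it reliably:

```python
import re

def quality_pass(answers: list[str]) -> list[tuple[int, str]]:
    """Read every answer in sequence; return (answer number, flag) pairs."""
    flags = []
    for i, answer in enumerate(answers, start=1):
        # Subject check: flag openings that drift into third person.
        if re.match(r"^(The (platform|product)|It)\b", answer):
            flags.append((i, "switches to a third-person subject"))

        # Yes-shape check: affirmatives must open with the literal "Yes."
        if re.match(r"^(Confirmed|Affirmative|Yes[,\s])", answer):
            flags.append((i, "yes-answer deviates from the 'Yes. ...' shape"))

        # Length check: no paragraph longer than five sentences.
        for para in answer.split("\n\n"):
            if len(re.findall(r"[.!?](?:\s|$)", para)) > 5:
                flags.append((i, "paragraph longer than five sentences"))
    return flags
```

The script only surfaces obvious outliers; the 90-minute human read-through still does the real work.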

A 300-answer questionnaire takes about 90 minutes to quality-pass at this depth. That is not cheap. It is cheaper than the alternative, which is shipping a questionnaire that reads as stitched and losing a deal on a vague procurement-side quality concern nobody ever writes down.

Polish is a second-order concern

I want to be clear about the title. Polish — elegant prose, varied sentence structure, the kind of writing that reads beautifully — is a real craft and I have nothing against it. But it is the second most important voice concern in a DDQ; the first is consistency. One beautifully polished answer among 299 plainly written ones sticks out; 300 plainly written answers read as a single document.

If you have the budget for both, do both. If you have the budget for one, pick consistency. The buyer is not grading your questionnaire like they are grading an executive summary. They are reading fast, looking for disqualifying answers, and forming a gestalt impression of the vendor’s operational maturity. A consistent voice supports that maturity read. A polished-but-inconsistent voice undermines it.

The DDQ is not a place to show off. It is a place to be trusted. For the companion piece on how the retrieval side of this produces the consistency automatically, see the 80/20 pillar.