Field notes

The SME draft packet, generated automatically

What we ship to an SME alongside the question so they can answer in five minutes instead of fifty. The packet's components, the retrieval that builds it, and the design choices that keep the SME out of our tool.

The PursuitAgent engineering team · 5 min read · Engineering

The SME draft packet is the document we generate and send to an SME alongside the proposal question. The point of the packet is to let the SME answer in five minutes instead of fifty. The five-minutes-not-fifty number is the difference between an SME who responds today and an SME who defers until the deadline week and ships a fragment.

This post walks through what’s in the packet, how the system builds it, and the design choices that kept us from over-engineering it.

What’s in the packet

A packet has six components. The first three are required. The last three are conditional. A type sketch of the full packet follows the list.

The question, verbatim. The exact RFP question, copied from the buyer’s document, with no rephrasing. The SME has to see what the buyer asked, not the proposal manager’s summary of what the buyer asked.

The context. Three to five sentences that establish the bid: the buyer name, the bid type, the scoring weight on this question’s section, and the deadline. The SME reads this in 15 seconds and knows whether to answer fully or briefly.

The proposed answer. A grounded draft pulled from the KB, with citations, in the format the SME would expect to see (paragraph, table, or bullet list as appropriate). The SME’s job is to review and approve this draft, not to write from scratch.

Conditional: the prior answer. If we’ve responded to this buyer before, the previous response to a similar question is included with a note. The SME often just confirms whether anything has changed.

Conditional: the diff. If a prior answer exists and the new draft differs from it, the diff is highlighted. The SME reviews the diff, not the whole answer.

Conditional: the unknowns. If the system flagged any specific claim in the proposed answer as unverifiable (a number, a named entity, a date), the unknown is called out at the top of the packet with a “please confirm” annotation.
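For concreteness, here is the packet’s shape as a type, mirroring the six components. This is a sketch, not the production definition; the field and type names are illustrative.

interface BidContext { buyer: string; bidType: string; sectionWeight: number; deadline: string }
interface Draft { text: string; citations: string[] }
interface Unknown { claim: string; reason: string }

interface Packet {
  // Required
  question: string;       // the exact RFP question, verbatim
  context: BidContext;    // buyer, bid type, scoring weight, deadline
  proposedAnswer: Draft;  // grounded draft pulled from the KB
  // Conditional
  priorAnswer?: Draft;    // previous response to a similar question
  diff?: string;          // highlighted changes against the prior answer
  unknowns?: Unknown[];   // unverifiable claims, flagged "please confirm"
}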

The packet ships as a single message in the SME’s existing chat surface — Slack or Teams — via the SME bot. The SME doesn’t log into a portal.
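Delivery is a single API call. A minimal sketch of the Slack path, assuming the official @slack/web-api client and the Packet type from the sketch above; renderPacketAsText is a hypothetical stand-in for whatever formats the packet, and the channel lookup and the Teams path are omitted.

import { WebClient } from '@slack/web-api';

declare function renderPacketAsText(packet: Packet): string;  // hypothetical formatter

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Post the rendered packet as one message in the SME's DM channel.
async function deliverPacket(channelId: string, packet: Packet) {
  await slack.chat.postMessage({
    channel: channelId,
    text: renderPacketAsText(packet),
  });
}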

How the packet is built

The build pipeline runs automatically when the proposal manager marks a question as “needs SME input.” Five steps.

async function buildPacket(question: Question, bid: Bid) {
  // 1. Hybrid retrieval (dense + BM25) against the company KB
  const retrieved = await hybridSearch(question.text, bid.companyId);
  // 2. Cross-encoder reranking down to the top three to five blocks
  const ranked = await reranker(retrieved, question.text);
  // 3. Grounded draft with explicit citations
  const draft = await groundedDraft(question.text, ranked);
  // 4. Claim verification; unverifiable claims surface as "unknowns"
  const verified = await verifyClaims(draft, ranked);
  // 5. Prior-answer lookup (and diff) for this buyer
  const prior = await findPriorAnswer(question, bid.buyerId);
  return composePacket({ question, bid, draft: verified, prior });
}

Step one — hybrid retrieval against the company’s KB. We covered the dense+BM25 combination in the hybrid search post. The retriever returns 10 to 20 candidate blocks.
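The merge itself can be as simple as reciprocal rank fusion over the two ranked lists. A sketch of one common way to do it; whether production uses RRF or weighted score blending is the hybrid search post’s territory, and k = 60 is just the conventional RRF constant.

// Merge dense and BM25 result lists by reciprocal rank fusion:
// each list contributes 1 / (k + rank + 1) to a block's fused score.
function fuseResults(dense: string[], bm25: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [dense, bm25]) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id)
    .slice(0, 20);  // 10 to 20 candidate blocks go to the reranker
}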

Step two — reranking. The cross-encoder reranker scores the candidates against the question text and surfaces the top three to five. We wrote about why this earned its keep in the reranker post.
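The shape of that step, sketched; scoreWithCrossEncoder is a hypothetical stand-in for however the cross-encoder is actually served.

interface Candidate { id: string; text: string }
declare function scoreWithCrossEncoder(question: string, text: string): Promise<number>;

// Score every candidate against the question, keep the top few.
async function rerank(candidates: Candidate[], question: string, topK = 5) {
  const scored = await Promise.all(
    candidates.map(async (c) => ({
      c,
      score: await scoreWithCrossEncoder(question, c.text),
    })),
  );
  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((s) => s.c);
}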

Step three — grounded draft. The drafting model rewrites from the top candidates with explicit citation. We covered the constraint pattern in Pledge enforcement in code.
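In sketch form, the constraint lives in the prompt: answer only from the numbered sources, cite after every claim. The prompt text below is illustrative rather than the production prompt (that is tomorrow’s post), callModel is a stand-in for the model client, and Candidate and Draft are the types from the sketches above.

declare function callModel(prompt: string): Promise<string>;

// Draft strictly from the ranked blocks, with per-claim citations.
async function groundedDraft(question: string, blocks: Candidate[]): Promise<Draft> {
  const sources = blocks.map((b, i) => `[${i + 1}] ${b.text}`).join('\n');
  const prompt =
    `Answer the question using ONLY the numbered sources below. ` +
    `Cite a source number after every claim. If the sources do not ` +
    `cover something, say so instead of inventing it.\n\n` +
    `Question: ${question}\n\nSources:\n${sources}`;
  return { text: await callModel(prompt), citations: blocks.map((b) => b.id) };
}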

Step four — claim verification. Numeric and named-entity claims are checked against the cited source. Unverifiable claims are not removed silently — they are surfaced as “unknowns” in the packet for the SME to confirm.
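A sketch of the numeric half of that check, using the types from the sketches above. The named-entity half follows the same pattern with an entity extractor in place of the regex, and the substring match is deliberately crude here.

// Flag every number in the draft that no cited source contains.
function verifyNumericClaims(draft: Draft, sources: Candidate[]): Unknown[] {
  const sourceText = sources.map((s) => s.text).join(' ');
  const numbers = draft.text.match(/\d[\d,.]*%?/g) ?? [];
  return numbers
    .filter((n) => !sourceText.includes(n))
    .map((n) => ({ claim: n, reason: 'not found in any cited source' }));
}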

Step five — prior-answer lookup. If we’ve responded to this buyer before, the prior answer to the closest matching question is fetched and diffed against the current draft.
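A sketch of what that lookup might look like; similarity, loadAnswerHistory, and the 0.8 threshold are all hypothetical, and the answer-block tagging that makes the matching hold up across bids is Friday’s changelog.

interface PriorAnswer { questionText: string; answer: Draft }
declare function similarity(a: string, b: string): number;
declare function loadAnswerHistory(buyerId: string): Promise<PriorAnswer[]>;

// Fetch the prior answer to the closest matching question for this buyer.
async function findPriorAnswer(question: Question, buyerId: string) {
  const history = await loadAnswerHistory(buyerId);
  const best = history
    .map((h) => ({ h, sim: similarity(question.text, h.questionText) }))
    .sort((a, b) => b.sim - a.sim)[0];
  return best && best.sim > 0.8 ? best.h : undefined;
}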

What’s intentionally not in the packet

Three things we considered and excluded.

No “AI confidence score.” We considered showing the SME a confidence number on the proposed answer. We decided not to. The SME’s review judgment is what we’re trying to capture, and a confidence score primes them toward “looks fine, accept.” If the system isn’t confident, the right move is to call out the specific unverifiable claim, not to soften the whole packet with a number.

No competitor comparison. Some teams asked us to include “here’s what we said in the response to RFP X from competitor Y.” We declined. The SME is not the right reviewer for competitive positioning; that’s a proposal-manager task. Crowding the packet with strategy distracts from accuracy.

No long historical context. The packet does not include “here’s how this question has evolved across the last 12 buyers.” That’s interesting to the proposal manager and a distraction to the SME. The SME is reviewing one answer for one bid.

The Sparrow team’s research on content libraries names a related anti-pattern: content-library systems that fail because they push too much context to too many people. The packet pushes the right context to the right person.

What the packet costs

A packet build runs about $0.04 to $0.08 in model spend, depending on the question’s retrieval depth and the draft length. For a typical RFP with 30 SME-flagged questions, that’s $1.20 to $2.40 in packet generation. The proposal manager’s time saved (an hour per packet, conservatively, on the SME-chasing side) makes the math trivial.

The SME bottleneck documented in the Qorus research (48% of teams have named it their top problem for five years running) does not get fixed by sending the SME more documents. It gets fixed by sending the SME exactly the right document, in the surface they already use, with a clear two-minute review affordance. The packet is what that looks like in practice.

Tomorrow’s post goes deeper on the prompt and the retrieval context for the grounded-draft step. The Friday changelog (Day 135) covers the answer-block tagging that makes the prior-answer lookup possible across bids.

Sources

  1. Qorus — Winning proposals: how to stop wrangling SMEs
  2. Sparrow — RFP content library best practices