Blog · The PursuitAgent engineering team
Engineering · 91 posts
Citations are the product, a year in
The through-line from our grounded-AI pledge at month one to how reviewers judge the product today. Citations were a feature. Now they're what customers are buying.
The system-health check at year one
Four dashboards we watch every morning. What each one caught this quarter — and the one that nearly missed a regression.
The evidence vault, a year in: attestations, tests, audits
What lives in the evidence vault, what expires, and the alerting that catches expirations before a DDQ cites a stale attestation.
Block schema v3: merging KB blocks and evidence atoms
The schema change that let DDQ evidence live in the same store as proposal answers. What we split, what we merged, and the migration that took a week longer than we planned.
Draft latency, a year on: 45s P95 to 28s
A year of draft-latency work. What moved P95 from 45 seconds to 28, which changes cost quality and which cost money, and the three tradeoffs we chose not to take.
Query understanding, a year on: where the model won
A year of hand-written query-rewrite rules versus LLM-based query rewriting on RFP questions. Which side won, where the hand-written rules still beat the model, and what the hybrid looks like now.
Hallucination rate: a year-in measurement update
How we measure hallucination rate on grounded drafts, what the number looks like a year in, what moved it since the early baseline, and where the number lives in production for customers to see.
Draft attribution in exports: PDF, DOCX, HTML
Inline citations have to survive the export. How the rendering preserves citation anchors across the three export formats, where each format makes it hard, and the specific decisions we made to keep the attribution auditable.
New models, quarterly eval: Sonnet 4.6, GPT-5.2, Gemini 3.1 Pro
An internal eval across three current-generation models for our specific workloads — drafting, claim verification, extraction. What moved, where we switched defaults, and why one workload still sits on a year-old model.
Compliance extraction, revisited
The grammar we moved to for requirements extraction, why we stopped treating 'shall' as a single class, and the evaluation showing a 38% drop in false-positive requirements.
March reliability incidents, documented
Two incidents on the platform this month — one degradation, one full outage. What triggered each, how long they ran, what the user impact was, and the specific changes we made after.
The SLA on draft generation: 45 seconds, 95th percentile
The operational target we hold draft generation to, why it's 45 seconds and not 30 or 90, and the specific things we do to hold the number under peak federal-FY-Q2 load.
RAG for past-performance reference selection
How the retriever picks the three best past-performance references out of 180 for a given scope. Not cosine similarity on a paragraph — structured retrieval over multiple facets with a scorer that knows what a good reference looks like.
The claim-verification cost profile, stage by stage
Per-claim verification is the defense against citation hallucination. It also costs real money. A breakdown of token costs at each stage of the verification pipeline, with the numbers we actually see in production.
Reviewer feedback routing: comment to block to KB
How we close the loop from an inline comment on a draft paragraph to a versioned edit on the KB block that generated it. The routing is boring; the discipline it enforces is the whole game.
The DDQ evidence-provenance API
External auditors can now walk from a DDQ answer back to the source evidence without opening the KB. The endpoints, the auth model, and what we hardened before shipping.
The prompt test suite, an update
300 tests across our drafting and verification prompts. What they cover, what they miss, which ones still flake, and how we keep the flaky ones from becoming the reason we stop running CI.
One year of grounded retrieval: what changed, what didn't
The engineering companion to the founder retrospective. A year of build-log posts, condensed: what the retrieval stack looks like now, how verification evolved, what the gold set became, and what's still unsolved.
Embedding evaluation, revisited
What we measure differently from 12 months ago. How the gold set grew, which metrics earned their spot in CI, and which ones we quietly retired.
Grounded AI for win-theme discovery
How we surface candidate win themes from a corpus of 80 winning proposals without inventing them. The retrieval pattern, the entailment guard, and where the system refuses rather than guesses.
Linking debrief notes to the specific answer blocks that failed
How every debrief comment becomes a KB-block edit suggestion. The two-pass linker that walks from a free-text comment to the exact block that sourced the failed answer, with the SQL and the heuristics.
Clustering win themes across 200 past bids
How we cluster win-theme assertions across a corpus of past proposals to surface repeat themes, where the signal is real, and where the clustering is just noise dressed as insight.
The win-loss database schema, explained
The five tables behind PursuitAgent's win-loss intelligence feature: proposal, theme, outcome, debrief, and the join back to the RFP block that sourced the claim. With the SQL.
A pgvector migration postmortem
An index rebuild that cost us 90 minutes of degraded search across a handful of tenants. What we changed in the runbook, and the piece of the migration we wish we had rehearsed.
Draft autocomplete latency, end to end
Typing lag, inference queue, streaming output. The three budgets that add up to the 240ms P95 we hold ourselves to, and what happens when any one of them slips.
Migrating to Gemini Embedding v3, the safe way
A dual-index backfill and a staged cutover across two weeks. How we evaluated retrieval deltas before the switch, what we watched for during the cutover, and the one metric that gated the final flip.
Caching the draft step
How we cache partial drafts across proposals without introducing stale-answer risk. The cache key design, invalidation rules, and the directional cost impact we measured internally.
Retrieval over Slack history: what works, what's too sharp
An experiment with RAG over customer-Slack channel history. Three useful retrieval patterns, two failure modes that led us to gate the feature behind explicit capture flags, and the operational guardrails.
What we learned analyzing 90 days of search logs
Three patterns in the KB-search query logs we did not expect, and one UX change we made because of the findings. Notes from a quarterly log review, written in the build-log spirit.
When two citations disagree: how the draft resolves it
Two KB chunks say different things about the same claim. The conflict-resolution logic that decides which one the drafted answer cites — when to prefer newer, when to prefer higher-authority, and when to refuse.
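For orientation, here is a minimal sketch of the decision order that teaser describes — prefer higher authority, then prefer newer, otherwise refuse. The authority tiers, field names, and staleness window below are illustrative assumptions, not the actual resolution logic.

```python
# Sketch of a conflict-resolution order between two KB blocks that disagree.
# Authority tiers, field names, and the staleness window are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Block:
    id: str
    authority: int        # e.g. 3 = audited evidence, 2 = approved answer, 1 = draft note
    last_reviewed: date

def pick_citation(a: Block, b: Block,
                  staleness: timedelta = timedelta(days=365)) -> Block | None:
    # A higher-authority source wins outright.
    if a.authority != b.authority:
        return a if a.authority > b.authority else b
    # Same authority: prefer the newer block only when the older one is
    # genuinely stale. Two fresh, equal-authority blocks that disagree is a
    # conflict for a human, so the drafter refuses to cite either.
    newer, older = (a, b) if a.last_reviewed >= b.last_reviewed else (b, a)
    if newer.last_reviewed - older.last_reviewed > staleness:
        return newer
    return None  # refuse
```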
Observability for drafting: traces, logs, and replays
How we debug a bad draft six weeks after the fact. The three-layer observability stack — request traces, retrieval logs, and deterministic replays — that makes post-hoc drafting issues tractable.
Grounded-AI regressions we caught in year one
Four regressions in our grounded-drafting pipeline this year. How we caught each one, how long it took to roll back, and the one we did not catch in time. Engineering notes, not a victory lap.
The prompt library behind grounded drafting
Seven named prompts, one kill-switch registry, a versioning scheme, and the governance pattern we use to keep prompt sprawl from becoming an outage. Engineering notes on how we actually run prompts in production.
Backup and restore for a KB that contains embeddings
Point-in-time restore, vector consistency, and why we run a full restore drill once a month. The engineering notes on backing up a knowledge base that is half relational and half vector.
Prompt versioning in production, the boring way
Git, tags, eval gates. How we roll a prompt change without breaking drafts in flight, and why the boring version is the one that actually works.
Block reuse tracking: the metric that matters
Which KB blocks got used, in which proposals, and how reuse correlated with winning. How we instrument reuse, what the numbers told us, and where the signal turns into noise.
Detecting ungrounded spans in drafts, line by line
A per-sentence classifier that flags which spans in a drafted RFP answer lack source coverage in the retrieved context. What it costs, what it catches, and what it still misses.
Retrieval eval snapshot, December 2025
Quarter four retrieval evaluation numbers against our held-out RFP and DDQ corpus. What moved since September, what's still stuck, and which regressions we're not yet fixing.
The async drafting worker pool, explained
How 40 concurrent draft sections get written without saturating the LLM budget or crashing the rate limiter. The worker pool, the budget enforcer, and the retry ladder.
Multi-tenant DDQ templates across customer accounts
How one SOC 2 answer shape generalizes across many customer tenants without leaking tenant-specific facts. The separation between template structure and tenant content, explained.
Tuning pgvector HNSW for proposal workloads
M, ef_construction, ef_search — the three knobs that decide retrieval latency and recall in a pgvector HNSW index. What we chose for PursuitAgent and why.
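As a rough illustration of those three knobs (a sketch only — the table, column, and parameter values below are placeholders, not the settings that post arrives at):

```python
# Illustrative sketch: index/table names and parameter values are assumptions,
# not PursuitAgent's production configuration.
import psycopg

DSN = "postgresql://localhost/pursuitagent_dev"  # placeholder connection string

with psycopg.connect(DSN) as conn:
    # m and ef_construction are build-time knobs: they trade index build time
    # and size against the recall ceiling the index can reach.
    conn.execute(
        """
        CREATE INDEX IF NOT EXISTS kb_blocks_embedding_hnsw
        ON kb_blocks
        USING hnsw (embedding vector_cosine_ops)
        WITH (m = 16, ef_construction = 64)
        """
    )
    # ef_search is the query-time knob: raising it buys recall at the cost of
    # latency, so it is usually set per session and tuned per workload.
    conn.execute("SET hnsw.ef_search = 80")
```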
Confidence-threshold tuning for DDQ auto-answer
Where we set the confidence bar for auto-answering a DDQ question. The precision/recall trade-off, explained with our own data and the number we actually use for security questionnaires.
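The mechanics behind an exercise like that are a simple threshold sweep over labeled questions. A hedged sketch — the scores, labels, and candidate thresholds here are made up for illustration:

```python
# Sketch of the threshold sweep behind an auto-answer cutoff.
# scored = (model_confidence, answer_was_actually_correct) per question.
def precision_recall_at(threshold: float,
                        scored: list[tuple[float, bool]]) -> tuple[float, float]:
    answered = [correct for conf, correct in scored if conf >= threshold]
    if not answered:
        return 1.0, 0.0
    true_positives = sum(answered)
    answerable = sum(correct for _, correct in scored) or 1
    return true_positives / len(answered), true_positives / answerable

labeled = [(0.95, True), (0.91, True), (0.84, True), (0.80, False),
           (0.76, True), (0.65, False), (0.55, False)]
for t in (0.6, 0.7, 0.8, 0.9):
    p, r = precision_recall_at(t, labeled)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```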
Turning a SOC 2 PDF into 140 KB blocks
The ingest, the extraction, the linking. A worked trace of how a SOC 2 Type II report becomes the set of KB blocks that DDQ answers cite — with the real pgvector row shape at the end.
Security questionnaires: the 80% that's really retrieval
The canonical Engineering pillar on DDQ automation. A 300-question security questionnaire is not 300 unique questions — it's mostly retrieval against a corpus that's already written, plus a small tail that isn't.
The evidence vault: where SOC 2 PDFs live and how they cite
How a DDQ answer citing 'SOC 2 report, section CC6.1' actually finds the right PDF, serves it to the right buyer, and keeps the audit trail. The storage, access, and audit layer underneath.
The DDQ evidence-attachment API
How buyer-side evidence-request fields get auto-populated from a KB evidence vault. The schema, the matching logic, and the human-in-the-loop step we will not remove.
Per-customer embedding tenancy, explained
How tenant isolation works at the vector level in PursuitAgent. Why we use Postgres row-level security on pgvector as the default, where shared embedding spaces would be cheaper, and the trade-offs we are not willing to take.
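For readers who have not seen the pattern, this is the general shape of row-level security on a pgvector table. A hedged sketch: the table, column, policy, and setting names are illustrative assumptions, not the actual PursuitAgent schema.

```python
# Sketch of Postgres row-level security keyed on a per-request tenant id.
# Assumes the application connects as a role that does not own the table and
# does not have BYPASSRLS, otherwise the policy is not applied.

def apply_tenancy_policy(conn):
    # Applied once, as a migration.
    conn.execute("ALTER TABLE kb_blocks ENABLE ROW LEVEL SECURITY")
    conn.execute(
        """
        CREATE POLICY tenant_isolation ON kb_blocks
        USING (tenant_id = current_setting('app.tenant_id')::uuid)
        """
    )

def search_for_tenant(conn, tenant_id: str, query_vector: str, k: int = 10):
    # Scope the session to one tenant before touching the vector index;
    # every row the query can see is filtered by the policy above.
    # query_vector is a pgvector text literal, e.g. "[0.12, -0.03, ...]".
    conn.execute("SELECT set_config('app.tenant_id', %s, false)", (tenant_id,))
    return conn.execute(
        "SELECT id FROM kb_blocks ORDER BY embedding <=> %s::vector LIMIT %s",
        (query_vector, k),
    ).fetchall()
```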
Citation UI: three designs we tried, two we kept
How we render inline citations next to grounded-AI output. Three UX experiments — footnote chips, side-pane evidence cards, and inline hover popovers — and what we learned about which ones reviewers actually use.
KB block versioning: the five-year commit history
How a KB block evolves across 18 proposals, three approvals, and one rollback. The data model behind block versioning, why we keep every prior version, and the queries that make it useful.
The background job queue for proposal processing
How Hatchet orchestrates the ingest, classify, draft, and verify stages of a proposal response. The four stages, the retry policies, the dead-letter handling, and the one place we deliberately chose synchronous over async.
Hallucination monitoring in production
The metrics we watch weekly: per-claim refusal rate, citation-mismatch rate, and the human-graded sample. What we do when each one moves, and the threshold values that trigger an alert.
Semantic deduplication of KB blocks at ingest
How we merge near-duplicate KB blocks at ingest time using embedding similarity, the threshold we settled on after testing four values, and the trade-off we accept by tuning toward over-merging.
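The core check is small enough to sketch. This is a minimal version of a merge-at-ingest decision on embedding similarity — the 0.92 threshold is a placeholder, not the value that post settles on:

```python
# Sketch of the near-duplicate check at ingest time.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_merge_target(new_embedding: np.ndarray,
                      existing: dict[str, np.ndarray],
                      threshold: float = 0.92) -> str | None:
    """Return the id of the existing block the new block should merge into,
    or None if nothing is close enough to count as a near-duplicate."""
    best_id, best_sim = None, threshold
    for block_id, emb in existing.items():
        sim = cosine_similarity(new_embedding, emb)
        if sim >= best_sim:
            best_id, best_sim = block_id, sim
    return best_id
```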
In preview: the retrieval-eval dashboard, publicly visible
Our internal retrieval evaluation dashboard is going public in preview. Real gold-set numbers, real regressions, updated nightly. Here is what is on it and what we deliberately left out.
Our retrieval eval, quarterly report
A quarter of running our retrieval evaluation harness against a frozen gold set: the regressions we caught, the two changes that actually moved precision, and the metric we stopped reporting because it lied.
Security questionnaires: linking answers to evidence
How a SOC 2 attestation PDF becomes a citation source for DDQ answers. The ingest pipeline, the per-control extraction, and the per-claim linking that makes 'yes' answers verifiable instead of theatrical.
The citation density target per section
Why executive summaries get two citations per paragraph and technical sections get five. The rationale for citation density as a section-level target, and what happens to drafts that fall below it.
Numeric claim extraction and verification
How we parse numbers from drafts — percentages, dollar figures, head counts, dates — and check each one against a KB source before the sentence ships. The pipeline, the regex floor, the LLM ceiling, and what we still get wrong.
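The "regex floor" mentioned above looks roughly like this — the patterns and the example sentence are illustrative, not the production rule set:

```python
# Sketch of the regex floor for numeric-claim extraction. The real pipeline
# layers an LLM pass on top of patterns like these.
import re

NUMERIC_CLAIM_PATTERNS = {
    "percentage": re.compile(r"\b\d{1,3}(?:\.\d+)?\s?%"),
    "dollar": re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?\s?(?:million|billion|[MBK])?", re.I),
    "headcount": re.compile(r"\b\d[\d,]*\s+(?:employees|engineers|staff|FTEs)\b", re.I),
    "year": re.compile(r"\b(?:19|20)\d{2}\b"),
}

def extract_numeric_claims(sentence: str) -> list[tuple[str, str]]:
    """Return (claim_type, matched_text) pairs; each one needs a verified
    KB source before the sentence ships."""
    claims = []
    for claim_type, pattern in NUMERIC_CLAIM_PATTERNS.items():
        for match in pattern.finditer(sentence):
            claims.append((claim_type, match.group(0)))
    return claims

print(extract_numeric_claims(
    "We served 1,200 employees across 14 sites and cut costs by 23% in 2024."
))
```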
Cost control for RAG: daily budgets, fallback models, burn alerts
How we keep RAG spend predictable per tenant. Daily budgets, model-tier fallbacks, and burn-rate alerts before the bill spikes — with the dashboard and the rules.
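As a rough sketch of the shape of such a gate (the dollar figures, alert fraction, and model-tier names are placeholders, not the rules or dashboard in the post):

```python
# Sketch of a per-tenant daily budget gate with a model-tier fallback and a
# burn-rate alert. All numbers and names are illustrative.
DAILY_BUDGET_USD = 25.00
ALERT_AT_FRACTION = 0.8

MODEL_TIERS = [
    ("primary-model", 1.00),    # relative cost multiplier, illustrative
    ("fallback-model", 0.25),
]

def choose_model(spent_today: float, estimated_call_cost: float) -> str | None:
    """Return the model tier to use for the next call, or None to defer it."""
    if spent_today >= ALERT_AT_FRACTION * DAILY_BUDGET_USD:
        print("burn alert: tenant is at "
              f"{spent_today / DAILY_BUDGET_USD:.0%} of daily budget")
    for model, multiplier in MODEL_TIERS:
        if spent_today + estimated_call_cost * multiplier <= DAILY_BUDGET_USD:
            return model
    return None  # budget exhausted: queue the work instead of spending more
```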
How the draft packet is generated, line by line
The prompt, the retrieval context, and the output template that produce an SME draft packet. A worked example from a real-shaped RFP question to a ready-to-review answer.
The SME draft packet, generated automatically
What we ship to an SME alongside the question so they can answer in five minutes instead of fifty. The packet's components, the retrieval that builds it, and the design choices that keep the SME out of our tool.
The SME Slack bot: architecture and boundaries
How the PursuitAgent SME bot asks for input, what it does with the answer, and what it deliberately refuses to do. A short tour of the boundary between the bot and the human.
KB schema evolution, year one
Four migrations we ran on the knowledge-base blocks table, three we rolled back, and what the schema looks like now. A field report on schema discipline.
Retrieval evaluation, part 2: dealing with numeric claims
Why numeric facts break vanilla retrieval and the two tactics — hybrid search and numeric-claim isolation — that fix it. Continuation of the eval series.
Confidence scores for grounded drafts, explained
What '82% confident' means in our drafting engine, how it's computed from retrieval and entailment signals, and where it leads the reviewer.
Streaming drafts over SSE, with citations inline
How we stream draft output to the browser while keeping citation integrity intact. The architecture, the failure modes, and the part we got wrong twice.
How we curate the retrieval gold set
120 questions, three annotators, a disagreement-resolution protocol. The recipe behind the held-out set we evaluate every retrieval pipeline change against — and the parts we plan to open-source.
Retrieval over diagrams, not just text
How we index D2 code and diagram descriptions so an architecture question can ground to a specific figure. The pipeline, the failure modes, and the citation surface for a diagram source.
The answer provenance graph in the KB
Every block in the knowledge base tracks source, author, approver, and last-used-in. The provenance graph isn't bookkeeping — it's a product surface. Here's what it stores and what it powers.
The reranker that paid for itself
Rerankers add latency and cost. They earn it back when retrieval is borderline and the wrong block in the top-K poisons the draft. Where we run a reranker, where we do not, and the honest tradeoffs.
The cost per response, broken down to the penny
Embedding calls, retrieval compute, draft tokens, verifier tokens, storage. The unit cost structure of a single drafted RFP answer, with a worked example. We publish the unit economics, not customer costs.
Ingesting a 300-question security questionnaire
A 300-question security questionnaire is a throughput problem, not a writing problem. The ingest pipeline has five stages: extract, classify, dedupe against the last one, retrieve, assemble. Here is what each one does and where it costs.
Query rewriting for RFP questions with implicit context
Most RFP questions retrieve poorly because they assume context the corpus does not carry. Query rewriting turns 'describe your approach' into a retrieval string that hits. Examples, the rewrite chain, and the cost tradeoff.
The grounded drafting loop, step by step
Retrieve, draft under constraint, verify, emit — or refuse. The four-step loop that produces every drafted answer in PursuitAgent, and the failure mode each step exists to prevent.
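A skeleton of that loop, for orientation. Every function and field name here is a placeholder standing in for a real component; the shape of the control flow, not the implementation, is the point.

```python
# Sketch of the retrieve -> draft -> verify -> emit-or-refuse loop.
from dataclasses import dataclass

@dataclass
class DraftResult:
    text: str | None           # None means the engine refused to answer
    citations: list[str]       # KB block ids backing the claims
    refusal_reason: str | None = None

def draft_answer(question: str, retrieve, draft, verify) -> DraftResult:
    # 1. Retrieve: if nothing relevant comes back, refuse rather than guess.
    blocks = retrieve(question)
    if not blocks:
        return DraftResult(None, [], "no grounding material retrieved")

    # 2. Draft under constraint: the model may only assert what the blocks say.
    candidate, cited_ids = draft(question, blocks)

    # 3. Verify: every substantive claim must be entailed by a cited block.
    unsupported = verify(candidate, blocks)
    if unsupported:
        return DraftResult(None, [], f"{len(unsupported)} claims lacked support")

    # 4. Emit the answer with its citations intact.
    return DraftResult(candidate, cited_ids)
```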
The chunk size ablation: 256, 512, 1024 tokens on RFP text
We ran the same retrieval pipeline at three chunk sizes against our RFP-text gold set. Directional results, the tradeoffs that surfaced, and why we don't ship a single global chunk size.
Our eval harness, on the command line
A walkthrough of the dev loop for retrieval changes — one command to baseline, one command to re-run, one to diff. The CLI ergonomics that keep us from tuning by feel.
How we evaluate retrieval quality on our own corpus
Our gold set, the metrics we track, the eval harness on a laptop, the regression-guard CI job, and the directional numbers we'll publicly stand behind. Long read.
The hallucination budget, per claim
Treat hallucination as a cost: each claim in a draft has a probability of being mis-attributed. Here's how we budget it, how we trade latency against grounding strength, and why the budget is per-claim, not per-draft.
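The arithmetic behind the per-claim framing fits in a few lines. A sketch with placeholder numbers, assuming independence between claims:

```python
# Why the budget is per-claim: a long draft quietly blows a per-draft budget
# even when each individual claim looks fine. Rates here are illustrative.
def draft_failure_probability(per_claim_rate: float, claims: int) -> float:
    """Chance that at least one claim in the draft is mis-attributed,
    assuming claims fail independently."""
    return 1 - (1 - per_claim_rate) ** claims

for n in (5, 20, 80):
    print(f"{n} claims at 1% per-claim risk -> "
          f"{draft_failure_probability(0.01, n):.1%} chance of a bad draft")
```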
Inside the ingest pipeline: parse, extract, index
How a PDF becomes searchable KB blocks. LlamaParse for parsing, structural-plus-semantic extraction, pgvector indexing with HNSW. Where each stage wins and where it falls over.
The claim-level verification pass, explained
After the draft model writes a sentence, a smaller verifier model reads each substantive claim and asks: is this entailed by the source block? Here's how that works, what it costs, and where it still misses.
Our retrieval latency budget, explained
Where the milliseconds go in a single retrieval call: embedding lookup, vector search, reranker, hybrid merge, payload hydration. P50 120ms, P95 400ms, and what we cut to get there.
Hybrid search: dense embeddings plus BM25 for proposals
Pure dense retrieval misses on numeric identifiers, product names, and SOC codes. Pure BM25 misses on paraphrase. The blend ratio we use, how we tune it, and the test set that catches regressions.
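A minimal sketch of what blending two score lists looks like — the 0.7 weight is a placeholder, not the ratio that post describes, and both retrievers are assumed to return scores keyed by document id:

```python
# Sketch of a dense + BM25 score blend with min-max normalization.
def normalize(scores: dict[str, float]) -> dict[str, float]:
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc_id: (s - lo) / span for doc_id, s in scores.items()}

def hybrid_rank(dense: dict[str, float],
                bm25: dict[str, float],
                dense_weight: float = 0.7) -> list[str]:
    """Blend normalized dense and BM25 scores; return doc ids best-first.
    Documents found by only one retriever get zero from the other."""
    d, b = normalize(dense), normalize(bm25)
    blended = {
        doc_id: dense_weight * d.get(doc_id, 0.0)
                + (1 - dense_weight) * b.get(doc_id, 0.0)
        for doc_id in set(d) | set(b)
    }
    return sorted(blended, key=blended.get, reverse=True)
```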
Grounded Retrieval 101, Part 4: what we're still wrong about
The closing post of the Grounded Retrieval 101 series. Three failure modes we have not solved — numeric precision, compound claims, synonym drift — with the test cases that surface them and what we are doing about each.
How the citation rendering stack works
From a retrieval hit to a verify button next to a sentence, in four components. The plumbing behind every cited claim PursuitAgent ships, and why we render the source inline instead of in a footnote.
Grounded Retrieval 101, Part 3: the citation rendering stack
From a verified retrieval hit to an inline citation a reviewer can hover and trust. Four components: citation marker, hover card, source viewer, and audit log.
Testing retrieval: gold sets, precision@k, and why BLEU lies for proposals
Surface-form metrics like BLEU and ROUGE rate proposal text by token overlap. Token overlap is a poor proxy for whether the answer is actually right. Here's the eval stack we use instead.
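For contrast with token-overlap scoring, this is what the retrieval-side metric looks like in its simplest form — precision@k against a gold set of relevant block ids. Data shapes here are illustrative:

```python
# Sketch of precision@k over a gold set: (retrieved ids, relevant ids) pairs.
def precision_at_k(retrieved: list[str], relevant: set[str], k: int = 5) -> float:
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for block_id in top_k if block_id in relevant) / len(top_k)

def mean_precision_at_k(runs: list[tuple[list[str], set[str]]], k: int = 5) -> float:
    """Average precision@k over every (retrieved, gold) pair in the eval set."""
    return sum(precision_at_k(r, g, k) for r, g in runs) / len(runs)
```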
Grounded retrieval: what it is, what it isn't, what we measure
The canonical long-read on grounded retrieval: the three invariants, the anti-patterns, the eval harness, the four open failure modes, and the research we're running next.
Our chunking pipeline, end to end
Five stages between an uploaded PDF and a retrievable KB block: parse, structural split, semantic rechunk, overlap, and index. Where each one fails and why we kept the boundaries.
Grounded Retrieval 101, Part 2: why citations don't guarantee groundedness
A citation tells you which passage was retrieved. It does not tell you whether the cited passage actually supports the generated claim. Part 2 of the Grounded Retrieval series — the entailment gap, and what closes it.
Grounded Retrieval 101, Part 1: what RAG is and why it still hallucinates
RAG in three sentences, then the hard part: why retrieval-augmented generation still produces fabricated answers, and what the academic and practitioner literature says about it. Part 1 of a four-part series.
How we chunk proposals for retrieval
Fixed-window chunking loses at headers, table cells, and numeric clauses. This post walks through the structural-plus-semantic chunking strategy we run on past proposals and KB content blocks, with code.
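As a toy stand-in for the strategy that post walks through: structural split first, then size-aware re-splitting at sentence boundaries with overlap. This is a simplified sketch, not the post's structural-plus-semantic code, and the heading regex, target size, and overlap are assumptions.

```python
# Toy structural-then-size-aware chunker.
import re

HEADING = re.compile(r"^(#{1,6} .+|\d+(\.\d+)*\s+.+)$", re.M)

def structural_split(text: str) -> list[str]:
    """Cut at headings so a chunk never straddles two sections."""
    positions = [m.start() for m in HEADING.finditer(text)] + [len(text)]
    if positions[0] != 0:
        positions = [0] + positions
    return [text[a:b].strip() for a, b in zip(positions, positions[1:]) if text[a:b].strip()]

def size_split(section: str, target_chars: int = 1200, overlap_sents: int = 1) -> list[str]:
    """Re-split an oversized section at sentence boundaries with a small overlap."""
    sents = re.split(r"(?<=[.!?])\s+", section)
    chunks, current, fresh = [], [], 0
    for sent in sents:
        current.append(sent)
        fresh += 1
        if sum(len(s) for s in current) >= target_chars:
            chunks.append(" ".join(current))
            current = current[-overlap_sents:]   # carry a sentence forward as overlap
            fresh = 0
    if fresh:                                     # emit only if new material remains
        chunks.append(" ".join(current))
    return chunks

def chunk(text: str) -> list[str]:
    return [piece for section in structural_split(text) for piece in size_split(section)]
```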
How the Grounded-AI Pledge is enforced in code
The Pledge says every drafted answer links to a source in your KB. Here's how the drafting engine enforces that — with refusals, not with model hygiene.
See the proposal workflow
Take the 5-minute tour, then start a trial workspace when you're ready to run a real pursuit against your own source material.