Citations are the product, a year in
The line from our grounded-AI pledge at month one to how buyers judge the product a year later. Citations were a feature. Now they're what customers are buying.
A year ago we shipped an internal pledge: every claim the product generates has a citation, and every citation resolves to a source in the customer’s knowledge base. The pledge-in-code post from month two explained how we enforced it at the type-system level.
A year later, the most common sales conversation isn’t about our drafting quality or our workflow UI or our DDQ ingest. It’s about citations. Here’s what changed.
What the pledge committed to
The original pledge had three lines:
- No claim in generated output without a citation.
- Every citation points to a specific span in a specific block.
- Block-level provenance is preserved through every transformation of the draft.
We enforced it with a type that wrapped generated text: a GroundedSpan couldn't be rendered without a source (a BlockRef) attached. If the retrieval pass didn't produce a source, the generator didn't produce text for that span. The user got a visible gap instead of an unsupported sentence.
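Here's a minimal sketch of the shape of that enforcement, in TypeScript. GroundedSpan and BlockRef are the names from above; Gap, fromRetrieval, and renderSpan are illustrative stand-ins for this sketch, not the production API.

```typescript
// A citation target: a specific span in a specific block.
interface BlockRef {
  blockId: string;
  spanStart: number;
  spanEnd: number;
}

// What the user sees when retrieval comes back empty: a visible gap,
// never an unsupported sentence.
class Gap {
  readonly kind = "gap" as const;
}

// Generated text can't exist without a source. The constructor is
// private, so the only way to build a GroundedSpan is through the
// factory, which demands a BlockRef.
class GroundedSpan {
  private constructor(
    readonly text: string,
    readonly source: BlockRef,
  ) {}

  static fromRetrieval(text: string, source: BlockRef | null): GroundedSpan | Gap {
    return source === null ? new Gap() : new GroundedSpan(text, source);
  }
}

// The renderer accepts grounded spans or gaps; there is no code path
// that renders a bare string.
function renderSpan(span: GroundedSpan | Gap): string {
  return span instanceof Gap
    ? "[gap: no supporting source found]"
    : `${span.text} [${span.source.blockId}]`;
}
```

The point is the compile-time guarantee: render code that tries to take a raw string simply doesn't type-check.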
What customers do with citations now
We assumed citations would be a reviewer feature — a sanity check for internal QA, useful at gold-team review to verify that every factual claim was grounded. Reviewers do use them that way, and it’s the use case the UI was built for.
What we didn’t predict: citations became the submission artifact buyers asked for.
Three patterns we see now:
Buyers asking vendors to include citation lineage. A procurement team at a large insurer asked our customer to include, alongside the proposal response, the source documents for each material claim. Our customer exported the citation report from PursuitAgent and attached it. The buyer confirmed they now ask for this with every AI-assisted response.
Reviewers grading on citation density. Some RFPs evaluate proposals on “completeness of support for technical claims.” One federal-adjacent agency’s scoring rubric we reviewed this year had a line item for “proposals with traceable evidence for each technical assertion” worth 5% of the technical score. Five percent is the margin of victory in a tight pursuit.
Internal reviewers blocking uncited sentences. In mature PursuitAgent customers, the gold-team review has become explicitly a citation-check pass. The reviewer looks at every claim; the claim either has a green citation badge or it doesn't ship. This is the behavior the Stanford HAI paper on legal RAG hallucinations argued was necessary: its authors measured commercial legal RAG tools hallucinating 17–33% of the time even with retrieval, and concluded that human verification of citations was the only reliable guardrail. Our customers arrived at the same conclusion independently, through their own workflows.
The engineering consequence
Citations-as-product has changed what we build. Three examples:
Citation density scoring. Every draft now has a density metric — citations per 100 words — reported to the reviewer at generation time. Below a tenant-configurable threshold, the draft is flagged for rewrite.
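In sketch form (scoreDensity, DensityResult, and the field names are illustrative; the per-100-words metric and the tenant-configurable threshold are as described above):

```typescript
interface DensityResult {
  citationsPer100Words: number;
  flaggedForRewrite: boolean;
}

// Citations per 100 words, compared against the tenant's threshold.
function scoreDensity(
  citationCount: number,
  wordCount: number,
  tenantThreshold: number, // minimum citations per 100 words
): DensityResult {
  const citationsPer100Words =
    wordCount === 0 ? 0 : (citationCount / wordCount) * 100;
  return {
    citationsPer100Words,
    flaggedForRewrite: citationsPer100Words < tenantThreshold,
  };
}

// A 450-word draft with 9 citations scores 2.0 per 100 words; at a
// tenant threshold of 2.5 it gets flagged for rewrite.
const result = scoreDensity(9, 450, 2.5);
```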
Provenance graph queries. The answer-provenance graph work was originally a debugging tool. It’s now a product surface: customers can query “which of my blocks has been cited in the last 90 days” and use the answer for KB hygiene decisions.
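The query is simple once the graph records a dated edge per citation event. A sketch, with hypothetical names (CitationEdge, blocksCitedSince, uncitedBlocks):

```typescript
// One edge per citation event: a generated claim cited this block.
interface CitationEdge {
  blockId: string;
  citedAt: Date;
}

// "Which of my blocks has been cited in the last 90 days?"
function blocksCitedSince(
  edges: CitationEdge[],
  days: number,
  now: Date = new Date(),
): Set<string> {
  const cutoff = now.getTime() - days * 24 * 60 * 60 * 1000;
  return new Set(
    edges.filter((e) => e.citedAt.getTime() >= cutoff).map((e) => e.blockId),
  );
}

// The KB-hygiene flip side: blocks nothing has cited recently are
// candidates for review or retirement.
function uncitedBlocks(allBlockIds: string[], edges: CitationEdge[]): string[] {
  const cited = blocksCitedSince(edges, 90);
  return allBlockIds.filter((id) => !cited.has(id));
}
```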
Within-tenant citation hygiene. If two blocks in a tenant both answer the same question and one is better (newer, better cited, higher weight), the retriever should prefer it consistently. We rebuilt block scoring to include citation quality as a factor: blocks whose cited spans have historically survived reviewer checks score higher.
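In sketch form (BlockStats, the weights, and the blend are illustrative assumptions; the reviewer-survival signal is the factor described above):

```typescript
interface BlockStats {
  baseRelevance: number; // retriever relevance, 0..1
  timesCited: number;    // cited spans that reached gold-team review
  timesSurvived: number; // cited spans that passed the reviewer check
}

// Blend retrieval relevance with historical citation quality. Laplace
// smoothing keeps a rarely-cited block from swinging to 0 or 1 on a
// single review.
function scoreBlock(stats: BlockStats, qualityWeight = 0.3): number {
  const survivalRate = (stats.timesSurvived + 1) / (stats.timesCited + 2);
  return (1 - qualityWeight) * stats.baseRelevance + qualityWeight * survivalRate;
}
```

Given two blocks answering the same question, the better-cited one wins consistently, which is the hygiene property we want.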
Where this is going
The year-one pattern is consistent: a reviewer-side feature became a buyer-side feature. The citation isn’t just internal quality control anymore — it’s what makes a grounded response legible to the external evaluator.
The next move is the one we’re drafting now: exportable citation certificates. A standalone document per proposal, signed, showing every claim and every source, that a buyer can audit after submission. Customers have asked for it; we’re shipping it in Q3. The grounded retrieval pillar post will be updated to cover it when it ships.
The line from day one to today
The pledge was that citations would make the product honest. The unanticipated consequence was that citations would also make the product competitive — because buyers are now asking for them, and the vendors without citation infrastructure can’t produce them. Grounded AI turned out to have a commercial tail we didn’t fully see at month one.