How we're talking about PursuitAgent this quarter
A messaging refresh done in public. The three short lines we kept, the ones we dropped, the new framing around grounded retrieval as a contract, and the reasons for each change. Written at the end of the annual-planning cycle.
Every quarter we do a short messaging review: what are we saying about the product, is it still true, and is it still the most useful thing we could be saying? Q1 2026 is the first review after the full year-one retrospective, and it is the first time the review has made changes I think are worth writing up publicly.
Three lines we kept. Two we dropped. One new framing that is doing the most work.
The lines we kept
“Grounded-AI proposal software.” This is the shortest accurate description of the product. We have tested it against about a dozen alternatives over the last year and the pattern is consistent: prospects who hear “grounded-AI proposal software” ask follow-up questions that lead to the right conversation. Prospects who hear a variation (“AI-powered,” “intelligent,” “automated”) ask follow-up questions that lead to a confusing conversation. The keeper stays.
“Every paragraph cites its source.” The operational one-liner for the Grounded-AI Pledge. It is concrete, it is verifiable in a demo within 30 seconds, and it passes the test that proposal practitioners use to evaluate AI output. I have watched this line do more work than any other single sentence on the marketing site.
“Your KB, not the model’s training data.” The line that separates us from the category’s default posture. It cuts through the “but which AI do you use” question in one beat: the AI is our orchestration, the answer is your KB. Prospects who care about the difference self-select. Prospects who want the opposite find the conversation short.
The lines we dropped
“The proposal-intelligence platform for teams that ship.” This was on the marketing site for about four months and I never liked it. “Proposal-intelligence platform” is category-speak for proposal software; “teams that ship” does not narrow the audience (all proposal teams ship, eventually). The line was performative. It is gone.
“Write proposals 3x faster.” The problem with speed claims in this category is not that they are false; the problem is that they are noise. Every proposal-AI vendor claims a multiplier. When we measured our own customers in year one, the average time-to-first-draft did go down by a meaningful factor, but the composition of how that time was spent changed even more than the total. Customers were spending less time drafting and more time editing, reviewing, and grounding. Total cycle time was shorter, but “3x faster” was the wrong summary of what was actually happening. We replaced the claim with a specific metric on the relevant page: a breakdown of where the time goes, before and after.
The new framing that is doing the most work
“Grounded retrieval as a contract, not a feature.”
This line is new. It came out of the year-one retrospective, specifically the section on why the pledge is enforced in the code path rather than in a prompt. The framing distinguishes us from vendors who ship “grounded retrieval” as a checkbox on the feature page. A feature can be turned off. A contract cannot.
The line works because it anchors the buyer’s evaluation on the enforcement mechanism rather than the marketing claim. It gives the regulated buyer a specific question to ask: “show me where the grounding is enforced.” That is a question we can answer with an architecture diagram and a test suite. It is a question most competitors cannot answer at all.
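The difference between a feature and a contract can be made concrete in a few lines. The sketch below is hypothetical, not PursuitAgent's actual implementation; the names (`assemble_draft`, `Paragraph`, `UngroundedParagraphError`) are invented for illustration. The point it shows is the architectural one: when the function that emits the draft refuses ungrounded paragraphs, there is no checkbox to turn off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Paragraph:
    text: str
    source_id: Optional[str]  # ID of the KB document this text was grounded in

class UngroundedParagraphError(Exception):
    """Raised when a draft paragraph carries no KB citation."""

def assemble_draft(paragraphs: list) -> str:
    # The check lives in the code path that emits the draft,
    # not in a prompt the model might ignore. An ungrounded
    # paragraph does not degrade the output; it blocks it.
    for i, p in enumerate(paragraphs):
        if not p.source_id:
            raise UngroundedParagraphError(
                f"paragraph {i} has no citation; refusing to emit draft"
            )
    # Every emitted paragraph cites its source inline.
    return "\n\n".join(f"{p.text} [{p.source_id}]" for p in paragraphs)
```

A sketch like this is also what "show me where the grounding is enforced" can be answered with: a short code path plus the test that exercises the failure case.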
The risk of the framing is that it is slightly abstract. “Contract” as a concept takes a second for the reader to land on; it is not a one-beat claim. We tested it in a few first-meeting conversations over the last two weeks and the pattern held — buyers who were evaluating us on the grounded-AI axis picked up the framing easily; buyers who were not did not get much out of it either way. We think that is the right performance profile. The framing is specifically for the buyers we want.
What we left off the site
One framing we like made it into the draft of this update but not onto the site.
“For proposal teams who care whether the words are true.”
The version in the voice doc is similar. It reads well and it is a real thing we believe. We left it off the public site because it sounds combative — it implies that other vendors’ customers do not care whether the words are true, which is not the argument we want to make. The implicit claim is better left implicit. The line stays in the voice doc for internal use; the external version of the pitch uses the contract framing.
This is a small example of a broader rule: the strongest version of your pitch for internal use is sometimes not the strongest version for external use. The internal version can be adversarial about the category. The external version has to respect the prospect’s ability to draw their own conclusions.
What does not change
The things we decided not to change, for clarity.
Pricing is on the website. The tiers may evolve as the product does, but the commitment to publishing the numbers stays. I wrote about the economics of that decision in the ninety-day pricing post, and none of the observations there have reversed in the full year.
Comparison pages on /compare stay honest. Every comparison page names at least one thing the other vendor does better than we do. That discipline stays.
The blog stays a field journal. This post is the clearest example of what that means: messaging refresh done in public, with the reasoning visible. If the refresh is wrong, the reasoning is on the record to critique. That is the whole point.
Q2 review is in April.