61 posts in this archive.
A year of writing about proposals, in 52 weekly notes
Anniversary post. One year, 365 posts, and an honest accounting. Fifty-two weekly distillations, a scorecard on the five rules we set at launch, and what surprised us about writing a field journal in public.
The customer segment we chose not to serve
One segment we declined a year ago. Why the decision held, what we gave up, and the conditions under which we'll revisit it.
A year in public, what we'd do again
Three decisions that compounded over a year of building PursuitAgent in public, one we'd reverse, and one we're still uncertain about. Honest operator notes on what worked and what didn't.
State of Proposal Tools — Wave 2 2026
The second annual research drop. 45 vendors across four archetypes, updated capability matrices, pricing-pattern fractures, the grounded-AI taxonomy, and what changed vs. Wave 1 in August 2025.
The moat question, revisited
Opinion. A year in, where the durable advantage lives for a grounded-AI proposal product — and where it doesn't. Three candidate moats, two I believe in, one I don't.
State of Proposal Tools — Wave 2 preview
A preview of the Wave 2 annual research drop. What we know now that we didn't in August, what the category looks like heading into year two, and which five shifts we're going to document in the full release next Sunday.
A year of ingest pipeline, condensed
Forty changes to the ingest pipeline across a year of shipping. The five that actually mattered, the ones that didn't, and what the pattern says about where to spend the next year's ingest budget.
Reviews watch: what G2 and Capterra said in March
The monthly reviews aggregation. Two incumbents took notable hits on recent feature-release quality; one sub-category showed consistent positive movement. Links, quotes where they clarify the trend, and no speculation beyond the data.
Pricing at one year: what we're changing in Q2
The pricing experiment continues. Two new tier lines, one that's retiring, and the reasoning behind each move after a year of published pricing and real customer usage patterns.
Announcing versus educating, a product-comms pivot
Why we moved from launch-day announcements to continuous smaller posts about how the product works. The shape of the shift, what we gave up, and what we gained as a team.
A 2026 map of enterprise procurement platforms
Coupa, Ariba, Workday, Ivalua, GEP, and the newer entrants. Where AI shows up in each platform's RFP workflow, what's real, what's marketing, and what it means for vendors who have to respond through these systems.
Competing on groundedness, not features
Opinion. The AI proposal category is running a feature race. The only durable edge for an AI-native tool is whether its outputs are traceable — and that is not a feature you ship. It is a posture you hold.
Competitive moves in Q1 2026, a running log
What Loopio, Responsive, and AutogenAI announced in Q1 2026. What we shipped against them. Opinion piece — I'm the founder, this is my read on the market.
The 'we lost on price' excuse, decoded
What 'we lost on price' actually masks. An opinion piece, with evidence from the GAO debrief corpus and the patterns I've watched in our own win-loss data.
Win-loss intelligence starts on day one
Most teams collect the wrong signals after a bid, and the wrong signals compound. An opinion piece on what to capture from the moment the RFP lands, not the day after the award email.
A full-year retrospective on shipping grounded AI
Twelve months of evidence on the grounded-AI thesis. The Stanford hallucination number measured against our corpus, four failure modes and which ones we closed, what changed under the hood, and what I would tell Q1 Bo.
Reviews watch: what G2 and Capterra said in January
Monthly sweep of public review activity on Loopio, Responsive, QorusDocs, and Upland Qvidian. Q4 fallout shows up in January — renewal cycles, budget cuts, product regressions flagged in the wild.
How we're talking about PursuitAgent this quarter
A messaging refresh done in public. The four short lines we kept, the ones we dropped, the new framing around grounded retrieval as a contract, and the reasons for each change. Written at the end of the annual-planning cycle.
A composite customer conversation, and the backlog decision it would drive
A distillation of the conversations we have been having with mid-market proposal teams during product development. Composite, not a transcript. The shape of one discoverability problem and how it moves a decision in the backlog.
The roadmap bet we rejected
One feature customers asked for through most of year one that we declined to build, and the reasoning. A short founder note on saying no to the thing that would have been popular and wrong.
What we learned shipping year one
The post-mortem for twelve months of building PursuitAgent in public. Three things that worked, three things we got wrong, the counterfactuals we wish we had tested, and the bets we are making in year two.
Year in RFPs: 2025 — the data and the narrative
The canonical year-end synthesis. What moved in the RFP category in 2025, what did not, what the public data says about vendors and buyers, and three predictions for 2026 with the evidence behind them. 5,000 words, twenty-six sources.
An end-of-year letter to early customers
Short. What we shipped this year, what we did not, what we learned from you. No CTA, no pitch, no asking for anything. Just an accounting.
The 2025 proposal-tool hype cycle, mapped
Who peaked, who plateaued, who crashed, and who quietly held a line this year in the proposal-tools category. A data-grounded map of vendor posture across 2025, drawn from public reviews and pricing signals.
What 'compounding' means for proposal software
PursuitAgent's tagline is 'every RFP you win makes the next one easier.' What that actually requires: four mechanisms of compounding, why most AI tools aren't compounding tools, and questions to ask a vendor.
What we changed on the pricing page this quarter
Public pricing evolution. The two lines we added, the one we removed, and why the pricing page is a better honesty test than the docs.
Preview: what compounding means for proposal software
The thesis in 900 words. Every RFP you win makes the next one easier — what that actually requires, and why most proposal software is structurally incapable of it.
End-of-year product priorities, in public
What we're building in December and January, why those and not the alternatives. An honest account of the four bets we're making and the three we deliberately deferred.
AutogenAI one year later: follow-up on the August teardown
Revisiting the AutogenAI teardown from August. Three things that changed in their positioning and product, two that didn't, and one thing we got wrong the first time.
The SOC 2 attestation is not the end of the questionnaire
A newly attested SOC 2 Type II report does not stop the questionnaires. Buyers still ask the same 200 questions, and what that tells us about how enterprise trust is actually built.
DDQ fatigue is a security risk, not a productivity problem
Opinion. Rushing a 300-question security questionnaire at 11pm on a Thursday does not just cost time. It degrades real security posture, and the industry keeps framing it as a staffing issue.
Six months of the blog: what readers keep coming back to
Six months in, three posts keep getting shared, two flopped, and one surprised me. Notes from the founder on what the field-journal experiment is teaching us about what proposal practitioners actually want to read.
Loopio at ten: what a decade of reviews tells us
Reading 10 years of public Loopio reviews end-to-end. The trajectory of buyer sentiment from 2016 to 2025, what the product fixed, what it never did, and what the trajectory predicts for incumbent RFP tools generally.
Grounded AI is not a feature, it's a refusal
Opinion. The thing that makes grounded AI different from regular AI is what the system refuses to do — answer when retrieval is empty. Here's what we will not ship even when reviewers ask for it.
The overpriced document repository trap
An opinion piece on why most RFP tools end up unused. The reviews tell a consistent story across Loopio, Responsive, and Qvidian: teams pay for AI features and end up using a search box. We have a theory about why.
Naming the category: proposal intelligence
Why we coined 'proposal intelligence' instead of riding the existing labels — RFP software, response management, content automation. What the phrase has to earn, and what we're explicitly not claiming with it.
SME collaboration is a UI problem
An opinion piece. Why 48% of teams still cite SME wrangling as their #1 problem after five years of vendor promises — and why the answer is not another tool but a better surface for the SME.
What a Forrester Wave on proposal tools would need to evaluate
Forrester has not published a Wave specifically for proposal management. A criterion-by-criterion read of what such a Wave would need to measure — where the generic rubric fits real buyer behavior, where it lags.
State of Proposal Tools — Wave 1 2025
The annual benchmark. What customers say about the incumbents and the challengers, what's true in pricing, where the AI moment lands honestly, and what changes in 2025.
The word 'intelligence' and what it had better mean
Why PursuitAgent calls itself proposal intelligence and what the word obligates us to. A short post about a long word.
What a Magic Quadrant for proposal management would need to evaluate
There is no Gartner Magic Quadrant for proposal-management software. What a hypothetical MQ would need to measure — where the framework translates, where it would have to add new axes for grounded AI.
Feature parity is the wrong competitive goal
Chasing Loopio's feature list would kill us. Here's why we picked a different target — and the product we're building because of it.
Upland Qvidian reviews, five years in retrospect
Sentiment trajectory across 200+ public reviews of Upland Qvidian. Where reviewer language stayed consistent, where it shifted, and where the product stopped tracking the market.
Reviews watch: what G2 and Capterra said in July
Monthly aggregation of competitor review deltas and our own. What changed in July's review feeds across Loopio, Responsive, Qvidian, AutogenAI, and us.
The AutogenAI teardown: UK-origin RFP AI, two years in
What's public about AutogenAI: UK origin, generation-heavy stack, where they win in EU procurement, where the citation discipline is thin, and what we learned reading their materials.
Why we don't do autonomous proposal agents yet
An opinion piece. What an agentic drafting system would have to guarantee that retrieval doesn't, why we don't think the category is ready, and the work we'd want to see before changing our position.
Q1 — what we got wrong in ninety days
Ninety days into the public phase of the company. Five specific things we got wrong, what we changed, and what is still open. Written in the same spirit as the launch post — if I cannot say it on the blog, the discipline is theater.
Pricing in public, ninety days in
We posted real prices on the marketing site at launch. Ninety days later, here is what changed about the sales conversations, what surprised us, and what is still uncomfortable about it.
Stop announcing features, announce what changed for the reader
An announcement that names a feature is a press release. An announcement that names what changed in the reader's day is a useful one. A short field note on what we'll publish under 'shipped' from now on.
The Responsive teardown: what 'enterprise-grade' means
A feature-by-feature look at Responsive (formerly RFPIO) — content library, AI, workflow, reporting — using public review sites as the primary signal. Where they win, where they don't.
Content library vs. knowledge base is not semantics
The vendors call it a content library. We call it a knowledge base. The two words name two different products. Why I think the distinction is the most important one in this category.
Why we priced in public on day one
Public pricing in enterprise RFP software is a posture signal, not a conversion tactic. A founder's note on why we put numbers on the site before we had a sales team.
Legacy RFP UI is the moat — for the incumbents
Clunky enterprise software isn't a bug for the legacy RFP vendors. It's a switching cost. A founder's note on why the worst UX in B2B is also the most defensible — until something breaks the spell.
The Loopio teardown: what 1,700 customers are actually paying for
Loopio is the category's reference customer. Reviews, pricing signals, and product surface area, taken seriously. What it does well, what it doesn't, and what 1,700 customers are buying when they renew.
The Responsive pricing trail: what three years of leaks reveal
Responsive (formerly RFPIO) lists no prices. Three years of public data — job postings, G2 reviews, customer signals, and indirect references — let us describe the shape without inventing the numbers.
RFP software is a vocabulary problem
The terms vendors use — content library, AI suggestion, workflow automation — are doing too much work. Rename them and the failure modes get obvious. An opinion piece on why the category's marketing language is the bug.
Reviews watch: what G2 and Capterra said this week
Five reviews from G2 and Capterra worth reading if you're shopping the proposal-software category. Loopio, Responsive, QorusDocs, Upland Qvidian — the patterns that recur.
Pricing opacity as a market signal
When a software category prices entirely behind a sales call, the pricing strategy is the product strategy. Here's what 'contact sales' tells you about RFP software in 2025.
The day we stopped saying 'AI' and started saying 'retrieval'
A short note on a vocabulary switch we made internally — and the reason a one-word change settled three months of recurring product debates.
The RFP software category is broken in three specific ways
An opinionated walk through three concrete failure modes in the current RFP software category — generic AI, opaque pricing, and rotting libraries — with citations to the public reviews and research that back each one up.
Why we're writing this blog
This is a field journal on proposal work — the craft, the mechanics, and the grounded AI we're building to change how it gets done. Here's what we'll cover, who writes, and the rules we won't break.
See the proposal workflow
Take the 5-minute tour, then start a trial workspace when you're ready to run a real pursuit against your own source material.