Our own proposal process, in public
How PursuitAgent responds to its own inbound RFPs. The intake, the bid/no-bid, the writing, the gold team, and the parts of the product that don't help us yet because we haven't built them.
We get inbound RFPs. Mid-market SaaS companies, a few enterprise procurement teams kicking the tires on proposal-intelligence vendors, and the occasional federal subcontract opportunity routed through a partner. The volume is small relative to a Loopio or a Responsive — we are early. But we get enough that we run a real internal proposal process, and I think it is worth describing in public, including the parts where the product we make doesn’t help us yet.
This is a field note, not a playbook. The pattern works for us because we are small and because the team that builds the product is the same team that responds to RFPs about it. Bigger shops will not run it the same way.
Intake
Inbound RFPs route to a single inbox. I see them. We use PursuitAgent for ingest — the multi-doc upload pipeline we shipped in May handles the original PDF plus the addenda plus the attachments and produces an indexed record. The compliance matrix is auto-extracted (the autogen feature shipped in April). I read the intake with the matrix open.
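For readers who want the shape rather than the description: roughly what the indexed record amounts to by the time I open it. Field names here are illustrative, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class IntakeRecord:
    """Illustrative shape of the record intake hands me; not the product schema."""
    rfp_id: str
    source_docs: list[str]   # the original PDF plus addenda plus attachments
    matrix_rows: list[dict]  # the auto-extracted compliance matrix, one dict per requirement
    indexed_at: str          # when the multi-doc pipeline finished indexing
```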
This part works as designed. The intake-to-readable-record cycle takes about 20 minutes, including my own read-through.
Bid/no-bid
The decision is mine on small inbounds; on larger ones it is a quick call between me and our head of GTM. We use a simplified version of the scoring model I described in Stage 2 of the eight-stage pipeline — strategic fit, probability of win, cost to produce, opportunity cost, deal quality. Five dimensions, scored 1-5, with a written floor.
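For concreteness, here is a minimal sketch of what that score sheet amounts to. The dimension names come from the paragraph above; the equal weighting, the floor value, and the rule that every dimension carries a written sentence are assumptions for illustration, not our production configuration.

```python
from dataclasses import dataclass

# Dimension names from the post; weights are equal and the floor value is a
# placeholder -- neither is our production configuration.
DIMENSIONS = ("strategic_fit", "p_win", "cost_to_produce", "opportunity_cost", "deal_quality")
FLOOR = 3.0  # assumed: a mean score below this defaults to no-bid

@dataclass
class BidNoBid:
    scores: dict[str, int]     # each dimension scored 1-5, higher is better
    rationale: dict[str, str]  # the "written" part: one sentence per dimension

    def decide(self) -> str:
        missing = [d for d in DIMENSIONS if d not in self.scores or d not in self.rationale]
        if missing:
            raise ValueError(f"score and written rationale required for: {missing}")
        mean = sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
        return "bid" if mean >= FLOOR else "no-bid"
```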
We say no more often than the people I advise expect us to. Most of the inbound RFPs we receive are wired for an incumbent or are exploratory procurement that won’t produce an award this fiscal year. We respond to roughly 30% of inbound. The other 70% get a polite no with a short rationale routed back to the requester.
The product helps us at this stage but not as much as it could. We have not yet built a “bid/no-bid scoring assistant” surface that pulls capture data into the scoring conversation; that is on our roadmap but not yet shipped. Today we score in a Notion doc and reference it later. That is fine for our volume; it would not be fine at higher volume.
Capture
Capture is the smallest stage in our process because we are responding to mid-market opportunities where the buyer landscape is mostly visible from public information. I read the buyer’s website, the LinkedIn profiles of the named contacts in the RFP, and any prior RFPs from the same buyer if they are publicly searchable. Twenty to forty minutes. I write a one-page capture note that becomes the win-theme input for the response.
The product does not help here yet. Capture is the next major surface we are building — a structured capture-plan workspace tied to the buyer record — but it is not shipped. I do this work in a Google Doc.
Compliance matrix
The matrix is auto-generated at intake. I review it for completeness, sometimes adjust the section assignments, and assign an owner per row. Owners are usually one of three people on our team — me, our head of GTM, or our head of customer success. Engineering SMEs get pulled in as ticketed asks (the pattern Sarah covered in Part 2 of her SME series).
This works well. The matrix is the spine of the response. Every section gets written against the matrix; nothing ships unaddressed.
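In code terms, the invariant we hold the response to is simple. A rough sketch, with field names that are illustrative rather than the product's schema:

```python
from dataclasses import dataclass

@dataclass
class MatrixRow:
    requirement_id: str         # the RFP's own numbering
    requirement_text: str
    response_section: str       # where in our response it is addressed
    owner: str | None = None    # me, head of GTM, or head of CS; None means unassigned
    drafted: bool = False

def unaddressed(matrix: list[MatrixRow]) -> list[str]:
    """Rows that would violate 'nothing ships unaddressed': missing an owner or a draft."""
    return [r.requirement_id for r in matrix if r.owner is None or not r.drafted]
```

If unaddressed() returns anything in the final days before submit, that is the day's work.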
Draft
We draft inside the proposal builder. The drafter pulls from our own KB (which we have been maintaining for the product since we started shipping it). Every drafted sentence is grounded in a KB block and cites it inline. The verifier flags anything that fails citation density or numeric verification. The pattern is the one we describe on the Proposal Builder page.
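To make the verifier behavior concrete, here is a toy version of a citation-density check. The [KB-123] marker format and the threshold are stand-ins I am using for illustration; this is not the product's implementation.

```python
import re

def passes_citation_density(paragraph: str, min_ratio: float = 1.0) -> bool:
    """Toy check: what share of sentences carry an inline KB citation.
    The [KB-123] marker and the default ratio are illustrative only."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", paragraph.strip()) if s]
    if not sentences:
        return True
    cited = sum(1 for s in sentences if re.search(r"\[KB-\d+\]", s))
    return cited / len(sentences) >= min_ratio
```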
The honest assessment: about 70% of any given response is drafted by the engine and approved with light edits. About 20% is heavily edited. About 10% is rewritten by hand because the engine refused to draft it (no source) or because the topic is too new to have a KB block yet.
The 10% rewrite-by-hand is where the product doesn’t help yet. The drafter refuses cleanly when no source exists, which is the right behavior, but it leaves the writer staring at a blank section. We are working on a “draft-from-scratch” mode that opens a writing canvas with the candidate evidence and the rubric prefilled. Today the writer opens a separate doc and drafts cold.
Gold team
We run a gold team on every response. Three reviewers — me, head of GTM, and one external reviewer (usually a friend who runs proposal ops at a bigger shop and trades reviews with us). The rubric is the gold rubric Sarah covered in the color-team pillar — compliance, sourcing, cohesion. We run it async, with comments labeled blocker / major / minor / nit, the same scale we use for engineering code review.
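The severity scale doubles as a submit gate. A minimal sketch, assuming the rule is that open blockers stop submission and everything below blocker is a judgment call:

```python
from dataclasses import dataclass

SEVERITIES = ("blocker", "major", "minor", "nit")  # same scale we use for code review

@dataclass
class ReviewComment:
    reviewer: str
    severity: str
    text: str
    resolved: bool = False

def gate(comments: list[ReviewComment]) -> tuple[bool, dict[str, int]]:
    """Assumed gate: any open blocker stops submission; also return counts per severity."""
    counts = {s: 0 for s in SEVERITIES}
    for c in comments:
        counts[c.severity] = counts.get(c.severity, 0) + 1
    open_blockers = any(c.severity == "blocker" and not c.resolved for c in comments)
    return (not open_blockers, counts)
```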
This is the stage I would not skip on any response. The external reviewer in particular catches issues internal eyes miss. We pay our friends in the form of reciprocal reviews for their bids; that exchange has been valuable.
Submit
Mechanical. Whoever owns submission for the bid (usually GTM) runs the submission checklist, uploads to the portal, and confirms timestamps and receipts. We have not had a missed deadline or a portal-format failure on the bids we have submitted. That is the floor, not a victory; the floor is what most teams skip.
Post-mortem
We run the 20-minute retro the day after submit. Five questions, one owner per action item. The output goes back into our KB.
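The output is structured enough to route back automatically. A sketch of the shape, with field names that are illustrative; the five questions live in our template and are not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    description: str
    owner: str               # exactly one owner per item, per the retro rule

@dataclass
class RetroNote:
    bid_id: str
    answers: dict[str, str]  # one answer per retro question
    actions: list[ActionItem]

    def kb_blocks(self) -> list[dict]:
        """Assumed shape of the blocks routed back into the KB; field names are illustrative."""
        return [{"source": f"retro:{self.bid_id}", "question": q, "text": a}
                for q, a in self.answers.items()]
```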
Where we have shipped against this loop: post-mortem outputs feed the win-loss capture system we shipped in July. Where we haven’t: the loop from a specific buyer’s debrief into a per-buyer capture record is still manual. We see the road; we have not built it yet.
What this means
Two things.
First, the eat-our-own-cooking standard is the right one. We use the product on real bids; the parts that work, work; the parts that don’t, we feel directly and prioritize accordingly. Most of the gaps in our own process are the next features on our roadmap, and that is the right ordering.
Second, we are honest about the gaps. The capture surface is not built. The draft-from-scratch mode for non-KB-grounded sections is not built. The per-buyer capture record loop is not closed. None of this is a secret. If you are evaluating PursuitAgent, this is the floor you would be evaluating against; the ceiling is what we will ship in the next two quarters.
Operating in public has been a useful discipline. It forces us to write the gaps down, and writing the gaps down forces us to address them. The next field note in this thread will be a status update on which of the gaps named in this post have closed.