Competitive moves in Q1 2026, a running log
What Loopio, Responsive, and AutogenAI announced in Q1 2026. What we shipped against them. Opinion piece — I'm the founder, this is my read on the market.
Every quarter I write a running log of what our three closest adjacent products shipped or announced, and what we shipped against each one. The point is not to trash-talk. The point is to keep a written record of the market I think we’re in, and to make myself go back a year later and check whether I read it right.
The Q4 2025 edition of this post followed the same structure. Q1 2026 is busier than I expected.
Loopio
Loopio announced a “grounded drafting” feature in January. The announcement framed it as RAG with citations, which is the marketing language the whole category now uses. The citation UI I’ve seen in a demo links each sentence to a content-library block. I couldn’t tell from the demo whether the citation is verified at generation time or matched post-hoc. Those are two different things, and they are different in the way that matters most to a compliance-heavy customer — see our grounded retrieval pillar for why.
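To make that distinction concrete, here is a toy sketch. This is my illustration, not Loopio's implementation, and the token-overlap check is a crude stand-in for a real entailment model. The shape of the difference is the point: post-hoc matching always finds a nearest block to cite, while generation-time verification can refuse.

```python
def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def match_post_hoc(sentence: str, blocks: list[str]) -> str:
    """Post-hoc matching: attach the nearest block as the citation.
    It always returns something, supported or not."""
    return max(blocks, key=lambda b: len(_tokens(sentence) & _tokens(b)))

def verify_at_generation(sentence: str, block: str, threshold: float = 0.6) -> bool:
    """Generation-time check: is the sentence actually covered by the
    block it cites? (A real system would use an entailment model;
    token overlap is a toy stand-in.)"""
    sent = _tokens(sentence)
    return len(sent & _tokens(block)) / max(len(sent), 1) >= threshold

blocks = [
    "Our platform is SOC 2 Type II certified as of 2024.",
    "Support is available 24/7 via email and chat.",
]
claim = "The platform holds ISO 27001 certification."

cited = match_post_hoc(claim, blocks)    # post-hoc: still picks a block
ok = verify_at_generation(claim, cited)  # verification: flags no support
```

The post-hoc path cites the SOC 2 block for an ISO 27001 claim; the verification path flags that the cited block doesn't actually support it. A demo UI can look identical in both cases.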
My read: Loopio is defending the installed base. They have the most customers in the category and the slowest migration velocity away, and the incentive structure is to preserve that with AI features that look competitive without changing what the content library actually does.
What we shipped against it: the claim-level verification pass in mid-January, which operates at a finer grain than sentence-level citation. A customer who cares about the difference is the kind of customer we’d like to win.
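For the curious, here is roughly what I mean by the granularity difference, again as a toy sketch: the claim splitter and support check below are naive stand-ins for the real pass, and all names are mine.

```python
def split_claims(sentence: str) -> list[str]:
    # Naive stand-in: treat "and"-joined clauses as separate claims.
    return [c.strip(" .") for c in sentence.split(" and ")]

def overlap(text: str, block: str) -> float:
    words = set(text.lower().strip(" .").split())
    return len(words & set(block.lower().split())) / max(len(words), 1)

block = "the service is soc 2 certified"
sentence = "The service is SOC 2 certified and HIPAA compliant."

# Sentence-level citation: one overall similarity check. It passes,
# so the whole sentence gets cited to the block.
sentence_cited = overlap(sentence, block) >= 0.5

# Claim-level verification: each claim must be covered on its own.
claim_checks = {c: overlap(c, block) >= 0.9 for c in split_claims(sentence)}
```

The sentence as a whole looks well supported, but the HIPAA claim inside it is not. Sentence-level citation blesses the whole thing; claim-level verification isolates the unsupported half.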
What Loopio does better: extraction at enterprise scale. Our compare page is honest about this. If you have 40,000 blocks and 300 users and a 10-year content library, Loopio’s tooling around that specific workload is battle-tested in a way ours isn’t yet.
Responsive
Responsive announced a re-brand of their “StrategicAI” features and a new assistant UI. Neither, as far as I could tell from the public materials, moves the underlying retrieval architecture. The new assistant looks like a chat surface over the existing content library plus the existing drafting flow.
My read: Responsive is in the middle of the category — they’re not fighting on installed base like Loopio and they’re not greenfield-AI like AutogenAI. The pressure is to ship AI features fast enough to keep existing customers from evaluating challengers. A new assistant surface is a reasonable move for that posture.
What we shipped against it: nothing specifically. Our comparison stands; their feature parity is strong on the proposal-management side and weaker on the retrieval-discipline side. Our pitch to a Responsive evaluator is the same as it was in Q4.
What Responsive does better: SME workflow. Their review routing is nicer than ours right now. We have it on the Q2 roadmap.
AutogenAI
AutogenAI is the most interesting one to watch. In January they shipped a “Signature” feature that claims to adapt tone to a specific writer’s style. I read the docs and watched a demo. The mechanism, as described publicly, is a small fine-tuned model trained on a user’s writing samples, applied as a post-processing pass after drafting.
My read: this is a good product decision and a risky grounding decision. Tone adaptation that happens after drafting can re-word a cited sentence in a way that the citation no longer supports. I don’t know from public materials whether the Signature pass re-runs verification. If it doesn’t, it’s a grounding hole; if it does, they’re doing two LLM passes per sentence and I’d like to see the latency numbers.
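Here is the failure mode in miniature. Everything below is my illustration: the "tone pass" is a hardcoded rewrite standing in for a model, and the support score is the same toy overlap heuristic, not anyone's production check.

```python
def support(sentence: str, block: str) -> float:
    """Toy support score: fraction of the sentence's words found in the
    cited block. A stand-in for a real entailment/verification check."""
    words = set(sentence.lower().strip(" .").split())
    return len(words & set(block.lower().split())) / max(len(words), 1)

block = "response times average under four hours during business hours"
draft = "Response times average under four hours during business hours."

# Hypothetical tone pass: punchier, but no longer what the block says.
toned = "We respond blazingly fast, usually within the hour."

before = support(draft, block)  # high: the citation holds
after = support(toned, block)   # low: a re-run verification should flag this
```

If the pipeline re-verifies after the tone pass, the drop in support gets caught; if it only verifies at draft time, the citation now points at a block that says something weaker than the toned sentence claims.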
Our AutogenAI teardown from last year noted that their copy-generation quality is high. That’s still true. The question for a buyer is whether that quality is worth the grounding tradeoff. For a sales-facing proposal, maybe. For a regulated customer, not yet.
What we shipped against it: nothing specifically in response to Signature. Our position is that the tone-adaptation problem is better solved at the KB block level (write the block in the voice you want; retrieve it) than at the generator level. That’s our thesis; it’s not self-evidently correct.
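A minimal sketch of that thesis, with illustrative names throughout: keep tone in the content library rather than the generator, by storing each block per voice and filtering retrieval on the voice the proposal calls for.

```python
# Illustrative KB: one topic, two hand-written voices. Grounding is
# preserved because retrieval returns a stored, reviewed block verbatim
# instead of rewriting a cited sentence after drafting.
blocks = [
    {"topic": "uptime", "voice": "formal",
     "text": "The platform maintains 99.9% availability."},
    {"topic": "uptime", "voice": "casual",
     "text": "We're up 99.9% of the time."},
]

def retrieve(topic: str, voice: str) -> str:
    return next(b["text"] for b in blocks
                if b["topic"] == topic and b["voice"] == voice)
```

The cost, of course, is that someone has to write and maintain each voice variant, which is exactly the work a generator-level tone pass promises to remove.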
What I got wrong last quarter
Q4’s version of this post predicted Loopio would sit on their AI story through the end of 2025 and ship incrementally. They shipped sooner than I expected: I thought April, and they shipped in January.
I predicted AutogenAI would stay in greenfield sales motion and not chase Responsive’s installed base. Signature is a move in the opposite direction; a tone-adaptation feature is aimed at teams that already have a writing voice they want to preserve, which is mostly existing proposal teams. Will revisit in April.
What we’re watching in Q2
Two things. First, whether any of the three ships per-claim verification at the level we think matters. None have as of this writing. Second, whether pricing moves — there’s been quiet chatter about the mid-market segment getting squeezed on seat licensing, and Loopio in particular has a complex pricing page. If anyone moves to transparent flat pricing, that’s news.
I’ll update this post in a week if anything substantive changes before then. Otherwise the next entry is late April.