Reviews watch: what G2 and Capterra said in January
Monthly sweep of public review activity on Loopio, Responsive, QorusDocs, and Upland Qvidian. Q4 fallout shows up in January — renewal cycles, budget cuts, product regressions flagged in the wild.
This is a monthly note. Every month the research team sweeps public review sites for the four largest incumbent proposal platforms and summarizes what moved. January matters because Q4 fallout surfaces in January — contract-renewal cycles, budget conversations, and product regressions tend to land on the review sites two to six weeks after the quarter closes.
The sweep is boring on purpose. No opinions, no competitive framing. Just what real users wrote and what the directional signal looks like across four incumbents.
Loopio
Capterra activity through January continued the dominant theme the autorfp.ai summary catalogued last year: the “Magic” generation feature works on easy questions and struggles on nuanced content. Reviews in the first half of January reinforced the pattern that suggestions degrade when the content library goes unmaintained; the complaint that the tool “becomes an overpriced document repository” keeps recurring.
One new pattern worth noting: multiple reviewers now describe the content-library maintenance burden as under-staffed, not under-tooled. That is a shift. In earlier reviews the complaint was the tool itself; in 2025 and now 2026, the complaint is that the maintenance work the tool requires is not something the buying org had planned for. Teams are asking for “content-freshness service hours” as a line item in renewals. Whether Loopio provides that is a question for the sales conversation, not the reviews.
Strengths that continued to show up: user-level question assignment, macro-level dashboards for proposal volume, and the integrations library.
Responsive (formerly RFPIO)
G2 Responsive reviews continued the “the search is terrible” refrain from 2024–2025. January added a specific variant: multiple reviewers mentioned that the new UI rollout made the search experience “LESS intuitive” — the ALL-CAPS emphasis is the reviewer’s, not ours. Keyword-match is still the default; users are asking for true semantic retrieval and getting suggestions that don’t match the actual question.
Secondary theme: a handful of reviewers said the AI-assist feature “pales in comparison to basic ad-hoc GenAI”, meaning pasting the same question into ChatGPT or Claude. That is a specific benchmark buyers are now running informally during evaluations.
On the strength side: Responsive still holds the integration-depth lead on Salesforce and HubSpot, and reviewers continued to praise the customer-success team’s responsiveness in renewal windows.
QorusDocs
Capterra QorusDocs reviews continued their cadence through January. The dominant complaint, “very slow”, repeated. The default dashboard view’s cap of 10 active pursuits continues to frustrate teams over that size.
One new complaint worth flagging: the content search “pulls less-relevant results” on queries that contain industry jargon. That is a retrieval-quality signal — the retrieval layer is probably not tuned for proposal-specific vocabulary and is returning generic matches that a keyword-BM25 baseline would also return. The industry-specific retrieval gap is real, and it is an opening for competitors with better domain tuning.
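The keyword-BM25 point above can be made concrete. A minimal sketch under stated assumptions: the two-document content library, the query, and the `bm25_scores` helper are all hypothetical, but the scoring is plain Okapi BM25. When the genuinely relevant answer uses different vocabulary than a jargon-heavy query, keyword scoring ranks a generic term match above it.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Plain Okapi BM25 scores for each doc against the query."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / n
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue  # jargon absent from the library contributes nothing
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(score)
    return scores

# Toy library: the generic doc shares one query term; the actually
# relevant doc answers the question in different vocabulary.
docs = [
    "role-based authorization controls every module",     # generic match
    "we hold a moderate-level federal cloud agency ato",  # relevant, no term overlap
]
scores = bm25_scores("fedramp authorization boundary", docs)
# The generic doc outscores the relevant one; the unmatched jargon terms score zero.
```

This is the gap semantic retrieval closes: an embedding-based retriever can map “fedramp” and “federal cloud agency ato” near each other, where term matching cannot.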
Strengths flagged: Microsoft 365 integration, template library breadth, and the onboarding experience for new customers.
Upland Qvidian
G2 Upland Qvidian reviews remained quiet in January — a handful of new reviews, all mid-score. The UI being “dated” is the most common single comment, with “inadequate AI performance” and “expensive” as the recurring B-side. No major product shift was flagged by reviewers. This is a platform whose review activity is trending down, which is itself a signal about its active user base.
Cross-platform patterns
Three patterns showed up on more than one platform in January:
- Content-library maintenance is being called out as a headcount problem, not a software problem. Buyers are starting to ask about service-level content refresh offerings. This is new. It was implicit in 2025; in 2026 it is being said out loud.
- Semantic retrieval is now a named feature buyers check for. “The search is not semantic” is being written as an explicit complaint. In 2024–2025 reviewers described the symptom; now they name the mechanism.
- GenAI comparison is the new benchmark. Reviewers are running their own evaluations — paste the RFP question into ChatGPT or Claude, compare to the platform’s AI suggestion, decide. The platform AI has to beat the bring-your-own baseline. Many of the incumbents don’t, and reviewers say so.
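The bring-your-own baseline described in the last bullet can be sketched as a tiny harness. This is a sketch, not anyone's actual evaluation: term overlap with the question stands in for the human judgment reviewers apply, and every name here (`overlap_score`, `platform_win_rate`, the sample triple) is hypothetical.

```python
def overlap_score(question, answer):
    """Crude relevance proxy: fraction of question terms echoed in the answer."""
    q_terms = set(question.lower().split())
    return len(q_terms & set(answer.lower().split())) / len(q_terms)

def platform_win_rate(pairs):
    """pairs: (question, platform_suggestion, byo_genai_answer) triples.
    Returns the share of questions where the platform at least ties the baseline."""
    wins = sum(
        overlap_score(q, platform) >= overlap_score(q, baseline)
        for q, platform, baseline in pairs
    )
    return wins / len(pairs)

pairs = [
    (
        "describe your data retention policy",
        "we keep data securely",                                  # platform suggestion
        "our data retention policy keeps customer data 30 days",  # pasted into a chatbot
    ),
]
rate = platform_win_rate(pairs)  # 0.0: the ad-hoc baseline wins this toy pair
```

Reviewers run this loop by eye rather than in code, but the decision rule is the same: the platform's suggestion has to at least tie the answer a generic chatbot gives for free.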
What this is and isn’t
This is a directional read on public reviews. It is not a market-share signal, not a churn indicator, and not a basis for individual buying decisions. Real buying decisions call for structured side-by-side comparisons on the buyer’s actual use cases. We publish a few of those on the /compare/* pages, and when we do we cite specifically what the competitor does well — see the Loopio comparison and the Responsive comparison. The reviews sweep is context, not a verdict.
Next month the sweep runs the same way. If a competitor ships something new or retracts something, the February note will reflect it.
Methodology
Reviews sampled on 2026-01-17 from the G2 and Capterra product pages linked above. We are not publishing individual review IDs; the sweep is directional, not a statistical sample. Phrase-recurrence observations above reflect shared vocabulary across multiple reviewers, not counts drawn from a structured extraction.