Field notes

Reviews watch: what G2 and Capterra said in July

Monthly aggregation of competitor review deltas and our own. What changed in July's review feeds across Loopio, Responsive, Qvidian, QorusDocs, AutogenAI, and us.

The PursuitAgent research team · 4 min read · Research

Continuing the monthly review-watch from May. We sampled the G2 and Capterra reviews posted in July across the four incumbents, AutogenAI, and our own product, looking for theme shifts rather than counts. Counts are noisy at the per-month level; themes are more stable signals.
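
To make that concrete, here is a minimal sketch of the theme-over-count read: tag each sampled review with theme labels by hand, then compare which themes gain or lose share month over month instead of comparing raw review counts. The months, labels, and shares below are illustrative, not drawn from the actual sweep.

    # Toy theme-shift comparison. Data is illustrative, not real review data.
    from collections import Counter

    # Hand-assigned theme labels per sampled review, keyed by month.
    samples = {
        "2025-06": [{"ai_unreliable"}, {"ai_unreliable", "cost"}, {"export"}],
        "2025-07": [{"ai_unreliable"}, {"cost"}, {"cost", "teams_integration"}],
    }

    def theme_share(reviews):
        """Fraction of sampled reviews that mention each theme."""
        counts = Counter(theme for review in reviews for theme in review)
        return {theme: n / len(reviews) for theme, n in counts.items()}

    june = theme_share(samples["2025-06"])
    july = theme_share(samples["2025-07"])
    for theme in sorted(set(june) | set(july)):
        print(f"{theme:20s} {june.get(theme, 0):.2f} -> {july.get(theme, 0):.2f}")

A share that moves while the absolute count stays tiny is exactly the Loopio cost-framing situation described below.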

Loopio

The dominant July theme is consistent with the teardown observation: users describe the AI suggestion feature as functional on simple repeat questions and unreliable on nuanced content. Several July reviews repeated the phrase “the magic doesn’t work,” which has been recurring on the platform for over a year. One pattern that did shift in July: more reviewers mention cost specifically, framing Loopio as “expensive for what we get” rather than “expensive but worth it.” The cost framing is a leading indicator we’ll keep watching; the count this month is small, but the rhetorical shift is notable.

Two new positive themes: response-library export to PDF was named several times as a workflow improvement after a recent UI update, and the new Microsoft Teams integration drew positive comments from teams whose internal workflows live in Teams.

Responsive (formerly RFPIO)

July reviews on Responsive continue the search-quality complaint we tracked previously. Multiple reviewers describe the keyword-match search as returning loosely related results, requiring manual triage. One specific phrase recurred: “I’m searching for X and it shows me content about Y.” This is the same complaint shape Stanford HAI characterized at the system level in their grounded-AI study — the failure mode of retrieval that surfaces topically adjacent content rather than the specific source the user wanted.

Notable in July: a small but visible cluster of reviews mentioning the recent UI version rollout negatively. Phrasing in the shape of “less intuitive than the previous version” recurred across the cluster. We don’t have enough data points to characterize the rollout as troubled, but the signal exists.

Qvidian (Upland)

Qvidian’s July reviews continued the long-running themes our previous sweep covered: a dated UI, underwhelming AI performance, and slow load times. We don’t see meaningful change month-over-month. Capterra has a longer review history on Qvidian than G2 does; the longer-window read is an aging product whose reviewer sentiment has trended downward over multi-year horizons. We are running a longer-form analysis on this; see the upcoming Day 108 post on Qvidian’s five-year review trajectory.

QorusDocs

QorusDocs’s July reviews repeated two themes from earlier months: the “very slow” user experience complaint, and the dashboard-pursuit cap. The cap restricts the dashboard view to 10 active pursuits, which several reviewers identified as a blocker for proposal teams running 30+ concurrent bids. We did not see the integration-quality complaints that appeared in earlier months; those may have been addressed in a release we haven’t independently verified.

AutogenAI

AutogenAI’s review-platform footprint remains thin compared to the US incumbents. We saw a small number of new reviews in July, predominantly UK-based and predominantly positive on the long-form generation quality. The recurring caveat — that human review of generated content is required to catch fabrications — surfaced again. This is consistent with AutogenAI’s own public posture on hallucination risk and the architectural read in our recent teardown.

Our own reviews

We are early in the review-platform lifecycle, with a small enough review count that monthly numbers are noisy. The themes that emerged in July: the citation-verify button (shipped Q1) was named positively in a couple of reviews as a “we knew where every claim came from” workflow win, and the freshness alerter on stale KB blocks was cited in one review as having caught an outdated security-attestation answer before submission. We don’t have enough volume to claim a pattern. We are tracking the themes for when the count grows.

One critical theme also surfaced: a reviewer noted that diagram extraction sometimes produces D2 output that is structurally correct but visually different from the source rendering. We have a post on retrieval over diagrams that names this exact failure mode in the engineering description; the customer review surfaces the same observation from the user side. The fix path is queued for the visual-fidelity work we mentioned in the engineering post.

What we’ll watch in August

Three things across the incumbent pool. First, whether the Loopio cost-rhetoric shift continues. Second, whether the Responsive UI-rollout dissatisfaction concentrates or fades. Third, whether AutogenAI’s review volume starts to grow on US platforms — that would be a leading indicator that they are moving from EU-concentrated to broader-market presence.

The next reviews-watch lands at the end of August. The competitor-teardown series closed with AutogenAI, and the next category-data drop is the State of Proposal Tools — Wave 1 2025 benchmark at month-end.

Methodology

Reviews sampled on 2025-08-09 from the G2 and Capterra product pages linked in the Sources list below. We are not publishing individual review IDs; the sweep is directional, not a statistical sample. Phrase-recurrence observations above reflect shared vocabulary we see across multiple reviewers rather than counts drawn from a structured extraction.
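
For readers who want the mechanics of the phrase-recurrence read, a toy sketch: surface the word n-grams that appear in reviews from more than one reviewer. The strings and the n-gram length are illustrative; the actual sweep is a manual, directional read, not this script.

    # Toy shared-phrase finder. Strings are illustrative, not review data.
    import re
    from collections import defaultdict

    def ngrams(text, n=3):
        """Lowercased word n-grams from a snippet of review text."""
        words = re.findall(r"[a-z']+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    reviews = [
        "the magic doesn't work on nuanced questions",
        "for complex answers the magic doesn't work at all",
    ]

    seen = defaultdict(set)
    for idx, text in enumerate(reviews):
        for phrase in ngrams(text):
            seen[phrase].add(idx)

    # Phrases shared by more than one reviewer.
    shared = sorted(p for p, who in seen.items() if len(who) > 1)
    print(shared)  # ["magic doesn't work", "the magic doesn't"]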

Sources

  1. G2 — Loopio reviews
  2. Capterra — Loopio reviews
  3. G2 — Responsive (formerly RFPIO) reviews
  4. G2 — Upland Qvidian reviews
  5. Capterra — QorusDocs reviews
  6. PursuitAgent — Reviews weekly sweep, May
  7. PursuitAgent — Loopio teardown