AI-powered user research and usability testing — unmoderated, moderated, and AI-moderated studies.
The right choice for design and product teams that want continuous, fast, AI-synthesized research without UserTesting's price tag. AI Moderator is genuinely novel, bridging unmoderated cost with moderated depth.
Last verified: April 2026
Sweet spot: a 10–100-person product team where design and research want to ship fast, frequent studies and the org has decided continuous research beats periodic agency engagements. Maze's pricing, AI synthesis, and Figma integration make it the path of least resistance for that workflow. AI Moderator is the genuinely new capability: if you've historically run unmoderated tests because moderated felt too expensive, AI Moderator may legitimately change what research is feasible.

Failure modes to know. Teams that treat Maze AI synthesis as finished research output produce shallow insights; the AI clusters themes well but doesn't replace researcher judgment on study design or interpretation. Panel-recruitment quality is real but variable for niche audiences; budget for your own recruiting if you need B2B specialists or regulated-industry participants. The Starter-to-Enterprise feature gap is large (no AI Moderator, no mobile testing, no interviews on Starter), so growing teams hit a pricing cliff that requires negotiation. Mobile testing is functional but feels secondary next to the web-prototype workflow.

What to pilot before signing. Run three real studies on the Free or Starter tier: one prototype test, one survey, one card sort. Have a non-researcher (PM or designer) author and analyze each. If the AI synthesis produces themes you'd defend in a stakeholder readout and the Figma integration removes friction, the value is obvious. If you find yourself manually re-coding every theme or fighting prototype imports, the workflow won't scale, and a more researcher-centric tool (Dovetail for synthesis, UserTesting for moderated) may fit better.
Maze is a continuous user-research platform that runs unmoderated usability tests, prototype tests, surveys, card sorts, tree tests, and (newer) AI-moderated interviews, all from one workspace. The core promise: research that used to take a UX agency two weeks and a five-figure budget now runs in two days from a self-serve tool, with AI handling synthesis. It pairs especially well with Figma, where Maze imports prototypes natively and tests them against real users.

Where Maze sits in the UX-research category: UserTesting is the incumbent enterprise platform with the deepest panel network and the highest cost; it owns moderated video research at scale. Lookback is the moderated-interview specialist with the best live-session UX. Maze is the unmoderated and AI-moderated leader, undercutting UserTesting on price and beating Lookback on automated test types. The platform's methodology bias is toward fast, frequent, lightweight studies rather than the deep, expensive moderated sessions UserTesting champions.

The 2025/2026 AI release defined the product's second act. Maze AI synthesizes test results into themes and recommended actions automatically, replacing a research-ops bottleneck that historically took designers half a day per study. AI Moderator (launched 2024, matured 2025) runs unmoderated interviews where an AI asks follow-up questions in real time based on participant answers, bridging the gap between "cheap unmoderated test" and "expensive moderated interview." Automated reports generate readouts in the Maze format with one click. The AI panel-recruitment improvements help match participants to study criteria more accurately.

Pricing has Free, Starter, Organization, and Enterprise tiers. Free covers basic prototype testing with 10 testers/month and one user, useful for solo designers validating fit. Starter is $99/month billed annually ($1,188/year) with 100 testers/month and core features.
Organization is custom-priced for larger teams (typical Vendr median around $12k/year for mid-market). Enterprise unlocks AI Moderator, mobile testing, interview studies, panel recruitment at volume, and the full security/compliance stack.
AI Moderator, mobile testing, and interview studies are gated to Enterprise; Starter and Organization users hit those limits frequently.
Maze AI synthesis is fast but produces generic themes on ambiguous data; treat it as a starting draft, not a finished research report.
Panel recruitment is solid for common personas (B2C consumers, general product users) but thinner for niche, B2B, or regulated audiences versus UserTesting's curated panel.
Mobile testing arrived later than the web product and still feels secondary in the UI.
AI features genuinely save time but should never replace researcher judgment on study design; the bottleneck shifts from synthesis to study quality.