AI search engine for peer-reviewed research — answers yes/no questions with a "consensus meter" across 200M+ studies.
The most accessible AI tool for grounding a question in actual peer-reviewed evidence. The Consensus Meter is gimmicky-sounding but genuinely useful for binary research questions.
Last verified: April 2026
Sweet spot: a non-academic professional (clinician, journalist, policy analyst, founder) who needs evidence-grounded answers but is not running a formal systematic review. The Premium tier at $11.99/mo is one of the best dollar-for-value AI subscriptions if you do this kind of research weekly.

Honest concerns: the Consensus Meter creates a false sense of finality on questions where the literature is genuinely contested or methodologically split. The product nudges you toward a clean answer when sometimes the right answer is "it depends on which studies you trust." For high-stakes decisions, treat Consensus as a starting point, not a verdict.

What to pilot: run 10 questions you already know the literature on through Consensus and judge whether the meter and citations match your own read. If it agrees on 8 of 10 and disagrees defensibly on the rest, it is a trustworthy tool for unfamiliar territory. If it confidently misrepresents areas you know well, calibrate accordingly and treat it as a citation finder rather than a synthesizer.
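The pilot above is essentially a calibration check, and it helps to make the scoring explicit. A minimal sketch, assuming you record your own verdict and the meter's verdict for each question (all pairs below are hypothetical placeholders, not real Consensus output):

```python
# Calibration check: compare the Consensus Meter's verdicts against your
# own reading on questions whose literature you already know.
# The trial data is made up for illustration; substitute your own.

def calibrate(trials):
    """trials: list of (your_verdict, consensus_verdict) pairs."""
    agree = sum(1 for mine, theirs in trials if mine == theirs)
    rate = agree / len(trials)
    if rate >= 0.8:
        return rate, "trustworthy for scoping unfamiliar territory"
    return rate, "use as a citation finder, not a synthesizer"

trials = [("yes", "yes"), ("yes", "yes"), ("no", "no"),
          ("no", "possibly"), ("yes", "yes"), ("no", "no"),
          ("possibly", "possibly"), ("yes", "yes"),
          ("no", "no"), ("yes", "no")]

rate, advice = calibrate(trials)
print(f"{rate:.0%} agreement: {advice}")  # → 80% agreement: trustworthy ...
```

The 80% threshold mirrors the "8 of 10" rule of thumb in the text; adjust it to your own risk tolerance.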
Consensus is an AI-powered search engine built specifically over the academic literature. It indexes more than 200 million peer-reviewed papers via the Semantic Scholar corpus and uses custom GPTs to extract findings, classify study types, and synthesize answers. The signature feature is the "Consensus Meter": for yes/no questions ("does intermittent fasting reduce blood pressure?", "is remote work linked to lower productivity?"), it shows what proportion of relevant studies say yes, no, or possibly, with each contributing study cited and linked.

The audience is broader than that of other academic-search tools. Researchers and clinicians use it for fast literature scoping. Policy analysts and journalists use it to check what the evidence base actually says before writing. Founders and product people use it when a marketing claim turns on a research question. The interface is consumer-grade, not academic, which is why it has spread well outside universities.

Beyond the meter, Consensus shows study-level summaries (population, intervention, finding), filters by sample size and study type (RCT, meta-analysis, observational), and links to the full paper. The free tier covers a meaningful number of searches per month. Premium is $11.99/mo for unlimited searches, GPT-4-class summaries, and the full filter set. Enterprise pricing is custom for institutions.

The biggest competitor is Elicit. Consensus optimizes for fast yes/no synthesis and broad-audience usability; Elicit optimizes for systematic-review workflow with structured data extraction. They overlap less than they appear to.
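Conceptually, the meter is a tally of per-study classifications into proportions. A toy sketch of that aggregation step (illustrative only; this is not Consensus's actual pipeline, data model, or API, and the study counts are invented):

```python
from collections import Counter

def consensus_meter(classifications):
    """Tally per-study answers ('yes' / 'no' / 'possibly') into the
    proportions a meter-style display would show. Illustrative only."""
    counts = Counter(classifications)
    total = sum(counts.values())
    return {label: counts.get(label, 0) / total
            for label in ("yes", "no", "possibly")}

# e.g. 7 supporting studies, 2 against, 1 mixed (made-up numbers)
studies = ["yes"] * 7 + ["no"] * 2 + ["possibly"]
print(consensus_meter(studies))  # → {'yes': 0.7, 'no': 0.2, 'possibly': 0.1}
```

Note what this framing hides: every study gets equal weight regardless of sample size or design, which is exactly why the limitations section below warns that the meter is not a true meta-analysis.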
Coverage is peer-reviewed only; preprints, gray literature, and conference papers are only partially covered. The Consensus Meter is computed from a sample of relevant studies, not a true meta-analysis, and can shift if you re-ask the question. Some fields (engineering, CS) have weaker coverage than biomedicine. The synthesis layer can over-confidently summarize a contested area, so always click through to the study-level evidence on important questions.