AI research assistant for systematic reviews — extracts structured data from papers into spreadsheets at scale.
The serious systematic-review tool. If your job involves reading dozens of papers and tracking what they each said, Elicit replaces a week of work with an afternoon.
Last verified: April 2026
Sweet spot: a researcher or analyst whose actual job is to read papers and produce structured outputs from them. Pro at $49/mo is the right tier for most individuals doing this seriously; Plus runs out of credits fast on real reviews, and Team is overkill for solo work.

Honest concerns: the extraction layer is impressive but imperfect, and the failure mode (a confidently wrong number in a row of your spreadsheet) is exactly the kind of error that propagates into a published review unnoticed. Treat Elicit's output as a fast first pass that a human still verifies. The pricing also bites: a small lab of five people doing reviews lands on Team at $495/mo, which is non-trivial against academic budgets.

What to pilot: pick a recent review you or a colleague did manually and re-run it through Elicit with the same inclusion criteria. Compare which papers Elicit found that you missed, which papers it missed, and how accurate the extracted columns are against your hand-coded ones (the sketch below shows one way to score that comparison). If Elicit reaches roughly 90% agreement and surfaces papers you missed, it is a real productivity gain; below that, use the comparison to calibrate which extractions you can trust and which to always recheck.
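To make the pilot scoring concrete, here is a minimal Python sketch. It assumes both your hand-coded review and Elicit's results can be exported as CSVs keyed by DOI; the file names and column names are hypothetical placeholders for whatever your exports actually contain.

```python
# Minimal sketch of the pilot comparison: paper-level recall in both
# directions, plus per-column agreement on the overlapping papers.
# "manual_review.csv", "elicit_export.csv", and the column names below
# are assumptions, not Elicit's real export schema.
import csv

def load_rows(path: str, key: str = "doi") -> dict[str, dict[str, str]]:
    """Load a CSV into {doi: row} for easy cross-referencing."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key].strip().lower(): row for row in csv.DictReader(f)}

manual = load_rows("manual_review.csv")   # your hand-coded extraction
elicit = load_rows("elicit_export.csv")   # Elicit's extracted table

print(f"Papers Elicit found that you missed: {len(elicit.keys() - manual.keys())}")
print(f"Papers Elicit missed: {len(manual.keys() - elicit.keys())}")

overlap = manual.keys() & elicit.keys()
columns = ["study_design", "sample_size", "effect_size"]  # hypothetical names
for col in columns:
    agree = sum(
        1 for doi in overlap
        if manual[doi].get(col, "").strip().lower()
        == elicit[doi].get(col, "").strip().lower()
    )
    rate = agree / len(overlap) if overlap else 0.0
    flag = "OK" if rate >= 0.90 else "recheck this column by hand"
    print(f"{col}: {rate:.0%} agreement ({flag})")
```

Exact string matching is deliberately crude; for numeric columns like sample size you would want tolerant parsing (units, ranges, rounding) before comparing.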
Elicit is an AI research assistant aimed at the workflow of serious literature review. Where Consensus answers a single research question with a synthesized verdict, Elicit is built for the multi-day, multi-hundred-paper workflow: you describe the population, intervention, and outcome you care about, and Elicit retrieves relevant papers, screens them against your inclusion criteria, and extracts structured columns (study design, sample size, effect size, control group, key finding) into a spreadsheet.

That spreadsheet is the entire point. For systematic reviews, evidence syntheses, and any R&D where you are tracking what 50–500 papers actually said, Elicit replaces weeks of grad-student labor. You can add custom extraction columns ("did the authors disclose funding source?", "what dose was used?") and the model fills them across the whole table. Embedding-based "find similar" pulls in adjacent papers you would have missed.

Audience: academics doing real systematic reviews, R&D and pharma teams running evidence assessments, government and think-tank analysts producing rapid evidence summaries, and increasingly any team that just needs to read 100 papers and not lose track of what they said.

Pricing tiers reflect that audience:
- Starter: free, with limited searches and extractions.
- Plus: $12/mo, for individuals doing occasional reviews.
- Pro: $49/mo, for power users with much higher caps.
- Team: $99/seat/mo, with shared workspaces.
- Enterprise: custom, with on-prem deployment and procurement-grade contracts.

The Plus → Pro jump is the most common upgrade path; Team only makes sense once 3+ people are collaborating on the same review.
Extraction quality varies by field: well-structured biomedical papers extract cleanly, while humanities and qualitative work extract far less reliably. The model can hallucinate data into custom columns when the paper is ambiguous; treat extracted values as a draft to verify, not gospel (a spot-check routine is sketched below). Pricing scales fast beyond the Pro tier for small labs. PDF upload helps with paywalled papers, but you must respect publisher licensing.
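One way to operationalize "a draft to verify" is acceptance sampling: hand-check a small random sample per column against the source PDFs and only recheck the full column when the sample looks bad. The sketch below assumes an Elicit CSV export with hypothetical column names; the sample size and error tolerance are arbitrary starting points, not anything Elicit prescribes.

```python
# Acceptance-sampling sketch for verifying extracted columns.
# "elicit_export.csv" and the column names are illustrative assumptions.
import csv
import random

SAMPLE_SIZE = 10
MAX_TOLERATED_ERRORS = 1  # more errors than this => recheck the whole column

with open("elicit_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for col in ["sample_size", "effect_size", "funding_disclosed"]:
    sample = random.sample(rows, min(SAMPLE_SIZE, len(rows)))
    print(f"\nVerify column '{col}' against the source PDFs:")
    for row in sample:
        print(f"  {row.get('title', '?')[:60]}: {row.get(col, '')!r}")
    errors = int(input(f"How many of these {len(sample)} values were wrong? "))
    if errors > MAX_TOLERATED_ERRORS:
        print(f"-> {col}: error rate too high in the sample; recheck every row.")
    else:
        print(f"-> {col}: within tolerance; spot-checks suffice.")
```

The decision rule is deliberately conservative for columns that feed a published review; for exploratory scans you might tolerate a higher sampled error rate.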