Automate in-depth web research and receive structured JSON results with citations.
Your `instructions` are parsed by an LLM that decomposes the task into one or more research steps.
Results are returned either as structured JSON conforming to a schema you provide (via `output.schema`) or as a detailed markdown report.
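As an illustration, a structured-output schema might look like the sketch below. The field names are hypothetical, and this assumes `output.schema` accepts a standard JSON Schema object; check the endpoint reference for the exact requirements.

```python
# Hypothetical output schema for a research task. Assumes output.schema takes
# a standard JSON Schema object; the field names are illustrative only.
output_schema = {
    "type": "object",
    "properties": {
        "company": {"type": "string", "description": "Official company name"},
        "founded_year": {"type": "integer", "description": "Year the company was founded"},
        "funding_rounds": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "round": {"type": "string"},
                    "amount_usd": {"type": "number"},
                },
            },
        },
    },
    "required": ["company", "founded_year"],
}
```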
Every result includes a `citations` object mapping each root-level field to its supporting source passages, so you can surface inline sources to your users.
Creating a research task returns a `task_id`. You can poll the task until it completes or fails, or list all tasks to monitor progress in bulk.
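A minimal create-and-poll sketch is shown below. Only `instructions`, `output.schema` (wrapped here as `output`), `task_id`, the model names, and the poll-until-complete-or-failed pattern come from the description above; the endpoint path, the `x-api-key` header, and the response field names are assumptions for illustration.

```python
"""Sketch: create a research task and poll it until it finishes.

The endpoint path and response field names are assumptions for
illustration -- consult the endpoint reference for the exact contract.
"""
import os
import time

import requests

API_KEY = os.environ["EXA_API_KEY"]          # assumption: key supplied via env var
BASE_URL = "https://api.exa.ai/research/v1"  # assumption: illustrative endpoint path

headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}

# Create the task: natural-language instructions plus an optional output schema.
create_resp = requests.post(
    BASE_URL,
    headers=headers,
    json={
        "model": "exa-research",
        # Hypothetical research task, for illustration only.
        "instructions": "Identify the three largest battery manufacturers and their 2023 production capacity.",
        "output": {"schema": {"type": "object", "properties": {"summary": {"type": "string"}}}},
    },
    timeout=30,
)
create_resp.raise_for_status()
task_id = create_resp.json()["task_id"]  # assumption: field name in the create response

# Poll until the task is complete or failed.
while True:
    task = requests.get(f"{BASE_URL}/{task_id}", headers=headers, timeout=30).json()
    if task.get("status") in ("complete", "failed"):  # assumption: status field and values
        break
    time.sleep(5)

print(task.get("data"))       # structured results matching the schema
print(task.get("citations"))  # per-field supporting sources
```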
| Model | p50 Time (seconds) | p90 Time (seconds) |
| --- | --- | --- |
| exa-research | 20 | 40 |
| exa-research-pro | 60 | 90 |
| Operation | exa-research | exa-research-pro | Notes |
| --- | --- | --- | --- |
| Search | $5 / 1,000 searches | $5 / 1,000 searches | Each unique search query issued by the agent |
| Page read | $5 / 1,000 pages read | $10 / 1,000 pages read | One “page” = 1,000 tokens of web content read |
| Reasoning tokens | $5 / 1M tokens | $5 / 1M tokens | LLM tokens used for reasoning and synthesis |
For example, an `exa-research` task that performs 6 searches, reads 20 pages of content, and uses 1,000 reasoning tokens would cost: (6 × $0.005) + (20 × $0.005) + (1,000 × $0.000005) = $0.03 + $0.10 + $0.005 = $0.135.
For `exa-research-pro`, the same task would cost: (6 × $0.005) + (20 × $0.01) + (1,000 × $0.000005) = $0.03 + $0.20 + $0.005 = $0.235, since page reads are billed at $10 per 1,000.
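The per-operation rates above make it easy to estimate a task's cost programmatically. The helper below simply encodes the pricing table, so treat it as an illustrative convenience rather than an official calculator.

```python
# Per-unit rates derived from the pricing table above (dollars per unit).
RATES = {
    "exa-research":     {"search": 5 / 1_000, "page": 5 / 1_000,  "reasoning_token": 5 / 1_000_000},
    "exa-research-pro": {"search": 5 / 1_000, "page": 10 / 1_000, "reasoning_token": 5 / 1_000_000},
}

def estimate_cost(model: str, searches: int, pages: int, reasoning_tokens: int) -> float:
    """Estimate the dollar cost of a research task from its usage counts."""
    r = RATES[model]
    return (
        searches * r["search"]
        + pages * r["page"]
        + reasoning_tokens * r["reasoning_token"]
    )

print(round(estimate_cost("exa-research", 6, 20, 1_000), 3))      # 0.135
print(round(estimate_cost("exa-research-pro", 6, 20, 1_000), 3))  # 0.235
```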
Who is the Research API for?
How is this different from the /answer endpoint?
`/answer` is designed for single-shot Q&A. The Research API handles long-running, multi-step investigations and is suited to tasks that require complex reasoning over web data.

How long do tasks take?
What are best practices for writing instructions?
Are citations guaranteed?
Each root-level field in the `data` object has a corresponding array of citation URLs in the `citations` object.
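As a concrete illustration, the snippet below pairs each root-level field in `data` with its citation list. The response shape is an assumption based on the description above, not an exact payload.

```python
# Illustrative result shape: each root-level key in `data` has a matching
# list of supporting sources under the same key in `citations`.
task_result = {
    "data": {"company": "ExampleCorp", "founded_year": 2014},
    "citations": {
        "company": [{"url": "https://example.com/about", "snippet": "ExampleCorp is ..."}],
        "founded_year": [{"url": "https://example.com/history", "snippet": "Founded in 2014 ..."}],
    },
}

for field, value in task_result["data"].items():
    sources = [c["url"] for c in task_result["citations"].get(field, [])]
    print(f"{field} = {value!r}  (sources: {', '.join(sources) or 'none'})")
```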
How large can my output schema be?
What happens if my schema validation fails?