How It Works

The Research API is an asynchronous, multi-step pipeline that transforms open-ended questions into structured JSON answers backed by citations. You provide natural-language instructions (e.g. “Compare the hardware roadmaps of the top GPU manufacturers”) and an optional JSON Schema describing the output you want.

Under the hood, Exa agents perform multiple steps:

  1. Planning – Your natural-language instructions are parsed by an LLM that decomposes the task into one or more research steps.

  2. Searching – Specialized search agents issue semantic and keyword queries to Exa’s search engine, continuously expanding and refining the result set until they can fulfill the request.

  3. Reasoning & synthesis – Reasoning models combine facts across sources and return structured JSON (if you provide output.schema) or a detailed markdown report.

  4. Citations – The pipeline generates a citations object mapping each root-level field to its supporting source passages, so you can surface inline sources to your users.

Because tasks are asynchronous, you submit a request and immediately receive a task_id. You can poll the task until it completes or fails, or list all tasks to monitor progress in bulk.
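
A minimal submit-and-poll sketch with the Python SDK looks like this (it uses the same create_task and poll_task calls as the examples below; the instructions string is illustrative):

import os
from exa_py import Exa

exa = Exa(os.environ["EXA_API_KEY"])

# create_task returns immediately with a task id; the research runs asynchronously.
task = exa.research.create_task(
    model="exa-research",
    instructions="Summarize recent peer-reviewed findings on solid-state battery commercialization."
)
print(task.id)

# poll_task blocks until the task is complete (or has failed) and returns the result.
result = exa.research.poll_task(task.id)
print(result)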

Best Practices

  • Be explicit – Clear, scoped instructions lead to faster tasks and higher-quality answers.
  • Keep schemas small – 1-5 root fields is the sweet spot. If you need more, create multiple tasks.
  • Use enums – Tight schema constraints improve accuracy and reduce hallucinations (see the schema sketch below).
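
For example, constraining a field to a fixed vocabulary keeps the agent from inventing free-form labels. A minimal schema sketch (the field names and allowed values are illustrative, not part of the API):

schema = {
    "type": "object",
    "required": ["sentiment", "marketStage"],
    "properties": {
        # Each field may only take one of a small set of values,
        # which makes the output easy to validate downstream.
        "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
        "marketStage": {"type": "string", "enum": ["emerging", "growth", "mature", "declining"]}
    },
    "additionalProperties": False
}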

Models

The Research API offers two advanced agentic researcher models that break down your instructions, search the web, extract and reason over facts, and return structured answers with citations.

  • exa-research (default) adapts to the difficulty of the task, using more or less compute for individual steps. Recommended for most use cases.
  • exa-research-pro maximizes quality by using the highest reasoning capability for every step. Recommended for the most complex, multi-step research tasks.

Typical completion times:

Model              p50 Time (seconds)   p90 Time (seconds)
exa-research       20                   40
exa-research-pro   60                   90
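
Switching models is a single parameter on task creation. A sketch, assuming the same setup as the examples below (the instructions string is a placeholder):

import os
from exa_py import Exa

exa = Exa(os.environ["EXA_API_KEY"])

# exa-research-pro applies the highest reasoning capability at every step;
# reserve it for the most complex, multi-step tasks.
task = exa.research.create_task(
    model="exa-research-pro",
    instructions="Map the licensing deals signed by the five largest biotech companies in 2023."
)
result = exa.research.poll_task(task.id)
print(result)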

Pricing

The Research API uses variable, usage-based pricing: you are billed for the amount of searching, reading, and reasoning the research agent performs.

NOTE: You are ONLY charged for tasks that complete successfully.

Operation          exa-research            exa-research-pro         Notes
Search             $5 / 1,000 searches     $5 / 1,000 searches      Each unique search query issued by the agent
Page read          $5 / 1,000 pages read   $10 / 1,000 pages read   One “page” = 1,000 tokens from the web
Reasoning tokens   $5 / 1M tokens          $5 / 1M tokens           LLM tokens used for reasoning and synthesis

Example:
A research task with exa-research that performs 6 searches, reads 20 pages of content, and uses 1,000 reasoning tokens would cost:

\begin{array}{rl}
  & \$0.03 \text{ (6 searches × \$5/1{,}000)} \\
+ & \$0.10 \text{ (20 pages × \$5/1{,}000)} \\
+ & \$0.005 \text{ (1{,}000 reasoning tokens × \$5/1{,}000{,}000)} \\
\hline
  & \$0.135
\end{array}

For exa-research-pro, the same task would cost:

\begin{array}{rl}
  & \$0.03 \text{ (6 searches × \$5/1{,}000)} \\
+ & \$0.20 \text{ (20 pages × \$10/1{,}000)} \\
+ & \$0.005 \text{ (1{,}000 reasoning tokens × \$5/1{,}000{,}000)} \\
\hline
  & \$0.235
\end{array}
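
The same arithmetic as a small helper, using the rates from the table above (the function is purely illustrative and not part of the SDK):

# Per-unit rates in USD, taken from the pricing table above.
RATES = {
    "exa-research":     {"search": 5 / 1_000, "page": 5 / 1_000,  "token": 5 / 1_000_000},
    "exa-research-pro": {"search": 5 / 1_000, "page": 10 / 1_000, "token": 5 / 1_000_000},
}

def estimate_cost(model: str, searches: int, pages_read: int, reasoning_tokens: int) -> float:
    """Rough cost estimate (USD) for a successfully completed task."""
    r = RATES[model]
    return searches * r["search"] + pages_read * r["page"] + reasoning_tokens * r["token"]

print(estimate_cost("exa-research", 6, 20, 1_000))      # ≈ 0.135
print(estimate_cost("exa-research-pro", 6, 20, 1_000))  # ≈ 0.235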

Examples

1. Competitive Landscape Table

Goal: Compare the current flagship GPUs from NVIDIA, AMD, and Intel and extract pricing, TDP, and release date.

import os
from exa_py import Exa

exa = Exa(os.environ["EXA_API_KEY"])

instructions = "Compare the current flagship GPUs from NVIDIA, AMD and Intel. Return a table of model name, MSRP USD, TDP watts, and launch date. Include citations for each cell."
schema = {
    "type": "object",
    "required": ["gpus"],
    "properties": {
        "gpus": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["manufacturer", "model", "msrpUsd", "tdpWatts", "launchDate"],
                "properties": {
                    "manufacturer": {"type": "string"},
                    "model": {"type": "string"},
                    "msrpUsd": {"type": "number"},
                    "tdpWatts": {"type": "integer"},
                    "launchDate": {"type": "string"}
                }
            }
        }
    },
    "additionalProperties": False
}

task = exa.research.create_task(
    model="exa-research",
    instructions=instructions,
    output_schema=schema
)

# Poll until completion
result = exa.research.poll_task(task.id)
print(result)

2. Market Size Estimate

Goal: Estimate the total global market size (USD) for battery recycling in 2030 with a clear methodology.

import os
from exa_py import Exa

exa = Exa(os.environ["EXA_API_KEY"])

instructions = "Estimate the global market size for battery recycling in 2030. Provide reasoning steps and cite sources."
schema = {
    "type": "object",
    "required": ["estimateUsd", "methodology"],
    "properties": {
        "estimateUsd": {"type": "number"},
        "methodology": {"type": "string"}
    },
    "additionalProperties": False
}

task = exa.research.create_task(
    model="exa-research",
    instructions=instructions,
    output_schema=schema
)

# Poll until completion
result = exa.research.poll_task(task.id)
print(result)

3. Timeline of Key Events

Goal: Build a timeline of major OpenAI product releases from 2015 to 2023.

import os
from exa_py import Exa

exa = Exa(os.environ["EXA_API_KEY"])

instructions = "Create a chronological timeline (year, month, brief description) of major OpenAI product releases from 2015 to 2023."
schema = {
    "type": "object",
    "required": ["events"],
    "properties": {
        "events": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["date", "description"],
                "properties": {
                    "date": {"type": "string"},
                    "description": {"type": "string"}
                }
            }
        }
    },
    "additionalProperties": False
}

task = exa.research.create_task(
    model="exa-research",
    instructions=instructions,
    output_schema=schema
)

# Poll until completion
result = exa.research.poll_task(task.id)
print(result)

FAQs