A quick-start guide on how to add Exa retrieval to agents in your CrewAI crew

What this doc covers

  • How to set up a crew, agents and a custom Exa-powered tool
  • Designing these in service of generating a newsletter based on Exa search results

Guide

1. Prerequisites and installation

Install the crewAI core, crewAI tools, and Exa Python SDK libraries:

pip install crewai 'crewai[tools]' exa_py

2. Defining a custom Exa-based tool in crewAI

In the following code, we set up a custom tool using the crewAI @tool decorator. Within the tool, we initialize the Exa class from the Exa Python SDK, make a request, and return a parsed result.

from crewai_tools import tool
from exa_py import Exa
import os

exa_api_key = os.getenv("EXA_API_KEY")

@tool("Exa search and get contents")
def search_and_get_contents_tool(question: str) -> str:
    """Tool using Exa's Python SDK to run semantic search and return result highlights."""

    exa = Exa(exa_api_key)

    response = exa.search_and_contents(
        question,
        type="neural",
        use_autoprompt=True,
        num_results=3,
        highlights=True
    )

    # Concatenate each result's title, URL, and highlights into tagged snippets for the agent to read
    parsed_result = ''.join(
        f'<Title id={idx}>{result.title}</Title>'
        f'<URL id={idx}>{result.url}</URL>'
        f'<Highlight id={idx}>{"".join(result.highlights)}</Highlight>'
        for idx, result in enumerate(response.results)
    )

    return parsed_result
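
If you want to sanity-check the underlying Exa call before handing the tool to an agent, you can run the same query directly with the SDK. The snippet below is a minimal standalone sketch (the example query string is arbitrary, and it assumes EXA_API_KEY is already set, as described below):

from exa_py import Exa
import os

# Run the same Exa query the tool makes and inspect the raw results
exa = Exa(os.getenv("EXA_API_KEY"))

response = exa.search_and_contents(
    "Latest AI research",   # example query; agents will supply their own
    type="neural",
    use_autoprompt=True,
    num_results=3,
    highlights=True
)

for idx, result in enumerate(response.results):
    print(idx, result.title, result.url)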

Make sure your API keys are set. For this demonstration, the environment variable names are OPENAI_API_KEY and EXA_API_KEY for the OpenAI and Exa keys, respectively.
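
One way to confirm both keys are available before running anything is a quick check in Python (illustrative only):

import os

# Illustrative check: both keys must be set for the code below to run
for key in ("OPENAI_API_KEY", "EXA_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"Missing environment variable: {key}")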


3. Setting up CrewAI agents and kicking off

First, import the relevant crewAI modules. We assign the tool defined above to exa_tools so it can be passed to our agents.

from crewai import Task, Crew, Agent

exa_tools = search_and_get_contents_tool

We then set up two agents: one that researches with Exa (using the custom tool defined above) and one that writes the newsletter (using an LLM), and place them in a crew together.

# Creating a senior researcher agent with memory and verbose mode
researcher = Agent(
  role='Researcher',
  goal='Get the latest research on {topic}',
  verbose=True,
  memory=True,
  backstory=(
    "Driven by curiosity, you're at the forefront of"
    "innovation, eager to explore and share knowledge that could change"
    "the world."
  ),
  tools=[exa_tools],
  allow_delegation=False
)

article_writer = Agent(
  role='Writer',
  goal='Write a great newsletter article on {topic}',
  verbose=True,
  memory=True,
  backstory=(
    "Driven by a love of writing and passion for"
    "innovation, you are eager to share knowledge with"
    "the world."
  ),
  tools=[exa_tools],
  allow_delegation=False
)

We define a task for each agent and then assemble the crew from all of the components above.

research_task = Task(
  description=(
    "Identify the latest research in {topic}."
    "Your final report should clearly articulate the key points,"
  ),
  expected_output='A comprehensive 3 paragraphs long report on the {topic}.',
  tools=[exa_tools],
  agent=researcher,
)

write_article = Task(
  description=(
    "Write a newsletter article on the latest research in {topic}."
    "Your article should be engaging, informative, and accurate."
    "The article should address the audience with a greeting to the newsletter audience \"Hi readers!\", plus a similar signoff"
  ),
  expected_output='A comprehensive 3 paragraphs long newsletter article on the {topic}.',
  agent=article_writer,
)

crew = Crew(
  agents=[researcher, article_writer],
  tasks=[research_task, write_article],
  memory=True,
  cache=True,
  max_rpm=100,
  share_crew=True
)

Lastly, we kick off the crew with an input that specifies the research topic to cover.

response = crew.kickoff(inputs={'topic': 'Latest AI research'})

print(response)

Output from the above code:

[... Prior output truncated ...]

> Finished chain.
Hi readers!

As we step into the promising arena of 2024, we bring you some of the most significant advancements in the field of AI research. The year witnessed a considerable focus on the development of AI agents and LLMs (Large Language Models). Adept, a frontrunner in the space, showcased an agent that can find apartments on Redfin, input information into Salesforce, and interact with Google Sheets using natural language. While there is no clear winner on the commercial front yet, this development promises a future where AI can perform tasks for us.

The year also saw a continued focus on LLMs, with efforts directed towards matching the text performance of GPT-4 with smaller models. An interesting outcome of these efforts was the Falcon 7B model, which matches the performance of the 8B PaLM model. This model, interestingly, uses 100% web data for pretraining. It's worth mentioning that LLMs were also used to generate imitation models, which mimic the style of upstream LLMs. One study found that these models are highly rated by crowd workers.

In the field of computer vision, there were numerous developments. One noteworthy mention is the ASSET paper that introduced an architecture capable of modifying an input high-resolution image according to a user's edits on its semantic segmentation map. This advancement points to the possibility of synthesizing interesting phenomena in scenes, which has the potential to revolutionize the way we interact with digital imagery.

As we continue to explore the ever-evolving landscape of AI, we hope to bring you more such exciting updates. Stay tuned and until next time, keep exploring!

Best,
[Your Name]

As you can see, the generated newsletter is enriched with context from our Exa search results!