We’ll build a hallucination detection system using Exa’s search capabilities to verify AI-generated claims. The system works in three steps:
- Extract claims from text
- Search for evidence using Exa
- Verify claims against evidence
Get Started
1. Prerequisites and installation
Install the required packages and set up your API keys; a combined setup sketch is shown below.
You’ll need both an Exa API key and an Anthropic API key to run this example. You can create an Anthropic API key in the Anthropic Console and get your Exa API key from the Exa dashboard.
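As a minimal sketch, assuming the example uses the exa-py SDK, LangGraph, and LangChain’s Anthropic integration (the exact package list and model name are assumptions and may differ):

```python
# pip install exa-py langgraph langchain-anthropic
import os

from exa_py import Exa
from langchain_anthropic import ChatAnthropic

# Both keys are read from environment variables set beforehand, e.g.:
#   export EXA_API_KEY="..."
#   export ANTHROPIC_API_KEY="..."
exa = Exa(os.environ["EXA_API_KEY"])

# ChatAnthropic picks up ANTHROPIC_API_KEY automatically; the model name here is an assumption.
llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
```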
2. Create the claim extractor
First, we’ll create functions to extract factual claims from the text:
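A sketch of what these functions might look like, reusing the `llm` client from the setup step; the prompt wording and JSON output format are assumptions:

```python
import json
import re

def extract_claims_with_llm(text: str) -> list[str]:
    """Ask the LLM to list discrete factual claims as a JSON array of strings."""
    prompt = (
        "Extract every verifiable factual claim from the text below. "
        "Respond with ONLY a JSON array of strings.\n\n"
        f"Text: {text}"
    )
    response = llm.invoke(prompt)
    try:
        claims = json.loads(response.content)
        if isinstance(claims, list):
            return [str(claim) for claim in claims]
    except json.JSONDecodeError:
        pass
    return []

def extract_claims_with_regex(text: str) -> list[str]:
    """Fallback: split the text into sentences and treat each one as a claim."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if s]

def extract_claims(text: str) -> list[str]:
    """Prefer the LLM extractor, falling back to regex if its output can't be parsed."""
    return extract_claims_with_llm(text) or extract_claims_with_regex(text)
```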
We include a regex-based fallback method in case the LLM response isn’t properly formatted. This ensures our system remains robust even if the LLM output is unexpected.
3. Set up Exa search
Create a function to search for evidence using Exa:
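A sketch of the search step using exa-py’s `search_and_contents` call; the number of results and the source formatting are assumptions:

```python
def search_for_evidence(claim: str, num_results: int = 3) -> list[str]:
    """Search Exa for sources relevant to a claim and return formatted source strings."""
    print(f"Searching for evidence: {claim}")
    response = exa.search_and_contents(claim, num_results=num_results, text=True)

    sources = []
    for result in response.results:
        print(f"  Found source: {result.url}")
        sources.append(f"URL: {result.url}\nContent: {result.text}")
    return sources
```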
We format each source with its URL and content for easy reference in the verification step. The print statements help with debugging and understanding the search process.
4. Create the claim verifier
Build a function to analyze the evidence and assess each claim:
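A sketch of the verifier; the verdict labels and the JSON response format are assumptions:

```python
def verify_claim(claim: str, sources: list[str]) -> dict:
    """Ask the LLM whether the sources support or refute the claim."""
    if not sources:
        return {"claim": claim, "assessment": "Insufficient information",
                "summary": "No sources were found for this claim."}

    prompt = (
        "Assess the claim below against the provided sources.\n\n"
        f"Claim: {claim}\n\nSources:\n" + "\n\n".join(sources) + "\n\n"
        'Respond with ONLY a JSON object: {"assessment": "True" | "False" | '
        '"Insufficient information", "summary": "<one-sentence explanation>"}'
    )
    try:
        response = llm.invoke(prompt)
        verdict = json.loads(response.content)
        return {"claim": claim,
                "assessment": verdict.get("assessment", "Insufficient information"),
                "summary": verdict.get("summary", "")}
    except Exception:
        # Any parsing or API issue falls back to a conservative default.
        return {"claim": claim, "assessment": "Insufficient information",
                "summary": "Could not parse the model response."}
```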
The verifier includes robust error handling and defaults to “Insufficient information” if there are issues with the LLM response or source processing.
5. Create the workflow
Set up the LangGraph workflow to orchestrate the process:
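A sketch of the graph wiring with LangGraph’s `StateGraph`, following the extract → search → verify flow described above; the state schema and node names are assumptions:

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph

class FactCheckState(TypedDict, total=False):
    text: str
    claims: list[str]
    evidence: dict[str, list[str]]  # claim -> formatted sources
    verdicts: list[dict]

def extract_node(state: FactCheckState) -> dict:
    return {"claims": extract_claims(state["text"])}

def search_node(state: FactCheckState) -> dict:
    return {"evidence": {claim: search_for_evidence(claim) for claim in state["claims"]}}

def verify_node(state: FactCheckState) -> dict:
    return {"verdicts": [verify_claim(claim, state["evidence"][claim])
                         for claim in state["claims"]]}

graph = StateGraph(FactCheckState)
graph.add_node("extract_claims", extract_node)
graph.add_node("search_evidence", search_node)
graph.add_node("verify_claims", verify_node)
graph.set_entry_point("extract_claims")
graph.add_edge("extract_claims", "search_evidence")
graph.add_edge("search_evidence", "verify_claims")
graph.add_edge("verify_claims", END)
workflow = graph.compile()
```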
6. Test the system
Let’s try it with a sample text about the Eiffel Tower:
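A test driver along these lines; the sample text here is an illustrative stand-in in which the structure and location claims are accurate while the construction date and purpose are deliberately wrong:

```python
sample_text = (
    "The Eiffel Tower is a wrought-iron lattice tower located on the "
    "Champ de Mars in Paris. It was built in 1925 to serve as a radio "
    "broadcasting antenna."
)

result = workflow.invoke({"text": sample_text})

# Print a verdict for each extracted claim.
for verdict in result["verdicts"]:
    print(f"Claim: {verdict['claim']}")
    print(f"Assessment: {verdict['assessment']}")
    print(f"Summary: {verdict['summary']}\n")
```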
Through this combination of Exa’s search capabilities and LangGraph’s workflow management, we’ve created a powerful system for identifying and verifying claims in any text. The system successfully identified both true claims (structure and location) and false claims (construction date and purpose) about the Eiffel Tower.