Building a Hallucination Checker
Learn how to build an AI-powered system that identifies and verifies claims using Exa and LangGraph.
We’ll build a hallucination detection system using Exa’s search capabilities to verify AI-generated claims. The system works in three steps:
- Extract claims from text
- Search for evidence using Exa
- Verify claims against evidence
This combines retrieval-augmented generation (RAG) with LangGraph to fact-check AI outputs, reducing hallucinations by grounding claims in real-world data.
Get Started
Prerequisites and installation
Install the required packages:
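This walkthrough assumes the Exa Python SDK, LangGraph, and an OpenAI-backed chat model via langchain-openai; any LangChain-compatible LLM integration works the same way.

```bash
pip install exa-py langgraph langchain-openai
```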
Get your Exa API key
Set up your API keys:
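One common approach is to export the keys as environment variables; the variable names below follow the exa-py and langchain-openai conventions.

```python
import os

# Read by the Exa client and the OpenAI chat model respectively.
os.environ["EXA_API_KEY"] = "your-exa-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
```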
Create the claim extractor
First, we’ll create functions to extract factual claims from the text:
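Here is a minimal sketch, assuming an OpenAI chat model via langchain-openai; the exact prompt wording and JSON handling are illustrative choices, not requirements.

```python
import json

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)

def extract_claims(text: str) -> list[str]:
    """Ask the LLM for the discrete factual claims contained in the text."""
    prompt = (
        "Extract all distinct factual claims from the text below. "
        "Respond with ONLY a JSON array of claim strings.\n\n"
        f"Text: {text}"
    )
    response = llm.invoke(prompt)
    try:
        return json.loads(response.content)
    except json.JSONDecodeError:
        # If the model doesn't return clean JSON, treat the text as one claim.
        return [text]
```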
Set up Exa search
Create a function to search for evidence using Exa:
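A sketch using the exa-py client's `search_and_contents` call, which retrieves results together with their page text; the number of results and the source formatting below are illustrative.

```python
import os

from exa_py import Exa

exa = Exa(api_key=os.environ["EXA_API_KEY"])

def search_evidence(claim: str, num_results: int = 5) -> list[str]:
    """Retrieve web sources for a claim and format them for the verifier."""
    print(f"Searching for evidence: {claim}")
    response = exa.search_and_contents(claim, num_results=num_results, text=True)
    sources = []
    for result in response.results:
        print(f"  Found source: {result.url}")
        sources.append(f"URL: {result.url}\nContent: {result.text}")
    return sources
```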
We format each source with its URL and content for easy reference in the verification step. The print statements help with debugging and understanding the search process.
Create the claim verifier
Build a function to analyze the evidence and assess each claim:
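The sketch below reuses the `llm` and `json` objects from the extractor step; the three-way verdict and the fallback paths are the essential parts, while the prompt itself is illustrative.

```python
def verify_claim(claim: str, sources: list[str]) -> dict:
    """Judge a claim against retrieved sources, defaulting to a safe verdict."""
    if not sources:
        return {"claim": claim, "assessment": "Insufficient information",
                "reasoning": "No sources were found for this claim."}
    prompt = (
        "Based ONLY on the sources below, assess the claim as exactly one of "
        "'True', 'False', or 'Insufficient information'. Respond with ONLY a "
        "JSON object with keys 'assessment' and 'reasoning'.\n\n"
        f"Claim: {claim}\n\nSources:\n\n" + "\n\n".join(sources)
    )
    try:
        response = llm.invoke(prompt)
        verdict = json.loads(response.content)
        return {"claim": claim,
                "assessment": verdict["assessment"],
                "reasoning": verdict["reasoning"]}
    except (json.JSONDecodeError, KeyError, TypeError):
        # Malformed LLM output: fall back rather than guess.
        return {"claim": claim, "assessment": "Insufficient information",
                "reasoning": "The model response could not be parsed."}
```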
The verifier includes robust error handling and defaults to “Insufficient information” if there are issues with the LLM response or source processing.
Create the workflow
Set up the LangGraph workflow to orchestrate the process:
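A minimal two-node graph wiring together the functions defined above; the state schema and node names are illustrative.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class FactCheckState(TypedDict):
    text: str
    claims: list[str]
    verdicts: list[dict]

def extract_node(state: FactCheckState) -> dict:
    # Each node returns only the state keys it updates.
    return {"claims": extract_claims(state["text"])}

def verify_node(state: FactCheckState) -> dict:
    verdicts = []
    for claim in state["claims"]:
        sources = search_evidence(claim)
        verdicts.append(verify_claim(claim, sources))
    return {"verdicts": verdicts}

graph = StateGraph(FactCheckState)
graph.add_node("extract", extract_node)
graph.add_node("verify", verify_node)
graph.add_edge(START, "extract")
graph.add_edge("extract", "verify")
graph.add_edge("verify", END)
workflow = graph.compile()
```

Keeping extraction and verification as separate nodes keeps each step small and makes it straightforward to insert additional stages later, such as a deduplication pass over the extracted claims.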
Test the system
Let’s try it with a sample text about the Eiffel Tower:
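The input below is an illustrative example seeded with two true statements (structure and location) and two false ones (construction date and purpose):

```python
sample_text = (
    "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France. "
    "It was built in 1925 to serve as a radio broadcasting antenna."
)

result = workflow.invoke({"text": sample_text})
for verdict in result["verdicts"]:
    print(f"{verdict['assessment']}: {verdict['claim']}")
    print(f"  Reasoning: {verdict['reasoning']}")
```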
Sample output (illustrative; the exact claims, verdicts, and reasoning will vary with the model and the sources retrieved):
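```text
True: The Eiffel Tower is a wrought-iron lattice tower.
  Reasoning: Multiple sources describe the tower as a wrought-iron lattice structure.
True: The Eiffel Tower is located in Paris, France.
  Reasoning: Sources consistently place the tower on the Champ de Mars in Paris.
False: The Eiffel Tower was built in 1925.
  Reasoning: Sources state construction ran from 1887 to 1889, not 1925.
False: The Eiffel Tower was built to serve as a radio broadcasting antenna.
  Reasoning: Sources say it was built as the entrance arch to the 1889 World's Fair.
```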
Through this combination of Exa’s search capabilities and LangGraph’s workflow management, we’ve created a powerful system for identifying and verifying claims in any text. The system successfully identified both true claims (structure and location) and false claims (construction date and purpose) about the Eiffel Tower.