Exa-powered Hallucination Detector
A live demo that detects hallucinations in content using Exa’s search.
We built a live hallucination detector that uses Exa to verify LLM-generated content. When you input text, the app breaks it into individual claims, searches for evidence to verify each one, and returns relevant sources with a verification confidence score.
A claim is a single, verifiable statement that can be proven true or false - like “The Eiffel Tower is in Paris” or “It was built in 1822.”
Click here to try it out.
This document explains the functions behind the three steps of the fact-checker:
- The LLM extracts verifiable claims from your text
- Exa searches for relevant sources for each claim
- The LLM evaluates each claim against its sources, returning whether or not it's true, along with a confidence score.
Function breakdown
Extracting claims
The extract_claims function uses an LLM (Anthropic's, in this case) to identify distinct, verifiable statements in your input text, returning these claims as a JSON array of strings.
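A minimal sketch of what this step might look like, assuming the Anthropic Python SDK; the model name and prompt wording are illustrative, not necessarily the demo's exact ones:

```python
# Sketch of the claim-extraction step (assumed Anthropic SDK usage).
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def extract_claims(text: str) -> list[str]:
    """Split the input text into distinct, verifiable claims."""
    prompt = (
        "Extract the distinct, verifiable factual claims from the text below. "
        "Return ONLY a JSON array of strings, one claim per element.\n\n"
        f"Text: {text}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model follows the instruction and returns bare JSON.
    return json.loads(response.content[0].text)
```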
Searching for evidence
The exa_search function uses Exa search to find evidence for each extracted claim. For every claim, it retrieves the 5 most relevant sources, formats them with their URLs and content (text), and passes them to the next function for verification.
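A rough sketch of this step using the exa_py SDK's search_and_contents call; only the URL and text fields are kept for brevity:

```python
# Sketch of the evidence-search step (assumed exa_py usage).
from exa_py import Exa

exa = Exa("YOUR_EXA_API_KEY")  # or load the key from the environment

def exa_search(claim: str) -> list[dict]:
    """Return the 5 most relevant sources for a claim, with their URLs and page text."""
    response = exa.search_and_contents(claim, num_results=5, text=True)
    return [{"url": result.url, "text": result.text} for result in response.results]
```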
Verifying claims
The verify_claim function checks each claim against the sources from exa_search. It uses an LLM to determine if the sources support or refute the claim and returns a decision with a confidence score. If no sources are found, it returns "insufficient information".
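A sketch of the verification step under the same assumptions as above; the output schema ("assessment"/"confidence") is illustrative rather than the demo's exact format:

```python
# Sketch of the verification step (assumed Anthropic SDK usage).
import json
import anthropic

client = anthropic.Anthropic()

def verify_claim(claim: str, sources: list[dict]) -> dict:
    """Judge a claim against its sources and return a decision with a confidence score."""
    if not sources:
        return {"assessment": "insufficient information", "confidence": 0.0}

    evidence = "\n\n".join(f"Source: {s['url']}\n{s['text'][:1000]}" for s in sources)
    prompt = (
        "Using only the sources below, decide whether the claim is True or False and "
        "give a confidence score between 0 and 1. Return ONLY JSON shaped like "
        '{"assessment": "True", "confidence": 0.95}.\n\n'
        f"Claim: {claim}\n\nSources:\n{evidence}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.content[0].text)
```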
Using LLMs to extract claims and verify them against Exa search sources is a simple way to detect hallucinations in content. If you'd like to recreate it, the full documentation for the script is here and the GitHub repo is here.
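To show how the pieces could fit together, here is a hypothetical driver that chains the three sketches above into one pipeline:

```python
# Hypothetical end-to-end driver combining the sketches above.
def check_text(text: str) -> list[dict]:
    """Extract claims, gather evidence for each, and verify them one by one."""
    report = []
    for claim in extract_claims(text):
        sources = exa_search(claim)
        verdict = verify_claim(claim, sources)
        report.append({"claim": claim, "sources": [s["url"] for s in sources], **verdict})
    return report

if __name__ == "__main__":
    for item in check_text("The Eiffel Tower is in Paris. It was built in 1822."):
        print(f"{item['claim']} -> {item['assessment']} ({item['confidence']:.0%})")
```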