A live demo that detects hallucinations in content using Exa’s search.
## Extracting claims

The `extract_claims` function uses an LLM (Anthropic’s, in this case) to identify distinct, verifiable statements in your input text and returns these claims as a JSON array of strings.
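A minimal sketch of what this step might look like, using the `anthropic` Python SDK. The prompt wording and model name are assumptions for illustration, not the demo’s actual code:

```python
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def extract_claims(text: str) -> list[str]:
    """Ask the LLM to pull distinct, verifiable claims out of the text."""
    prompt = (
        "Extract all distinct, verifiable factual claims from the text below. "
        "Respond with only a JSON array of strings, one claim per element.\n\n"
        f"Text: {text}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model; any capable model works
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # The model is instructed to return a bare JSON array of strings.
    return json.loads(response.content[0].text)
```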
## Searching for evidence

The `exa_search` function uses Exa search to find evidence for each extracted claim. For every claim, it retrieves the 5 most relevant sources, formats them with their URLs and content (`text`), and passes them to the next function for verification.
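A sketch of this step using the `exa_py` SDK; the exact formatting of the source string is an assumption:

```python
from exa_py import Exa

exa = Exa(api_key="YOUR_EXA_API_KEY")  # placeholder key

def exa_search(claim: str) -> str:
    """Retrieve the 5 most relevant sources for a claim and format them."""
    results = exa.search_and_contents(claim, num_results=5, text=True)
    # Bundle each source's URL and page text for the verification step.
    return "\n\n".join(f"Source: {r.url}\n{r.text}" for r in results.results)
```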
## Verifying claims

The `verify_claim` function checks each claim against the sources returned by `exa_search`. It uses an LLM to determine whether the sources support or refute the claim and returns a decision with a confidence score. If no sources are found, it returns “insufficient information”.
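A sketch of the verification step, reusing the `client` and `json` import from the first sketch; the prompt and output schema are assumptions:

```python
def verify_claim(claim: str, sources: str) -> dict:
    """Judge whether the retrieved sources support or refute the claim."""
    if not sources.strip():
        # No evidence retrieved: don't guess either way.
        return {"decision": "insufficient information", "confidence": 0}
    prompt = (
        "Given the sources below, decide whether the claim is supported or "
        "refuted. Respond with only a JSON object of the form "
        '{"decision": "supported" | "refuted", "confidence": 0-100}.\n\n'
        f"Claim: {claim}\n\nSources:\n{sources}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model, as above
        max_tokens=256,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.content[0].text)
```

Chaining the three functions then gives the full pipeline, along these lines:

```python
for claim in extract_claims(article_text):
    verdict = verify_claim(claim, exa_search(claim))
    print(claim, "->", verdict)
```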