Hallucination Scorer
The createHallucinationScorer() function evaluates whether an LLM generates factually correct information by comparing its output against the provided context. The scorer measures hallucination by identifying direct contradictions and unsupported claims in the output relative to that context.
Parameters
The createHallucinationScorer() function accepts a single options object with the following properties:
- model: The model used by the scorer to evaluate the output for hallucinations.
- scale: The multiplier applied to the final score. The score interpretation below assumes a scale of 1.
This function returns an instance of the MastraScorer class. The .run() method accepts the same input as other scorers (see the MastraScorer reference), but the return value includes LLM-specific fields as documented below.
.run() Returns
- runId: The id of the scorer run.
- preprocessStepResult: The output of the preprocess step.
- preprocessPrompt: The prompt sent to the LLM for the preprocess step.
- analyzeStepResult: The output of the analyze step.
- analyzePrompt: The prompt sent to the LLM for the analyze step.
- score: A number between 0 and scale representing the degree of hallucination.
- reason: A human-readable explanation of the score.
- generateReasonPrompt: The prompt sent to the LLM to generate the reason.
Scoring Details
The scorer evaluates hallucination through contradiction detection and unsupported claim analysis.
Scoring Process
- Analyzes factual content:
- Extracts statements from context
- Identifies numerical values and dates
- Maps statement relationships
- Analyzes output for hallucinations:
- Compares against context statements
- Marks direct conflicts as hallucinations
- Identifies unsupported claims as hallucinations
- Evaluates numerical accuracy
- Considers approximation context
- Calculates hallucination score:
- Counts hallucinated statements (contradictions and unsupported claims)
- Divides by total statements
- Scales to configured range
Final score: (hallucinated_statements / total_statements) * scale
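The calculation above can be sketched as a small function. The function name and signature are illustrative, not the scorer's internal API:

```typescript
// Sketch of the final score calculation described above.
// `scale` is assumed to default to 1, matching the 0–1 interpretation below.
function hallucinationScore(
  hallucinatedStatements: number,
  totalStatements: number,
  scale: number = 1,
): number {
  // An empty output produces zero statements and therefore zero hallucinations.
  if (totalStatements === 0) return 0;
  return (hallucinatedStatements / totalStatements) * scale;
}
```

For example, 2 hallucinated statements out of 4 total yields a score of 0.5 at the default scale.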
Important Considerations
- Claims not present in context are treated as hallucinations
- Subjective claims are hallucinations unless explicitly supported
- Speculative language ("might", "possibly") about facts IN context is allowed
- Speculative language about facts NOT in context is treated as hallucination
- Empty outputs result in zero hallucinations
- Numerical evaluation considers:
- Scale-appropriate precision
- Contextual approximations
- Explicit precision indicators
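One way to picture scale-appropriate numerical evaluation is a relative-tolerance comparison. This is a hypothetical sketch; the helper name and tolerance value are assumptions, not the scorer's actual implementation:

```typescript
// Hypothetical numeric comparison: a claimed value agrees with the context
// value when it falls within a relative tolerance. A looser tolerance suits
// contextual approximations ("about 1,000"); a tolerance of 0 suits figures
// stated with explicit precision.
function numbersAgree(
  contextValue: number,
  claimedValue: number,
  relativeTolerance: number = 0.05,
): boolean {
  if (contextValue === 0) return claimedValue === 0;
  return (
    Math.abs(claimedValue - contextValue) / Math.abs(contextValue) <=
    relativeTolerance
  );
}
```

Under this sketch, a claim of "roughly 1,000" against a context value of 980 agrees at a 5% tolerance, while the same claim fails when the context demands exact precision.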
Score interpretation
A hallucination score between 0 and 1:
- 0.0: No hallucination — all claims match the context.
- 0.3–0.4: Low hallucination — a few contradictions.
- 0.5–0.6: Mixed hallucination — several contradictions.
- 0.7–0.8: High hallucination — many contradictions.
- 0.9–1.0: Complete hallucination — most or all claims contradict the context.
Note: The score represents the degree of hallucination; lower scores indicate better factual alignment with the provided context.
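The bands above can be mapped to labels for reporting. The thresholds between bands are an illustrative reading of the ranges, not values defined by the scorer:

```typescript
// Illustrative mapping from a 0–1 hallucination score to the qualitative
// bands described above. Boundary choices for the gaps between ranges
// (e.g. 0.45) are assumptions.
function interpretScore(score: number): string {
  if (score === 0) return "No hallucination";
  if (score < 0.5) return "Low hallucination";
  if (score < 0.7) return "Mixed hallucination";
  if (score < 0.9) return "High hallucination";
  return "Complete hallucination";
}
```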
Example
Evaluate agent responses for hallucinations against provided context:
import { runEvals } from "@mastra/core/evals";
import { createHallucinationScorer } from "@mastra/evals/scorers/prebuilt";
import { myAgent } from "./agent";
// Context is typically populated from agent tool calls or RAG retrieval
const scorer = createHallucinationScorer({
model: "openai/gpt-4o",
});
const result = await runEvals({
data: [
{
input: "When was the first iPhone released?",
},
{
input: "Tell me about the original iPhone announcement.",
},
],
scorers: [scorer],
target: myAgent,
onItemComplete: ({ scorerResults }) => {
console.log({
score: scorerResults[scorer.id].score,
reason: scorerResults[scorer.id].reason,
});
},
});
console.log(result.scores);
For more details on runEvals, see the runEvals reference.
To add this scorer to an agent, see the Scorers overview guide.