Hallucination Scorer
The createHallucinationScorer() function evaluates whether an LLM generates factually correct information by comparing its output against the provided context. This scorer measures hallucination by identifying direct contradictions between the context and the output.
Parameters
The createHallucinationScorer() function accepts a single options object with the following properties:
- model: The language model used to evaluate the output for hallucinations.
- scale: Maximum score value. The ratio of hallucinated statements is multiplied by this factor; defaults to 1.
- options.context: Array of context statements treated as the source of truth, as shown in the examples below.
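As a minimal construction sketch (the context string here is illustrative, and the scale default is taken from the parameter list above):
import { openai } from "@ai-sdk/openai";
import { createHallucinationScorer } from "@mastra/evals/scorers/llm";

const scorer = createHallucinationScorer({
  model: openai("gpt-4o-mini"),
  // scale: 1, // optional; assumed to default to 1 per the parameter list above
  options: {
    context: ["The Eiffel Tower was completed in 1889."], // illustrative context
  },
});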
This function returns an instance of the MastraScorer class. The .run() method accepts the same input as other scorers (see the MastraScorer reference), but the return value includes LLM-specific fields as documented below.
.run() Returns
- runId: The id of the scorer run.
- preprocessStepResult: Object containing the claims extracted from the output during the preprocess step.
- preprocessPrompt: The prompt sent to the LLM for the preprocess step.
- analyzeStepResult: Object containing the per-claim verdicts and reasons produced by the analyze step.
- analyzePrompt: The prompt sent to the LLM for the analyze step.
- score: Hallucination score between 0 and the configured scale (0–1 by default).
- reason: Human-readable explanation of the score.
- generateReasonPrompt: The prompt sent to the LLM for the generate-reason step.
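A short sketch of reading the most commonly used fields off a run result (the parameter type below is narrowed for illustration; the full shape is documented in the MastraScorer reference):
// Log the core fields from a scorer run result.
// The structural type is narrowed for illustration only.
function logScorerResult(result: { score: number; reason?: string; runId?: string }) {
  console.log(`run ${result.runId ?? "(no id)"} scored ${result.score}`);
  if (result.reason) console.log(`reason: ${result.reason}`);
}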
Scoring Details
The scorer evaluates hallucination through contradiction detection and unsupported claim analysis.
Scoring Process
- Analyzes factual content:
  - Extracts statements from context
  - Identifies numerical values and dates
  - Maps statement relationships
- Analyzes output for hallucinations:
  - Compares against context statements
  - Marks direct conflicts as hallucinations
  - Identifies unsupported claims as hallucinations
  - Evaluates numerical accuracy
  - Considers approximation context
- Calculates hallucination score:
  - Counts hallucinated statements (contradictions and unsupported claims)
  - Divides by total statements
  - Scales to configured range
Final score: (hallucinated_statements / total_statements) * scale
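A rough sketch of that arithmetic (the Verdict shape here is an assumption for illustration, not the scorer's internal type):
// Sketch of the scoring arithmetic described above.
type Verdict = { statement: string; verdict: "yes" | "no"; reason: string };

function calculateScore(verdicts: Verdict[], scale = 1): number {
  if (verdicts.length === 0) return 0; // empty outputs yield zero hallucinations
  const hallucinated = verdicts.filter((v) => v.verdict === "yes").length;
  return (hallucinated / verdicts.length) * scale;
}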
Important Considerations
- Claims not present in context are treated as hallucinations
- Subjective claims are hallucinations unless explicitly supported
- Speculative language (“might”, “possibly”) about facts IN context is allowed
- Speculative language about facts NOT in context is treated as hallucination
- Empty outputs result in zero hallucinations
- Numerical evaluation considers:
  - Scale-appropriate precision
  - Contextual approximations
  - Explicit precision indicators
Score interpretation
A hallucination score between 0 and 1 (with the default scale of 1):
- 0.0: No hallucination — all claims match the context.
- 0.3–0.4: Low hallucination — a few contradictions.
- 0.5–0.6: Mixed hallucination — several contradictions.
- 0.7–0.8: High hallucination — many contradictions.
- 0.9–1.0: Complete hallucination — most or all claims contradict the context.
Note: The score represents the degree of hallucination; lower scores indicate better factual alignment with the provided context.
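If you gate responses on this score in an evaluation pipeline, a simple threshold check works; the cutoff below is a hypothetical value to tune per application:
// Hypothetical gate: reject outputs whose hallucination score is too high.
const HALLUCINATION_THRESHOLD = 0.3; // assumption: tune for your use case

function passesHallucinationCheck(score: number): boolean {
  return score <= HALLUCINATION_THRESHOLD;
}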
Examples
No hallucination example
In this example, the response is fully aligned with the provided context. All claims are factually correct and directly supported by the source material, resulting in a low hallucination score.
import { openai } from "@ai-sdk/openai";
import { createHallucinationScorer } from "@mastra/evals/scorers/llm";
const scorer = createHallucinationScorer({
  model: openai("gpt-4o-mini"),
  options: {
    context: [
      "The iPhone was first released in 2007.",
      "Steve Jobs unveiled it at Macworld.",
      "The original model had a 3.5-inch screen."
    ]
  }
});
const query = "When was the first iPhone released?";
const response = "The iPhone was first released in 2007, when Steve Jobs unveiled it at Macworld. The original iPhone featured a 3.5-inch screen.";
const result = await scorer.run({
  input: [{ role: 'user', content: query }],
  output: { text: response },
});
console.log(result);
No hallucination output
The response receives a score of 0 because there are no contradictions. Every statement is consistent with the context, and no new or fabricated information has been introduced.
{
  score: 0,
  reason: 'The score is 0 because none of the statements from the context were contradicted by the output.'
}
Mixed hallucination example
In this example, the response includes both accurate and inaccurate claims. Some details align with the context, while others directly contradict it—such as inflated numbers or incorrect locations. These contradictions increase the hallucination score.
import { openai } from "@ai-sdk/openai";
import { createHallucinationScorer } from "@mastra/evals/scorers/llm";
const scorer = createHallucinationScorer({
  model: openai("gpt-4o-mini"),
  options: {
    context: [
      "The first Star Wars movie was released in 1977.",
      "It was directed by George Lucas.",
      "The film earned $775 million worldwide.",
      "The movie was filmed in Tunisia and England."
    ]
  }
});
const query = "Tell me about the first Star Wars movie.";
const response = "The first Star Wars movie came out in 1977 and was directed by George Lucas. It made over $1 billion at the box office and was filmed entirely in California.";
const result = await scorer.run({
  input: [{ role: 'user', content: query }],
  output: { text: response },
});
console.log(result);
Mixed hallucination output
The scorer assigns a mid-range score because parts of the response conflict with the context. While some facts are correct, others are inaccurate or fabricated, reducing overall reliability.
{
  score: 0.5,
  reason: 'The score is 0.5 because two out of four statements from the output were contradicted by claims in the context, indicating a balance of accurate and inaccurate information.'
}
Complete hallucination example
In this example, the response contradicts every key fact in the context. None of the claims can be verified, and all presented details are factually incorrect.
import { openai } from "@ai-sdk/openai";
import { createHallucinationScorer } from "@mastra/evals/scorers/llm";
const scorer = createHallucinationScorer({
  model: openai("gpt-4o-mini"),
  options: {
    context: [
      "The Wright brothers made their first flight in 1903.",
      "The flight lasted 12 seconds.",
      "It covered a distance of 120 feet."
    ]
  }
});
const query = "When did the Wright brothers first fly?";
const response = "The Wright brothers achieved their historic first flight in 1908. The flight lasted about 2 minutes and covered nearly a mile.";
const result = await scorer.run({
  input: [{ role: 'user', content: query }],
  output: { text: response },
});
console.log(result);
Complete hallucination output
The scorer assigns a score of 1 because every statement in the response conflicts with the context. The details are fabricated or inaccurate across the board.
{
  score: 1,
  reason: 'The score is 1.0 because all three statements from the output directly contradict the context: the first flight was in 1903, not 1908; it lasted 12 seconds, not about 2 minutes; and it covered 120 feet, not nearly a mile.'
}