Hallucination Evaluation
We recently released Scorers, a new evals API with a more ergonomic interface, richer metadata for error analysis, and more flexibility in the data structures it can evaluate. Migration is straightforward, and we will continue to support the existing Evals API.
Use HallucinationMetric to evaluate whether the response contradicts any part of the provided context. The metric accepts a query and a response, and returns a score and an info object containing a reason.
Installation
npm install @mastra/evals
No hallucination example
In this example, the response is fully aligned with the provided context. All claims are factually correct and directly supported by the source material, resulting in a low hallucination score.
import { openai } from "@ai-sdk/openai";
import { HallucinationMetric } from "@mastra/evals/llm";

// context holds the source statements the response is checked against.
const metric = new HallucinationMetric(openai("gpt-4o-mini"), {
  context: [
    "The iPhone was first released in 2007.",
    "Steve Jobs unveiled it at Macworld.",
    "The original model had a 3.5-inch screen."
  ]
});

const query = "When was the first iPhone released?";
const response = "The iPhone was first released in 2007, when Steve Jobs unveiled it at Macworld. The original iPhone featured a 3.5-inch screen.";

// measure() returns a score and an info object explaining it.
const result = await metric.measure(query, response);

console.log(result);
No hallucination output
The response receives a score of 0 because there are no contradictions. Every statement is consistent with the context, and no new or fabricated information has been introduced.
{
score: 0,
info: {
reason: 'The score is 0 because none of the statements from the context were contradicted by the output.'
}
}
Mixed hallucination example
In this example, the response includes both accurate and inaccurate claims. Some details align with the context, while others directly contradict it—such as inflated numbers or incorrect locations. These contradictions increase the hallucination score.
import { openai } from "@ai-sdk/openai";
import { HallucinationMetric } from "@mastra/evals/llm";

const metric = new HallucinationMetric(openai("gpt-4o-mini"), {
  context: [
    "The first Star Wars movie was released in 1977.",
    "It was directed by George Lucas.",
    "The film earned $775 million worldwide.",
    "The movie was filmed in Tunisia and England."
  ]
});

const query = "Tell me about the first Star Wars movie.";

// The box-office figure and filming location contradict the context;
// the release year and director do not.
const response = "The first Star Wars movie came out in 1977 and was directed by George Lucas. It made over $1 billion at the box office and was filmed entirely in California.";

const result = await metric.measure(query, response);

console.log(result);
Mixed hallucination output
The metric assigns a mid-range score because parts of the response conflict with the context. While some facts are correct, others are inaccurate or fabricated, reducing overall reliability.
{
score: 0.5,
info: {
reason: 'The score is 0.5 because two out of four statements from the output were contradicted by claims in the context, indicating a balance of accurate and inaccurate information.'
}
}
Complete hallucination example
In this example, the response contradicts every key fact in the context. None of the claims can be verified, and all presented details are factually incorrect.
import { openai } from "@ai-sdk/openai";
import { HallucinationMetric } from "@mastra/evals/llm";

const metric = new HallucinationMetric(openai("gpt-4o-mini"), {
  context: [
    "The Wright brothers made their first flight in 1903.",
    "The flight lasted 12 seconds.",
    "It covered a distance of 120 feet."
  ]
});

const query = "When did the Wright brothers first fly?";

// Every claim here (year, duration, distance) contradicts the context.
const response = "The Wright brothers achieved their historic first flight in 1908. The flight lasted about 2 minutes and covered nearly a mile.";

const result = await metric.measure(query, response);

console.log(result);
Complete hallucination output
The metric assigns a score of 1 because every statement in the response conflicts with the context. The details are fabricated or inaccurate across the board.
{
score: 1,
info: {
reason: 'The score is 1.0 because all three statements from the output directly contradict the context: the first flight was in 1903, not 1908; it lasted 12 seconds, not about 2 minutes; and it covered 120 feet, not nearly a mile.'
}
}
Metric configuration
You can create a HallucinationMetric instance by providing a context array that represents the factual source material. You can also configure optional parameters such as scale to control the maximum score.
const metric = new HallucinationMetric(openai("gpt-4o-mini"), {
context: [""],
scale: 1
});
See HallucinationMetric for a full list of configuration options.
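As a brief sketch of the scale option: since scale sets the maximum score, raising it rescales the result rather than changing the judgment. The context statements below are purely illustrative.

import { openai } from "@ai-sdk/openai";
import { HallucinationMetric } from "@mastra/evals/llm";

// With scale: 10, scores range from 0 (no hallucination) to 10
// (complete hallucination) instead of the default 0 to 1.
const metric = new HallucinationMetric(openai("gpt-4o-mini"), {
  context: [
    "The Eiffel Tower was completed in 1889.",
    "It stands about 330 meters tall."
  ],
  scale: 10
});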
Understanding the results
HallucinationMetric returns a result in the following shape:
{
score: number,
info: {
reason: string
}
}
Hallucination score
A hallucination score between 0 and 1 (one way to act on these bands is sketched after the list):
- 0.0: No hallucination — all claims match the context.
- 0.3–0.4: Low hallucination — a few contradictions.
- 0.5–0.6: Mixed hallucination — several contradictions.
- 0.7–0.8: High hallucination — many contradictions.
- 0.9–1.0: Complete hallucination — most or all claims contradict the context.
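As a minimal sketch of acting on these bands, you might gate responses on the score. The cutoffs and handling below are illustrative assumptions, not part of the metric, and assume the default scale: 1 along with the metric, query, and response from the examples above.

const result = await metric.measure(query, response);

// Illustrative thresholds mirroring the bands above; tune for your use case.
if (result.score === 0) {
  console.log("Fully grounded response");
} else if (result.score < 0.5) {
  console.warn("Some contradictions:", result.info.reason);
} else {
  console.error("Unreliable response:", result.info.reason);
}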
Hallucination info
An explanation for the score, with details including:
- Which statements align or conflict with the context
- Severity and frequency of contradictions
- Degree of factual deviation
- Overall accuracy and trustworthiness of the response