SummarizationMetric
We recently released a new evals API called Scorers, which offers a more ergonomic API, stores more metadata for error analysis, and provides more flexibility in the data structures it can evaluate. Migration is fairly simple, and we will continue to support the existing Evals API.
The SummarizationMetric evaluates how well an LLM's summary captures the original text's content while maintaining factual accuracy. It combines two aspects: alignment (factual correctness) and coverage (inclusion of key information), taking the minimum of the two scores so that both qualities are required for a good summary.
Basic Usage
import { openai } from "@ai-sdk/openai";
import { SummarizationMetric } from "@mastra/evals/llm";
// Configure the model for evaluation
const model = openai("gpt-4o-mini");
const metric = new SummarizationMetric(model);
const result = await metric.measure(
"The company was founded in 1995 by John Smith. It started with 10 employees and grew to 500 by 2020. The company is based in Seattle.",
"Founded in 1995 by John Smith, the company grew from 10 to 500 employees by 2020.",
);
console.log(result.score); // Score from 0-1
console.log(result.info); // Object containing detailed metrics about the summary
Constructor Parameters
- model: LanguageModel. The model used to evaluate summaries
- options?: SummarizationMetricOptions. Configuration options for the metric
  - scale?: number. Maximum score value (default: 1)
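For instance, passing the optional scale changes the range of the returned score. A minimal sketch, assuming the same model setup as in Basic Usage:
import { openai } from "@ai-sdk/openai";
import { SummarizationMetric } from "@mastra/evals/llm";
// Score summaries on a 0-100 range instead of the default 0-1
const metric = new SummarizationMetric(openai("gpt-4o-mini"), { scale: 100 });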
measure() Parameters
- input: string. The original text to be summarized
- output: string. The generated summary to evaluate
Returns
- score: number. Summarization score (0 to scale, default 0-1)
- info: object. Detailed metrics about the summary
  - reason: string. Explanation of the score, covering both alignment and coverage
  - alignmentScore: number. Factual correctness of the summary (0 to scale)
  - coverageScore: number. Coverage of key information (0 to scale)
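A minimal sketch of reading these fields; originalText and summary are hypothetical stand-ins for your own data:
// Hypothetical inputs for illustration
const originalText = "...";
const summary = "...";
const { score, info } = await metric.measure(originalText, summary);
// Inspect the components behind the combined score
console.log(`Overall: ${score}`);
console.log(`Alignment: ${info.alignmentScore}, Coverage: ${info.coverageScore}`);
console.log(`Why: ${info.reason}`);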
Scoring Details
The metric evaluates summaries through two essential components:
- Alignment Score: Measures factual correctness
  - Extracts claims from the summary
  - Verifies each claim against the original text
  - Assigns "yes", "no", or "unsure" verdicts (see the sketch after this list)
- Coverage Score: Measures inclusion of key information
  - Generates key questions from the original text
  - Checks whether the summary answers these questions
  - Checks information inclusion and assesses comprehensiveness
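A minimal sketch of how the verdicts could be aggregated into an alignment score, assuming only fully supported ("yes") claims count; this is an illustration, not the library's actual implementation:
type Verdict = "yes" | "no" | "unsure";
// Illustrative aggregation: only "yes" verdicts count as supported claims
function alignmentScore(verdicts: Verdict[]): number {
  if (verdicts.length === 0) return 0;
  const supported = verdicts.filter((v) => v === "yes").length;
  return supported / verdicts.length;
}
alignmentScore(["yes", "yes", "no", "unsure"]); // 0.5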
Scoring Process
- Calculates the alignment score:
  - Extracts claims from the summary
  - Verifies them against the source text
  - Computes: supported_claims / total_claims
- Determines the coverage score:
  - Generates questions from the source
  - Checks the summary for answers
  - Evaluates completeness
  - Calculates: answerable_questions / total_questions
Final score: min(alignment_score, coverage_score) * scale
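A minimal sketch of that final combination; the helper is hypothetical, but the formula matches the one above:
// Hypothetical helper combining the two component scores
function finalScore(alignment: number, coverage: number, scale = 1): number {
  // Taking the minimum means a summary must be both accurate and complete
  return Math.min(alignment, coverage) * scale;
}
finalScore(0.5, 0.75); // 0.5, matching the Tesla example below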
Score interpretation (0 to scale, default 0-1)
- 1.0: Perfect summary - completely factual and covers all key information
- 0.7-0.9: Strong summary with minor omissions or slight inaccuracies
- 0.4-0.6: Moderate quality with significant gaps or inaccuracies
- 0.1-0.3: Poor summary with major omissions or factual errors
- 0.0: Invalid summary - either completely inaccurate or missing critical information
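These bands can be turned into labels when reporting results. A small sketch; interpretScore is a hypothetical helper, not part of @mastra/evals:
// Hypothetical helper: label a 0-1 score using the bands above
function interpretScore(score: number): string {
  if (score === 1.0) return "Perfect summary";
  if (score >= 0.7) return "Strong summary";
  if (score >= 0.4) return "Moderate quality";
  if (score >= 0.1) return "Poor summary";
  return "Invalid summary";
}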
Example with Analysis
import { openai } from "@ai-sdk/openai";
import { SummarizationMetric } from "@mastra/evals/llm";
// Configure the model for evaluation
const model = openai("gpt-4o-mini");
const metric = new SummarizationMetric(model);
const result = await metric.measure(
"The electric car company Tesla was founded in 2003 by Martin Eberhard and Marc Tarpenning. Elon Musk joined in 2004 as the largest investor and became CEO in 2008. The company's first car, the Roadster, was launched in 2008.",
"Tesla, founded by Elon Musk in 2003, revolutionized the electric car industry starting with the Roadster in 2008.",
);
// Example output:
// {
//   score: 0.5,
//   info: {
//     reason: "The score is 0.5 because while the coverage is good (0.75) - mentioning the
//       founding year, first car model, and launch date - the alignment score is lower (0.5)
//       due to incorrectly attributing the company's founding to Elon Musk instead of
//       Martin Eberhard and Marc Tarpenning. The final score takes the minimum of these two
//       scores to ensure both factual accuracy and coverage are necessary for a good summary.",
//     alignmentScore: 0.5,
//     coverageScore: 0.75,
//   }
// }