ContextualRecallMetric
We just released a new evals API called Scorers, with a more ergonomic API, more metadata stored for error analysis, and more flexibility in the data structures you can evaluate. It's fairly simple to migrate, but we will continue to support the existing Evals API.
The `ContextualRecallMetric` class evaluates how effectively an LLM's response incorporates all relevant information from the provided context. It measures whether important information from the reference documents was successfully included in the response, focusing on completeness rather than precision.
Basic Usage
```typescript
import { openai } from "@ai-sdk/openai";
import { ContextualRecallMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new ContextualRecallMetric(model, {
  context: [
    "Product features: cloud synchronization capability",
    "Offline mode available for all users",
    "Supports multiple devices simultaneously",
    "End-to-end encryption for all data",
  ],
});

const result = await metric.measure(
  "What are the key features of the product?",
  "The product includes cloud sync, offline mode, and multi-device support.",
);

console.log(result.score); // Score from 0-1
```
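The returned object also includes an `info.reason` field explaining the score. A hypothetical output for the call above (the exact reason wording varies by model):

```typescript
// Hypothetical output for the call above; the reason text varies by model.
// {
//   score: 0.75, // 3 of the 4 context items were recalled
//   info: {
//     reason: "The score is 0.75 because the response includes cloud sync,
//     offline mode, and multi-device support, but omits end-to-end encryption."
//   }
// }
```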
Constructor Parameters

- `model`: The language model used to evaluate recall (for example, a model from `@ai-sdk/openai`).
- `options`: `ContextualRecallMetricOptions`
  - `scale?`: Maximum score value (default: `1`).
  - `context`: Array of reference document strings to check the response against.

measure() Parameters

- `input`: The original query or prompt given to the LLM.
- `output`: The LLM's response to evaluate.

Returns

- `score`: Recall score from `0` to `scale` (default 0-1).
- `info`: Object with details about the score.
  - `reason`: Explanation of why the score was assigned.
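Put together, here is a rough TypeScript sketch of the shapes described above, inferred from the examples on this page rather than copied from the library's actual type declarations:

```typescript
// Rough sketch of the shapes described above; inferred from this page's
// examples, not from @mastra/evals' actual type declarations.
interface ContextualRecallMetricOptions {
  context: string[]; // reference documents to check the response against
  scale?: number;    // maximum score value, defaults to 1
}

interface ContextualRecallResult {
  score: number;            // 0 to scale
  info: { reason: string }; // explanation of why the score was assigned
}
```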
Scoring Details
The metric evaluates recall by comparing the response content against the relevant context items.
Scoring Process
- Evaluates information recall:
  - Identifies relevant items in context
  - Tracks correctly recalled information
  - Measures completeness of recall
- Calculates recall score:
  - Counts correctly recalled items
  - Compares against total relevant items
  - Computes coverage ratio

Final score: `(correctly_recalled_items / total_relevant_items) * scale`
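To make the formula concrete, here is a minimal sketch of the final computation. In the actual metric, the evaluation model decides which context items were recalled; the counts here are hypothetical inputs standing in for that step:

```typescript
// Minimal sketch of the final score computation. The evaluation model
// normally classifies each context item as recalled or not; here those
// counts are passed in directly as hypothetical inputs.
function computeRecallScore(
  correctlyRecalledItems: number,
  totalRelevantItems: number,
  scale = 1, // default 0-1 scale
): number {
  if (totalRelevantItems === 0) return 0; // assumed guard against division by zero
  return (correctlyRecalledItems / totalRelevantItems) * scale;
}

// e.g. two of four security measures recalled on a 0-100 scale:
// computeRecallScore(2, 4, 100) === 50
```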
Score interpretation (0 to scale, default 0-1)
- 1.0: Perfect recall - all relevant information included
- 0.7-0.9: High recall - most relevant information included
- 0.4-0.6: Moderate recall - some relevant information missed
- 0.1-0.3: Low recall - significant information missed
- 0.0: No recall - no relevant information included
Example with Custom Configuration
```typescript
import { openai } from "@ai-sdk/openai";
import { ContextualRecallMetric } from "@mastra/evals/llm";

// Configure the model for evaluation
const model = openai("gpt-4o-mini");

const metric = new ContextualRecallMetric(model, {
  scale: 100, // Use 0-100 scale instead of 0-1
  context: [
    "All data is encrypted at rest and in transit",
    "Two-factor authentication (2FA) is mandatory",
    "Regular security audits are performed",
    "Incident response team available 24/7",
  ],
});

const result = await metric.measure(
  "Summarize the company's security measures",
  "The company implements encryption for data protection and requires 2FA for all users.",
);

// Example output:
// {
//   score: 50, // Only half of the security measures were mentioned
//   info: {
//     reason: "The score is 50 because only half of the security measures were mentioned
//     in the response. The response missed the regular security audits and incident
//     response team information."
//   }
// }
```