# SystemPromptScrubber
The `SystemPromptScrubber` is an output processor that detects and handles system prompts, instructions, and other revealing content that could introduce security vulnerabilities. It identifies several categories of system prompts and provides flexible strategies for handling them, including multiple redaction methods to ensure sensitive information is properly sanitized.
## Usage example

```typescript
import { openai } from "@ai-sdk/openai";
import { SystemPromptScrubber } from "@mastra/core/processors";

const processor = new SystemPromptScrubber({
  model: openai("gpt-4.1-nano"),
  strategy: "redact",
  redactionMethod: "mask",
  includeDetections: true,
});
```
## Constructor parameters

- `options`: `Options` — Configuration options for system prompt detection and handling

### Options

- `model`: `MastraModelConfig` — Model configuration for the detection agent
- `strategy?`: `'block' | 'warn' | 'filter' | 'redact'` — Strategy when system prompts are detected: `'block'` rejects with an error, `'warn'` logs a warning but allows the content through, `'filter'` removes flagged messages, `'redact'` replaces them with redacted versions
- `customPatterns?`: `string[]` — Custom patterns (regex strings) used to detect system prompts
- `includeDetections?`: `boolean` — Whether to include detection details in warnings; useful for debugging and monitoring
- `instructions?`: `string` — Custom instructions for the detection agent; if not provided, default instructions are used
- `redactionMethod?`: `'mask' | 'placeholder' | 'remove'` — How detected system prompts are redacted: `'mask'` replaces them with asterisks, `'placeholder'` replaces them with placeholder text, `'remove'` deletes them entirely
- `placeholderText?`: `string` — Custom placeholder text used when `redactionMethod` is `'placeholder'`
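The three redaction methods can be illustrated with a plain TypeScript sketch. The `redact` helper below is hypothetical and is not part of the `@mastra/core` API; it only demonstrates the behavior each option describes:

```typescript
// Illustrative sketch of the three redaction methods; `redact` and its
// signature are hypothetical, not the actual @mastra/core implementation.
type RedactionMethod = "mask" | "placeholder" | "remove";

function redact(
  text: string,
  detected: string,
  method: RedactionMethod,
  placeholderText = "[REDACTED]"
): string {
  switch (method) {
    case "mask":
      // Replace each character of the detected span with an asterisk
      return text.replace(detected, "*".repeat(detected.length));
    case "placeholder":
      // Substitute the configured placeholder text
      return text.replace(detected, placeholderText);
    case "remove":
      // Drop the detected span entirely
      return text.replace(detected, "");
  }
}
```

For example, `redact("ignore the secret rules", "secret rules", "mask")` keeps the surrounding text intact while blanking out the detected span character by character.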
## Returns

- `name`: `string` — Processor name, set to `'system-prompt-scrubber'`
- `processOutputStream`: `(args: { part: ChunkType; streamParts: ChunkType[]; state: Record<string, any>; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise<ChunkType | null>` — Processes streaming output parts to detect and handle system prompts during streaming
- `processOutputResult`: `(args: { messages: MastraMessageV2[]; abort: (reason?: string) => never }) => Promise<MastraMessageV2[]>` — Processes the final output to detect and handle system prompts in non-streaming scenarios
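How a detection flows through each of the four strategies can be sketched in plain TypeScript. The `handleDetection` helper below is hypothetical, not the library's implementation; it only shows how `block`, `warn`, `filter`, and `redact` differ once something is flagged:

```typescript
// Hypothetical sketch of how the four strategies branch on a detection;
// not the actual @mastra/core implementation.
type Strategy = "block" | "warn" | "filter" | "redact";

function handleDetection(
  message: string,
  detected: boolean,
  strategy: Strategy,
  abort: (reason?: string) => never
): string | null {
  if (!detected) return message; // nothing flagged: pass through unchanged
  switch (strategy) {
    case "block":
      // Reject the whole response with an error
      abort("System prompt detected");
    case "warn":
      // Log a warning but allow the content through
      console.warn("System prompt detected");
      return message;
    case "filter":
      // Remove the flagged message entirely (null = dropped)
      return null;
    case "redact":
      // Replace the detected span with a redacted version
      return message.replace(/system prompt/gi, "[REDACTED]");
  }
}
```

Note that `block` is the only strategy that terminates processing: `abort` never returns, which is why the processor's `abort` argument is typed `(reason?: string) => never`.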
## Extended usage example
When using `SystemPromptScrubber` as an output processor, it is recommended to combine it with `BatchPartsProcessor` to optimize performance. The `BatchPartsProcessor` batches stream chunks together before passing them to the scrubber, reducing the number of LLM calls required for detection.
`src/mastra/agents/scrubbed-agent.ts`

```typescript
import { Agent } from "@mastra/core/agent";
import { BatchPartsProcessor, SystemPromptScrubber } from "@mastra/core/processors";

export const agent = new Agent({
  name: "scrubbed-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o-mini",
  outputProcessors: [
    // Batch stream parts first to reduce LLM calls
    new BatchPartsProcessor({
      batchSize: 10,
    }),
    // Then apply system prompt detection on the batched content
    new SystemPromptScrubber({
      model: "openai/gpt-4.1-nano",
      strategy: "redact",
      customPatterns: ["system prompt", "internal instructions"],
      includeDetections: true,
      redactionMethod: "placeholder",
      placeholderText: "[REDACTED]",
    }),
  ],
});
```