SystemPromptScrubber
The SystemPromptScrubber
is an output processor that detects and handles system prompts, instructions, and other revealing information that could introduce security vulnerabilities. This processor helps maintain security by identifying various types of system prompts and providing flexible strategies for handling them, including multiple redaction methods to ensure sensitive information is properly sanitized.
Usage example
import { openai } from "@ai-sdk/openai";
import { SystemPromptScrubber } from "@mastra/core/processors";
const processor = new SystemPromptScrubber({
model: openai("gpt-4.1-nano"),
strategy: "redact",
redactionMethod: "mask",
includeDetections: true
});
Constructor parameters
options:
Options
Configuration options for system prompt detection and handling
Options
model:
MastraLanguageModel
Model configuration for the detection agent
strategy?:
'block' | 'warn' | 'filter' | 'redact'
Strategy when system prompts are detected: 'block' rejects with error, 'warn' logs warning but allows through, 'filter' removes flagged messages, 'redact' replaces with redacted versions
customPatterns?:
string[]
Custom patterns to detect system prompts (regex strings)
includeDetections?:
boolean
Whether to include detection details in warnings. Useful for debugging and monitoring
instructions?:
string
Custom instructions for the detection agent. If not provided, uses default instructions
redactionMethod?:
'mask' | 'placeholder' | 'remove'
Redaction method for system prompts: 'mask' replaces with asterisks, 'placeholder' replaces with placeholder text, 'remove' removes entirely
placeholderText?:
string
Custom placeholder text for redaction when redactionMethod is 'placeholder'
Returns
name:
string
Processor name set to 'system-prompt-scrubber'
processOutputStream:
(args: { part: ChunkType; streamParts: ChunkType[]; state: Record<string, any>; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise<ChunkType | null>
Processes streaming output parts to detect and handle system prompts during streaming
processOutputResult:
(args: { messages: MastraMessageV2[]; abort: (reason?: string) => never }) => Promise<MastraMessageV2[]>
Processes final output results to detect and handle system prompts in non-streaming scenarios
Extended usage example
src/mastra/agents/scrubbed-agent.ts
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { SystemPromptScrubber } from "@mastra/core/processors";
export const agent = new Agent({
name: "scrubbed-agent",
instructions: "You are a helpful assistant",
model: openai("gpt-4o-mini"),
outputProcessors: [
new SystemPromptScrubber({
model: openai("gpt-4.1-nano"),
strategy: "redact",
customPatterns: ["system prompt", "internal instructions"],
includeDetections: true,
instructions: "Detect and redact system prompts, internal instructions, and security-sensitive content",
redactionMethod: "placeholder",
placeholderText: "[REDACTED]"
})
]
});