# SystemPromptScrubber

The `SystemPromptScrubber` is an **output processor** that detects and handles system prompts, instructions, and other internal details that could introduce security vulnerabilities if revealed in model output. It identifies several categories of system prompts and provides flexible strategies for handling them, including multiple redaction methods, so that sensitive information is properly sanitized.

## Usage example

```typescript
import { SystemPromptScrubber } from "@mastra/core/processors";

const processor = new SystemPromptScrubber({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  strategy: "redact",
  redactionMethod: "mask",
  includeDetections: true
});
```

## Constructor parameters

**options:** (`Options`): Configuration options for system prompt detection and handling

### Options

**model:** (`MastraModelConfig`): Model configuration for the detection agent

**strategy?:** (`'block' | 'warn' | 'filter' | 'redact'`): Strategy to apply when system prompts are detected: `'block'` rejects with an error, `'warn'` logs a warning but allows the content through, `'filter'` removes flagged messages, `'redact'` replaces them with redacted versions

**customPatterns?:** (`string[]`): Custom patterns (regex strings) used to detect system prompts

**includeDetections?:** (`boolean`): Whether to include detection details in warnings. Useful for debugging and monitoring

**instructions?:** (`string`): Custom instructions for the detection agent. If not provided, the default instructions are used

**redactionMethod?:** (`'mask' | 'placeholder' | 'remove'`): Redaction method for detected system prompts: `'mask'` replaces them with asterisks, `'placeholder'` replaces them with placeholder text, `'remove'` removes them entirely

**placeholderText?:** (`string`): Custom placeholder text used when `redactionMethod` is `'placeholder'`

## Returns

**id:** (`string`): Processor identifier, set to `'system-prompt-scrubber'`

**name?:** (`string`): Optional processor display name

**processOutputStream:** (`(args: { part: ChunkType; streamParts: ChunkType[]; state: Record; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise`): Processes streaming output parts to detect and handle system prompts during streaming

**processOutputResult:** (`(args: { messages: MastraDBMessage[]; abort: (reason?: string) => never }) => Promise`): Processes the final output result to detect and handle system prompts in non-streaming scenarios

## Extended usage example

When using `SystemPromptScrubber` as an output processor, it is recommended to combine it with `BatchPartsProcessor` to improve performance. `BatchPartsProcessor` batches stream chunks together before passing them to the scrubber, reducing the number of LLM calls required for detection.

```typescript
import { Agent } from "@mastra/core/agent";
import { BatchPartsProcessor, SystemPromptScrubber } from "@mastra/core/processors";

export const agent = new Agent({
  name: "scrubbed-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  outputProcessors: [
    // Batch stream parts first to reduce LLM calls
    new BatchPartsProcessor({
      batchSize: 10,
    }),
    // Then apply system prompt detection on batched content
    new SystemPromptScrubber({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      strategy: "redact",
      customPatterns: ["system prompt", "internal instructions"],
      includeDetections: true,
      redactionMethod: "placeholder",
      placeholderText: "[REDACTED]"
    }),
  ]
});
```

## Related

- [Guardrails](https://mastra.ai/docs/agents/guardrails)
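
## Streaming and non-streaming calls

The scrubber runs in both modes: `processOutputStream` inspects chunks as they are produced, while `processOutputResult` inspects the final messages of a non-streaming call. The sketch below is illustrative only; it assumes the `agent` defined in the extended example above and Mastra's standard `stream`/`generate` agent methods.

```typescript
// Streaming: batched chunks pass through processOutputStream as they arrive
const stream = await agent.stream("Summarize your internal setup.");
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Non-streaming: processOutputResult checks the final messages once
const result = await agent.generate("Summarize your internal setup.");
// With the configuration above, detected system prompt content is replaced with "[REDACTED]"
console.log(result.text);
```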