# PIIDetector

The `PIIDetector` is a **hybrid processor** that can be used for both input and output processing to detect and redact personally identifiable information (PII) for privacy compliance. This processor helps maintain privacy by identifying various types of PII and providing flexible strategies for handling them, including multiple redaction methods, to help ensure compliance with GDPR, CCPA, HIPAA, and other privacy regulations.

## Usage example

```typescript
import { PIIDetector } from "@mastra/core/processors";

const processor = new PIIDetector({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  threshold: 0.6,
  strategy: "redact",
  detectionTypes: ["email", "phone", "credit-card", "ssn"]
});
```

## Constructor parameters

**options:** (`Options`): Configuration options for PII detection and redaction

### Options

**model:** (`MastraModelConfig`): Model configuration for the detection agent

**detectionTypes?:** (`string[]`): PII types to detect. If not specified, uses default types

**threshold?:** (`number`): Confidence threshold for flagging (0-1). PII is flagged if any category score exceeds this threshold

**strategy?:** (`'block' | 'warn' | 'filter' | 'redact'`): Strategy when PII is detected: 'block' rejects with an error, 'warn' logs a warning but allows the message through, 'filter' removes flagged messages, 'redact' replaces PII with redacted versions

**redactionMethod?:** (`'mask' | 'hash' | 'remove' | 'placeholder'`): Redaction method for PII: 'mask' replaces with asterisks, 'hash' replaces with a SHA-256 hash, 'remove' removes the PII entirely, 'placeholder' replaces it with a type placeholder

**instructions?:** (`string`): Custom detection instructions for the agent. If not provided, uses default instructions based on the configured detection types

**includeDetections?:** (`boolean`): Whether to include detection details in logs. Useful for compliance auditing and debugging

**preserveFormat?:** (`boolean`): Whether to preserve PII format during redaction.
When true, maintains the original structure (e.g., `***-**-1234` for an SSN)

**providerOptions?:** (`ProviderOptions`): Provider-specific options passed to the internal detection agent. Use this to control model behavior, such as reasoning effort for thinking models (e.g., `{ openai: { reasoningEffort: 'low' } }`)

## Returns

**id:** (`string`): Processor identifier, set to `'pii-detector'`

**name?:** (`string`): Optional processor display name

**processInput:** (`(args: { messages: MastraDBMessage[]; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise`): Processes input messages to detect and redact PII before they are sent to the LLM

**processOutputStream:** (`(args: { part: ChunkType; streamParts: ChunkType[]; state: Record; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise`): Processes streaming output parts to detect and redact PII during streaming

## Extended usage example

### Input processing

```typescript
import { Agent } from "@mastra/core/agent";
import { PIIDetector } from "@mastra/core/processors";

export const agent = new Agent({
  name: "private-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  inputProcessors: [
    new PIIDetector({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      detectionTypes: ["email", "phone", "credit-card", "ssn"],
      threshold: 0.6,
      strategy: "redact",
      redactionMethod: "mask",
      instructions: "Detect and redact personally identifiable information while preserving message intent",
      includeDetections: true,
      preserveFormat: true
    })
  ]
});
```

### Output processing with batching

When using `PIIDetector` as an output processor, it is recommended to combine it with `BatchPartsProcessor` to optimize performance. The `BatchPartsProcessor` batches stream chunks together before passing them to the PII detector, reducing the number of LLM calls required for detection.
```typescript
import { Agent } from "@mastra/core/agent";
import { BatchPartsProcessor, PIIDetector } from "@mastra/core/processors";

export const agent = new Agent({
  name: "output-pii-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  outputProcessors: [
    // Batch stream parts first to reduce LLM calls
    new BatchPartsProcessor({
      batchSize: 10,
    }),
    // Then apply PII detection on the batched content
    new PIIDetector({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      strategy: "redact",
    })
  ]
});
```

## Related

- [Guardrails](https://mastra.ai/docs/agents/guardrails)
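To make the `threshold` semantics concrete, here is a small self-contained sketch of the flagging rule described under Options (the category scores here are invented for illustration; the real detection agent produces its own scores):

```typescript
// Hypothetical per-category confidence scores from one detection pass
const scores: Record<string, number> = {
  email: 0.82,
  phone: 0.31,
  "credit-card": 0.05,
};

const threshold = 0.6;

// PII is flagged if ANY category score exceeds the threshold
const flagged = Object.values(scores).some((score) => score > threshold);

console.log(flagged); // true: the email score (0.82) exceeds 0.6
```

A lower threshold makes detection more aggressive (more false positives); a higher one is more permissive.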
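To build intuition for the four `redactionMethod` options, the sketch below mimics each behavior on a sample SSN. This is purely an illustration (the `redact` helper is hypothetical, not the library's implementation):

```typescript
import { createHash } from "node:crypto";

type RedactionMethod = "mask" | "hash" | "remove" | "placeholder";

// Hypothetical helper mimicking the documented redaction behaviors
function redact(value: string, type: string, method: RedactionMethod): string {
  switch (method) {
    case "mask":
      // Replace letters and digits with asterisks, preserving separators
      return value.replace(/[A-Za-z0-9]/g, "*");
    case "hash":
      // Replace the value with a SHA-256 digest
      return createHash("sha256").update(value).digest("hex");
    case "remove":
      // Drop the value entirely
      return "";
    case "placeholder":
      // Replace with a type placeholder
      return `[${type.toUpperCase()}]`;
  }
}

const ssn = "123-45-6789";
console.log(redact(ssn, "ssn", "mask"));        // "***-**-****"
console.log(redact(ssn, "ssn", "placeholder")); // "[SSN]"
```

Note how 'mask' combined with format preservation keeps the separators intact, which is what `preserveFormat: true` is for.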