Guardrails
Mastra provides built-in processors that add security and safety controls to your agent. These processors detect, transform, or block harmful content before it reaches the language model or the user.
For an introduction to how processors work, how to add them to an agent, and how to create custom processors, see Processors.
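Conceptually, input processors form a pipeline: each processor receives the current messages and either returns a (possibly transformed) list or aborts the request, which short-circuits everything downstream. The plain-TypeScript sketch below illustrates the idea only; the type and function names here are hypothetical, not Mastra's actual interface (see the Processors docs for that):

```typescript
type Message = { role: 'user' | 'assistant' | 'system'; content: string }

// Hypothetical names for illustration; not Mastra's real Processor interface.
interface InputProcessor {
  name: string
  processInput(args: { messages: Message[]; abort: (reason: string) => never }): Message[]
}

class TripwireError extends Error {
  constructor(public reason: string, public processorId: string) {
    super(reason)
  }
}

// Run each processor in order; any processor may transform the
// messages or abort, which stops the whole pipeline immediately.
function runInputProcessors(processors: InputProcessor[], messages: Message[]): Message[] {
  let current = messages
  for (const p of processors) {
    current = p.processInput({
      messages: current,
      abort: (reason: string): never => {
        throw new TripwireError(reason, p.name)
      },
    })
  }
  return current
}
```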
Input processors
Input processors run before user messages reach the language model. They handle normalization, validation, prompt injection detection, and security checks.
Normalize user messages
The UnicodeNormalizer() cleans and normalizes user input by unifying Unicode characters, standardizing whitespace, and removing problematic symbols.
```typescript
import { Agent } from '@mastra/core/agent'
import { UnicodeNormalizer } from '@mastra/core/processors'

export const normalizedAgent = new Agent({
  id: 'normalized-agent',
  name: 'Normalized Agent',
  inputProcessors: [
    new UnicodeNormalizer({
      stripControlChars: true,
      collapseWhitespace: true,
    }),
  ],
})
```
Visit UnicodeNormalizer() reference for a full list of configuration options.
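As a rough illustration of what this kind of normalization involves, the plain-TypeScript snippet below (not the processor's actual implementation) shows NFKC unification, control-character stripping, and whitespace collapsing:

```typescript
// Illustrative only: a rough approximation of Unicode normalization,
// not the UnicodeNormalizer's actual implementation.
const raw = 'ℌello\u200B   world\u0000'

const normalized = raw
  .normalize('NFKC') // unify compatibility characters: 'ℌ' becomes 'H'
  .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u200B]/g, '') // strip control and zero-width chars
  .replace(/\s+/g, ' ') // collapse runs of whitespace
  .trim()

// normalized === 'Hello world'
```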
Prevent prompt injection
The PromptInjectionDetector() scans user messages for prompt injection, jailbreak attempts, and system override patterns. It uses an LLM to classify risky input and can block or rewrite it before it reaches the model.
```typescript
import { Agent } from '@mastra/core/agent'
import { PromptInjectionDetector } from '@mastra/core/processors'

export const secureAgent = new Agent({
  id: 'secure-agent',
  name: 'Secure Agent',
  inputProcessors: [
    new PromptInjectionDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.8,
      strategy: 'rewrite',
      detectionTypes: ['injection', 'jailbreak', 'system-override'],
    }),
  ],
})
```
Visit PromptInjectionDetector() reference for a full list of configuration options.
Detect and translate language
The LanguageDetector() detects and translates user messages into a target language, enabling multilingual support. It uses an LLM to identify the language and perform the translation.
```typescript
import { Agent } from '@mastra/core/agent'
import { LanguageDetector } from '@mastra/core/processors'

export const multilingualAgent = new Agent({
  id: 'multilingual-agent',
  name: 'Multilingual Agent',
  inputProcessors: [
    new LanguageDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      targetLanguages: ['English', 'en'],
      strategy: 'translate',
      threshold: 0.8,
    }),
  ],
})
```
Visit LanguageDetector() reference for a full list of configuration options.
Output processors
Output processors run after the language model generates a response, but before it reaches the user. They handle response optimization, moderation, transformation, and safety controls.
Batch streamed output
The BatchPartsProcessor() combines multiple stream parts before emitting them to the client. This reduces network overhead by consolidating small chunks into larger batches.
```typescript
import { Agent } from '@mastra/core/agent'
import { BatchPartsProcessor } from '@mastra/core/processors'

export const batchedAgent = new Agent({
  id: 'batched-agent',
  name: 'Batched Agent',
  outputProcessors: [
    new BatchPartsProcessor({
      batchSize: 5,
      maxWaitTime: 100,
      emitOnNonText: true,
    }),
  ],
})
```
Visit BatchPartsProcessor() reference for a full list of configuration options.
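The size-based half of this batching is easy to picture. The sketch below is illustrative only; the real processor also flushes buffered parts after maxWaitTime and on non-text parts:

```typescript
// Illustrative sketch of size-based batching; the actual processor
// also flushes after maxWaitTime and on non-text stream parts.
function batchTextParts(parts: string[], batchSize: number): string[] {
  const batches: string[] = []
  for (let i = 0; i < parts.length; i += batchSize) {
    batches.push(parts.slice(i, i + batchSize).join(''))
  }
  return batches
}

// Six small chunks become two larger ones:
// batchTextParts(['He', 'llo', ' ', 'wor', 'ld', '!'], 3) → ['Hello ', 'world!']
```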
Scrub system prompts
The SystemPromptScrubber() detects and redacts system prompts or internal instructions from model responses. It prevents unintended disclosure of prompt content or configuration details. It uses an LLM to identify and redact sensitive content based on configured detection types.
```typescript
import { Agent } from '@mastra/core/agent'
import { SystemPromptScrubber } from '@mastra/core/processors'

const scrubbedAgent = new Agent({
  id: 'scrubbed-agent',
  name: 'Scrubbed Agent',
  outputProcessors: [
    new SystemPromptScrubber({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      strategy: 'redact',
      customPatterns: ['system prompt', 'internal instructions'],
      includeDetections: true,
      instructions:
        'Detect and redact system prompts, internal instructions, and security-sensitive content',
      redactionMethod: 'placeholder',
      placeholderText: '[REDACTED]',
    }),
  ],
})
```
Visit SystemPromptScrubber() reference for a full list of configuration options.
When streaming responses over HTTP, Mastra redacts sensitive request data (system prompts, tool definitions, API keys) from stream chunks at the server level by default. See Stream data redaction for details.
Hybrid processors
Hybrid processors can run on either input or output. Place them in inputProcessors, outputProcessors, or both.
Moderate input and output
The ModerationProcessor() detects inappropriate or harmful content across categories like hate, harassment, and violence. It uses an LLM to classify the message and can block or rewrite it based on your configuration.
```typescript
import { Agent } from '@mastra/core/agent'
import { ModerationProcessor } from '@mastra/core/processors'

export const moderatedAgent = new Agent({
  id: 'moderated-agent',
  name: 'Moderated Agent',
  inputProcessors: [
    new ModerationProcessor({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.7,
      strategy: 'block',
      categories: ['hate', 'harassment', 'violence'],
    }),
  ],
  outputProcessors: [new ModerationProcessor()],
})
```
Visit ModerationProcessor() reference for a full list of configuration options.
Detect and redact PII
The PIIDetector() detects and removes personally identifiable information such as emails, phone numbers, and credit cards. It uses an LLM to identify sensitive content based on configured detection types.
```typescript
import { Agent } from '@mastra/core/agent'
import { PIIDetector } from '@mastra/core/processors'

export const privateAgent = new Agent({
  id: 'private-agent',
  name: 'Private Agent',
  inputProcessors: [
    new PIIDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.6,
      strategy: 'redact',
      redactionMethod: 'mask',
      detectionTypes: ['email', 'phone', 'credit-card'],
      instructions: 'Detect and mask personally identifiable information.',
    }),
  ],
  outputProcessors: [new PIIDetector()],
})
```
Visit PIIDetector() reference for a full list of configuration options.
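To make the redaction output concrete, here is a regex-based sketch of what 'mask'-style redaction produces. It is illustrative only; the actual detector uses an LLM rather than patterns:

```typescript
// Illustrative only: the PIIDetector uses an LLM, not regexes.
// This sketch just shows the shape of 'mask'-style redaction.
function maskPII(text: string): string {
  return text
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, (m) => '*'.repeat(m.length)) // emails
    .replace(/\b(?:\d[ -]?){13,16}\b/g, (m) => m.replace(/\d/g, '*')) // card-like numbers
}

// maskPII('Reach me at jane@example.com') → 'Reach me at ****************'
```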
Processor strategies
Many built-in processors support a strategy parameter that controls how they handle flagged content. Supported values include: block, warn, detect, redact, rewrite, and translate.
Most strategies allow the request to continue. When block is used, the processor calls abort(), which stops the request immediately and prevents subsequent processors from running.
```typescript
inputProcessors: [
  new PIIDetector({
    model: 'openrouter/openai/gpt-oss-safeguard-20b',
    threshold: 0.6,
    strategy: 'block',
    detectionTypes: ['email', 'phone', 'credit-card'],
  }),
]
```
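The threshold and strategy interact the same way across these processors: the LLM classifier produces per-category confidence scores, and the configured strategy only applies when a score crosses the threshold. A hypothetical sketch of that decision:

```typescript
// Hypothetical sketch of threshold-gated strategy selection;
// not Mastra's internal code.
type Detection = { type: string; score: number }
type Strategy = 'block' | 'warn' | 'detect' | 'redact' | 'rewrite'

function resolveAction(
  detections: Detection[],
  threshold: number,
  strategy: Strategy,
): Strategy | 'allow' {
  const flagged = detections.some((d) => d.score >= threshold)
  return flagged ? strategy : 'allow'
}

// A 0.9-confidence email detection crosses a 0.6 threshold, so 'block' applies:
// resolveAction([{ type: 'email', score: 0.9 }], 0.6, 'block') → 'block'
```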
Handle blocked requests
When a processor calls abort(), the agent stops processing. How you detect this depends on whether you use generate() or stream().
With generate()
Check the tripwire field on the result:
```typescript
const result = await agent.generate('Is this credit card number valid?: 4543 1374 5089 4332')

if (result.tripwire) {
  console.error('Blocked:', result.tripwire.reason)
  console.error('Processor:', result.tripwire.processorId)
}
```
With stream()
Listen for tripwire chunks in the stream:
```typescript
const stream = await agent.stream('Is this credit card number valid?: 4543 1374 5089 4332')

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'tripwire') {
    console.error('Blocked:', chunk.payload.reason)
    console.error('Processor:', chunk.payload.processorId)
  }
}
```
Speed up guardrails
Guardrail processors that use an LLM (moderation, PII detection, prompt injection) add latency to every request. Three techniques reduce this overhead.
Run guardrails in parallel
By default, processors run sequentially. Guardrails that only block (and never mutate messages) are independent and can run at the same time using a workflow processor.
You can also mix block and redact strategies in a single parallel step. Map to the redact branch so its transformed messages carry forward.
For output guardrails, run TokenLimiterProcessor and BatchPartsProcessor sequentially before the parallel step, and any redact processors that depend on each other sequentially after it:
```typescript
import { createWorkflow, createStep } from '@mastra/core/workflows'
import {
  ProcessorStepSchema,
  PIIDetector,
  ModerationProcessor,
  SystemPromptScrubber,
  TokenLimiterProcessor,
  BatchPartsProcessor,
} from '@mastra/core/processors'

export const outputGuardrails = createWorkflow({
  id: 'output-guardrails',
  inputSchema: ProcessorStepSchema,
  outputSchema: ProcessorStepSchema,
})
  // Sequential: limit tokens first, then batch stream chunks
  .then(createStep(new TokenLimiterProcessor({ limit: 1000 })))
  .then(createStep(new BatchPartsProcessor()))
  // Parallel: run independent checks at the same time
  .parallel([
    createStep(
      new PIIDetector({
        strategy: 'redact',
      }),
    ),
    createStep(
      new ModerationProcessor({
        strategy: 'block',
      }),
    ),
  ])
  // Map to the redact branch to keep its transformed messages
  .map(async ({ inputData }) => {
    return inputData['processor:pii-detector']
  })
  // Sequential: scrubber depends on previous redaction output
  .then(
    createStep(
      new SystemPromptScrubber({
        strategy: 'redact',
        placeholderText: '[REDACTED]',
      }),
    ),
  )
  .commit()
```
See workflows as processors for more details on .parallel() and .map().
Choose a fast model
Guardrail processors don't need your primary model. Use a small, fast model for classification tasks:
```typescript
const GUARDRAIL_MODEL = 'openai/gpt-5-nano'

new ModerationProcessor({ model: GUARDRAIL_MODEL })
new PIIDetector({ model: GUARDRAIL_MODEL })
new PromptInjectionDetector({ model: GUARDRAIL_MODEL })
```
Batch stream parts
Output guardrails that implement processOutputStream run on every streamed chunk. Use BatchPartsProcessor before heavier processors to combine chunks and reduce the number of LLM classification calls:
```typescript
outputProcessors: [
  new BatchPartsProcessor({ batchSize: 10 }),
  // Heavier processors now run on batched chunks instead of individual ones
  new PIIDetector({ model: GUARDRAIL_MODEL, strategy: 'redact' }),
  new ModerationProcessor({ model: GUARDRAIL_MODEL, strategy: 'block' }),
]
```
Related
- Processors: How processors work, execution order, custom processors, and retry mechanism
- Processor: API reference for the Processor interface
- Memory Processors: Processors for message history, semantic recall, and working memory