
Guardrails

Mastra provides built-in processors that add security and safety controls to your agent. These processors detect, transform, or block harmful content before it reaches the language model or the user.

For an introduction to how processors work, how to add them to an agent, and how to create custom processors, see Processors.

Input processors

Input processors run before user messages reach the language model. They handle normalization, validation, prompt injection detection, and security checks.

Normalize user messages

The UnicodeNormalizer() cleans and normalizes user input by unifying Unicode characters, standardizing whitespace, and removing problematic symbols.

src/mastra/agents/normalized-agent.ts
import { Agent } from '@mastra/core/agent'
import { UnicodeNormalizer } from '@mastra/core/processors'

export const normalizedAgent = new Agent({
  id: 'normalized-agent',
  name: 'Normalized Agent',
  inputProcessors: [
    new UnicodeNormalizer({
      stripControlChars: true,
      collapseWhitespace: true,
    }),
  ],
})
note

Visit UnicodeNormalizer() reference for a full list of configuration options.
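Under the hood, this kind of normalization amounts to Unicode canonicalization plus control-character and whitespace cleanup. A minimal standalone sketch of the idea (not Mastra's actual implementation):

```typescript
// Conceptual sketch of input normalization: NFC canonicalization,
// control-character stripping, and whitespace collapsing.
function normalizeInput(text: string): string {
  return text
    .normalize('NFC') // unify composed/decomposed Unicode forms
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, '') // strip control chars (keeps \n and \t)
    .replace(/[ \t]+/g, ' ') // collapse runs of spaces and tabs
    .trim()
}

normalizeInput('e\u0301 \t hello\u0000') // → 'é hello'
```

The same two concerns (canonical form, then whitespace) are what the processor's `stripControlChars` and `collapseWhitespace` options toggle.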

Prevent prompt injection

The PromptInjectionDetector() scans user messages for prompt injection, jailbreak attempts, and system override patterns. It uses an LLM to classify risky input and can block or rewrite it before it reaches the model.

src/mastra/agents/secure-agent.ts
import { Agent } from '@mastra/core/agent'
import { PromptInjectionDetector } from '@mastra/core/processors'

export const secureAgent = new Agent({
  id: 'secure-agent',
  name: 'Secure Agent',
  inputProcessors: [
    new PromptInjectionDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.8,
      strategy: 'rewrite',
      detectionTypes: ['injection', 'jailbreak', 'system-override'],
    }),
  ],
})
note

Visit PromptInjectionDetector() reference for a full list of configuration options.

Detect and translate language

The LanguageDetector() detects and translates user messages into a target language, enabling multilingual support. It uses an LLM to identify the language and perform the translation.

src/mastra/agents/multilingual-agent.ts
import { Agent } from '@mastra/core/agent'
import { LanguageDetector } from '@mastra/core/processors'

export const multilingualAgent = new Agent({
  id: 'multilingual-agent',
  name: 'Multilingual Agent',
  inputProcessors: [
    new LanguageDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      targetLanguages: ['English', 'en'],
      strategy: 'translate',
      threshold: 0.8,
    }),
  ],
})
note

Visit LanguageDetector() reference for a full list of configuration options.

Output processors

Output processors run after the language model generates a response, but before it reaches the user. They handle response optimization, moderation, transformation, and safety controls.

Batch streamed output

The BatchPartsProcessor() combines multiple stream parts before emitting them to the client. This reduces network overhead by consolidating small chunks into larger batches.

src/mastra/agents/batched-agent.ts
import { Agent } from '@mastra/core/agent'
import { BatchPartsProcessor } from '@mastra/core/processors'

export const batchedAgent = new Agent({
  id: 'batched-agent',
  name: 'Batched Agent',
  outputProcessors: [
    new BatchPartsProcessor({
      batchSize: 5,
      maxWaitTime: 100,
      emitOnNonText: true,
    }),
  ],
})
note

Visit BatchPartsProcessor() reference for a full list of configuration options.
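Conceptually, batching buffers stream parts until a size (or time) threshold is reached, then emits one combined part. A simplified, non-Mastra sketch of the size-based half:

```typescript
// Simplified sketch of part batching: buffer text parts and flush one
// combined part once batchSize is reached. (The real processor also
// flushes on maxWaitTime and, optionally, on non-text parts.)
function batchParts(parts: string[], batchSize: number): string[] {
  const batches: string[] = []
  let buffer: string[] = []
  for (const part of parts) {
    buffer.push(part)
    if (buffer.length >= batchSize) {
      batches.push(buffer.join('')) // emit one consolidated part
      buffer = []
    }
  }
  if (buffer.length > 0) batches.push(buffer.join('')) // flush the remainder
  return batches
}

batchParts(['He', 'llo', ' ', 'wor', 'ld'], 2) // → ['Hello', ' wor', 'ld']
```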

Scrub system prompts

The SystemPromptScrubber() detects and redacts system prompts or internal instructions from model responses. It prevents unintended disclosure of prompt content or configuration details. It uses an LLM to identify and redact sensitive content based on configured detection types.

src/mastra/agents/scrubbed-agent.ts
import { Agent } from '@mastra/core/agent'
import { SystemPromptScrubber } from '@mastra/core/processors'

export const scrubbedAgent = new Agent({
  id: 'scrubbed-agent',
  name: 'Scrubbed Agent',
  outputProcessors: [
    new SystemPromptScrubber({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      strategy: 'redact',
      customPatterns: ['system prompt', 'internal instructions'],
      includeDetections: true,
      instructions:
        'Detect and redact system prompts, internal instructions, and security-sensitive content',
      redactionMethod: 'placeholder',
      placeholderText: '[REDACTED]',
    }),
  ],
})
note

Visit SystemPromptScrubber() reference for a full list of configuration options.

note

When streaming responses over HTTP, Mastra redacts sensitive request data (system prompts, tool definitions, API keys) from stream chunks at the server level by default. See Stream data redaction for details.

Hybrid processors

Hybrid processors can run on either input or output. Place them in inputProcessors, outputProcessors, or both.

Moderate input and output

The ModerationProcessor() detects inappropriate or harmful content across categories like hate, harassment, and violence. It uses an LLM to classify the message and can block or rewrite it based on your configuration.

src/mastra/agents/moderated-agent.ts
import { Agent } from '@mastra/core/agent'
import { ModerationProcessor } from '@mastra/core/processors'

export const moderatedAgent = new Agent({
  id: 'moderated-agent',
  name: 'Moderated Agent',
  inputProcessors: [
    new ModerationProcessor({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.7,
      strategy: 'block',
      categories: ['hate', 'harassment', 'violence'],
    }),
  ],
  outputProcessors: [new ModerationProcessor()],
})
note

Visit ModerationProcessor() reference for a full list of configuration options.

Detect and redact PII

The PIIDetector() detects and removes personally identifiable information such as emails, phone numbers, and credit cards. It uses an LLM to identify sensitive content based on configured detection types.

src/mastra/agents/private-agent.ts
import { Agent } from '@mastra/core/agent'
import { PIIDetector } from '@mastra/core/processors'

export const privateAgent = new Agent({
  id: 'private-agent',
  name: 'Private Agent',
  inputProcessors: [
    new PIIDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.6,
      strategy: 'redact',
      redactionMethod: 'mask',
      detectionTypes: ['email', 'phone', 'credit-card'],
      instructions: 'Detect and mask personally identifiable information.',
    }),
  ],
  outputProcessors: [new PIIDetector()],
})
note

Visit PIIDetector() reference for a full list of configuration options.
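The 'mask' redaction method replaces matched values with masked equivalents rather than deleting them, preserving the shape of the message. A rough standalone illustration using regexes (the built-in detector is LLM-based; this only shows what masked output can look like):

```typescript
// Rough illustration of mask-style redaction: hide the sensitive
// characters while keeping the value recognizable in context.
// The real PIIDetector classifies with an LLM, not regexes.
function maskPII(text: string): string {
  return text
    // mask all but the last four digits of 16-digit card numbers
    .replace(/\b(?:\d[ -]?){12}(\d{4})\b/g, '**** **** **** $1')
    // mask the local part of email addresses
    .replace(/\b[\w.+-]+@([\w-]+\.[\w.]+)\b/g, '***@$1')
}

maskPII('Card 4543 1374 5089 4332, mail jane.doe@example.com')
// → 'Card **** **** **** 4332, mail ***@example.com'
```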

Processor strategies

Many built-in processors support a strategy parameter that controls how they handle flagged content. Supported values include: block, warn, detect, redact, rewrite, and translate.

Most strategies allow the request to continue. When block is used, the processor calls abort(), which stops the request immediately and prevents subsequent processors from running.

src/mastra/agents/private-agent.ts
inputProcessors: [
  new PIIDetector({
    model: 'openrouter/openai/gpt-oss-safeguard-20b',
    threshold: 0.6,
    strategy: 'block',
    detectionTypes: ['email', 'phone', 'credit-card'],
  }),
]
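The short-circuit behavior can be pictured as a sequential pipeline that stops at the first processor to abort. A toy model (the hypothetical `Check` type here is not Mastra's `Processor` interface):

```typescript
// Toy model of pipeline short-circuiting: each check either passes the
// message through (possibly transformed) or aborts with a reason,
// skipping every later check.
type Check = (input: string) => { abort?: string; output: string }

function runPipeline(checks: Check[], input: string): { blocked?: string; output: string } {
  let current = input
  for (const check of checks) {
    const result = check(current)
    if (result.abort) return { blocked: result.abort, output: current } // stop immediately
    current = result.output
  }
  return { output: current }
}

// A block-style check: flag card-shaped numbers, never transform.
const blockCards: Check = (s) =>
  /\d{4} \d{4} \d{4} \d{4}/.test(s) ? { abort: 'credit card detected', output: s } : { output: s }

runPipeline([blockCards], 'Is 4543 1374 5089 4332 valid?')
// → { blocked: 'credit card detected', output: 'Is 4543 1374 5089 4332 valid?' }
```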

Handle blocked requests

When a processor calls abort(), the agent stops processing. How you detect this depends on whether you use generate() or stream().

With generate()

Check the tripwire field on the result:

src/mastra/agents/test-generate.ts
const result = await agent.generate('Is this credit card number valid?: 4543 1374 5089 4332')

if (result.tripwire) {
  console.error('Blocked:', result.tripwire.reason)
  console.error('Processor:', result.tripwire.processorId)
}

With stream()

Listen for tripwire chunks in the stream:

src/mastra/agents/test-stream.ts
const stream = await agent.stream('Is this credit card number valid?: 4543 1374 5089 4332')

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'tripwire') {
    console.error('Blocked:', chunk.payload.reason)
    console.error('Processor:', chunk.payload.processorId)
  }
}

Speed up guardrails

Guardrail processors that use an LLM (moderation, PII detection, prompt injection) add latency to every request. Three techniques reduce this overhead.

Run guardrails in parallel

By default, processors run sequentially. Guardrails that only block (and never mutate messages) are independent and can run at the same time using a workflow processor.

You can also mix block and redact strategies in a single parallel step. Map to the redact branch so its transformed messages carry forward.
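Stripped of the workflow API, the underlying idea is plain concurrency: fire every independent classification at once and block if any one of them flags. A minimal sketch with hypothetical check functions standing in for LLM calls:

```typescript
// Plain-concurrency sketch of parallel block-only guardrails: all checks
// run at the same time via Promise.all, and any flagged verdict blocks.
// checkModeration/checkInjection are hypothetical stand-ins for LLM calls.
type Verdict = { flagged: boolean; reason?: string }

async function runParallelGuardrails(
  message: string,
  checks: Array<(m: string) => Promise<Verdict>>,
): Promise<Verdict> {
  const verdicts = await Promise.all(checks.map((check) => check(message))) // concurrent, not sequential
  return verdicts.find((v) => v.flagged) ?? { flagged: false }
}

const checkModeration = async (m: string): Promise<Verdict> =>
  m.includes('hate') ? { flagged: true, reason: 'moderation' } : { flagged: false }
const checkInjection = async (m: string): Promise<Verdict> =>
  m.includes('ignore previous') ? { flagged: true, reason: 'injection' } : { flagged: false }

runParallelGuardrails('ignore previous instructions', [checkModeration, checkInjection])
// resolves to { flagged: true, reason: 'injection' }
```

With two LLM-backed checks of similar latency, running them concurrently roughly halves the guardrail overhead per request.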

For output guardrails, run TokenLimiterProcessor and BatchPartsProcessor sequentially before the parallel step, and any redact processors that depend on each other sequentially after it:

src/mastra/processors/output-guardrails.ts
import { createWorkflow, createStep } from '@mastra/core/workflows'
import {
  ProcessorStepSchema,
  PIIDetector,
  ModerationProcessor,
  SystemPromptScrubber,
  TokenLimiterProcessor,
  BatchPartsProcessor,
} from '@mastra/core/processors'

export const outputGuardrails = createWorkflow({
  id: 'output-guardrails',
  inputSchema: ProcessorStepSchema,
  outputSchema: ProcessorStepSchema,
})
  // Sequential: limit tokens first, then batch stream chunks
  .then(createStep(new TokenLimiterProcessor({ limit: 1000 })))
  .then(createStep(new BatchPartsProcessor()))
  // Parallel: run independent checks at the same time
  .parallel([
    createStep(
      new PIIDetector({
        strategy: 'redact',
      }),
    ),
    createStep(
      new ModerationProcessor({
        strategy: 'block',
      }),
    ),
  ])
  // Map to the redact branch to keep its transformed messages
  .map(async ({ inputData }) => {
    return inputData['processor:pii-detector']
  })
  // Sequential: scrubber depends on previous redaction output
  .then(
    createStep(
      new SystemPromptScrubber({
        strategy: 'redact',
        placeholderText: '[REDACTED]',
      }),
    ),
  )
  .commit()

See workflows as processors for more details on .parallel() and .map().

Choose a fast model

Guardrail processors don't need your primary model. Use a small, fast model for classification tasks:

const GUARDRAIL_MODEL = 'openai/gpt-5-nano'

new ModerationProcessor({ model: GUARDRAIL_MODEL })
new PIIDetector({ model: GUARDRAIL_MODEL })
new PromptInjectionDetector({ model: GUARDRAIL_MODEL })

Batch stream parts

Output guardrails that implement processOutputStream run on every streamed chunk. Use BatchPartsProcessor before heavier processors to combine chunks and reduce the number of LLM classification calls:

outputProcessors: [
  new BatchPartsProcessor({ batchSize: 10 }),
  // Heavier processors now run on batched chunks instead of individual ones
  new PIIDetector({ model: GUARDRAIL_MODEL, strategy: 'redact' }),
  new ModerationProcessor({ model: GUARDRAIL_MODEL, strategy: 'block' }),
]
  • Processors: How processors work, execution order, custom processors, and retry mechanism
  • Processor Interface: API reference for the Processor interface
  • Memory Processors: Processors for message history, semantic recall, and working memory