StreamErrorRetryProcessor
StreamErrorRetryProcessor is an error processor that retries transient errors emitted after an LLM stream starts. It includes built-in matching for OpenAI Responses stream errors and supports additional matchers for other provider-specific stream error shapes.
The processor isn't enabled by default in core. Add it to errorProcessors for agents that need stream-error retry handling.
Usage example
Add `StreamErrorRetryProcessor` to `errorProcessors`:

```typescript
import { Agent } from '@mastra/core/agent'
import { StreamErrorRetryProcessor } from '@mastra/core/processors'

export const agent = new Agent({
  name: 'openai-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5',
  errorProcessors: [new StreamErrorRetryProcessor()],
})
```
How it works
The processor checks the error and its cause chain for:
- Provider retry metadata: `isRetryable === true`
- Built-in OpenAI Responses stream error matching
- Matcher results: any configured matcher that returns `true`
When the error is retryable, the processor returns `{ retry: true }`. It doesn't mutate messages.
Default OpenAI Responses matcher
`isRetryableOpenAIResponsesStreamError` matches OpenAI Responses stream error chunks with `type: 'error'` or `type: 'response.failed'`. It retries known transient OpenAI error codes and, as a fallback, errors with explicit retry guidance such as "You can retry your request."
StreamErrorRetryProcessor includes this matcher by default. You can also import it and reuse it in custom retry logic.
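A custom matcher for another provider's stream errors might look like the following sketch. The error shape, the code list, and the `matchers` constructor option shown in the comment are all assumptions for illustration; check the Mastra API reference for the actual signatures:

```typescript
// Hypothetical stream error chunk shape for some provider (illustrative only).
interface ProviderStreamError {
  type?: string
  code?: string
}

// Error codes we treat as transient (hypothetical list).
const TRANSIENT_CODES = new Set(['server_error', 'rate_limit_exceeded', 'overloaded'])

// Returns true only for error-typed chunks carrying a known transient code.
function isRetryableProviderStreamError(error: unknown): boolean {
  const chunk = error as ProviderStreamError | null
  return chunk?.type === 'error' && chunk.code !== undefined && TRANSIENT_CODES.has(chunk.code)
}

// The matcher could then be registered alongside the built-in one, e.g.
// `new StreamErrorRetryProcessor({ matchers: [isRetryableProviderStreamError] })`
// (option name assumed; see the Mastra docs for the real constructor options).
```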