Stream Text

Text generation can take a long time to complete, especially when you're generating several paragraphs. By streaming the response, you can display partial results as they arrive, giving users immediate feedback instead of making them wait for the full output. This example shows how to stream text responses in real time.

import { Mastra } from '@mastra/core';

const mastra = new Mastra();

// Configure an OpenAI model through Mastra's LLM interface
const llm = mastra.LLM({
  provider: 'OPEN_AI',
  name: 'gpt-4',
});

// stream() returns as soon as the request is sent; chunks arrive as the model generates them
const response = await llm.stream('Tell me about Christmas and its traditions');

// Write each partial chunk to stdout the moment it arrives
for await (const chunk of response.textStream) {
  process.stdout.write(chunk);
}
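Because `response.textStream` is an async iterable of string chunks, you can both display chunks incrementally and accumulate the complete response. Below is a minimal runnable sketch of that pattern; `mockTextStream` is a hypothetical stand-in for the real stream so the snippet works without an API key, and `collectStream` is an illustrative helper, not part of the Mastra API.

```javascript
// Hypothetical stand-in for response.textStream: any async generator
// of string chunks behaves the same way when consumed with for await.
async function* mockTextStream() {
  yield 'Christmas is ';
  yield 'celebrated on ';
  yield 'December 25th.';
}

// Illustrative helper: display each chunk as it arrives while also
// accumulating the full text for later use (logging, caching, etc.).
async function collectStream(textStream) {
  let fullText = '';
  for await (const chunk of textStream) {
    process.stdout.write(chunk); // show the partial result immediately
    fullText += chunk;           // build up the complete response
  }
  return fullText;
}
```

With a real response you would call `collectStream(response.textStream)`; the user sees text appear progressively, and your code still ends up with the whole string.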

