Guide: Harry Potter
Mastra provides direct support for Large Language Models (LLMs) through the LLM class. It supports a variety of LLM providers, including OpenAI, Anthropic, and Google Gemini. You can choose the specific model and provider, set system and user prompts, and decide whether to stream the response.
We’ll use a Harry Potter-themed example, asking the model for its favorite room in Hogwarts, to demonstrate how changing the system message affects the response.
In this guide, we’ll walk through:
- Creating a model
- Giving it a prompt
- Testing the response
- Altering the system message
- Streaming the response
Setup
Ensure you have the Mastra core package installed:
npm install @mastra/core
Set your API key for the LLM provider you intend to use. For OpenAI, set the OPENAI_API_KEY environment variable.
OPENAI_API_KEY=<your-openai-api-key>
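If you keep the key in a .env file at the project root, load it before initializing Mastra. A minimal sketch, assuming the dotenv package is installed separately (it is not bundled with @mastra/core):

// index.ts: load variables from .env into process.env
import "dotenv/config";

// Fail fast if the key is missing
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set");
}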
Create a Model
We’ll start by creating a model configuration and initializing the Mastra instance.
import { CoreMessage, Mastra, type ModelConfig } from "@mastra/core";

// Initialize the Mastra instance
const mastra = new Mastra();

// Describe which provider and model to use
const modelConfig: ModelConfig = {
  provider: "OPEN_AI",
  name: "gpt-4",
};

const llm = mastra.LLM(modelConfig);
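Other providers follow the same configuration shape. As a sketch, an Anthropic setup might look like the one below; the provider string and model name here are assumptions, so verify both against Mastra's provider reference:

const anthropicConfig: ModelConfig = {
  provider: "ANTHROPIC", // assumed provider key, confirm in Mastra's docs
  name: "claude-3-opus-20240229", // assumed model identifier
};

const claude = mastra.LLM(anthropicConfig);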
Give It a Prompt
Next, we’ll prepare our prompt. We’ll ask:
const prompt = "What is your favorite room in Hogwarts?";
Test the Response
We’ll use the generate method to get the response from the model.
const response = await llm.generate(prompt);
console.log("Response:", response.text);
Run the script:
npx bun src/mastra/index.ts
Output:
Response: As an AI language model developed by OpenAI, I don't possess consciousness or experiences.
The model defaults to its own perspective. To get a more engaging response, we’ll alter the system message.
Alter the System Message
To change the perspective, we’ll add a system message to specify the persona of the model. First, we’ll have the model respond as Harry Potter.
As Harry Potter
const messages = [
  {
    role: "system",
    content: "You are Harry Potter.",
  },
  {
    role: "user",
    content: "What is your favorite room in Hogwarts?",
  },
] as CoreMessage[];
const responseHarry = await llm.generate(messages);
console.log("Response as Harry Potter:", responseHarry.text);
Output:
Response as Harry Potter: My favorite room in Hogwarts is definitely the Gryffindor Common Room. It's where I feel most at home, surrounded by my friends, the warm fireplace, and the cozy chairs. It's a place filled with great memories.
As Draco Malfoy
Now, let’s change the system message to have the model respond as Draco Malfoy.
// Reuse the same messages, swapping only the persona
messages[0].content = "You are Draco Malfoy.";
const responseDraco = await llm.generate(messages);
console.log("Response as Draco Malfoy:", responseDraco.text);
Output:
Response as Draco Malfoy: My favorite room in Hogwarts is the Slytherin Common Room. It's located in the dungeons, adorned with green and silver decor, and has a magnificent view of the Black Lake's depths. It's exclusive and befitting of those with true ambition.
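Only the system message changes between personas, so the pattern is easy to factor into a helper. A sketch built on the same generate call (askAs is a name invented for this example):

async function askAs(persona: string, question: string): Promise<string> {
  const messages = [
    { role: "system", content: `You are ${persona}.` },
    { role: "user", content: question },
  ] as CoreMessage[];

  const response = await llm.generate(messages);
  return response.text;
}

console.log(await askAs("Hermione Granger", "What is your favorite room in Hogwarts?"));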
Stream the Response
Finally, we’ll demonstrate how to stream the response from the model. Streaming is useful for handling longer outputs or providing real-time feedback.
const stream = await llm.stream(messages);

console.log("Streaming response as Draco Malfoy:");
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk); // print each chunk as it arrives
}
console.log("\n");
Run the script again:
npx bun src/mastra/index.ts
Output:
Streaming response as Draco Malfoy: My favorite room in Hogwarts is the Slytherin Common Room. Situated in the dungeons, it's an elegant space with greenish lights and serpentine motifs...
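If you also need the complete text once streaming finishes, accumulate the chunks as they arrive. A stream can only be consumed once, so this sketch starts a fresh stream rather than reusing the one above:

const secondStream = await llm.stream(messages);

let fullText = "";
for await (const chunk of secondStream.textStream) {
  process.stdout.write(chunk); // print incrementally
  fullText += chunk; // keep the full reply for later use
}
console.log(`\nReceived ${fullText.length} characters.`);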
Summary
By following this guide, you’ve learned how to:
- Create and configure an LLM model in Mastra
- Provide prompts and receive responses
- Use system messages to change the model’s perspective
- Stream responses from the model
Feel free to experiment with different system messages and prompts to explore the capabilities of Mastra’s LLM support.