Introduction
In this guide, you’ll learn how Mastra helps you build workflows with LLMs.
We’ll walk through creating a workflow that gathers information from a candidate’s resume, then branches to either a technical or behavioral question based on the candidate’s profile. Along the way, you’ll see how to structure workflow steps, handle branching, and integrate LLM calls.
Below is a concise version of the workflow. It starts by importing the necessary modules, sets up Mastra, defines steps to extract and classify candidate data, and then asks suitable follow-up questions. Each code block is followed by a short explanation of what it does and why it’s useful.
1. Imports and Setup
You need to import Mastra tools and Zod to handle workflow definitions and data validation.
import { Step, Workflow, Mastra } from "@mastra/core";
import { z } from "zod";
Add your OPENAI_API_KEY to the .env file:
OPENAI_API_KEY=<your-openai-key>
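If your runtime doesn't load .env automatically, a small guard can fail fast when the key is missing instead of surfacing a confusing error deep inside an LLM call. A minimal sketch, assuming Node.js; the requireEnv helper name is illustrative, not part of Mastra:

```typescript
// Hypothetical helper: fail fast if a required environment variable is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: check the key before constructing any LLM clients.
process.env.DEMO_KEY = "sk-example"; // stand-in for OPENAI_API_KEY
console.log(requireEnv("DEMO_KEY")); // prints "sk-example"
```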
2. Step One: Gather Candidate Info
You want to extract candidate details from the resume text and classify them as technical or non-technical. This step calls an LLM to parse the resume and return structured JSON, including the name, technical status, specialty, and the original resume text. The code reads resumeText from trigger data, prompts the LLM, and returns organized fields for use in subsequent steps.
const gatherCandidateInfo = new Step({
  id: "gatherCandidateInfo",
  inputSchema: z.object({
    resumeText: z.string(),
  }),
  outputSchema: z.object({
    candidateName: z.string(),
    isTechnical: z.boolean(),
    specialty: z.string(),
    resumeText: z.string(),
  }),
  execute: async ({ context, mastra }) => {
    if (!mastra?.llm) {
      throw new Error("Mastra instance is required to run this step");
    }

    // Pull the resume text from the workflow's trigger data.
    const resumeText = context.machineContext?.getStepPayload<{
      resumeText: string;
    }>("trigger")?.resumeText;

    const llm = mastra.llm({ provider: "OPEN_AI", name: "gpt-4o" });

    const prompt = `
      Extract the candidate's name, whether they are technical, and their
      specialty from the resume below. Also return the original resume text.
      Resume:
      "${resumeText}"
    `;

    // The output schema constrains the LLM to return structured JSON.
    const res = await llm.generate(prompt, {
      output: z.object({
        candidateName: z.string(),
        isTechnical: z.boolean(),
        specialty: z.string(),
        resumeText: z.string(),
      }),
    });

    return res.object;
  },
});
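The output schema above is the contract for everything downstream: later steps assume the four fields exist with the right types. If you ever need to check such a payload by hand (say, data loaded from a log rather than from the step), the check reduces to a plain type guard. A dependency-free sketch; the GatheredCandidate name and isGatheredCandidate guard are illustrative, not part of Mastra:

```typescript
// Illustrative mirror of the step's output schema.
interface GatheredCandidate {
  candidateName: string;
  isTechnical: boolean;
  specialty: string;
  resumeText: string;
}

// Hypothetical type guard equivalent to validating with the Zod object above.
function isGatheredCandidate(value: unknown): value is GatheredCandidate {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.candidateName === "string" &&
    typeof v.isTechnical === "boolean" &&
    typeof v.specialty === "string" &&
    typeof v.resumeText === "string"
  );
}

console.log(
  isGatheredCandidate({
    candidateName: "Ada",
    isTechnical: true,
    specialty: "Compilers",
    resumeText: "…",
  }),
); // prints true
```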
3. Technical Question Step
This step prompts a candidate who is identified as technical for more information about how they got into their specialty. It uses the entire resume text so the LLM can craft a relevant follow-up question. The code generates a question about the candidate’s specialty.
interface CandidateInfo {
  candidateName: string;
  isTechnical: boolean;
  specialty: string;
  resumeText: string;
}

const askAboutSpecialty = new Step({
  id: "askAboutSpecialty",
  outputSchema: z.object({
    question: z.string(),
  }),
  execute: async ({ context, mastra }) => {
    if (!mastra?.llm) {
      throw new Error("Mastra instance is required to run this step");
    }

    // Read the structured output of the previous step.
    const candidateInfo = context.machineContext?.getStepPayload<CandidateInfo>(
      "gatherCandidateInfo",
    );

    const llm = mastra.llm({ provider: "OPEN_AI", name: "gpt-4o" });
    const prompt = `
      You are a recruiter. Given the resume below, craft a short question
      for ${candidateInfo?.candidateName} about how they got into "${candidateInfo?.specialty}".
      Resume: ${candidateInfo?.resumeText}
    `;

    const res = await llm.generate(prompt);
    return { question: res?.text?.trim() || "" };
  },
});
4. Behavioral Question Step
If the candidate is non-technical, you want a different follow-up question. This step asks what interests them most about the role, again referencing their complete resume text. The code solicits a role-focused query from the LLM.
const askAboutRole = new Step({
  id: "askAboutRole",
  outputSchema: z.object({
    question: z.string(),
  }),
  execute: async ({ context, mastra }) => {
    if (!mastra?.llm) {
      throw new Error("Mastra instance is required to run this step");
    }

    const candidateInfo = context.machineContext?.getStepPayload<CandidateInfo>(
      "gatherCandidateInfo",
    );

    const llm = mastra.llm({ provider: "OPEN_AI", name: "gpt-4o" });
    const prompt = `
      You are a recruiter. Given the resume below, craft a short question
      for ${candidateInfo?.candidateName} asking what interests them most about this role.
      Resume: ${candidateInfo?.resumeText}
    `;

    const res = await llm.generate(prompt);
    return { question: res?.text?.trim() || "" };
  },
});
5. Define the Workflow
You now combine the steps to implement branching logic based on the candidate’s technical status. The workflow first gathers candidate data, then either asks about their specialty or about their role, depending on isTechnical. The code chains gatherCandidateInfo with askAboutSpecialty and askAboutRole, and commits the workflow.
const candidateWorkflow = new Workflow({
  name: "candidate-workflow",
  triggerSchema: z.object({
    resumeText: z.string(),
  }),
});

candidateWorkflow
  .step(gatherCandidateInfo)
  .then(askAboutSpecialty, {
    when: { "gatherCandidateInfo.isTechnical": true },
  })
  .after(gatherCandidateInfo)
  .step(askAboutRole, {
    when: { "gatherCandidateInfo.isTechnical": false },
  });

candidateWorkflow.commit();
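Conceptually, each when clause is a predicate over a previous step's output: a step runs only when the referenced field matches the given value. Stripped of Mastra, the branch decision reduces to something like this sketch (the selectNextStep function and GatheredInfo type are illustrative, not part of the library):

```typescript
// Illustrative shape of the data the branch decision reads.
interface GatheredInfo {
  candidateName: string;
  isTechnical: boolean;
  specialty: string;
}

// Stand-in for the workflow's branching: pick which follow-up
// step would run based on the gathered candidate info.
function selectNextStep(info: GatheredInfo): "askAboutSpecialty" | "askAboutRole" {
  return info.isTechnical ? "askAboutSpecialty" : "askAboutRole";
}

console.log(
  selectNextStep({ candidateName: "Ada", isTechnical: true, specialty: "Compilers" }),
); // prints "askAboutSpecialty"
```

Because the two when conditions are mutually exclusive, exactly one of the question steps runs per candidate.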
6. Execute the Workflow
Finally, register the workflow with a Mastra instance, create a run, and start it with the resume text as trigger data. The code logs the run ID and the final results once all steps complete.
const mastra = new Mastra({
  workflows: {
    candidateWorkflow,
  },
});

(async () => {
  const { runId, start } = mastra.getWorkflow("candidateWorkflow").createRun();
  console.log("Run", runId);

  const runResult = await start({
    triggerData: { resumeText: "Simulated resume content..." },
  });

  console.log("Final output:", runResult.results);
})();
You’ve just built a workflow that parses a resume and decides which follow-up question to ask based on whether the candidate is technical. Congrats and happy hacking!