Building an AI Research Assistant with vNext Workflows

Jun 6, 2025

Introducing MastraManus, an AI Research Assistant

We recently built an AI-powered research assistant called MastraManus, designed to handle complex research tasks while keeping a human in the control loop. Here's a video of it in action:


This project was also the perfect opportunity to explore the capabilities of Mastra's vNext workflows.

In this post, we'll walk through how we built MastraManus using:

  • Nested workflows to encapsulate repeatable logic
  • Human-in-the-loop interactions using vNext's suspend and resume mechanism
  • doWhile loops for conditional workflow execution
  • Exa API for high-quality web search results

Let's dive in!

The Architecture: A Workflow Within a Workflow

The most interesting aspect of our application is its nested workflow architecture. MastraManus has two workflows:

  1. Research Workflow: Handles user query collection, research execution, and approval
  2. Main Workflow: Orchestrates the research workflow and report generation

This approach gives us a clean separation of tasks, making the code more maintainable and even reusable for other applications.

// Main workflow orchestrates everything
export const mainWorkflow = createWorkflow({
  id: "main-workflow",
  steps: [researchWorkflow, processResearchResultStep],
  inputSchema: z.object({}),
  outputSchema: z.object({
    reportPath: z.string().optional(),
    completed: z.boolean(),
  }),
});

// The key pattern: using doWhile to conditionally repeat the research workflow
mainWorkflow
  .dowhile(researchWorkflow, async ({ inputData }) => {
    const isCompleted = inputData.approved;
    return isCompleted !== true;
  })
  .then(processResearchResultStep)
  .commit();
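
The nested research workflow itself isn't shown in full in this post. As a rough sketch (the step names, schemas, and wiring here are assumptions based on the three responsibilities listed above), it chains the user-query, research, and approval steps and exposes the approved flag that the doWhile check reads:

// Sketch of the nested research workflow; step definitions and schemas are
// assumptions, and the createWorkflow/z imports follow whatever your Mastra
// version uses elsewhere in the project
export const researchWorkflow = createWorkflow({
  id: "research-workflow",
  steps: [getUserQueryStep, researchStep, approvalStep],
  inputSchema: z.object({}),
  outputSchema: z.object({
    approved: z.boolean(),
    researchData: z.any().optional(),
  }),
});

researchWorkflow
  .then(getUserQueryStep)
  .then(researchStep)
  .then(approvalStep)
  .commit();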

Human-in-the-Loop with Suspend and Resume

One of the most helpful features of vNext workflows is the built-in support for suspend and resume operations. This let us create intuitive human-in-the-loop interactions without the complexity of managing that state ourselves.

Here's how we implemented the user query step:

const getUserQueryStep = createStep({
  id: "get-user-query",
  // Schemas defined for input, output, resume, and suspend
  execute: async ({ resumeData, suspend }) => {
    if (resumeData) {
      return {
        ...resumeData,
        query: resumeData.query || "",
        depth: resumeData.depth || 2,
        breadth: resumeData.breadth || 2,
      };
    }

    await suspend({
      message: {
        query: "What would you like to research?",
        depth: "Please provide the depth of the research [1-3] (default: 2): ",
        breadth:
          "Please provide the breadth of the research [1-3] (default: 2): ",
      },
    });

    // Unreachable but needed
    return {
      query: "",
      depth: 2,
      breadth: 2,
    };
  },
});
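
The four schemas are elided above for brevity. Given the defaults the step falls back to, a plausible set (the exact shapes in MastraManus may differ) looks like this:

// Assumed schema definitions for the get-user-query step; these would be
// passed to createStep as inputSchema, outputSchema, resumeSchema, and
// suspendSchema alongside the execute function shown above
import { z } from "zod";

const userQueryInputSchema = z.object({});

const userQueryOutputSchema = z.object({
  query: z.string(),
  depth: z.number(),
  breadth: z.number(),
});

const userQueryResumeSchema = z.object({
  query: z.string(),
  depth: z.number().optional(),
  breadth: z.number().optional(),
});

const userQuerySuspendSchema = z.object({
  message: z.object({
    query: z.string(),
    depth: z.string(),
    breadth: z.string(),
  }),
});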

And in our main application, we handle this suspension with Node's readline:

// Handle user query step
if (result.suspended[0].includes("get-user-query")) {
  const suspendData = getSuspendData(result, "research-workflow");

  const message =
    suspendData.message?.query || "What would you like to research?";
  const depthPrompt =
    suspendData.message?.depth || "Research depth (1-3, default: 2):";
  const breadthPrompt =
    suspendData.message?.breadth || "Research breadth (1-3, default: 2):";

  const userQuery = await question(message + " ");
  const depth = await question(depthPrompt + " ");
  const breadth = await question(breadthPrompt + " ");

  console.log(
    "\nStarting research process. This may take a minute or two...\n",
  );

  result = await run.resume({
    step: ["research-workflow", "get-user-query"],
    resumeData: {
      query: userQuery,
      depth: parseInt(depth) || 2,
      breadth: parseInt(breadth) || 2,
    },
  });
}
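
The question helper used above is a thin promise wrapper around Node's readline. A minimal version (the actual helper in MastraManus may differ slightly) looks like this:

// Minimal readline-based question helper (a sketch, not the exact helper
// from the project)
import * as readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

const rl = readline.createInterface({ input, output });

const question = (prompt: string): Promise<string> => rl.question(prompt);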

This gives us flexibility: the workflow itself doesn't care how the user interaction happens, whether via a command line, a web form, or even a voice assistant. The workflow simply suspends, and the application layer handles the appropriate UI interaction.

Enhancing Research Quality with Exa API

To give our assistant high-quality search capabilities, we integrated MastraManus with the Exa API, a search engine designed specifically for AI applications. The implementation was straightforward:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import Exa from "exa-js";

// Initialize Exa client
const exa = new Exa(process.env.EXA_API_KEY);

export const webSearchTool = createTool({
  id: "web-search",
  description: "Search the web for information on a specific query",
  inputSchema: z.object({
    query: z.string().describe("The search query to run"),
  }),
  execute: async ({ context }) => {
    const { query } = context;

    // ... error handling

    const { results } = await exa.searchAndContents(query, {
      livecrawl: "always", // Important for fresh results
      numResults: 5,
    });

    // ... process and return results
  },
});

The Exa API was a perfect fit for our application as it:

  • Provides up-to-date information from across the web
  • Returns the full content of pages, not just snippets
  • Supports live crawling to ensure we have the most recent information
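
The post doesn't show how the tool gets handed to the research agent, but a typical way to register it with a Mastra agent looks roughly like this (the agent name, instructions, model choice, and file path are assumptions, not the actual MastraManus definition):

// Hypothetical wiring of the web search tool into a Mastra agent
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { webSearchTool } from "./tools/web-search";

export const researchAgent = new Agent({
  name: "research-agent",
  instructions:
    "You are a research assistant. Use the web-search tool to gather sources and summarize findings.",
  model: openai("gpt-4o"),
  tools: { webSearchTool },
});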

The Feedback Loop: doWhile in Action

Another pattern we implemented was using vNext's doWhile loop to enable iterative research refinement:

mainWorkflow
  .dowhile(researchWorkflow, async ({ inputData }) => {
    const isCompleted = inputData.approved;
    return isCompleted !== true;
  })
  .then(processResearchResultStep)
  .commit();

This pattern solves a tricky workflow problem: letting users keep refining their research until they're satisfied with the results. The doWhile loop keeps executing the research workflow until the user approves the results.
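
The approved flag that drives the loop comes from an approval step inside the research workflow. A sketch of that step, following the same suspend-and-resume pattern as get-user-query (the exact field names and schemas are assumptions), might look like:

// Sketch of an approval step; assumes the same createStep and zod imports
// used elsewhere in the project
const approvalStep = createStep({
  id: "approval",
  inputSchema: z.object({
    summary: z.string(),
  }),
  outputSchema: z.object({
    approved: z.boolean(),
  }),
  resumeSchema: z.object({
    approved: z.boolean(),
  }),
  suspendSchema: z.object({
    message: z.string(),
    summary: z.string(),
  }),
  execute: async ({ inputData, resumeData, suspend }) => {
    if (resumeData) {
      return { approved: resumeData.approved === true };
    }

    await suspend({
      message: "Is this research sufficient? [y/n]",
      summary: inputData.summary,
    });

    // Unreachable but needed, mirroring the get-user-query step
    return { approved: false };
  },
});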

What's Next?

We have several ideas for future improvements:

  • Adding memory to remember past research sessions
  • Implementing cross-device persistence using Mastra's storage capabilities
  • Creating a web UI for even more user-friendly interaction

For now, you can check out MastraManus on GitHub.
