Building an AI Research Assistant with vNext Workflows

Introducing MastraManus, an AI Research Assistant

We recently built an AI-powered research assistant called MastraManus. We wanted MastraManus to be able to handle complex research tasks while keeping humans in the control loop. Here's a video of it in action:

This project was also the perfect opportunity to explore Mastra's vNext workflows capabilities.

In this post, we'll walk through how we built MastraManus using:

  • Nested workflows to encapsulate repeatable logic
  • Human-in-the-loop interactions using vNext's suspend and resume mechanism
  • doWhile loops for conditional workflow execution
  • Exa API for high-quality web search results

Let's dive in!

The Architecture: A Workflow Within a Workflow

The most interesting aspect of our application is its nested workflow architecture. MastraManus has two workflows:

  1. Research Workflow: Handles user query collection, research execution, and approval
  2. Main Workflow: Orchestrates the research workflow and report generation

This approach gives us a clean separation of concerns, making the code more maintainable and even reusable in other applications.

// Main workflow orchestrates everything
export const mainWorkflow = createWorkflow({
  id: "main-workflow",
  steps: [researchWorkflow, processResearchResultStep],
  inputSchema: z.object({}),
  outputSchema: z.object({
    reportPath: z.string().optional(),
    completed: z.boolean(),
  }),
});

// The key pattern: using doWhile to conditionally repeat the research workflow
mainWorkflow
  .dowhile(researchWorkflow, async ({ inputData }) => {
    const isCompleted = inputData.approved;
    return isCompleted !== true;
  })
  .then(processResearchResultStep)
  .commit();
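Under the hood, the doWhile semantics are straightforward: run the body once, then keep re-running it while the condition returns true. Here's a minimal TypeScript sketch of that control flow (the helper and names are ours for illustration, not Mastra's internals):

```typescript
// Sketch of doWhile semantics: run the body at least once,
// then repeat while the condition holds. Illustrative only.
type Condition<T> = (ctx: { inputData: T }) => Promise<boolean> | boolean;

async function dowhile<T>(
  body: () => Promise<T>,
  condition: Condition<T>,
): Promise<T> {
  let result = await body();
  while (await condition({ inputData: result })) {
    result = await body();
  }
  return result;
}

// Example: "research" is only approved on the third attempt.
let attempts = 0;
const finalResult = await dowhile(
  async () => ({ approved: ++attempts >= 3 }),
  async ({ inputData }) => inputData.approved !== true,
);
// finalResult.approved === true after 3 attempts
```

This mirrors the condition in our main workflow: as long as `approved` is not `true`, the research workflow runs again.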

Human-in-the-Loop with Suspend and Resume

One of the most helpful features of vNext workflows is built-in support for suspend and resume operations. This let us create intuitive human-in-the-loop interactions without hand-rolling the state management ourselves.

Here's how we implemented the user query step:

const getUserQueryStep = createStep({
  id: "get-user-query",
  // Schemas defined for input, output, resume, and suspend
  execute: async ({ resumeData, suspend }) => {
    if (resumeData) {
      return {
        ...resumeData,
        query: resumeData.query || "",
        depth: resumeData.depth || 2,
        breadth: resumeData.breadth || 2,
      };
    }

    await suspend({
      message: {
        query: "What would you like to research?",
        depth: "Please provide the depth of the research [1-3] (default: 2): ",
        breadth:
          "Please provide the breadth of the research [1-3] (default: 2): ",
      },
    });

    // Unreachable after suspend(), but required to satisfy the output schema
    return {
      query: "",
      depth: 2,
      breadth: 2,
    };
  },
});
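The defaulting logic in the resume branch can be factored into a small pure helper, which also makes it easy to test in isolation (the helper name and types are ours, not part of the project):

```typescript
// The parameters the research workflow expects from the user.
interface ResearchParams {
  query: string;
  depth: number;
  breadth: number;
}

// Apply the same defaults the step uses when resuming:
// a missing query falls back to "", depth and breadth fall back to 2.
// Returns null when there is no resume data, i.e. the step should suspend.
function applyResumeDefaults(
  resumeData: Partial<ResearchParams> | undefined,
): ResearchParams | null {
  if (!resumeData) return null;
  return {
    query: resumeData.query || "",
    depth: resumeData.depth || 2,
    breadth: resumeData.breadth || 2,
  };
}
```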

And in our main application, we handle this suspension with Node's readline:

// Handle user query step
if (result.suspended[0].includes("get-user-query")) {
  const suspendData = getSuspendData(result, "research-workflow");

  const message =
    suspendData.message?.query || "What would you like to research?";
  const depthPrompt =
    suspendData.message?.depth || "Research depth (1-3, default: 2):";
  const breadthPrompt =
    suspendData.message?.breadth || "Research breadth (1-3, default: 2):";

  const userQuery = await question(message + " ");
  const depth = await question(depthPrompt + " ");
  const breadth = await question(breadthPrompt + " ");

  console.log(
    "\nStarting research process. This may take a minute or two...\n",
  );

  result = await run.resume({
    step: ["research-workflow", "get-user-query"],
    resumeData: {
      query: userQuery,
      depth: parseInt(depth) || 2,
      breadth: parseInt(breadth) || 2,
    },
  });
}
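The `question` helper above is a thin promise wrapper around Node's readline. One way to write it, along with a numeric-answer parser (this is an assumed implementation; the project's actual helper may differ slightly):

```typescript
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

// Promisified prompt: ask a question on stdin/stdout and await the answer.
function createPrompt() {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  return {
    question: (prompt: string) => rl.question(prompt),
    close: () => rl.close(),
  };
}

// Parse a numeric answer, falling back to a default for blank or invalid input.
function parseWithDefault(answer: string, fallback: number): number {
  const n = parseInt(answer, 10);
  return Number.isNaN(n) ? fallback : n;
}
```

Remember to call `close()` once all prompts are done, or the open readline interface will keep the process alive.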

This gives us flexibility. The workflow itself doesn't care about how the user interaction happens — which could be via a command line, a web form, or even a voice assistant. The workflow simply suspends and the application layer handles the appropriate UI interaction.

Enhancing Research Quality with Exa API

To give our assistant high-quality search capabilities, we integrated MastraManus with the Exa API, a search engine designed specifically for AI applications. The implementation was straightforward:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import Exa from "exa-js";

// Initialize Exa client
const exa = new Exa(process.env.EXA_API_KEY);

export const webSearchTool = createTool({
  id: "web-search",
  description: "Search the web for information on a specific query",
  inputSchema: z.object({
    query: z.string().describe("The search query to run"),
  }),
  execute: async ({ context }) => {
    const { query } = context;

    // ... error handling

    const { results } = await exa.searchAndContents(query, {
      livecrawl: "always", // Important for fresh results
      numResults: 5,
    });

    // ... process and return results
  },
});
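The elided processing step typically trims each result down to just what the agent needs, since full page contents can blow up the context window. Here's a hedged sketch, assuming each Exa result carries `title`, `url`, and `text` fields (the helper name is ours):

```typescript
// The fields we rely on from each Exa search result (assumed shape).
interface SearchResult {
  title: string | null;
  url: string;
  text: string;
}

// Trim each result to a compact form so the agent's context stays small.
function formatResults(results: SearchResult[], maxChars = 1000) {
  return results.map((r) => ({
    title: r.title ?? "Untitled",
    url: r.url,
    content: r.text.slice(0, maxChars),
  }));
}
```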

The Exa API was a perfect fit for our application as it:

  • Provides up-to-date information from across the web
  • Returns the full content of pages, not just snippets
  • Supports live crawling to ensure we have the most recent information

The Feedback Loop: doWhile in Action

The doWhile loop shown earlier deserves a closer look, because it is what enables iterative research refinement:

mainWorkflow
  .dowhile(researchWorkflow, async ({ inputData }) => {
    const isCompleted = inputData.approved;
    return isCompleted !== true;
  })
  .then(processResearchResultStep)
  .commit();

This pattern solves a tricky workflow problem: letting users keep refining their research until they're satisfied with the results. The doWhile loop keeps executing the research workflow until the user approves the results.
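From the user's perspective, the loop behaves like this simulation, where a scripted list of approval answers stands in for the human reviewer (this is purely a sketch of the control flow, not Mastra code):

```typescript
// Simulate the feedback loop: run one pass of "research", ask for
// approval, and repeat until the (scripted) user approves.
async function runResearchLoop(answers: boolean[]) {
  let iterations = 0;
  let approved = false;
  while (!approved) {
    iterations++; // one pass of the research workflow
    approved = answers[iterations - 1] ?? true; // the approval prompt
  }
  return { iterations, approved };
}

// The user rejects twice, then approves on the third pass.
const outcome = await runResearchLoop([false, false, true]);
// outcome.iterations === 3
```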

What's Next?

We have several ideas for future improvements:

  • Adding memory to remember past research sessions
  • Implementing cross-device persistence using Mastra's storage capabilities
  • Creating a web UI for even more user-friendly interaction

For now, you can check out MastraManus on GitHub.
