# Mastra Workshops
Join the Mastra team and expert guests for live, hands-on workshops that teach you how to build and deploy AI agents and applications.
## Upcoming Workshops
## Past Workshops
### Build an OpenClaw Agent with Mastra

_April 23, 2026 · 60 minutes_
Most agents just talk. They summarize, they suggest, they apologize, but they don't actually _do_ anything. The OpenClaw vision flips that: every agent becomes a claw, reaching into the real world to control browsers, run code, manage files, remember context, and show up wherever your users already are. This workshop walks you through building that kind of agent with Mastra, mapping OpenClaw's capabilities to concrete framework features you can wire up in TypeScript.

**How OpenClaw maps to Mastra**

In this workshop, you'll build an agent that can operate independently in real environments using Mastra:

* Persistent Memory → Observational memory and working memory
* Full System Access → Workspace sandboxes with filesystem and shell tools
* Browser Control → Mastra browser support
* Skills & Plugins → Workspace skills and tools
* Any Chat App → Slack, Discord, and Telegram channel adapters

**What you'll learn**

* How to give your agent a persistent environment where it can read files, run code, and browse the web autonomously
* How to layer in memory so it retains context and gets better over time
* How to deploy it to messaging platforms so it meets users where they already are

OpenClaw is the lens, but these are general-purpose Mastra primitives. Whatever agent you're building, you'll walk away knowing how to make it do more than talk.
### Building Agents With Economics That Scale

_April 16, 2026 · 60 minutes_
Your agent works great, until finance asks why the API bill is $63,000 and you can't explain which feature spent what.

Model API spending is growing fast across the industry, and that's a sign AI features are delivering value. The problem isn't the spend itself; it's spending without visibility. Most cost-reduction advice jumps straight to tactics like "use a smaller model," but without instrumentation, those are guesses.

This session starts where it actually matters: seeing what you're spending and why. You'll learn how to use Mastra's built-in observability (tracing, automatic token metrics, and cost estimation) to attribute spend to specific agents, tools, and workflow steps. Once you can see the breakdown, every optimization becomes a deliberate tradeoff instead of a shot in the dark.

But cheaper doesn't matter if it breaks things. You'll see how to set up Mastra's scorers and datasets so you can measure quality before and after any change, giving you a clear floor to optimize against.

From there, the session covers the practical patterns that make the biggest difference: routing queries to the cheapest model that clears your quality bar, compressing long-running context with Observational Memory so you send far fewer tokens per turn, trimming conversation history, caching aggressively, and designing smarter handoffs between agents. You'll also learn how to build guardrails that keep costs predictable: processors that limit token output, delegation hooks that cap subagent iterations, and patterns for circuit-breaking before runaway spend happens.

Walk away with a framework for making your agent economics scale with your product, not ahead of it.
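The attribution idea is simple even before tracing automates it. Here is a minimal sketch of per-agent cost roll-up; the per-million-token prices and the `Usage` shape are hypothetical illustrations, not Mastra's API:

```typescript
// Illustrative only: a tiny cost-attribution helper, not Mastra's API.
// Prices are hypothetical per-million-token rates.
type Usage = { agent: string; inputTokens: number; outputTokens: number };

const PRICE_PER_MTOK = { input: 3.0, output: 15.0 }; // assumed rates

function costOf(u: Usage): number {
  return (
    (u.inputTokens / 1_000_000) * PRICE_PER_MTOK.input +
    (u.outputTokens / 1_000_000) * PRICE_PER_MTOK.output
  );
}

// Roll individual runs up into spend per agent, so "which feature
// spent what" has an answer.
function spendByAgent(runs: Usage[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const run of runs) {
    totals.set(run.agent, (totals.get(run.agent) ?? 0) + costOf(run));
  }
  return totals;
}
```

Once token usage is attributed like this, "use a smaller model" stops being a guess and becomes a comparison between two numbers.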
***

**Hosted by**

* Alex Booker, Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [mastra.ai](https://mastra.ai)
* Abhi Aiyer, Mastra
  * [x.com/abhiaiyer](https://x.com/abhiaiyer)
  * [mastra.ai](https://mastra.ai)

_Recording and code examples will be available to everyone who registers._
### Track Agent Cost and Performance with Mastra Metrics

_April 2, 2026 · 60 minutes_
"How much is this agent costing me per run?" If you're running agents in production, you've probably asked this, and logs and traces alone can't answer it. They're great for debugging a single run, but they don't show you the trends: whether costs are climbing, error rates are spiking, or latency is creeping up over time.

Mastra Studio now has a metrics dashboard, the third observability pillar alongside traces and logs. In this workshop, you'll get a hands-on walkthrough of the new feature, which tracks model costs, token usage, latency percentiles, error rates, and eval scores across your agents, tools, and workflows. You'll see how to read the dashboard and use it to spot problems before your users do.

You'll also learn how to set it up. Metrics are powered by the same instrumentation pipeline you may already be using, but they require columnar storage to handle the volume of data efficiently. We'll walk through configuring DuckDB as your metrics backend so you can add observability without touching your existing storage setup.

We'll wrap up with a look at what's coming next, including ClickHouse support for high-traffic production environments and per-agent, per-model cost breakdowns. Whether you're trying to keep API spend under control or prove to your team that your agent is actually getting better, this session will give you the tools to back it up with data.

**Hosted by**

* Alex Booker, Developer Experience @ Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [linkedin.com/in/bookercodes](https://www.linkedin.com/in/bookercodes/)
* Eric Pinzur, Engineer @ Mastra
  * [linkedin.com/in/epinzur](https://www.linkedin.com/in/epinzur/)
* Joel Smith, Head of Product @ Mastra
  * [x.com/jsumnersmith](https://x.com/jsumnersmith)
  * [linkedin.com/in/jsumnersmith](https://www.linkedin.com/in/jsumnersmith/)

_Recording and code examples will be available to everyone who registers._
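For a feel of what a latency-percentile metric reports, here is a sketch of the nearest-rank method for computing p95 from raw samples. This is the general technique, not Mastra Studio's internals:

```typescript
// Sketch: computing a latency percentile (e.g. p95) from raw samples
// using the nearest-rank method. Illustrative, not dashboard internals.
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Nearest-rank: smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

Percentiles matter here because averages hide tail latency; a healthy mean can coexist with a p95 your users feel on every fifth request.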
### Building Multi-Agent Architectures with Mastra

_March 26, 2026 · 60 minutes_
Trying to make one agent do everything is a fast way to end up with brittle, confusing results. You'll learn how to design multi-agent systems in Mastra so different agents can collaborate, delegate, and combine their strengths to handle bigger, messier tasks.

You'll explore how multi-agent coordination works in practice: defining clear roles, giving each agent the right instructions and descriptions, and deciding when one agent should hand work off to another. You'll get a grounded view of how Mastra supports this with agent composition, delegation, memory, and orchestration patterns.

You'll also dig into the tradeoffs that matter when you move from a single agent to a networked system. That includes how to keep coordination understandable, how to control what context gets passed around, and how to build flows that are easier to guide, debug, and trust.

By the end, you'll have a practical understanding of how to structure multi-agent networks in Mastra for real-world tasks that are too broad for a single prompt. If you've been wondering how to turn a collection of agents into a system that actually works together, this workshop is for you.

**Hosted by**

* Alex Booker, Developer Experience @ Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [mastra.ai](https://mastra.ai)
* Lennart Jörgens, Senior Engineer @ Mastra
  * [lekoarts.de](https://www.lekoarts.de/)
* Grayson Hicks, Senior Customer Engineer @ Mastra
  * [x.com/graysonhicks](https://x.com/graysonhicks)
  * [mastra.ai](https://mastra.ai)

_Recording and code examples will be available to everyone who registers._
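To make "deciding when one agent should hand work off to another" concrete, here is a toy routing sketch. In a real multi-agent setup an LLM reads each agent's description to pick a specialist; plain keyword overlap stands in for that decision below, and the `Specialist` shape is made up for illustration:

```typescript
// Toy delegation sketch: pick the specialist whose description best
// matches the task. Keyword overlap stands in for the LLM that would
// normally make this call; not Mastra's network API.
type Specialist = { name: string; description: string };

function routeTask(task: string, specialists: Specialist[]): Specialist {
  if (specialists.length === 0) throw new Error("no specialists");
  const words = new Set(task.toLowerCase().split(/\W+/).filter(Boolean));
  let best = specialists[0];
  let bestScore = -1;
  for (const s of specialists) {
    const descWords = s.description.toLowerCase().split(/\W+/);
    const score = descWords.filter((w) => words.has(w)).length;
    if (score > bestScore) {
      best = s;
      bestScore = score;
    }
  }
  return best;
}
```

The point of the sketch is the role descriptions: the better each agent describes what it is for, the easier routing and delegation become, whoever (or whatever) makes the call.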
### Agent Harness: What it is, why it matters, and what it enables

_March 19, 2026 · 60 minutes_
You've probably seen "agent harness" popping up in conversations lately, but what does it actually mean, and why should you care? We're going to break it down.

The agent harness is the control layer around your agent that manages lifecycle and product-level concerns: things like shared state, conversation threading, switchable agent modes, permission systems, and human-in-the-loop flows like tool approval and interactive Q&A.

While building [Mastra Code](https://code.mastra.ai/), we created a harness primitive inside Mastra itself, and in this workshop we'll share our latest thinking on what it should become. Consider it a sneak peek at a concept we're actively shaping, and we'll bring it to life with real code along the way.

***

**Hosted by**

* Alex Booker, Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [mastra.ai](https://mastra.ai)
* Abhi Aiyer, Mastra
  * [x.com/abhiaiyer](https://x.com/abhiaiyer)
  * [mastra.ai](https://mastra.ai)

_Recording and code examples will be available to everyone who registers._
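Since the harness primitive is still being shaped, here is a speculative pure-TypeScript sketch of just one concern it covers, pausing the agent loop for tool approval. Every name here is invented for illustration; none of it is the Mastra harness API:

```typescript
// Speculative sketch of one harness concern: gating an agent's tool
// calls behind human approval. All names are hypothetical; the Mastra
// harness primitive is still being designed.
type ToolCall = { tool: string; args: Record<string, unknown> };
type Approver = (call: ToolCall) => Promise<boolean>;

async function runWithApproval(
  calls: ToolCall[],
  execute: (call: ToolCall) => Promise<string>,
  approve: Approver,
): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    // Pause the loop and hand control to a human before anything runs.
    if (await approve(call)) {
      results.push(await execute(call));
    } else {
      results.push(`[skipped ${call.tool}: denied by user]`);
    }
  }
  return results;
}
```

The interesting part is that the approval hook lives outside the agent: the model proposes, the harness disposes, which is what makes concerns like permissions and tool approval product-level rather than prompt-level.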
### Guardrails and beyond: Control the agent loop with Mastra processors

_March 12, 2026 · 60 minutes_
Relying on "good prompts" alone to keep an agent safe and consistent doesn't scale; real apps need a dependable way to shape what goes in and what comes out. In this workshop, you'll learn how Mastra processors give you that control by running **input processors** before the LLM sees a message and **output processors** before a response reaches your users.

We'll implement practical **input processors** to normalize and validate requests (think: cleaning up text, blocking unsafe content, trimming token-heavy context, or adjusting system messages). You'll see where input processors sit in the pipeline, how execution order affects outcomes, and how to use per-step processing when you need to make decisions during the agentic loop (like switching models or changing tool availability mid-run).

Then we'll build **output processors** that make responses safer and more reliable: filtering or transforming final messages, attaching helpful metadata, and handling streaming output as it arrives. We'll also cover how output processors can **abort or request retries** to enforce quality and guardrails, so your app doesn't ship broken or non-compliant responses.

You'll leave with a clear mental model of when to use input vs. output processors, plus working patterns you can reuse to add consistency, safety, and cost control to any Mastra agent.

**Hosted by**

* Alex Booker, Developer Experience at Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [linkedin.com/in/bookercodes](https://linkedin.com/in/bookercodes)
* Daniel Lew, Staff Software Engineer at Mastra
  * [x.com/ShlomesLew](https://x.com/ShlomesLew)
  * [linkedin.com/in/dslew](https://www.linkedin.com/in/dslew/)

_Recording and code examples will be available to everyone who registers._
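As a taste of the input side, here is a sketch of the logic a "trim token-heavy context" processor might apply: drop the oldest non-system messages until the history fits a budget. The chars/4 token estimate is a rough heuristic, and the `Message` shape and signature are illustrative, not Mastra's processor interface:

```typescript
// Sketch of trim-to-budget logic, the kind of thing a context-trimming
// input processor would do. Shapes and signature are illustrative only.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Rough heuristic: ~4 characters per token.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function trimToBudget(messages: Message[], budget: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  const kept: Message[] = [];
  let used = system.reduce((n, m) => n + estimateTokens(m.content), 0);
  // Walk newest-to-oldest so recent turns survive the trim.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budget) break;
    kept.unshift(rest[i]);
    used += cost;
  }
  return [...system, ...kept];
}
```

Keeping the system messages out of the trim is a deliberate choice: instructions should survive even when conversation history does not.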
### Beyond vibes: Measuring your agent with evals

_March 5, 2026 · 60 minutes_
Relying on "vibes" to tell if your agent works doesn't scale, especially once you're iterating on prompts, models, tools, and RAG. In this workshop, you'll learn how to turn subjective spot-checking into a clear, repeatable signal for real agent quality using evals (Mastra scorers).

We'll break down what evals are (and why they're different from traditional pass/fail tests), then walk through concrete, production-shaped examples. You'll see practical patterns for getting started, including LLM-as-a-judge, plus how to run live evaluations with sampling so you can monitor quality without slowing down your app.

Then we'll connect evals to the workflow that actually makes them useful: datasets and experiments. You'll learn how to build a versioned set of test cases, run the same dataset against different agent versions, and compare results to catch regressions and measure improvements with confidence.

Expect a grounded, no-hype look at where automated evals are powerful, where they can mislead, and how to pair scores with lightweight analysis so you can keep improving your agent over time, with time for questions throughout and at the end.

**Hosted by**

* Alex Booker, Developer Experience at Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [linkedin.com/in/bookercodes](https://linkedin.com/in/bookercodes)
* Yujohn Nattrass, Software Engineer at Mastra
  * [x.com/YujohnNatt](https://x.com/YujohnNatt)
  * [linkedin.com/in/yujohn-nattrass](https://www.linkedin.com/in/yujohn-nattrass/)

_Recording and code examples will be available to everyone who registers._
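To see the shape of the datasets-and-experiments loop without an LLM in the mix, here is a toy deterministic scorer (keyword coverage) compared across two hypothetical agent versions. Mastra's real scorers, including LLM-as-judge, are far richer; the `Case` shape here is invented for illustration:

```typescript
// Toy eval sketch: a deterministic keyword-coverage scorer run over a
// dataset for two agent versions. Illustrative shapes, not Mastra's
// scorer API.
type Case = { input: string; mustMention: string[] };

// Score one output: fraction of required keywords it mentions (0..1).
function score(output: string, testCase: Case): number {
  const hits = testCase.mustMention.filter((kw) =>
    output.toLowerCase().includes(kw.toLowerCase()),
  );
  return hits.length / testCase.mustMention.length;
}

// Run the whole dataset through an agent and average the scores,
// so two agent versions can be compared on the same test cases.
function meanScore(dataset: Case[], agent: (input: string) => string): number {
  const total = dataset.reduce((sum, c) => sum + score(agent(c.input), c), 0);
  return total / dataset.length;
}
```

Even a scorer this crude shows why the workflow works: run the same versioned dataset against version A and version B, and a drop in the mean is a regression you catch before users do.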
### Build your own coding agent

_February 19, 2026 · 60 minutes_
Custom coding agents are having a moment. [Ramp](https://www.infoq.com/news/2026/01/ramp-coding-agent-platform/) built one, so did [Stripe](https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents), and now you can too! **In this workshop, you'll learn how to build a Claude Code and OpenCode-inspired coding agent from first principles using [Mastra](https://mastra.ai).**

We're hosting this workshop to share what we learned from building Mastra Code. Yeah dawg, it's a coding agent by Mastra, built with Mastra, and we're using it to build Mastra 😂. It's not quite ready to announce (more on that next week), but we just couldn't wait to share what we've learned building it.

We'll walk through the Mastra building blocks that make this possible:

* **Memory**, so the agent can remember important context without overfilling the context window
* **Workspaces**, so it can read and modify files and execute commands safely
* A **harness**, the layer that lets you run the agent, steer it, and see what it's doing while it works

We'll also show you how we built the streaming **TUI** that lets you watch the agent in real time instead of treating it like a black box.

**Hosted by**

* Alex Booker, Developer Experience at Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [linkedin.com/in/bookercodes](https://linkedin.com/in/bookercodes)
* Abhi Aiyer, Founder and CTO at Mastra
  * [x.com/abhiaiyer](https://x.com/abhiaiyer)
  * [linkedin.com/in/abhi-aiyer-aa41bb42](https://www.linkedin.com/in/abhi-aiyer-aa41bb42/)
* Tyler Barnes, Founding Engineer at Mastra
  * [x.com/tylbar](https://x.com/tylbar)
  * [linkedin.com/in/tyler-barnes-882a61193](https://www.linkedin.com/in/tyler-barnes-882a61193/)

_Recording and code examples will be available to everyone who registers._
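As one small example of what "execute commands safely" can mean, here is a hypothetical allowlist policy a workspace could apply before running anything the agent asks for. This is the policy idea only, not the Mastra Workspaces API:

```typescript
// Hypothetical safety policy for a coding agent's shell access: allow
// only known programs and reject shell chaining. Not Mastra's API.
const ALLOWED = new Set(["ls", "cat", "git", "npm", "node"]);

function isCommandAllowed(command: string): boolean {
  // Reject shell metacharacters that could chain or redirect commands.
  if (/[;&|`$><]/.test(command)) return false;
  const program = command.trim().split(/\s+/)[0];
  return program !== undefined && ALLOWED.has(program);
}
```

Real sandboxes do much more (isolated filesystems, resource limits), but the principle is the same: the agent proposes commands, and a policy layer it cannot edit decides what actually runs.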
### Build agents with human-like memory

_February 12, 2026 · 60 minutes_
We have been obsessing over agent memory because it is the line between chatbots that forget everything and assistants that actually feel persistent and useful. In this workshop, we will walk you through Mastra's newly improved memory system, why we built it the way we did, and what we learned along the way, including the work that led to an industry-leading high score on LongMemEval.

We will explain the different kinds of memory in Mastra and when to use each one:

* **Working memory** for tracking user preferences and characteristics
* **Semantic recall** for long-term conversation history
* Our newly released **observational memory**

We will share the real tradeoffs, the mistakes we made, and the practical techniques that came out of burning through billions of tokens to get this right. You will see how to build agents that remember users across sessions, recall relevant context from months of history, and reason correctly about time.

We will walk through concrete implementation examples and talk about why RAG is still very much alive for agent memory, how formatting and retrieval choices affect accuracy, and what actually matters for performance and cost in production. This is a live, hands-on workshop focused on giving you the mental models and patterns you need to build agents with state-of-the-art memory, not just theory.

**Hosted by**

* Alex Booker, Developer Experience at Mastra
  * [x.com/bookercodes](https://x.com/bookercodes)
  * [linkedin.com/in/bookercodes](https://linkedin.com/in/bookercodes)
* Tyler Barnes, Founding Engineer at Mastra
  * [x.com/tylbar](https://x.com/tylbar)
  * [linkedin.com/in/tyler-barnes-882a61193](https://www.linkedin.com/in/tyler-barnes-882a61193/)

_Recording and code examples will be available to everyone who registers._
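The core of semantic recall can be sketched in a few lines: embed past messages, then retrieve the ones nearest the query. The 3-dimensional vectors below stand in for a real embedding model, and the `StoredMemory` shape is invented for illustration; none of this is Mastra's Memory API:

```typescript
// Sketch of the idea behind semantic recall: retrieve past messages by
// embedding similarity. Toy 3-dim vectors stand in for a real embedding
// model; shapes are illustrative, not Mastra's Memory API.
type StoredMemory = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored messages most similar to the query embedding.
function recall(query: number[], memories: StoredMemory[], k: number): string[] {
  return [...memories]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k)
    .map((m) => m.text);
}
```

This is why "RAG is still very much alive for agent memory": recall over months of history is retrieval, and formatting and ranking choices like these directly affect what context the agent sees.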
