Learn how Mastra is optimized for modern AI systems and how to implement a similar approach.
Datasets and experiments for evaluations, workspace and filesystem lifecycle improvements with safer introspection, and workflow foreach progress streaming
Observational memory async buffering (default-on) with new structured streaming events, workspace mounts with CompositeFilesystem for unified multi-provider file access
A human-inspired memory system that scores ~95% on LongMemEval, with a completely stable context window.
Mastra Skills teach coding agents about Mastra through structured knowledge files, giving any compatible agent the context it needs to help you build.
File access, command execution, and reusable skills for agents that actually get things done.
Observational Memory, skills.sh integrations, plus improved tracing and safer error handling.
Agents can pause before tool calls. Workflows can suspend at steps. But when they're working together, where's the right place for a human-in-the-loop?
Unified Workspace API, plus observability and server adapter improvements.
When should an agent ask for permission? How to think about approval, suspension, and the trust spectrum in your agent design.
APIs are locked, server adapters are the new default, and we've shipped thread cloning, composite storage, AI SDK v6 support, and more.