Mastra Studio now has a metrics dashboard that tracks model costs, latency percentiles, scores, and error counts for all your agents, tools, and workflows.
This release also introduces a new logging system that saves your logs to the observability store and makes them searchable in Studio.
With traces already in place, Mastra now supports all three pillars of observability — giving you aggregate visibility into cost, errors, and latency from metrics, and a clear debugging path from logs tied directly to traces. You get confidence that your agents are working in production, and an easy way to drill in should something go wrong.
Why we built this
Since we hit 1.0 in January, more teams are deploying Mastra into production, and the number one thing we keep hearing is: "I need more visibility into how things are running."
Traces are great for debugging a single run, but they don't answer the bigger questions — the ones that only emerge when you look across runs:
- How much is this agent costing me over time?
- Are errors going up or down this week?
- Is latency getting worse on a specific tool call?
- What did my agent actually do when a user reported something went wrong?
Metrics give you that aggregate view of cost, error volume, and latency across your entire application.
Logs give you a searchable record of what happened, with rich context attached to every entry — including trace and span IDs, entity type, root entity, user and request IDs. This makes it easy to filter by trace, tool, agent, or end-user and jump straight to the full execution context.
Get Started
Metrics and logs both require the same setup. Upgrade to @mastra/core 1.20.0 or later and install:
```shell
npm install @mastra/observability @mastra/duckdb
```

1. Add a columnar store for observability
Metrics and logs need a columnar store optimized for aggregation and time-range queries. We're launching with DuckDB support, with ClickHouse on the way:
```ts
// src/mastra/index.ts

import { Mastra } from '@mastra/core/mastra';
import { MastraCompositeStore } from '@mastra/core/storage';
import { LibSQLStore } from '@mastra/libsql';
import { DuckDBStore } from '@mastra/duckdb';
import { PinoLogger } from '@mastra/loggers';

export const mastra = new Mastra({
  // ...
  logger: new PinoLogger(),
  storage: new MastraCompositeStore({
    id: 'composite-storage',
    default: new LibSQLStore({
      id: 'mastra-storage',
      url: 'file:./mastra.db',
    }),
    domains: {
      observability: new DuckDBStore().observability,
    },
  }),
});
```

If you already have storage configured for agent memory, you don't need to replace it. MastraCompositeStore lets you run different stores for different domains. Above, DuckDB takes the observability domain while LibSQLStore handles everything else.
2. Wire up the Exporter
Mastra automatically creates spans from agent runs, tool calls, and workflows — you don't need to instrument your code. The DefaultExporter buffers those span events, extracts metrics from them, and writes everything to the storage provider you configured above.
```ts
// src/mastra/index.ts

import { Mastra } from '@mastra/core';
import { Observability, DefaultExporter } from '@mastra/observability';

export const mastra = new Mastra({
  // ...
  observability: new Observability({
    configs: {
      default: {
        serviceName: 'my-app',
        logging: { enabled: true, level: 'info' }, // optional — forwards logs from your existing Pino pipeline into the observability store
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

3. Add logging to your tools and workflows
As described in the Mastra logger docs, you can use logger.info(), logger.warn(), and similar calls anywhere in your tools, agents, or workflows. They still flow through your existing Pino pipeline, and when Observability is enabled with logging.enabled: true, they’re also stored in the observability store so you can view them in Studio and filter by trace, entity, user, and more.
```ts
// inside any tool, agent, or workflow
mastra.logger.info(`Fetching weather for ${context.city}`);
```

Once all three are in place, open Studio and the metrics dashboard and logs page will start populating.
How it works
When an agent run, a model call, or a tool invocation completes, Mastra automatically extracts numeric metrics like duration, token counts, and estimated cost, and writes them as separate events to storage. The dashboard then runs aggregation queries against these pre-computed metric events. This is why both metrics and logs need a columnar store like DuckDB (which is optimized for aggregating and querying large volumes of events) rather than a row-oriented database like PostgreSQL.
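To make the aggregation step concrete, here is a minimal sketch of the kind of query the dashboard runs over pre-computed metric events. The `MetricEvent` shape and field names are hypothetical, for illustration only, and are not Mastra's actual storage schema:

```typescript
// Hypothetical shape of a pre-computed metric event.
// Field names are illustrative, not Mastra's real schema.
interface MetricEvent {
  entityName: string; // the agent, tool, or workflow that produced the span
  durationMs: number; // extracted span duration
  costUsd: number;    // estimated model cost for the span
}

// The kind of aggregation the dashboard runs: total cost and p95 latency
// per entity, computed across many events at once.
function aggregate(events: MetricEvent[]) {
  const byEntity = new Map<string, MetricEvent[]>();
  for (const e of events) {
    const bucket = byEntity.get(e.entityName) ?? [];
    bucket.push(e);
    byEntity.set(e.entityName, bucket);
  }
  const result: Record<string, { totalCostUsd: number; p95Ms: number }> = {};
  for (const [name, bucket] of byEntity) {
    // Nearest-rank p95: sort durations, take the value at the 95th percentile rank.
    const durations = bucket.map((e) => e.durationMs).sort((a, b) => a - b);
    const p95Index = Math.ceil(0.95 * durations.length) - 1;
    result[name] = {
      totalCostUsd: bucket.reduce((sum, e) => sum + e.costUsd, 0),
      p95Ms: durations[p95Index],
    };
  }
  return result;
}
```

A columnar store like DuckDB executes exactly this class of scan-and-aggregate work efficiently over millions of events, which is what makes the dashboard fast without pre-materializing every view.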
Logs work similarly. When you call logger.info() or logger.warn() inside a running trace, the log entry is automatically enriched with the current trace ID, span ID, entity type and name, parent and root entities, and any user or request IDs. These enriched log events are then flushed to the observability store, where Studio can query and filter them across any of those dimensions.
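Conceptually, the enrichment step merges the active trace context into each log entry before it is flushed. The types below are a hypothetical sketch of that idea, not Mastra's internal implementation:

```typescript
// Hypothetical trace context and log shapes — illustrative only,
// not Mastra's internal types.
interface TraceContext {
  traceId: string;
  spanId: string;
  entityType: string; // e.g. 'agent' | 'tool' | 'workflow'
  entityName: string;
  userId?: string;
}

interface EnrichedLog extends TraceContext {
  level: string;
  message: string;
  timestamp: number;
}

// Merge the current trace context into a log entry so the stored event
// can later be filtered by trace, entity, or user.
function enrich(level: string, message: string, ctx: TraceContext): EnrichedLog {
  return { level, message, timestamp: Date.now(), ...ctx };
}
```

Because every stored log carries these fields, filtering "all logs for this trace" or "all logs from this tool for this user" becomes a simple predicate over indexed columns.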
Both metrics and logs only accumulate from the point you enable the feature — there is no backfill from existing traces stored in another provider.
What gets captured automatically
Mastra automatically captures duration, token usage, and estimated cost for every agent run, tool call, workflow, and model invocation. See the full list of automatic metrics for details.
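As a rough illustration of how an estimated cost is derived from what a span already knows, here is a hedged sketch; the field names, function, and pricing are all hypothetical, not part of Mastra's API:

```typescript
// Illustrative sketch of per-span measurements — hypothetical names and pricing.
interface ModelCallMetrics {
  durationMs: number;
  inputTokens: number;
  outputTokens: number;
  estimatedCostUsd: number;
}

// Estimated cost is derived from token counts and per-token pricing.
function measureModelCall(
  durationMs: number,
  inputTokens: number,
  outputTokens: number,
  pricePerInputToken: number,
  pricePerOutputToken: number,
): ModelCallMetrics {
  return {
    durationMs,
    inputTokens,
    outputTokens,
    estimatedCostUsd:
      inputTokens * pricePerInputToken + outputTokens * pricePerOutputToken,
  };
}
```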
The dashboard surfaces the most useful aggregations today, but all of this raw data is stored in DuckDB — which means we have a foundation to build richer metrics views over time. And if you want to go deeper right now, you can always query DuckDB directly.
ClickHouse support is next
We're launching metrics and logs with DuckDB, but ClickHouse is next. The infrastructure is already in place, and ClickHouse's OLAP engine is a natural fit for the aggregation queries metrics rely on.
Resources
The metrics dashboard and logs page are available in Studio from 1.20.0. Metrics give you the whole picture — model costs, latency percentiles, eval scores, and error counts — while logs give you a searchable, filterable history tied directly to your traces. To start collecting data, configure Observability with a DefaultExporter and add DuckDBStore as your observability storage provider.
For full setup instructions, see the metrics docs and logs docs.
