Memory with LibSQL
This example demonstrates how to use Mastra’s memory system with LibSQL as the storage backend.
Prerequisites
This example uses the openai model. Make sure to add OPENAI_API_KEY to your .env file.
OPENAI_API_KEY=<your-api-key>
Then install the following package:
npm install @mastra/libsql
Adding memory to an agent
To add LibSQL memory to an agent, use the Memory class and pass a LibSQLStore instance to its storage key. The url can be either a remote location or a local file system resource.
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { LibSQLStore } from "@mastra/libsql";

export const libsqlAgent = new Agent({
  name: "libsql-agent",
  instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions.",
  model: openai("gpt-4o"),
  memory: new Memory({
    // Persist conversation history in a local LibSQL database file
    storage: new LibSQLStore({
      url: "file:libsql-agent.db"
    }),
    options: {
      threads: {
        // Automatically generate a title for each new thread
        generateTitle: true
      }
    }
  })
});
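The config above uses a local database file. For a hosted LibSQL database such as Turso, you would point url at the remote database instead. The sketch below assumes LibSQLStore also accepts an authToken for remote connections and that the environment variable names are your own choice; verify the option names against the @mastra/libsql reference before relying on them.

import { LibSQLStore } from "@mastra/libsql";

// Sketch: remote LibSQL (e.g. Turso) instead of a local file.
// The authToken option and the environment variable names are assumptions.
const remoteStorage = new LibSQLStore({
  url: process.env.DATABASE_URL!,             // e.g. a libsql:// URL for your hosted database
  authToken: process.env.DATABASE_AUTH_TOKEN  // access token issued by your database provider
});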
Local embeddings with fastembed
Embeddings are numeric vectors used by memory’s semanticRecall to retrieve related messages by meaning (not keywords). This setup uses @mastra/fastembed to generate vector embeddings.
Install fastembed to get started:
npm install @mastra/fastembed
Add the following to your agent:
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
import { fastembed } from "@mastra/fastembed";

export const libsqlAgent = new Agent({
  name: "libsql-agent",
  instructions: "You are an AI agent with the ability to automatically recall memories from previous interactions.",
  model: openai("gpt-4o"),
  memory: new Memory({
    // Conversation history is stored in the local LibSQL file
    storage: new LibSQLStore({
      url: "file:libsql-agent.db"
    }),
    // Vector index used by semantic recall
    vector: new LibSQLVector({
      connectionUrl: "file:libsql-agent.db"
    }),
    // Local embedding model for generating vectors
    embedder: fastembed,
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2
      },
      threads: {
        generateTitle: true
      }
    }
  })
});
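The usage example below retrieves the agent from a Mastra instance exported from ./mastra, so the agent needs to be registered there. A minimal sketch of that file, assuming a typical project layout (the file location and the agent import path are hypothetical), could look like this:

// Assumed location: src/mastra/index.ts
import { Mastra } from "@mastra/core/mastra";
import { libsqlAgent } from "./agents/libsql-agent";

// Registering the agent under the key "libsqlAgent" makes it available
// via mastra.getAgent("libsqlAgent") in the usage example below.
export const mastra = new Mastra({
  agents: { libsqlAgent }
});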
Usage example
Use memoryOptions to scope recall for this request. Set lastMessages: 5 to limit recency-based recall, and use semanticRecall to fetch the topK: 3 most relevant messages, including messageRange: 2 neighboring messages for context around each match.
import "dotenv/config";
import { mastra } from "./mastra";
const threadId = "123";
const resourceId = "user-456";
const agent = mastra.getAgent("libsqlAgent");
const message = await agent.stream("My name is Mastra", {
memory: {
thread: threadId,
resource: resourceId
}
});
await message.textStream.pipeTo(new WritableStream());
const stream = await agent.stream("What's my name?", {
memory: {
thread: threadId,
resource: resourceId
},
memoryOptions: {
lastMessages: 5,
semanticRecall: {
topK: 3,
messageRange: 2
}
}
});
for await (const chunk of stream.textStream) {
process.stdout.write(chunk);
}