Storing Embeddings in a Vector Database
After generating embeddings, you need to store them in a database that supports vector similarity search. Mastra provides a consistent interface for storing and querying embeddings across different vector databases.
Supported Databases
Mastra supports multiple vector databases behind the same storage interface; the example below uses PostgreSQL with pgvector.
Using PostgreSQL with pgvector
PostgreSQL with the pgvector extension is a good solution for teams already using PostgreSQL who want to minimize infrastructure complexity. For detailed setup instructions and best practices, see the official pgvector repository.
import { PgVector } from '@mastra/pg';

const store = new PgVector(process.env.POSTGRES_CONNECTION_STRING);

await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map(chunk => ({ text: chunk.text })),
});
Using Vector Storage
Once initialized, all vector stores share the same interface for creating indexes, upserting embeddings, and querying.
Creating Indexes
Before storing embeddings, you need to create an index with the appropriate dimension size for your embedding model:
// Create an index with dimension 1536 (for text-embedding-3-small)
await store.createIndex({
  indexName: 'myCollection',
  dimension: 1536,
});
// For other models, use their corresponding dimensions:
// - text-embedding-3-large: 3072
// - text-embedding-ada-002: 1536
// - cohere-embed-multilingual-v3: 1024
The dimension size must match the output dimension of your chosen embedding model. Common dimension sizes are:
- OpenAI text-embedding-3-small: 1536 dimensions
- OpenAI text-embedding-3-large: 3072 dimensions
- Cohere embed-multilingual-v3: 1024 dimensions
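To keep the index dimension and the embedding model in sync, it can help to derive the dimension from the model name instead of hard-coding it in two places. A minimal sketch (the lookup table and helper are illustrative, not part of the Mastra API):

```typescript
// Map of common embedding models to their output dimensions,
// matching the list above.
const MODEL_DIMENSIONS: { [model: string]: number } = {
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "text-embedding-ada-002": 1536,
  "cohere-embed-multilingual-v3": 1024,
};

// Look up the expected dimension, failing loudly on unknown models
// so a mismatched index is never created silently.
function dimensionFor(model: string): number {
  const dim = MODEL_DIMENSIONS[model];
  if (dim === undefined) {
    throw new Error(`Unknown embedding model: ${model}`);
  }
  return dim;
}
```

You would then pass `dimensionFor("text-embedding-3-small")` as the `dimension` argument to `createIndex`.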
Naming Rules for Databases
Each vector database enforces specific naming conventions for indexes and collections to ensure compatibility and prevent conflicts.
Index names must:
- Start with a letter or underscore
- Contain only letters, numbers, and underscores
- Example: my_index_123 is valid
- Example: my-index is not valid (contains a hyphen)
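The rules above can be expressed as a single regular expression and checked before calling `createIndex`. This validator is a sketch, not something Mastra exposes:

```typescript
// Start with a letter or underscore, then only letters, digits, underscores.
const INDEX_NAME_RE = /^[A-Za-z_][A-Za-z0-9_]*$/;

// Returns true when the name satisfies the naming rules described above.
function isValidIndexName(name: string): boolean {
  return INDEX_NAME_RE.test(name);
}
```

Checking names up front turns a database-specific error into a clear validation failure in your own code.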
Upserting Embeddings
After creating an index, you can store embeddings along with their basic metadata:
// Store embeddings with their corresponding metadata
await store.upsert({
  indexName: 'myCollection',   // index name
  vectors: embeddings,         // array of embedding vectors
  metadata: chunks.map(chunk => ({
    text: chunk.text,          // the original text content
    id: chunk.id,              // optional unique identifier
  })),
});
The upsert operation:
- Takes an array of embedding vectors and their corresponding metadata
- Updates existing vectors if they share the same ID
- Creates new vectors if they don’t exist
- Automatically handles batching for large datasets
For complete examples of upserting embeddings in different vector stores, see the Upsert Embeddings guide.
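The update-or-insert contract can be illustrated with a tiny in-memory mock (this stands in for the real store only to show the semantics, and is not how Mastra implements upsert):

```typescript
// A record pairing an id with its embedding vector and metadata.
interface VectorRecord {
  id: string;
  vector: number[];
  metadata: { text: string };
}

// In-memory stand-in for a vector index, keyed by record id.
class InMemoryIndex {
  private rows = new Map<string, VectorRecord>();

  // Same id: the existing record is overwritten. New id: inserted.
  upsert(records: VectorRecord[]): void {
    for (const r of records) {
      this.rows.set(r.id, r);
    }
  }

  get(id: string): VectorRecord | undefined {
    return this.rows.get(id);
  }

  get size(): number {
    return this.rows.size;
  }
}
```

Re-running an ingestion pipeline with stable chunk ids therefore updates stale entries instead of duplicating them.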
Adding Metadata
Vector stores support rich metadata (any JSON-serializable fields) for filtering and organization. Since metadata is stored with no fixed schema, use consistent field naming to avoid unexpected query results.
Important: Metadata is crucial for vector storage - without it, you’d only have numerical embeddings with no way to return the original text or filter results. Always store at least the source text as metadata.
// Store embeddings with rich metadata for better organization and filtering
await store.upsert({
  indexName: "myCollection",
  vectors: embeddings,
  metadata: chunks.map((chunk) => ({
    // Basic content
    text: chunk.text,
    id: chunk.id,
    // Document organization
    source: chunk.source,
    category: chunk.category,
    // Temporal metadata
    createdAt: new Date().toISOString(),
    version: "1.0",
    // Custom fields
    language: chunk.language,
    author: chunk.author,
    confidenceScore: chunk.score,
  })),
});
Key metadata considerations:
- Be strict with field naming - inconsistencies like ‘category’ vs ‘Category’ will affect queries
- Only include fields you plan to filter or sort by - extra fields add overhead
- Add timestamps (e.g., ‘createdAt’, ‘lastUpdated’) to track content freshness
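One way to enforce these considerations is to centralize metadata construction in a single typed helper, so every ingestion path produces the same field names. The `Chunk` shape and `toMetadata` helper below are illustrative assumptions, not Mastra APIs:

```typescript
// Assumed chunk shape for this sketch.
interface Chunk {
  text: string;
  id: string;
  source: string;
  category: string;
}

// Single source of truth for metadata fields: consistent lowercase
// keys, only fields we filter on, plus a freshness timestamp.
function toMetadata(chunk: Chunk) {
  return {
    text: chunk.text,          // always keep the original text
    id: chunk.id,
    source: chunk.source,      // "source", never "Source"
    category: chunk.category,
    createdAt: new Date().toISOString(), // track content freshness
  };
}
```

Because every record goes through `toMetadata`, a filter on `category` can never silently miss records that were written with a differently cased key.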
Best Practices
- Create indexes before bulk insertions
- Use batch operations for large insertions (the upsert method handles batching automatically)
- Only store metadata you’ll query against
- Match embedding dimensions to your model (e.g., 1536 for text-embedding-3-small)