
Insert Embedding in Pinecone

After generating embeddings, you need to store them in a vector database that supports similarity search. The PineconeVector class provides methods to create indexes and insert embeddings into Pinecone, a managed vector database service. This example shows how to store the embeddings so they can be retrieved later.

import { openai } from '@ai-sdk/openai';
import { PineconeVector } from '@mastra/pinecone';
import { MDocument } from '@mastra/rag';
import { embedMany } from 'ai';
 
// Create a document and split it into chunks for embedding
const doc = MDocument.fromText('Your text content...');
 
const chunks = await doc.chunk();
 
// Generate one embedding per chunk using OpenAI's text-embedding-3-small (1536 dimensions)
const { embeddings } = await embedMany({
  values: chunks.map(chunk => chunk.text),
  model: openai.embedding('text-embedding-3-small'),
});
 
// Connect to Pinecone using your API key (from the Pinecone console)
const pinecone = new PineconeVector('your-api-key');
 
// Create an index whose dimension matches the embedding model (1536 for text-embedding-3-small).
// Pinecone index names may only contain lowercase letters, numbers, and hyphens.
await pinecone.createIndex('test-index', 1536);
 
// Store the embeddings, keeping each chunk's text as metadata
await pinecone.upsert(
  'test-index',
  embeddings,
  chunks.map(chunk => ({ text: chunk.text })),
);
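
Once the embeddings are upserted, you can retrieve the most relevant chunks by embedding a query with the same model and searching the index. The sketch below assumes PineconeVector exposes a query(indexName, queryVector, topK) method that returns matches along with the metadata stored during upsert; check the PineconeVector API reference for the exact signature in your version.

import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
 
// Embed the question with the same model used for the chunks
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'What does the document say about pricing?',
});
 
// Assumed signature: query(indexName, queryVector, topK) -> matches with scores and metadata
const results = await pinecone.query('test-index', embedding, 3);
 
// Each match carries the chunk text saved as metadata during upsert
for (const match of results) {
  console.log(match.score, match.metadata?.text);
}

The query vector must come from the same embedding model as the stored vectors, otherwise the dimensions (and the similarity scores) will not line up.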

View Example on GitHub