
Insert Embedding in Pinecone

After generating embeddings, you need to store them in a vector database for similarity search. The PineconeVector class provides methods to create indexes and insert embeddings into Pinecone, a managed vector database service. This example shows how to chunk a document, embed the chunks, and store the resulting vectors in Pinecone for later retrieval.

import { openai } from '@ai-sdk/openai';
import { PineconeVector } from '@mastra/pinecone';
import { MDocument } from '@mastra/rag';
import { embedMany } from 'ai';
 
// Load the source text and split it into chunks
const doc = MDocument.fromText('Your text content...');
 
const chunks = await doc.chunk();
 
// Generate an embedding for each chunk
const { embeddings } = await embedMany({
  values: chunks.map(chunk => chunk.text),
  model: openai.embedding('text-embedding-3-small'),
});
 
// Connect to Pinecone using your API key
const pinecone = new PineconeVector(process.env.PINECONE_API_KEY!);
 
// Create an index whose dimension matches the embedding model
// (1536 for text-embedding-3-small)
await pinecone.createIndex({
  indexName: 'testindex',
  dimension: 1536,
});
 
// Store the embeddings, keeping each chunk's text as metadata
await pinecone.upsert({
  indexName: 'testindex',
  vectors: embeddings,
  metadata: chunks.map(chunk => ({ text: chunk.text })),
});
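Once the vectors are stored, they can be retrieved with a similarity query. The sketch below is a minimal illustration, assuming the store's query method takes an object with indexName, queryVector, and topK (mirroring the upsert call above) and that the query text is embedded with the same model; the exact query options and result shape may vary with your @mastra/pinecone version.

import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
 
// Embed the query text with the same model used for the stored chunks
const { embedding } = await embed({
  value: 'What does the document say about ...?',
  model: openai.embedding('text-embedding-3-small'),
});
 
// Retrieve the most similar chunks (query options assumed to mirror upsert)
const results = await pinecone.query({
  indexName: 'testindex',
  queryVector: embedding,
  topK: 5,
});
 
// Each result is assumed to carry the chunk text in its metadata
console.log(results.map(result => result.metadata?.text));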

View Example on GitHub