
vectorQuerySearch()

The vectorQuerySearch() function performs a semantic search against a vector store, handling both embedding generation and the vector query in a single call. It is a core utility used internally by Mastra’s RAG tools.

Basic Usage

import { vectorQuerySearch } from "@mastra/rag";
 
const searchResults = await vectorQuerySearch({
  indexName: "docs",
  vectorStore: myVectorStore,
  queryText: "How do I set up authentication?",
  options: {
    provider: "OPEN_AI",
    model: "text-embedding-ada-002",
    maxRetries: 3,
  },
  topK: 5
});
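This call embeds the query text with the configured provider and model, then searches the docs index and returns the five closest matches.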

Parameters

indexName (string): Name of the vector store index to search.

vectorStore (MastraVector): Vector store instance to perform the search against.

queryText (string): The text query to search for.

options (EmbeddingOptions): Configuration options for embedding generation.

queryFilter? (any): Optional filter criteria for the search.

topK (number): Maximum number of results to return.

includeVectors? (boolean): Whether to include embedding vectors in the results.
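
Taken together, the parameters roughly correspond to the shape sketched below. This is an illustrative sketch only, not the type exported by @mastra/rag; the MastraVector and EmbeddingOptions placeholders stand in for the library's own definitions.

// Illustrative sketch of the argument shape described above; not the
// actual exported type. The placeholder types are assumptions.
type MastraVector = unknown; // placeholder for the vector store instance type
type EmbeddingOptions = {
  provider: string;    // e.g. "OPEN_AI"
  model: string;       // e.g. "text-embedding-ada-002"
  maxRetries?: number; // retry attempts for embedding calls
};

interface VectorQuerySearchArgs {
  indexName: string;         // vector store index to search
  vectorStore: MastraVector; // store instance the query runs against
  queryText: string;         // text to embed and search for
  options: EmbeddingOptions; // embedding generation configuration
  queryFilter?: any;         // optional filter criteria
  topK: number;              // maximum number of results to return
  includeVectors?: boolean;  // include embedding vectors in the results
}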

Returns

results (QueryResult[]): Array of matching documents with their metadata and scores.

queryEmbedding (number[]): The generated embedding vector for the query text.
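
The sketch below shows one way to post-process the return value: filtering matches by score and caching the query embedding so repeated identical queries can skip re-embedding. It reuses myVectorStore from the earlier example, assumes each QueryResult exposes a score field (per the description above), and uses a plain Map as a hypothetical cache; adapt it to your own types and caching strategy.

// Minimal sketch of consuming the return value. The score threshold and
// the Map-based cache are illustrative assumptions, not part of @mastra/rag.
const query = "How do I set up authentication?";
const embeddingCache = new Map<string, number[]>();

const { results, queryEmbedding } = await vectorQuerySearch({
  indexName: "docs",
  vectorStore: myVectorStore,
  queryText: query,
  options: {
    provider: "OPEN_AI",
    model: "text-embedding-ada-002",
    maxRetries: 3,
  },
  topK: 10,
});

// Keep only reasonably confident matches (the threshold is arbitrary).
const confident = results.filter((r) => (r.score ?? 0) >= 0.75);
console.log(`Kept ${confident.length} of ${results.length} matches`);

// Cache the query embedding keyed by the query text.
embeddingCache.set(query, queryEmbedding);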

Example with Filters

const { results, queryEmbedding } = await vectorQuerySearch({
  indexName: "documentation",
  vectorStore,
  queryText: "deployment guide",
  options: {
    provider: "OPEN_AI",
    model: "text-embedding-ada-002",
    maxRetries: 3,
  },
  queryFilter: {
    category: { "eq": "technical" },
  },
  topK: 3,
  includeVectors: true
});
 
// Process results
results.forEach(result => {
  console.log(`Match: ${result.metadata.text}`);
  if (result.vector) {
    console.log(`Vector dimensions: ${result.vector.length}`);
  }
});
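
Because the example requests includeVectors: true, each result carries its embedding alongside the returned queryEmbedding, so a similarity can be recomputed locally. The sketch below is illustrative only; depending on the store's distance metric, the locally computed cosine similarity may not match the score returned by the vector store.

// Minimal sketch: cosine similarity between the query embedding and each
// returned vector (requires includeVectors: true in the call above).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

for (const result of results) {
  if (result.vector) {
    const similarity = cosineSimilarity(queryEmbedding, result.vector);
    console.log(`Local cosine similarity: ${similarity.toFixed(4)}`);
  }
}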
