MultiQueryEmbeddingRetriever

Retrieve documents using multiple text queries in parallel with an embedding-based retriever. It improves retrieval recall by finding documents that are relevant to any of the query variations, not just a single formulation.

Key Features

  • Processes multiple queries in parallel using a thread pool.
  • Converts each text query to embeddings using a configurable query embedder.
  • Combines results from all queries, deduplicates by content, and sorts by relevance score.
  • Works best with QueryExpander to generate semantically similar query variations.
  • Helps find relevant documents that a single query formulation might miss.
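The combine step described above can be sketched in plain Python. This is a toy illustration, not the component's actual implementation: `retrieve` is a stand-in for embedding each query and calling the retriever, and it assumes that deduplication keeps the highest score seen for a duplicated document.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an embedding retriever: maps a query to (content, score) pairs.
# In the real component, each query is embedded and sent to the retriever;
# this toy index only illustrates the combine step.
def retrieve(query: str) -> list[tuple[str, float]]:
    toy_index = {
        "how to reset password": [("Reset guide", 0.9), ("Account FAQ", 0.6)],
        "forgot my password": [("Reset guide", 0.8), ("Login help", 0.7)],
    }
    return toy_index.get(query, [])

def multi_query_retrieve(queries: list[str], max_workers: int = 3) -> list[tuple[str, float]]:
    # 1. Process all queries in parallel using a thread pool.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_query_results = list(pool.map(retrieve, queries))
    # 2. Deduplicate by content, keeping the best score seen (an assumption here).
    best: dict[str, float] = {}
    for results in per_query_results:
        for content, score in results:
            if content not in best or score > best[content]:
                best[content] = score
    # 3. Sort the merged results by relevance score, descending.
    return sorted(best.items(), key=lambda item: item[1], reverse=True)

docs = multi_query_retrieve(["how to reset password", "forgot my password"])
# "Reset guide" appears once, with its best score (0.9), ranked first.
```

Note how a document that only matches the second query variation ("Login help") still makes it into the final list, which is the recall benefit the component provides.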

Configuration

  1. Drag the MultiQueryEmbeddingRetriever component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Select the embedding-based retriever to use for document retrieval.
    2. Select the query embedder to convert text queries to embeddings.
  4. Go to the Advanced tab to configure max_workers for parallel processing.

Connections

MultiQueryEmbeddingRetriever accepts a list of queries (strings) and an optional retriever_kwargs dictionary as inputs. It outputs documents: a deduplicated list of retrieved documents sorted by relevance score.

Typically, you connect QueryExpander to the queries input to provide expanded query variations. Send the documents output to a Ranker or DocumentJoiner for further processing.

Usage Example

Here's an example that combines QueryExpander with MultiQueryEmbeddingRetriever. You could then send the retrieved documents to a Ranker or DocumentJoiner component to combine the results:

```yaml
components:
  query_expander:
    type: haystack.components.query.query_expander.QueryExpander
    init_parameters:
      n_expansions: 3
      include_original_query: true
      chat_generator:
        type: haystack_integrations.components.generators.anthropic.chat.chat_generator.AnthropicChatGenerator
        init_parameters: {}

  multi_query_retriever:
    type: haystack.components.retrievers.multi_query_embedding_retriever.MultiQueryEmbeddingRetriever
    init_parameters:
      query_embedder:
        type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
        init_parameters:
          model: sentence-transformers/all-MiniLM-L6-v2
      retriever:
        type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
        init_parameters:
          document_store:
            type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
          top_k: 5
      max_workers: 3

connections:
  - sender: query_expander.queries
    receiver: multi_query_retriever.queries

max_runs_per_component: 100

metadata: {}

inputs:
  query:
    - query_expander.query
```

Parameters

Inputs

| Parameter | Type | Description |
| --- | --- | --- |
| queries | List[str] | List of text queries to process. |
| retriever_kwargs | Optional[Dict[str, Any]] | Optional dictionary of arguments for the retriever. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| documents | List[Document] | List of retrieved documents sorted by relevance score, deduplicated by content. |

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| retriever | EmbeddingRetriever | | The embedding-based retriever to use for document retrieval. Must implement the EmbeddingRetriever protocol. |
| query_embedder | TextEmbedder | | The query embedder to convert text queries to embeddings. Must implement the TextEmbedder protocol. |
| max_workers | int | 3 | Maximum number of worker threads for parallel processing. |

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| queries | List[str] | | List of text queries to process. |
| retriever_kwargs | Optional[Dict[str, Any]] | None | Optional dictionary of arguments to pass to the retriever's run method (for example, filters, top_k). |
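To make the retriever_kwargs behavior concrete, here is a minimal sketch of how such a dictionary is forwarded. Everything here is a stand-in: retriever_run imitates a retriever's run method, the embedding is faked, and the filter dictionary is only illustrative, so check your retriever's documentation for the exact filter syntax it accepts.

```python
from typing import Any, Optional

# Stand-in for a retriever's run method that accepts keyword overrides,
# mimicking how retriever_kwargs (e.g. filters, top_k) reach the retriever.
def retriever_run(query_embedding: list[float], top_k: int = 10,
                  filters: Optional[dict] = None) -> dict[str, Any]:
    return {"embedding": query_embedding, "top_k": top_k, "filters": filters}

def run(queries: list[str], retriever_kwargs: Optional[dict] = None) -> list[dict]:
    kwargs = retriever_kwargs or {}
    results = []
    for query in queries:
        embedding = [float(len(query))]  # stand-in for the query embedder
        # The same retriever_kwargs are applied to every per-query call.
        results.append(retriever_run(embedding, **kwargs))
    return results

out = run(
    ["solar power", "wind energy"],
    retriever_kwargs={"top_k": 3, "filters": {"field": "meta.year", "operator": ">=", "value": 2020}},
)
# Both per-query calls receive top_k=3 and the same filters.
```

Because retriever_kwargs is shared across all queries, use it for settings that should apply uniformly, such as a metadata filter or a per-query top_k.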