
OpenSearchHybridRetriever

Retrieve documents from OpenSearch using a combination of BM25 keyword search and embedding-based semantic search. This hybrid approach typically provides better retrieval quality than using either method alone.

Key Features

  • Combines BM25 keyword search and embedding-based semantic search in a single component.
  • Runs both retrieval methods in parallel.
  • Merges results using a configurable join strategy, including Reciprocal Rank Fusion (RRF).
  • Supports separate filter policies for BM25 and embedding retrieval.
  • Supports custom OpenSearch queries for both retrieval methods.
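Reciprocal Rank Fusion, the default join strategy, scores each document by the reciprocal of its rank in each result list, so documents ranked highly by both retrievers rise to the top. The following is a minimal, self-contained sketch of the idea, not the component's internal implementation; the constant `k=60` is the value commonly used in the RRF literature:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    Each document scores sum(1 / (k + rank)) over the lists that
    contain it, so agreement between retrievers is rewarded.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document IDs by fused score, best first
    return sorted(scores, key=scores.get, reverse=True)


bm25_ranking = ["doc_a", "doc_b", "doc_c"]
embedding_ranking = ["doc_b", "doc_d", "doc_a"]
fused = reciprocal_rank_fusion([bm25_ranking, embedding_ranking])
# "doc_b" and "doc_a" appear in both lists, so they lead the fused ranking
```

Because RRF uses only ranks, it needs no score normalization between BM25 and embedding similarity, which use incompatible score scales.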

Configuration

  1. Drag the OpenSearchHybridRetriever component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Select the document store. The document store determines where documents are retrieved from.
    2. Select the text embedder to use for embedding the query for semantic search.
  4. Go to the Advanced tab to configure top_k, filters, fuzziness, join mode, weights, and filter policies for both BM25 and embedding retrieval.

Connections

OpenSearchHybridRetriever accepts a query string and, optionally, filters_bm25, filters_embedding, and a top_k override as inputs. It outputs documents: a combined, ranked list of documents from both retrieval methods.

Typically, you connect the query input to the pipeline's Input component and send the documents output to a PromptBuilder, Ranker, or an answer builder.

Usage Example

This is an example RAG pipeline with OpenSearchHybridRetriever combining BM25 and embedding-based retrieval:

```yaml
components:
  hybrid_retriever:
    type: haystack_integrations.components.retrievers.opensearch.open_search_hybrid_retriever.OpenSearchHybridRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'default'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      embedder:
        type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
        init_parameters:
          normalize_embeddings: true
          model: intfloat/e5-base-v2
      filters_bm25:
      fuzziness: AUTO
      top_k_bm25: 20
      scale_score: false
      all_terms_must_match: false
      filter_policy_bm25: replace
      custom_query_bm25:
      filters_embedding:
      top_k_embedding: 20
      filter_policy_embedding: replace
      custom_query_embedding:
      join_mode: reciprocal_rank_fusion
      weights:
      top_k: 10
      sort_by_score: true

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

  PromptBuilder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: " You are a technical expert.\n You answer questions truthfully based on provided documents.\n If the answer exists in several documents, summarize them.\n Ignore documents that don't contain the answer to the question.\n Only answer based on the documents provided. Don't make things up.\n If no information related to the question can be found in the document, say so.\n Always use references in the form [NUMBER OF DOCUMENT] when using information from a document, e.g. [3] for Document [3] .\n Never name the documents, only enter a number in square brackets as a reference.\n The reference must only refer to the number that comes in square brackets after the document.\n Otherwise, do not use brackets in your answer and reference ONLY the number of the document without mentioning the word document.\n\n These are the documents:\n {%- if documents|length > 0 %}\n {%- for document in documents %}\n Document [{{ loop.index }}] :\n Name of Source File: {{ document.meta.file_name }}\n {{ document.content }}\n {% endfor -%}\n {%- else %}\n No relevant documents found.\n Respond with \"Sorry, no matching documents were found, please adjust the filters or try a different question.\"\n {% endif %}\n\n Question: {{ question }}\n Answer:"
      required_variables:
      variables:

  OpenAIGenerator:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: false
      model: gpt-5-mini
      streaming_callback:
      api_base_url:
      organization:
      system_prompt:
      generation_kwargs:
      timeout:
      max_retries:
      http_client_kwargs:

connections:
  - sender: hybrid_retriever.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: PromptBuilder.documents
  - sender: PromptBuilder.prompt
    receiver: OpenAIGenerator.prompt
  - sender: OpenAIGenerator.replies
    receiver: answer_builder.replies

inputs:
  query:
    - "hybrid_retriever.query"
    - "ranker.query"
    - "PromptBuilder.question"
    - "answer_builder.query"

outputs:
  documents: "meta_field_grouping_ranker.documents"
  answers: "answer_builder.answers"

max_runs_per_component: 100

metadata: {}
```

Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | | The query string to search for. |
| filters_bm25 | Optional[Dict[str, Any]] | None | Filters to apply during BM25 retrieval. |
| filters_embedding | Optional[Dict[str, Any]] | None | Filters to apply during embedding retrieval. |
| top_k | Optional[int] | None | Maximum number of documents to return from the combined results. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| documents | List[Document] | Documents retrieved and ranked using hybrid search. |

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| document_store | OpenSearchDocumentStore | | An instance of OpenSearchDocumentStore to use with the retriever. |
| embedder | TextEmbedder | | A TextEmbedder component to embed the query for semantic search. |
| filters_bm25 | Optional[Dict[str, Any]] | None | Default filters for BM25 retrieval. |
| fuzziness | Union[int, str] | "AUTO" | The fuzziness setting for BM25 retrieval. |
| top_k_bm25 | int | 10 | Number of documents to return from BM25 retrieval. |
| scale_score | bool | False | Whether to scale the BM25 scores. |
| all_terms_must_match | bool | False | Whether all query terms must match in BM25 retrieval. |
| filter_policy_bm25 | Union[str, FilterPolicy] | "replace" | How to apply runtime filters for BM25. Options: "replace", "merge". |
| custom_query_bm25 | Optional[Dict[str, Any]] | None | A custom OpenSearch query for BM25 retrieval. |
| filters_embedding | Optional[Dict[str, Any]] | None | Default filters for embedding retrieval. |
| top_k_embedding | int | 10 | Number of documents to return from embedding retrieval. |
| filter_policy_embedding | Union[str, FilterPolicy] | "replace" | How to apply runtime filters for embedding retrieval. Options: "replace", "merge". |
| custom_query_embedding | Optional[Dict[str, Any]] | None | A custom OpenSearch query for embedding retrieval. |
| join_mode | Union[str, JoinMode] | "reciprocal_rank_fusion" | How to combine results from both retrievers. Options: "concatenate", "merge", "reciprocal_rank_fusion", "distribution_based_rank_fusion". |
| weights | Optional[List[float]] | None | Weights for the joiner when combining results. |
| top_k | Optional[int] | None | Final number of documents to return after combining results. |
| sort_by_score | bool | True | Whether to sort the final results by score. |
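With a score-based join such as "merge", the weights parameter controls how much each retriever's scores contribute to a document's combined score. The sketch below illustrates one plausible weighted combination on plain doc-id-to-score dictionaries; it is not the component's exact implementation, and the equal-weights default here is an assumption for the example:

```python
def weighted_merge(results, weights=None):
    """Combine per-retriever score dicts into one ranking.

    `results` is a list of {doc_id: score} dicts, one per retriever.
    A document found by several retrievers gets the weighted sum of
    its scores, so weights bias the ranking toward one retriever.
    """
    if weights is None:
        # Assumption for this sketch: equal weight per retriever
        weights = [1.0 / len(results)] * len(results)
    combined = {}
    for weight, result in zip(weights, results):
        for doc_id, score in result.items():
            combined[doc_id] = combined.get(doc_id, 0.0) + weight * score
    # Return doc_id -> combined score, best first
    return dict(sorted(combined.items(), key=lambda kv: kv[1], reverse=True))


bm25_scores = {"a": 0.9, "b": 0.5}
embedding_scores = {"b": 0.8, "c": 0.4}
merged = weighted_merge([bm25_scores, embedding_scores], weights=[0.5, 0.5])
# "b" scores 0.5 * 0.5 + 0.5 * 0.8 = 0.65 and comes out on top
```

Unlike RRF, a weighted score merge is sensitive to the score scales of the two retrievers, which is one reason rank fusion is the default.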

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | | The query string to search for. |
| filters_bm25 | Optional[Dict[str, Any]] | None | Filters to apply during BM25 retrieval. The way filters are applied depends on the filter_policy_bm25 setting. |
| filters_embedding | Optional[Dict[str, Any]] | None | Filters to apply during embedding retrieval. The way filters are applied depends on the filter_policy_embedding setting. |
| top_k | Optional[int] | None | Maximum number of documents to return. Overrides the value set at initialization. |
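The filter policies decide how filters passed at query time interact with the defaults set at initialization: "replace" discards the defaults in favor of the runtime filters, while "merge" combines the two sets. The hypothetical sketch below shows the two policies on flat key-value filters for clarity; the real component operates on Haystack's nested filter syntax, and `apply_filter_policy` is an illustrative helper name, not the library's API:

```python
def apply_filter_policy(policy, init_filters, runtime_filters):
    """Illustrate 'replace' vs 'merge' filter policies.

    With 'merge', runtime values win when the same key appears in
    both filter sets; with 'replace', init-time filters are ignored
    whenever runtime filters are given.
    """
    if runtime_filters is None:
        # No runtime filters: the init-time defaults apply either way
        return init_filters
    if policy == "replace":
        return runtime_filters
    if policy == "merge":
        return {**(init_filters or {}), **runtime_filters}
    raise ValueError(f"unknown filter policy: {policy!r}")
```

For example, with init-time filters restricting the language and a runtime filter on year, "merge" applies both constraints while "replace" keeps only the year constraint.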