
FastembedSparseTextEmbedder

Compute sparse text embeddings using Fastembed sparse models.

Basic Information

  • Type: haystack_integrations.components.embedders.fastembed.fastembed_sparse_text_embedder.FastembedSparseTextEmbedder
  • Components it can connect with:
    • Input: Receives a query string as input in a query pipeline.
    • Retrievers: Sends the computed sparse embedding to a sparse retriever.

Inputs

Parameter | Type | Default | Description
text | str | Required | A string to embed.

Outputs

Parameter | Type | Description
sparse_embedding | SparseEmbedding | A sparse embedding representing the input text.

Overview

FastembedSparseTextEmbedder computes sparse text embeddings using Fastembed sparse models like SPLADE. Sparse embeddings are useful for hybrid search scenarios where you want to combine semantic search with keyword-based retrieval.
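As a minimal sketch, you can run the embedder on its own to inspect the sparse representation (the query string is illustrative):

    from haystack_integrations.components.embedders.fastembed import FastembedSparseTextEmbedder

    # Load the SPLADE model and embed a single query string
    embedder = FastembedSparseTextEmbedder(model="prithivida/Splade_PP_en_v1")
    embedder.warm_up()

    result = embedder.run(text="Who developed the theory of relativity?")
    sparse = result["sparse_embedding"]
    # SparseEmbedding stores only the non-zero dimensions:
    # .indices holds the active token ids, .values their weights
    print(len(sparse.indices), len(sparse.values))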

Use this component in query pipelines to embed the query for sparse retrieval, optionally alongside a dense text embedder for hybrid retrieval. Make sure to use the same sparse embedding model as the one used to embed the documents in the document store.
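A minimal query pipeline sketch, assuming Qdrant as a document store that supports sparse embeddings (any sparse-capable store works the same way):

    from haystack import Pipeline
    from haystack_integrations.components.embedders.fastembed import FastembedSparseTextEmbedder
    from haystack_integrations.components.retrievers.qdrant import QdrantSparseEmbeddingRetriever
    from haystack_integrations.document_stores.qdrant import QdrantDocumentStore

    # The store must be created with sparse embeddings enabled
    document_store = QdrantDocumentStore(":memory:", use_sparse_embeddings=True)

    pipeline = Pipeline()
    pipeline.add_component("sparse_text_embedder", FastembedSparseTextEmbedder(model="prithivida/Splade_PP_en_v1"))
    pipeline.add_component("sparse_retriever", QdrantSparseEmbeddingRetriever(document_store=document_store))

    # The embedder's sparse embedding feeds the retriever's query input
    pipeline.connect("sparse_text_embedder.sparse_embedding", "sparse_retriever.query_sparse_embedding")

    result = pipeline.run({"sparse_text_embedder": {"text": "Who developed the theory of relativity?"}})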

Compatible Models

You can find the supported models in the FastEmbed documentation.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
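With the Fastembed sparse embedders, that pairing looks like this, with the same model string on both sides:

    from haystack_integrations.components.embedders.fastembed import (
        FastembedSparseDocumentEmbedder,  # indexing pipeline: embeds Documents
        FastembedSparseTextEmbedder,      # query pipeline: embeds the query string
    )

    MODEL = "prithivida/Splade_PP_en_v1"

    # Both pipelines must reference the same sparse model
    document_embedder = FastembedSparseDocumentEmbedder(model=MODEL)
    text_embedder = FastembedSparseTextEmbedder(model=MODEL)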

Usage Example

This query pipeline embeds the query with FastembedSparseTextEmbedder and sends the resulting sparse embedding to a sparse retriever. Qdrant is used here as an example of a document store that supports sparse embeddings:

components:
  FastembedSparseTextEmbedder:
    type: haystack_integrations.components.embedders.fastembed.fastembed_sparse_text_embedder.FastembedSparseTextEmbedder
    init_parameters:
      model: prithivida/Splade_PP_en_v1
      cache_dir:
      threads:
      progress_bar: true
      parallel:
      local_files_only: false
      model_kwargs:

  sparse_retriever:
    type: haystack_integrations.components.retrievers.qdrant.retriever.QdrantSparseEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.qdrant.document_store.QdrantDocumentStore
        init_parameters:
          url:
          index: 'default'
          embedding_dim: 768
          use_sparse_embeddings: true
          return_embedding: false
      top_k: 20

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - role: system
          content: "You are a helpful assistant answering questions based on the provided documents."
        - role: user
          content: "Documents:\n{% for doc in documents %}\n{{ doc.content }}\n{% endfor %}\n\nQuestion: {{ query }}"

  OpenAIChatGenerator:
    type: haystack.components.generators.chat.openai.OpenAIChatGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: false
      model: gpt-4o-mini

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

connections:
  - sender: FastembedSparseTextEmbedder.sparse_embedding
    receiver: sparse_retriever.query_sparse_embedding
  - sender: sparse_retriever.documents
    receiver: ChatPromptBuilder.documents
  - sender: ChatPromptBuilder.prompt
    receiver: OpenAIChatGenerator.messages
  - sender: OpenAIChatGenerator.replies
    receiver: OutputAdapter.replies
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: sparse_retriever.documents
    receiver: answer_builder.documents

inputs:
  query:
    - FastembedSparseTextEmbedder.text
    - ChatPromptBuilder.query
    - answer_builder.query
  filters:
    - sparse_retriever.filters

outputs:
  documents: sparse_retriever.documents
  answers: answer_builder.answers

max_runs_per_component: 100

metadata: {}

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

Parameter | Type | Default | Description
model | str | prithivida/Splade_PP_en_v1 | Local path or name of the model in Fastembed's model hub, such as prithivida/Splade_PP_en_v1.
cache_dir | Optional[str] | None | The path to the cache directory. Can be set using the FASTEMBED_CACHE_PATH env variable. Defaults to fastembed_cache in the system's temp directory.
threads | Optional[int] | None | The number of threads a single onnxruntime session can use.
progress_bar | bool | True | If True, displays a progress bar during embedding.
parallel | Optional[int] | None | If > 1, data-parallel encoding is used; this is recommended for offline encoding of large datasets. If 0, all available cores are used. If None, data-parallel processing is disabled and default onnxruntime threading is used instead.
local_files_only | bool | False | If True, only use the model files in cache_dir.
model_kwargs | Optional[Dict[str, Any]] | None | Dictionary containing model parameters such as k, b, avg_len, language.
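For example, a sketch that overrides a few of these defaults (the cache path and thread count are illustrative values):

    from haystack_integrations.components.embedders.fastembed import FastembedSparseTextEmbedder

    embedder = FastembedSparseTextEmbedder(
        model="prithivida/Splade_PP_en_v1",
        cache_dir="/tmp/fastembed_cache",  # illustrative path; FASTEMBED_CACHE_PATH also works
        threads=4,                         # cap the onnxruntime session at 4 threads
        progress_bar=False,
        local_files_only=False,
    )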

Run Method Parameters

These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.

Parameter | Type | Default | Description
text | str | Required | A string to embed.
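In a pipeline, text is supplied per component at query time. Continuing the pipeline sketch from the Overview section:

    # The outer key is the component name, the inner key the run() parameter
    result = pipeline.run({"sparse_text_embedder": {"text": "What is sparse retrieval?"}})
    print(result["sparse_retriever"]["documents"])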