
SentenceTransformersSparseDocumentEmbedder

Calculate sparse embeddings for documents using Sentence Transformers sparse models. The model runs locally, so no external API calls are made during embedding. Use this component in indexing pipelines to add sparse embeddings to documents before writing them to a document store.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same model you use to embed the query in your query pipeline.

This means the embedders in your indexing and query pipelines must match. For example, if you use SentenceTransformersSparseDocumentEmbedder to embed your documents, use SentenceTransformersSparseTextEmbedder with the same model to embed your queries.
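To see why the models must match, here is a toy illustration (not a real model): sparse embeddings are maps from token IDs to weights, and retrieval scores are dot products over shared token IDs. The dot product is only meaningful when the query and document vectors come from the same model's vocabulary.

```python
# Toy sparse vectors: {token_id: weight}. Token IDs and weights are invented
# for illustration; a real SPLADE model produces them from its own vocabulary.
def dot(query_vec, doc_vec):
    """Sparse dot product over the token IDs the two vectors share."""
    return sum(w * doc_vec.get(tok, 0.0) for tok, w in query_vec.items())

# Document embedded with "model A".
doc_vec_a = {101: 0.9, 245: 0.4, 578: 0.7}

# Query embedded with the same model A: shared token IDs contribute to the score.
query_vec_a = {101: 0.8, 578: 0.5}

# Query embedded with a different model B: the same words map to different
# token IDs, so the dot product against doc_vec_a is meaningless.
query_vec_b = {7: 0.8, 3021: 0.5}

print(dot(query_vec_a, doc_vec_a))  # 0.8*0.9 + 0.5*0.7 = 1.07
print(dot(query_vec_b, doc_vec_a))  # 0.0 — no shared token IDs
```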

Key Features

  • Downloads and runs sparse Sentence Transformers models locally — no external API required.
  • Stores sparse embeddings in the sparse_embedding field of each document.
  • Supports embedding document metadata fields alongside document text.
  • Compatible with SPLADE-based sparse retrieval models.
  • Supports a wide range of backends: PyTorch, ONNX, and OpenVINO.
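For intuition about what lands in the sparse_embedding field: a sparse embedding stores only the nonzero token weights, typically as parallel lists of indices and values (the layout used by Haystack's SparseEmbedding dataclass; the numbers below are invented for illustration):

```python
# Sketch of a sparse_embedding payload: parallel lists of active token IDs
# and their learned weights. Unlike a dense vector (~30,000 mostly-zero
# entries for a SPLADE vocabulary), only the nonzero entries are stored.
sparse_embedding = {
    "indices": [1012, 2452, 30087],  # active token IDs in the model's vocab
    "values": [0.55, 1.21, 0.83],    # learned weights for those tokens
}

# Pairing them up gives the token-id -> weight view used for scoring:
as_pairs = dict(zip(sparse_embedding["indices"], sparse_embedding["values"]))
print(as_pairs)  # {1012: 0.55, 2452: 1.21, 30087: 0.83}
```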

Configuration

  1. Drag the SentenceTransformersSparseDocumentEmbedder component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Enter the model name. Specify the path to a local model or the ID of the model on Hugging Face.
  4. Go to the Advanced tab to configure device, token, batch_size, prefix, suffix, trust_remote_code, model_kwargs, tokenizer_kwargs, config_kwargs, backend, and revision.
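Of the Advanced settings above, batch_size simply bounds how many documents go through the model per forward pass. A toy sketch of the chunking (not the component's actual internals):

```python
# Chunk a document list into batches of at most batch_size items,
# mirroring how batch_size bounds memory use during embedding.
def batches(items, batch_size=32):
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

docs = list(range(70))  # stand-ins for 70 documents
sizes = [len(b) for b in batches(docs, batch_size=32)]
print(sizes)  # [32, 32, 6]
```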

Connections

SentenceTransformersSparseDocumentEmbedder accepts a list of documents as input. It outputs documents — the same documents with sparse embeddings added in the sparse_embedding field.

Typically, you place this component in an indexing pipeline after a document splitter and before a DocumentWriter.

Usage Example

This index uses SentenceTransformersSparseDocumentEmbedder to create sparse embeddings for documents:

```yaml
components:
  FileTypeRouter:
    type: haystack.components.routers.file_type_router.FileTypeRouter
    init_parameters:
      mime_types:
        - text/plain
        - application/pdf
        - text/markdown

  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false

  PDFMinerToDocument:
    type: haystack.components.converters.pdfminer.PDFMinerToDocument
    init_parameters:
      store_full_path: false

  MarkdownToDocument:
    type: haystack.components.converters.markdown.MarkdownToDocument
    init_parameters:
      store_full_path: false

  DocumentJoiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
      sort_by_score: false

  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 250
      split_overlap: 30
      respect_sentence_boundary: true
      language: en

  SparseDocumentEmbedder:
    type: haystack.components.embedders.sentence_transformers_sparse_document_embedder.SentenceTransformersSparseDocumentEmbedder
    init_parameters:
      model: prithivida/Splade_PP_en_v2
      batch_size: 32
      progress_bar: true

  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: ''
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE

connections:
  - sender: FileTypeRouter.text/plain
    receiver: TextFileToDocument.sources
  - sender: FileTypeRouter.application/pdf
    receiver: PDFMinerToDocument.sources
  - sender: FileTypeRouter.text/markdown
    receiver: MarkdownToDocument.sources
  - sender: TextFileToDocument.documents
    receiver: DocumentJoiner.documents
  - sender: PDFMinerToDocument.documents
    receiver: DocumentJoiner.documents
  - sender: MarkdownToDocument.documents
    receiver: DocumentJoiner.documents
  - sender: DocumentJoiner.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: SparseDocumentEmbedder.documents
  - sender: SparseDocumentEmbedder.documents
    receiver: DocumentWriter.documents

max_runs_per_component: 100

metadata: {}

inputs:
  files:
    - FileTypeRouter.sources
```
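A matching query pipeline would embed the incoming query with the sparse text embedder counterpart, configured with the same model as the indexing pipeline. A minimal sketch (the component type path is assumed by analogy with the document embedder above):

```yaml
components:
  SparseTextEmbedder:
    type: haystack.components.embedders.sentence_transformers_sparse_text_embedder.SentenceTransformersSparseTextEmbedder
    init_parameters:
      model: prithivida/Splade_PP_en_v2  # must match the indexing model
```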

Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents with sparse embeddings added in the sparse_embedding field. |

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | prithivida/Splade_PP_en_v2 | The model to use for calculating sparse embeddings. Pass a local path or the ID of the model on Hugging Face. |
| device | Optional[ComponentDevice] | None | The device to use for loading the model. Overrides the default device. |
| token | Optional[Secret] | | The API token to download private models from Hugging Face. |
| prefix | str | "" | A string to add at the beginning of each document text. |
| suffix | str | "" | A string to add at the end of each document text. |
| batch_size | int | 32 | Number of documents to embed at once. |
| progress_bar | bool | True | If True, shows a progress bar when embedding documents. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed along with the document text. |
| embedding_separator | str | "\n" | Separator used to concatenate the metadata fields to the document text. |
| trust_remote_code | bool | False | If True, allows custom models and scripts. |
| local_files_only | bool | False | If True, only looks at local files without downloading from Hugging Face Hub. |
| model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoModelForSequenceClassification.from_pretrained when loading the model. Refer to specific model documentation for available kwargs. |
| tokenizer_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoTokenizer.from_pretrained when loading the tokenizer. Refer to specific model documentation for available kwargs. |
| config_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoConfig.from_pretrained when loading the model configuration. |
| backend | Literal["torch", "onnx", "openvino"] | torch | The backend to use for the Sentence Transformers model. Choose from torch, onnx, or openvino. Refer to the Sentence Transformers documentation for more information on acceleration and quantization options. |
| revision | Optional[str] | None | The specific model version to use. It can be a branch name, a tag name, or a commit ID for a stored model on Hugging Face. |
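A sketch of how prefix, suffix, meta_fields_to_embed, and embedding_separator plausibly combine into the text that gets embedded (parameter names are from the table above; the exact internals of the component may differ):

```python
# Build the string to embed for one document: selected metadata fields,
# joined to the content with embedding_separator, wrapped in prefix/suffix.
def text_to_embed(content, meta, prefix="", suffix="",
                  meta_fields_to_embed=None, embedding_separator="\n"):
    fields = [str(meta[f]) for f in (meta_fields_to_embed or []) if meta.get(f)]
    return prefix + embedding_separator.join(fields + [content]) + suffix

doc_meta = {"title": "Sparse retrieval", "author": "Jane Doe"}
print(text_to_embed("SPLADE expands queries.", doc_meta,
                    meta_fields_to_embed=["title"]))
# Sparse retrieval
# SPLADE expands queries.
```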

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents to embed. |