FastembedSparseDocumentEmbedder
Compute sparse document embeddings using Fastembed sparse models.
Basic Information
- Type: haystack_integrations.components.embedders.fastembed.fastembed_sparse_document_embedder.FastembedSparseDocumentEmbedder
- Components it can connect with:
  - Preprocessors: Receives documents from Converters or DocumentSplitter in an index.
  - DocumentWriter: Sends embedded documents to DocumentWriter for storage.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | List of Documents to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | List of Documents with each Document's sparse_embedding field set to the computed embeddings. |
Overview
FastembedSparseDocumentEmbedder computes sparse document embeddings using Fastembed sparse models such as SPLADE. Sparse embeddings are useful in hybrid search scenarios, where you combine semantic search with keyword-based retrieval.
The sparse embedding of each document is stored in its sparse_embedding field. During retrieval, the sparse embedding of the query is compared to the sparse embeddings of the documents to identify the most similar or relevant ones.
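To make the retrieval step concrete, here is a minimal pure-Python sketch (not using Fastembed) of how a sparse query embedding can be scored against sparse document embeddings. The `{token_id: weight}` dict representation and the example values are illustrative, loosely mimicking a sparse embedding's indices and values:

```python
# Illustrative only: sparse embeddings represented as {token_id: weight} dicts.

def sparse_dot(query: dict[int, float], doc: dict[int, float]) -> float:
    """Score a document by summing weight products over shared token indices."""
    return sum(w * doc[i] for i, w in query.items() if i in doc)

query_emb = {101: 0.9, 2054: 0.4, 3000: 0.1}
doc_embs = {
    "doc_a": {101: 0.8, 3000: 0.5},   # shares two tokens with the query
    "doc_b": {2054: 0.3},             # shares one token
    "doc_c": {9999: 1.0},             # no overlap
}

# Rank documents by sparse similarity to the query.
ranking = sorted(doc_embs, key=lambda d: sparse_dot(query_emb, doc_embs[d]), reverse=True)
print(ranking)  # ['doc_a', 'doc_b', 'doc_c']
```

Because only overlapping token indices contribute to the score, documents with no vocabulary overlap with the query score zero, which is what gives sparse retrieval its keyword-like behavior.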
Compatible Models
You can find the supported models in the FastEmbed documentation.
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use FastembedSparseDocumentEmbedder to embed your documents, you should use FastembedSparseTextEmbedder with the same model to embed your queries.
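As a sketch of what matching embedders look like in config, the two pipelines might declare the same `model` value like this (component names are illustrative; the text-embedder module path is assumed to follow the same naming pattern as the document embedder):

```yaml
# Indexing pipeline: embeds documents
FastembedSparseDocumentEmbedder:
  type: haystack_integrations.components.embedders.fastembed.fastembed_sparse_document_embedder.FastembedSparseDocumentEmbedder
  init_parameters:
    model: prithivida/Splade_PP_en_v1

# Query pipeline: same model, text variant of the embedder
FastembedSparseTextEmbedder:
  type: haystack_integrations.components.embedders.fastembed.fastembed_sparse_text_embedder.FastembedSparseTextEmbedder
  init_parameters:
    model: prithivida/Splade_PP_en_v1
```

If the two `model` values diverge, query and document embeddings live in different sparse vocabular­y spaces and retrieval quality degrades silently.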
Usage Example
This index uses FastembedSparseDocumentEmbedder to create sparse embeddings:
```yaml
components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: sentence
      split_length: 5
      split_overlap: 1
  FastembedSparseDocumentEmbedder:
    type: haystack_integrations.components.embedders.fastembed.fastembed_sparse_document_embedder.FastembedSparseDocumentEmbedder
    init_parameters:
      model: prithivida/Splade_PP_en_v1
      cache_dir:
      threads:
      batch_size: 32
      progress_bar: true
      parallel:
      local_files_only: false
      meta_fields_to_embed:
      embedding_separator: "\n"
      model_kwargs:
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'sparse-index'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE

connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: FastembedSparseDocumentEmbedder.documents
  - sender: FastembedSparseDocumentEmbedder.documents
    receiver: DocumentWriter.documents

inputs:
  files:
    - TextFileToDocument.sources

max_runs_per_component: 100

metadata: {}
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | prithivida/Splade_PP_en_v1 | Local path or name of the model in Hugging Face's model hub, such as prithivida/Splade_PP_en_v1. |
| cache_dir | Optional[str] | None | The path to the cache directory. Can be set using the FASTEMBED_CACHE_PATH env variable. Defaults to fastembed_cache in the system's temp directory. |
| threads | Optional[int] | None | The number of threads single onnxruntime session can use. |
| batch_size | int | 32 | Number of strings to encode at once. |
| progress_bar | bool | True | If True, displays progress bar during embedding. |
| parallel | Optional[int] | None | If > 1, data-parallel encoding is used, recommended for offline encoding of large datasets. If 0, use all available cores. If None, don't use data-parallel processing, use default onnxruntime threading instead. |
| local_files_only | bool | False | If True, only use the model files in the cache_dir. |
| meta_fields_to_embed | Optional[List[str]] | None | List of meta fields that should be embedded along with the Document content. |
| embedding_separator | str | \n | Separator used to concatenate the meta fields to the Document content. |
| model_kwargs | Optional[Dict[str, Any]] | None | Dictionary containing model parameters such as k, b, avg_len, language. |
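To illustrate how meta_fields_to_embed and embedding_separator interact, here is an approximation of how a document embedder composes the text it embeds: the values of the selected meta fields are joined to the document content with the separator. The helper name and exact filtering of empty fields are assumptions for illustration, not the component's actual internals:

```python
# Approximation: selected meta field values are prepended to the document
# content, all joined by embedding_separator.

def text_to_embed(content: str, meta: dict, meta_fields_to_embed: list[str],
                  embedding_separator: str = "\n") -> str:
    # Keep only the requested meta fields that are present and non-empty.
    meta_values = [str(meta[f]) for f in meta_fields_to_embed if meta.get(f)]
    # Meta values come first, then the document content.
    return embedding_separator.join(meta_values + [content])

doc_meta = {"title": "Solar power", "source": "wiki"}
print(text_to_embed("Photovoltaics convert light to power.", doc_meta, ["title"]))
# → "Solar power\nPhotovoltaics convert light to power."
```

Embedding a title or category alongside the content this way can improve recall when the body text alone lacks the terms users search for.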
Run Method Parameters
These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | List of Documents to embed. |