SentenceTransformersSparseTextEmbedder
Embed text strings, such as queries, using sparse embedding models from Sentence Transformers.
Basic Information
- Type: haystack.components.embedders.sentence_transformers_sparse_text_embedder.SentenceTransformersSparseTextEmbedder
- Components it can connect with:
  - Any component that produces text. It's usually used in query pipelines to embed the query it receives from the Input component.
  - Any component that consumes sparse_embedding, such as SparseEmbeddingRetriever.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| sparse_embedding | SparseEmbedding | | The sparse embedding of the input text. |
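A sparse embedding stores only the non-zero dimensions of the vector. The sketch below is a toy stand-in for the output type, assuming the parallel `indices`/`values` layout used by Haystack's SparseEmbedding dataclass; it is illustrative, not the real class:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SparseEmbedding:
    """Toy stand-in: parallel lists of non-zero dimension indices and weights."""
    indices: List[int] = field(default_factory=list)   # vocabulary term ids
    values: List[float] = field(default_factory=list)  # importance weights

    def to_dict(self) -> dict:
        """Map each non-zero term id to its weight."""
        return dict(zip(self.indices, self.values))

# A sparse vector with three non-zero terms out of a ~30k-term vocabulary
emb = SparseEmbedding(indices=[101, 2054, 8957], values=[0.91, 0.33, 1.27])
```

Storing only the non-zero entries is what makes sparse vectors cheap to index and intersect, even though the nominal dimensionality equals the vocabulary size.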
Overview
SentenceTransformersSparseTextEmbedder transforms strings into sparse vectors using sparse embedding models from Sentence Transformers. In sparse retrieval, use it to embed the user query and send the result to a sparse embedding retriever, which uses the sparse vector to search for relevant documents.
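The retrieval step described above can be sketched with plain dictionaries. This is a toy stand-in for the embedder plus retriever, not the real API: each document and the query become sparse term-weight maps, and relevance is their dot product over shared terms:

```python
def sparse_dot(q: dict, d: dict) -> float:
    """Dot product over the shared non-zero terms of two sparse vectors."""
    return sum(w * d[t] for t, w in q.items() if t in d)

# Toy sparse vectors: term id -> importance weight
doc_store = {
    "doc1": {101: 1.2, 2054: 0.4},   # shares term 101 with the query
    "doc2": {8957: 0.9, 77: 0.5},    # barely overlaps
}
query = {101: 0.8, 8957: 0.1}

# The retriever ranks documents by similarity with the query's sparse embedding
ranked = sorted(doc_store, key=lambda k: sparse_dot(query, doc_store[k]), reverse=True)
```

Here doc1 scores 0.8 × 1.2 = 0.96 against the query while doc2 scores only 0.09, so doc1 ranks first. Real retrievers compute the same kind of overlap, but over an inverted index instead of a Python loop.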
Compatible Models
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
Compatible models are based on SPLADE (SParse Lexical AnD Expansion), a technique for producing sparse representations for text, where each non-zero value in the embedding is the importance weight of a term in the vocabulary. This approach combines the benefits of learned sparse representations with the efficiency of traditional sparse retrieval methods. For more information, see Pipeline Components.
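To make "importance weight of a term in the vocabulary" concrete, here is a toy version of the SPLADE-style activation (log-saturated ReLU, max-pooled over token positions, following the pooling described in the SPLADE papers). Real models apply this over a vocabulary of roughly 30,000 terms; the tiny 4-term vocabulary here is purely illustrative:

```python
import math

def splade_weights(token_logits):
    """token_logits: one row per input token, one column per vocabulary term.
    SPLADE-style pooling: w_j = max_i log(1 + relu(logit[i][j]))."""
    vocab_size = len(token_logits[0])
    weights = []
    for j in range(vocab_size):
        # ReLU zeroes out negative logits, log1p saturates large ones,
        # and max-pooling keeps each term's strongest activation.
        weights.append(max(math.log1p(max(row[j], 0.0)) for row in token_logits))
    return weights

# Two input tokens scored against a 4-term vocabulary
logits = [
    [2.0, -1.0, 0.0, 0.5],
    [0.0, -0.5, 0.0, 3.0],
]
w = splade_weights(logits)
# Terms with only negative or zero logits get weight 0, so the vector stays sparse
```

Because most vocabulary terms end up with weight 0, the resulting vector can be stored and searched with the same inverted-index machinery as classic keyword retrieval.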
Authentication
To use private models from Hugging Face, connect deepset AI Platform to Hugging Face first. For details, see Use Hugging Face Models.
Usage Example
```yaml
components:
  sparse_text_embedder:
    type: haystack.components.embedders.sentence_transformers_sparse_text_embedder.SentenceTransformersSparseTextEmbedder
    init_parameters:
      model: prithivida/Splade_PP_en_v2 # SPLADE model for sparse embeddings
      prefix: ""
      suffix: ""
  sparse_retriever:
    type: haystack_integrations.components.retrievers.qdrant.retriever.QdrantSparseEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.qdrant.document_store.QdrantDocumentStore
        init_parameters:
          location: ${QDRANT_HOST}
          api_key: ${QDRANT_API_KEY}
          index: default
          use_sparse_embeddings: true
      return_embedding: false
      top_k: 10
      scale_score: false
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a helpful assistant.
        Answer the question based on the provided documents.
        If the documents don't contain enough information, say so.
        Documents:
        {% for document in documents %}
        Document[{{ loop.index }}]:
        {{ document.content }}
        {% endfor %}
        Question: {{ question }}
        Answer:
  llm:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      api_key: {"type": "env_var", "env_vars": ["OPENAI_API_KEY"], "strict": false}
      model: gpt-4o
      generation_kwargs:
        max_tokens: 500
        temperature: 0.0
  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}
connections:
  - sender: sparse_text_embedder.sparse_embedding
    receiver: sparse_retriever.query_sparse_embedding
  - sender: sparse_retriever.documents
    receiver: prompt_builder.documents
  - sender: sparse_retriever.documents
    receiver: answer_builder.documents
  - sender: prompt_builder.prompt
    receiver: llm.prompt
  - sender: llm.replies
    receiver: answer_builder.replies
max_runs_per_component: 100
inputs:
  query:
    - sparse_text_embedder.text
    - prompt_builder.question
    - answer_builder.query
  filters:
    - sparse_retriever.filters
outputs:
  documents: sparse_retriever.documents
  answers: answer_builder.answers
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | prithivida/Splade_PP_en_v2 | The model to use for calculating sparse embeddings. Specify the path to a local model or the ID of the model on Hugging Face. For available models, check Hugging Face. |
| device | Optional[ComponentDevice] | None | Overrides the default device used to load the model. |
| token | Optional[Secret] | | An API token to use private models from Hugging Face. |
| prefix | str | "" | A string to add at the beginning of each text to be embedded. Some models require or benefit from a prefix such as "query: " before the query text; check your model's documentation. |
| suffix | str | "" | A string to add at the end of each text to embed. Some models may benefit from adding a prefix or suffix to the text before embedding. |
| trust_remote_code | bool | False | If True, permits custom models and scripts. |
| local_files_only | bool | False | If True, only looks at local files without downloading from Hugging Face Hub. |
| model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoModelForSequenceClassification.from_pretrained when loading the model. Refer to specific model documentation for available kwargs. |
| tokenizer_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoTokenizer.from_pretrained when loading the tokenizer. Refer to specific model documentation for available kwargs. |
| config_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoConfig.from_pretrained when loading the model configuration. |
| backend | Literal["torch", "onnx", "openvino"] | torch | The backend to use for the Sentence Transformers model. Choose from torch, onnx, or openvino. Refer to the Sentence Transformers documentation for more information on acceleration and quantization options. |
| revision | Optional[str] | None | The specific model version to use. It can be a branch name, a tag name, or a commit ID for a stored model on Hugging Face. |
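The prefix and suffix parameters are plain string concatenation applied before tokenization. A minimal sketch (the function name is hypothetical, for illustration only):

```python
def apply_affixes(text: str, prefix: str = "", suffix: str = "") -> str:
    # The embedder prepends the prefix and appends the suffix
    # to the raw text before it is tokenized and embedded.
    return f"{prefix}{text}{suffix}"

result = apply_affixes("what is splade?", prefix="query: ")
```

With the defaults of "" for both parameters, the text passes through unchanged.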
Run Method Parameters
These are the parameters you can configure for the component's run() method.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |