SentenceTransformersTextEmbedder

Embed strings, such as user queries, using Sentence Transformers models.

Basic Information

  • Type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
  • Components it can connect with:
    • Input: SentenceTransformersTextEmbedder can receive a string to embed from the Input component.
    • Retrievers: SentenceTransformersTextEmbedder can send the embedded text to Retrievers that use the embeddings to retrieve documents from a document store.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| embedding | List[float] | | The embedding of the input text. |

Overview

Use SentenceTransformersTextEmbedder to calculate embeddings for strings, such as user queries, using Sentence Transformers models. This component is used in query pipelines when you want to perform semantic search.
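To illustrate the semantic-search step that happens downstream of this component, here is a minimal pure-Python sketch of ranking documents by cosine similarity to a query embedding. The toy 3-dimensional vectors stand in for real model output; an actual Retriever performs this comparison inside the document store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for model output.
doc_embeddings = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 0.2, 0.9],
}
query_embedding = [0.8, 0.2, 0.1]

# The document whose embedding points in the most similar direction wins.
best = max(doc_embeddings, key=lambda d: cosine(query_embedding, doc_embeddings[d]))
```

This is also why the indexing and query embedders must use the same model: cosine similarity is only meaningful when both vectors come from the same embedding space.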

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the model you use to embed the query in your query pipeline. For example, if you use CohereDocumentEmbedder to embed your documents, use CohereTextEmbedder with the same model to embed your queries.

Usage Example

This is a query pipeline that uses SentenceTransformersTextEmbedder to embed a query and retrieve documents:

```yaml
components:
  query_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      model: intfloat/e5-base-v2

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
      top_k: 20

connections:
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding

inputs:
  query:
    - query_embedder.text
    - embedding_retriever.query
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents

max_runs_per_component: 100

metadata: {}
```
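The ${OPENSEARCH_HOST}, ${OPENSEARCH_USER}, and ${OPENSEARCH_PASSWORD} placeholders are resolved from environment variables. Before running the pipeline, they might be set like this (the values below are examples for a local setup, not defaults):

```shell
export OPENSEARCH_HOST="https://localhost:9200"
export OPENSEARCH_USER="admin"
export OPENSEARCH_PASSWORD="admin"
```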

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | sentence-transformers/all-mpnet-base-v2 | The model to use for calculating embeddings. Specify the path to a local model or the ID of the model on Hugging Face. |
| device | Optional[ComponentDevice] | None | Overrides the default device used to load the model. |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | An API token to use private models from Hugging Face. |
| prefix | str | "" | A string to add at the beginning of each text to embed. You can use it to prepend the text with an instruction, as required by some embedding models, such as E5 and bge. |
| suffix | str | "" | A string to add at the end of each text to embed. |
| batch_size | int | 32 | Number of texts to embed at once. |
| progress_bar | bool | True | If True, shows a progress bar for calculating embeddings. If False, disables the progress bar. |
| normalize_embeddings | bool | False | If True, the embeddings are normalized using L2 normalization, so that each embedding has a norm of 1. |
| trust_remote_code | bool | False | If False, permits only Hugging Face verified model architectures. If True, permits custom models and scripts. |
| local_files_only | bool | False | If True, does not attempt to download the model from the Hugging Face Hub and only looks at local files. |
| truncate_dim | Optional[int] | None | The dimension to truncate sentence embeddings to. None does no truncation. If the model has not been trained with Matryoshka Representation Learning, truncating embeddings can significantly affect performance. |
| model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoModelForSequenceClassification.from_pretrained when loading the model. Refer to specific model documentation for available kwargs. |
| tokenizer_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoTokenizer.from_pretrained when loading the tokenizer. Refer to specific model documentation for available kwargs. |
| config_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoConfig.from_pretrained when loading the model configuration. |
| precision | Literal['float32', 'int8', 'uint8', 'binary', 'ubinary'] | float32 | The precision to use for the embeddings. All non-float32 precisions are quantized embeddings. Quantized embeddings are smaller and faster to compute, but may be less accurate. They are useful for reducing the size of a corpus's embeddings for semantic search, among other tasks. |
| encode_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for SentenceTransformer.encode when embedding texts. Provided for fine customization. Be careful not to clash with parameters that are already set, and avoid passing parameters that change the output type. |
| backend | Literal['torch', 'onnx', 'openvino'] | torch | The backend to use for the Sentence Transformers model. Choose from "torch", "onnx", or "openvino". Refer to the Sentence Transformers documentation for more information on acceleration and quantization options. |
| revision | Optional[str] | None | The specific model version to use: a branch name, a tag name, or a commit ID of a model stored on Hugging Face. This enables pinning a particular model version for reproducibility and stability. |
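To make truncate_dim and normalize_embeddings concrete, here is a pure-Python sketch of the post-processing these two parameters imply. This is an illustration of the behavior, not the library's implementation; in practice the work happens inside SentenceTransformer.encode.

```python
import math

def postprocess(embedding, truncate_dim=None, normalize=False):
    """Mimic, in spirit, truncate_dim and normalize_embeddings:
    first truncate to the leading dimensions, then L2-normalize."""
    if truncate_dim is not None:
        embedding = embedding[:truncate_dim]
    if normalize:
        norm = math.sqrt(sum(x * x for x in embedding))
        embedding = [x / norm for x in embedding]
    return embedding

vec = [3.0, 4.0, 12.0]
truncated = postprocess(vec, truncate_dim=2)             # [3.0, 4.0]
unit = postprocess(vec, truncate_dim=2, normalize=True)  # [0.6, 0.8], norm 1
```

Note that truncation happens before normalization, which is why truncating after the fact only works well for models trained with Matryoshka Representation Learning.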

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to embed. |