SentenceTransformersTextEmbedder

Embeds strings using Sentence Transformers models.

Basic Information

  • Type: haystack_integrations.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| embedding | List[float] | | The embedding of the input text, returned in a dictionary under the `embedding` key. |

Overview

Embeds strings using Sentence Transformers models.

You can use it to embed a user query and send it to an embedding retriever.

Usage example:

from haystack.components.embedders import SentenceTransformersTextEmbedder

text_to_embed = "I love pizza!"

text_embedder = SentenceTransformersTextEmbedder()
text_embedder.warm_up()

print(text_embedder.run(text_to_embed))

# {'embedding': [-0.07804739475250244, 0.1498992145061493, ...]}
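The following is a minimal sketch of the retriever use case mentioned above: the embedder turns the query into a vector and passes it to an embedding retriever. It assumes the InMemoryDocumentStore has already been populated with documents whose embeddings were computed by a matching Document Embedder.

from haystack import Pipeline
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Assumption: the document store already contains documents with embeddings.
document_store = InMemoryDocumentStore()

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", SentenceTransformersTextEmbedder())
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = query_pipeline.run({"text_embedder": {"text": "Who lives in Berlin?"}})
print(result["retriever"]["documents"])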

Usage Example

components:
  SentenceTransformersTextEmbedder:
    type: components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
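
If you maintain the pipeline as a YAML definition, a rough sketch of loading it programmatically could look like the snippet below. It assumes the file contains a complete Haystack pipeline definition (fully qualified component types, their init parameters, and a connections section), not just the fragment above.

from haystack import Pipeline

# Assumption: "pipeline.yaml" holds a complete pipeline definition,
# including other components and a `connections:` section.
with open("pipeline.yaml", "r", encoding="utf-8") as f:
    pipeline = Pipeline.loads(f.read())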

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | sentence-transformers/all-mpnet-base-v2 | The model to use for calculating embeddings. Specify the path to a local model or the ID of the model on Hugging Face. |
| device | Optional[ComponentDevice] | None | Overrides the default device used to load the model. |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | An API token to use private models from Hugging Face. |
| prefix | str | | A string to add at the beginning of each text to be embedded. You can use it to prepend the text with an instruction, as required by some embedding models, such as E5 and bge. |
| suffix | str | | A string to add at the end of each text to embed. |
| batch_size | int | 32 | Number of texts to embed at once. |
| progress_bar | bool | True | If True, shows a progress bar for calculating embeddings. If False, disables the progress bar. |
| normalize_embeddings | bool | False | If True, the embeddings are normalized using L2 normalization, so that each embedding has a norm of 1. |
| trust_remote_code | bool | False | If False, permits only Hugging Face verified model architectures. If True, permits custom models and scripts. |
| local_files_only | bool | False | If True, does not attempt to download the model from the Hugging Face Hub and only looks at local files. |
| truncate_dim | Optional[int] | None | The dimension to truncate sentence embeddings to. None does no truncation. If the model has not been trained with Matryoshka Representation Learning, truncation of embeddings can significantly affect performance. |
| model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoModelForSequenceClassification.from_pretrained when loading the model. Refer to specific model documentation for available kwargs. |
| tokenizer_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoTokenizer.from_pretrained when loading the tokenizer. Refer to specific model documentation for available kwargs. |
| config_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoConfig.from_pretrained when loading the model configuration. |
| precision | Literal['float32', 'int8', 'uint8', 'binary', 'ubinary'] | float32 | The precision to use for the embeddings. All non-float32 precisions are quantized embeddings. Quantized embeddings are smaller in size and faster to compute, but may have a lower accuracy. They are useful for reducing the size of the embeddings of a corpus for semantic search, among other tasks. |
| encode_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for SentenceTransformer.encode when embedding texts. This parameter is provided for fine customization. Be careful not to clash with already-set parameters and avoid passing parameters that change the output type. |
| backend | Literal['torch', 'onnx', 'openvino'] | torch | The backend to use for the Sentence Transformers model. Choose from "torch", "onnx", or "openvino". Refer to the Sentence Transformers documentation for more information on acceleration and quantization options. |
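
To make the table concrete, here is a small, hypothetical configuration that combines a few of these init parameters. The E5 model ID and the "query: " prefix are illustrative assumptions, not defaults.

from haystack.components.embedders import SentenceTransformersTextEmbedder

embedder = SentenceTransformersTextEmbedder(
    model="intfloat/e5-base-v2",    # assumption: an E5-style model that expects an instruction prefix
    prefix="query: ",               # prepended to every text before embedding
    normalize_embeddings=True,      # L2-normalize so each embedding has a norm of 1
    batch_size=16,
    progress_bar=False,
)
embedder.warm_up()
embedding = embedder.run(text="What is the capital of France?")["embedding"]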

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to embed. |