OptimumTextEmbedder

A component to embed text using models loaded with the HuggingFace Optimum library.

Basic Information

  • Type: haystack_integrations.components.embedders.optimum.optimum_text_embedder.OptimumTextEmbedder

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str |  | The text to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| embedding | List[float] |  | The embedding of the text. |

Overview

Work in Progress

Bear with us while we work on adding pipeline examples and the most common component connections.

A component to embed text using models loaded with the HuggingFace Optimum library, leveraging the ONNX runtime for high-speed inference.

Usage example:

```python
from haystack_integrations.components.embedders.optimum import OptimumTextEmbedder

text_to_embed = "I love pizza!"

text_embedder = OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2")
text_embedder.warm_up()

print(text_embedder.run(text_to_embed))

# {'embedding': [-0.07804739475250244, 0.1498992145061493, ...]}
```
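
Until the official pipeline examples land, here is a minimal sketch of how the embedder is typically wired into a query pipeline. It assumes an InMemoryDocumentStore whose documents were already indexed with embeddings from the same model; the component names ("embedder", "retriever") are illustrative.

```python
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.embedders.optimum import OptimumTextEmbedder

# Assumes the store already holds documents embedded with the same model.
document_store = InMemoryDocumentStore()

pipeline = Pipeline()
pipeline.add_component(
    "embedder", OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2")
)
pipeline.add_component(
    "retriever", InMemoryEmbeddingRetriever(document_store=document_store)
)

# The embedder's "embedding" output feeds the retriever's query embedding.
pipeline.connect("embedder.embedding", "retriever.query_embedding")

result = pipeline.run({"embedder": {"text": "What is pizza made of?"}})
print(result["retriever"]["documents"])
```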

Usage Example

```yaml
components:
  OptimumTextEmbedder:
    type: haystack_integrations.components.embedders.optimum.optimum_text_embedder.OptimumTextEmbedder
    init_parameters:
```

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | sentence-transformers/all-mpnet-base-v2 | A string representing the model ID on HF Hub. |
| token | Optional[Secret] | Secret.from_env_var('HF_API_TOKEN', strict=False) | The HuggingFace token to use as HTTP bearer authorization. |
| prefix | str |  | A string to add to the beginning of each text. |
| suffix | str |  | A string to add to the end of each text. |
| normalize_embeddings | bool | True | Whether to normalize the embeddings to unit length. |
| onnx_execution_provider | str | CPUExecutionProvider | The execution provider to use for ONNX models. See the TensorRT note below this table. |
| pooling_mode | Optional[Union[str, OptimumEmbedderPooling]] | None | The pooling mode to use. When None, the pooling mode is inferred from the model config. |
| model_kwargs | Optional[Dict[str, Any]] | None | Dictionary of additional keyword arguments to pass to the model. In case of duplication, these kwargs override the model, onnx_execution_provider, and token initialization parameters. |
| working_dir | Optional[str] | None | The directory used to store intermediate files generated during model optimization and quantization. Required for optimization and quantization. |
| optimizer_settings | Optional[OptimumEmbedderOptimizationConfig] | None | Configuration for Optimum Embedder optimization. If None, no additional optimization is applied. |
| quantizer_settings | Optional[OptimumEmbedderQuantizationConfig] | None | Configuration for Optimum Embedder quantization. If None, no quantization is applied. |

Note: The TensorRT execution provider requires building its inference engine ahead of inference, which takes some time due to model optimization and node fusion. To avoid rebuilding the engine every time the model is loaded, ONNX Runtime provides a pair of options to save the engine: trt_engine_cache_enable and trt_engine_cache_path. We recommend setting these two provider options through the model_kwargs parameter when using the TensorRT execution provider:

```python
embedder = OptimumTextEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    onnx_execution_provider="TensorrtExecutionProvider",
    model_kwargs={
        "provider_options": {
            "trt_engine_cache_enable": True,
            "trt_engine_cache_path": "tmp/trt_cache",
        }
    },
)
```
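
As a rough sketch of how pooling_mode, working_dir, optimizer_settings, and quantizer_settings fit together: the config classes are exported by the same integration package, but the exact constructor arguments and enum values shown here (O1, AVX2, "mean") are assumptions; check the haystack_integrations.components.embedders.optimum package for the definitive names.

```python
# A minimal sketch, assuming the config and mode classes are importable from
# the same package as the embedder; verify names and values against the package.
from haystack_integrations.components.embedders.optimum import (
    OptimumEmbedderOptimizationConfig,
    OptimumEmbedderOptimizationMode,
    OptimumEmbedderQuantizationConfig,
    OptimumEmbedderQuantizationMode,
    OptimumTextEmbedder,
)

embedder = OptimumTextEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    pooling_mode="mean",  # or an OptimumEmbedderPooling member
    working_dir="/tmp/optimum_work",  # required for optimization/quantization
    optimizer_settings=OptimumEmbedderOptimizationConfig(
        mode=OptimumEmbedderOptimizationMode.O1,  # assumed graph-optimization level
    ),
    quantizer_settings=OptimumEmbedderQuantizationConfig(
        mode=OptimumEmbedderQuantizationMode.AVX2,  # assumed CPU ISA target
    ),
)
embedder.warm_up()  # optimization/quantization happens during warm-up
```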

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str |  | The text to embed. |
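
When calling the component directly, text is the only run-time parameter. A quick sketch (the 768-dimension check is specific to all-mpnet-base-v2):

```python
from haystack_integrations.components.embedders.optimum import OptimumTextEmbedder

text_embedder = OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2")
text_embedder.warm_up()

result = text_embedder.run(text="I love pizza!")
print(len(result["embedding"]))  # 768 for all-mpnet-base-v2
```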