OptimumDocumentEmbedder

A component for computing Document embeddings using models loaded with the HuggingFace Optimum library.

Basic Information

  • Type: haystack_integrations.optimum.src.haystack_integrations.components.embedders.optimum.optimum_document_embedder.OptimumDocumentEmbedder

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | A list of Documents to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | The updated Documents with their embeddings. |

Overview

Work in Progress

Bear with us while we work on adding pipeline examples and the most common component connections.

A component for computing Document embeddings using models loaded with the HuggingFace Optimum library, leveraging the ONNX runtime for high-speed inference.

The embedding of each Document is stored in its embedding field.

Usage example:

from haystack.dataclasses import Document
from haystack_integrations.components.embedders.optimum import OptimumDocumentEmbedder

doc = Document(content="I love pizza!")

document_embedder = OptimumDocumentEmbedder(model="sentence-transformers/all-mpnet-base-v2")
document_embedder.warm_up()

result = document_embedder.run([doc])
print(result["documents"][0].embedding)

# [0.017020374536514282, -0.023255806416273117, ...]
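
Pipeline examples for this component are still being added (see the note above). In the meantime, here is a minimal, illustrative sketch of an indexing pipeline that connects the embedder to a DocumentWriter backed by an InMemoryDocumentStore. The component names "embedder" and "writer" and the sample document are arbitrary choices made for this sketch, not part of this page.

from haystack import Pipeline
from haystack.dataclasses import Document
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.embedders.optimum import OptimumDocumentEmbedder

document_store = InMemoryDocumentStore()

indexing_pipeline = Pipeline()
indexing_pipeline.add_component("embedder", OptimumDocumentEmbedder(model="sentence-transformers/all-mpnet-base-v2"))
indexing_pipeline.add_component("writer", DocumentWriter(document_store=document_store))

# The embedder's "documents" output (Documents with embeddings attached) feeds the writer's "documents" input.
indexing_pipeline.connect("embedder.documents", "writer.documents")

# Pipeline.run warms up the components, so no explicit warm_up() call is needed here.
indexing_pipeline.run({"embedder": {"documents": [Document(content="I love pizza!")]}})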

Usage Example

components:
  OptimumDocumentEmbedder:
    type: optimum.src.haystack_integrations.components.embedders.optimum.optimum_document_embedder.OptimumDocumentEmbedder
    init_parameters:

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | sentence-transformers/all-mpnet-base-v2 | A string representing the model ID on the HF Hub. |
| token | Optional[Secret] | Secret.from_env_var('HF_API_TOKEN', strict=False) | The HuggingFace token to use as HTTP bearer authorization. |
| prefix | str | | A string to add to the beginning of each text. |
| suffix | str | | A string to add to the end of each text. |
| normalize_embeddings | bool | True | Whether to normalize the embeddings to unit length. |
| onnx_execution_provider | str | CPUExecutionProvider | The execution provider to use for ONNX models. See the note on the TensorRT execution provider below this table. |
| pooling_mode | Optional[Union[str, OptimumEmbedderPooling]] | None | The pooling mode to use. When None, the pooling mode is inferred from the model config. |
| model_kwargs | Optional[Dict[str, Any]] | None | Dictionary of additional keyword arguments to pass to the model. In case of duplication, these kwargs override the model, onnx_execution_provider, and token initialization parameters. |
| working_dir | Optional[str] | None | The directory used for storing intermediate files generated during model optimization/quantization. Required for optimization and quantization. |
| optimizer_settings | Optional[OptimumEmbedderOptimizationConfig] | None | Configuration for Optimum Embedder optimization. If None, no additional optimization is applied. |
| quantizer_settings | Optional[OptimumEmbedderQuantizationConfig] | None | Configuration for Optimum Embedder quantization. If None, no quantization is applied. |
| batch_size | int | 32 | Number of Documents to encode at once. |
| progress_bar | bool | True | Whether to show a progress bar. |
| meta_fields_to_embed | Optional[List[str]] | None | List of meta fields that should be embedded along with the Document text. |
| embedding_separator | str | \n | Separator used to concatenate the meta fields to the Document text. |

Note on the TensorRT execution provider: TensorRT builds its inference engine ahead of inference, which takes some time due to model optimization and node fusion. To avoid rebuilding the engine every time the model is loaded, ONNX Runtime provides a pair of provider options to cache the engine: trt_engine_cache_enable and trt_engine_cache_path. We recommend setting these two provider options through the model_kwargs parameter when using the TensorRT execution provider:

embedder = OptimumDocumentEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    onnx_execution_provider="TensorrtExecutionProvider",
    model_kwargs={
        "provider_options": {
            "trt_engine_cache_enable": True,
            "trt_engine_cache_path": "tmp/trt_cache",
        }
    },
)
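
As another purely illustrative sketch of how meta_fields_to_embed and embedding_separator work together: the values of the listed meta fields are concatenated with the Document text, joined by the separator, before embedding. The meta field name "category" below is an arbitrary example value, not a default of the component.

from haystack.dataclasses import Document
from haystack_integrations.components.embedders.optimum import OptimumDocumentEmbedder

doc = Document(content="I love pizza!", meta={"category": "food"})

embedder = OptimumDocumentEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    meta_fields_to_embed=["category"],  # embed the "category" meta value along with the text
    embedding_separator="\n",           # joins the meta values and the text before embedding
    normalize_embeddings=True,
    batch_size=32,
)
embedder.warm_up()

result = embedder.run([doc])
print(len(result["documents"][0].embedding))  # embedding dimension (768 for all-mpnet-base-v2)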

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | A list of Documents to embed. |