# OptimumTextEmbedder

A component to embed text using models loaded with the HuggingFace Optimum library.
## Basic Information

- Type: haystack_integrations.components.embedders.optimum.optimum_text_embedder.OptimumTextEmbedder
## Inputs

| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | The text to embed. |
## Outputs

| Parameter | Type | Default | Description |
|---|---|---|---|
| embedding | List[float] | | The embedding of the text. |
## Overview

Work in Progress

Bear with us while we're working on adding pipeline examples and the most common component connections. In the meantime, a minimal pipeline sketch follows the usage example below.

OptimumTextEmbedder embeds text using models loaded with the HuggingFace Optimum library, leveraging the ONNX Runtime for high-speed inference.
Usage example:

```python
from haystack_integrations.components.embedders.optimum import OptimumTextEmbedder

text_to_embed = "I love pizza!"

text_embedder = OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2")
text_embedder.warm_up()

print(text_embedder.run(text_to_embed))
# {'embedding': [-0.07804739475250244, 0.1498992145061493, ...]}
```
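Since pipeline examples are still in progress, here is a minimal sketch of plugging OptimumTextEmbedder into a query pipeline. The InMemoryDocumentStore and InMemoryEmbeddingRetriever used here are illustrative choices; any embedding retriever connects the same way.

```python
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.embedders.optimum import OptimumTextEmbedder

# Assumes the store already holds documents embedded with a matching model
# (for example, via OptimumDocumentEmbedder at indexing time).
document_store = InMemoryDocumentStore()

pipeline = Pipeline()
pipeline.add_component(
    "text_embedder",
    OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2"),
)
pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
# The embedder's output embedding becomes the retriever's query embedding.
pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = pipeline.run({"text_embedder": {"text": "Who lives in Berlin?"}})
print(result["retriever"]["documents"])
```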
## Usage Example

```yaml
components:
  OptimumTextEmbedder:
    type: haystack_integrations.components.embedders.optimum.optimum_text_embedder.OptimumTextEmbedder
    init_parameters: {}
```
## Parameters

### Init Parameters

These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | sentence-transformers/all-mpnet-base-v2 | A string representing the model id on HF Hub. |
| token | Optional[Secret] | Secret.from_env_var('HF_API_TOKEN', strict=False) | The HuggingFace token to use as HTTP bearer authorization. |
| prefix | str | "" | A string to add to the beginning of each text. |
| suffix | str | "" | A string to add to the end of each text. |
| normalize_embeddings | bool | True | Whether to normalize the embeddings to unit length. |
| onnx_execution_provider | str | CPUExecutionProvider | The execution provider to use for ONNX models. Note: the TensorRT execution provider builds its inference engine ahead of inference, which takes some time due to model optimization and node fusion. To avoid rebuilding the engine every time the model is loaded, ONNX Runtime provides a pair of options to cache the engine: trt_engine_cache_enable and trt_engine_cache_path. We recommend setting these two provider options through the model_kwargs parameter when using the TensorRT execution provider; see the example below this table. |
| pooling_mode | Optional[Union[str, OptimumEmbedderPooling]] | None | The pooling mode to use. When None, the pooling mode is inferred from the model config. |
| model_kwargs | Optional[Dict[str, Any]] | None | Dictionary containing additional keyword arguments to pass to the model. In case of duplication, these kwargs override the model, onnx_execution_provider, and token initialization parameters. |
| working_dir | Optional[str] | None | The directory to use for storing intermediate files generated during model optimization/quantization. Required for optimization and quantization. |
| optimizer_settings | Optional[OptimumEmbedderOptimizationConfig] | None | Configuration for Optimum Embedder Optimization. If None, no additional optimization is applied. |
| quantizer_settings | Optional[OptimumEmbedderQuantizationConfig] | None | Configuration for Optimum Embedder Quantization. If None, no quantization is applied. |

When using the TensorRT execution provider, configure engine caching through model_kwargs:

```python
embedder = OptimumTextEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    onnx_execution_provider="TensorrtExecutionProvider",
    model_kwargs={
        "provider_options": {
            "trt_engine_cache_enable": True,
            "trt_engine_cache_path": "tmp/trt_cache",
        }
    },
)
```
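For the pooling, optimization, and quantization parameters, the sketch below shows one way to construct an embedder with explicit settings. The enum members shown (MEAN, O1, AVX2) and the exact config-class signatures are assumptions based on the integration's exports; consult the API reference for the values supported by your version.

```python
from haystack_integrations.components.embedders.optimum import (
    OptimumEmbedderOptimizationConfig,
    OptimumEmbedderOptimizationMode,
    OptimumEmbedderPooling,
    OptimumEmbedderQuantizationConfig,
    OptimumEmbedderQuantizationMode,
    OptimumTextEmbedder,
)

embedder = OptimumTextEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    pooling_mode=OptimumEmbedderPooling.MEAN,  # assumed member; a plain string like "mean" also works
    working_dir="optimum_work",  # required whenever optimization or quantization is enabled
    optimizer_settings=OptimumEmbedderOptimizationConfig(
        mode=OptimumEmbedderOptimizationMode.O1,  # assumed optimization level
    ),
    quantizer_settings=OptimumEmbedderQuantizationConfig(
        mode=OptimumEmbedderQuantizationMode.AVX2,  # assumed quantization target
    ),
)
embedder.warm_up()  # converts, optimizes, and quantizes the model before first use
```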
### Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | The text to embed. |