DeepsetNvidiaDocumentEmbedder

Embed documents using embedding models served by NVIDIA Triton.

Basic Information

  • Type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
  • Components it most often connects with:
  • PreProcessors: DeepsetNvidiaDocumentEmbedder can receive documents to embed from a PreProcessor, such as DocumentSplitter.
  • DocumentWriter: DeepsetNvidiaDocumentEmbedder can send the embedded documents to a DocumentWriter, which writes them into the document store.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | The documents to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents with their embeddings added. |
| meta | Dict[str, Any] | | Metadata on usage statistics. |

Overview

DeepsetNvidiaDocumentEmbedder uses NVIDIA Triton models to embed a list of documents. It then stores the computed embedding in each document's embedding field.

This component runs on optimized hardware in deepset AI Platform, which means it doesn't work if you export it to a local Python file. If you're planning to export, use SentenceTransformersDocumentEmbedder instead.
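As a sketch of that swap, an exportable index could use Haystack's SentenceTransformersDocumentEmbedder with the same model. This is an illustrative fragment, not a complete index; verify the init parameters against your Haystack version:

```yaml
components:
  DocumentEmbedder:
    type: haystack.components.embedders.sentence_transformers_document_embedder.SentenceTransformersDocumentEmbedder
    init_parameters:
      model: intfloat/multilingual-e5-base
      normalize_embeddings: true
```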

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
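With the NVIDIA embedders, the pairing looks like the following. The text embedder's type path here is assumed by analogy with the document embedder; check the exact path in Pipeline Builder before using it:

```yaml
# In the index: embeds the documents
DeepsetNvidiaDocumentEmbedder:
  type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
  init_parameters:
    model: intfloat/multilingual-e5-base

# In the query pipeline: embeds the query with the same model
DeepsetNvidiaTextEmbedder:
  type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
  init_parameters:
    model: intfloat/multilingual-e5-base
```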

Usage Example

Initializing the Component

```yaml
components:
  DeepsetNvidiaDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
    init_parameters: {}
```

Using the Component in an Index

This is an example of a DeepsetNvidiaDocumentEmbedder used in an index. It receives a list of documents from DocumentSplitter and then sends the embedded documents to DocumentWriter:

The embedder in an index in Pipeline Builder

Here's the YAML configuration:

```yaml
components:
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 200
      split_overlap: 0
      split_threshold: 0
      splitting_function: null
  DeepsetNvidiaDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
    init_parameters:
      model: intfloat/multilingual-e5-base
      prefix: ''
      suffix: ''
      batch_size: 32
      meta_fields_to_embed: null
      embedding_separator: "\n"
      truncate: null
      normalize_embeddings: true
      timeout: null
      backend_kwargs: null
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          embedding_dim: 1024
          similarity: cosine
      policy: NONE
connections:
  - sender: DocumentSplitter.documents
    receiver: DeepsetNvidiaDocumentEmbedder.documents
  - sender: DeepsetNvidiaDocumentEmbedder.documents
    receiver: DocumentWriter.documents
max_runs_per_component: 100
metadata: {}
```

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | DeepsetNVIDIAEmbeddingModels | DeepsetNVIDIAEmbeddingModels.INTFLOAT_MULTILINGUAL_E5_BASE | The model to use for calculating embeddings. Can be a specific model path like intfloat/multilingual-e5-base. Choose the model from the list. |
| prefix | str | "" | A string to add at the beginning of each document text. Can be used to prepend the text with an instruction, as required by some embedding models, such as E5 and bge. |
| suffix | str | "" | A string to add at the end of each document text. |
| batch_size | int | 32 | The number of documents to embed at once. |
| meta_fields_to_embed | List[str] | None | List of metadata fields to embed along with the document text. |
| embedding_separator | str | \n | Separator used to concatenate the meta fields to the document text. |
| truncate | EmbeddingTruncateMode | None | Specifies how to truncate inputs that exceed the model's maximum token length. |
| normalize_embeddings | bool | True | Whether to normalize the embeddings. Normalization is done by dividing the embedding by its L2 norm. |
| timeout | float | None | Timeout for request calls, in seconds. |
| backend_kwargs | Dict[str, Any] | None | Additional keyword arguments passed to the backend. |
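As an illustration of prefix, E5-style models are commonly used with a "passage: " instruction on document text (and "query: " on the query side). A hypothetical configuration, assuming your chosen model expects this convention:

```yaml
DeepsetNvidiaDocumentEmbedder:
  type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
  init_parameters:
    model: intfloat/multilingual-e5-base
    prefix: 'passage: '
```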

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents to embed. |