DeepsetNvidiaNIMDocumentEmbedder

Embed documents using embedding models by NVIDIA NIM.

Basic Information

  • Pipeline type: Indexing
  • Type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
  • Components it most often connects with:
    • PreProcessors: DeepsetNvidiaNIMDocumentEmbedder can receive documents to embed from a PreProcessor, such as DocumentSplitter.
    • DocumentWriter: DeepsetNvidiaNIMDocumentEmbedder can send the embedded documents to a DocumentWriter, which writes them into the document store.

Inputs

Name      | Type                     | Description
documents | List of Document objects | The documents to embed.

Outputs

Name      | Type                     | Description
documents | List of Document objects | The documents with their embeddings added.
meta      | Dictionary               | Metadata regarding usage statistics.

Overview

DeepsetNvidiaNIMDocumentEmbedder uses an NVIDIA NIM model to embed a list of documents. It then stores each computed embedding in the document's embedding field.

This component runs models provided by deepset on hardware optimized for performance. Unlike models hosted on platforms like Hugging Face, these models are not downloaded at query time. Instead, you choose a model upfront on the component card.

The optimized models are only available on the deepset AI Platform. To run this component on your own hardware, use a Sentence Transformers embedder instead.
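As a minimal sketch of that alternative, the following configuration swaps in Haystack's SentenceTransformersDocumentEmbedder. The model name is only an example and is not prescribed by this page; make sure the model's embedding dimension matches your document store's embedding_dim.

components:
  embedder:
    type: haystack.components.embedders.sentence_transformers_document_embedder.SentenceTransformersDocumentEmbedder
    init_parameters:
      # Example model, not taken from this page; any Sentence Transformers
      # model with a matching embedding dimension works.
      model: sentence-transformers/all-MiniLM-L6-v2
      batch_size: 32
      normalize_embeddings: true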

ℹ️

Embedding Models in Query and Indexing Pipelines

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
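To make the Cohere example concrete, here is a sketch of the two embedder definitions with the same model value in both pipelines. The module paths and the model name are illustrative assumptions, not taken from this page; the only point is that model is identical on both sides.

# In the indexing pipeline (illustrative types and model):
components:
  document_embedder:
    type: haystack_integrations.components.embedders.cohere.document_embedder.CohereDocumentEmbedder
    init_parameters:
      model: embed-english-v3.0   # must match the query pipeline

# In the query pipeline (illustrative types and model):
components:
  text_embedder:
    type: haystack_integrations.components.embedders.cohere.text_embedder.CohereTextEmbedder
    init_parameters:
      model: embed-english-v3.0   # must match the indexing pipeline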

Usage Example

This is an example of a DeepsetNvidiaNIMDocumentEmbedder used in an indexing pipeline. It receives a list of documents from DocumentJoiner and then sends the embedded documents to DocumentWriter:

[Image: DocumentSplitter connected to DeepsetNvidiaNIMDocumentEmbedder, which in turn is connected to DocumentWriter, in Builder.]

Here's the YAML configuration:

components:
  joiner_xlsx:  # merge split documents with non-split xlsx documents
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
      sort_by_score: false
      
  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5
      prefix: ''
      suffix: ''
      batch_size: 32
      meta_fields_to_embed:
      embedding_separator: "\n"
      truncate:
      normalize_embeddings: true
      timeout:
      backend_kwargs:
      
  writer:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: default
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE
      
connections:
  - sender: joiner_xlsx.documents
    receiver: DeepsetNvidiaNIMDocumentEmbedder.documents
  - sender: DeepsetNvidiaNIMDocumentEmbedder.documents
    receiver: writer.documents


Init Parameters

Parameter | Type | Possible values | Description
model | DeepsetNVIDIANIMEmbeddingModels | Default: NVIDIA_NV_EMBEDQA_E5_V5 | The model to use for calculating embeddings. Choose the model from the list. Required.
prefix | String | Default: "" | A string to add at the beginning of each document text, useful for instructions required by some embedding models. Required.
suffix | String | Default: "" | A string to add at the end of each document text. Required.
batch_size | Integer | Default: 32 | The number of documents to embed at once. Required.
meta_fields_to_embed | List of strings | Default: None | A list of metadata fields to embed along with the document text. Required.
embedding_separator | String | Default: "\n" | The separator used to concatenate the metadata fields to the document text. Required.
truncate | EmbeddingTruncateMode | START, END, NONE. Default: None | Specifies how to truncate inputs longer than the maximum token length. If set to START, the input is truncated from the start. If set to END, the input is truncated from the end. If set to NONE, an error is returned if the input is too long. Required.
normalize_embeddings | Boolean | True, False. Default: False | Whether to normalize the embeddings by dividing each embedding by its L2 norm. Required.
timeout | Float | Default: None | Timeout for request calls, in seconds. Required.
backend_kwargs | Dictionary | Default: None | Keyword arguments to further customize model behavior. Required.
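For illustration only, here is a minimal sketch of how meta_fields_to_embed, embedding_separator, and truncate interact. The metadata field names title and language are hypothetical and not taken from this page.

  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5
      meta_fields_to_embed:
        - title      # hypothetical metadata field
        - language   # hypothetical metadata field
      embedding_separator: "\n"
      truncate: END

With this configuration, the text embedded for each document is the value of title, the value of language, and the document content, joined by newline characters; inputs that exceed the model's maximum token length are truncated from the end.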

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

Parameter | Type                     | Description
documents | List of Document objects | The documents to embed. Required.