
DeepsetNvidiaNIMDocumentEmbedder

Embed documents using NVIDIA NIM embedding models.

Basic Information

  • Type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
  • Components it most often connects with:
    • PreProcessors: DeepsetNvidiaNIMDocumentEmbedder can receive documents to embed from a PreProcessor, such as DocumentSplitter.
    • DocumentWriter: DeepsetNvidiaNIMDocumentEmbedder can send the embedded documents to DocumentWriter, which writes them into the document store. A minimal connections sketch follows this list.
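
Here is a minimal, hypothetical sketch of those connections in YAML. The component names splitter and writer, and the splitter and document store settings, are illustrative placeholders, not required values:

components:
  splitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 250

  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5

  writer:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          index: default

connections:
  - sender: splitter.documents
    receiver: DeepsetNvidiaNIMDocumentEmbedder.documents
  - sender: DeepsetNvidiaNIMDocumentEmbedder.documents
    receiver: writer.documents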

Inputs

Parameter | Type | Default | Description
--- | --- | --- | ---
documents | List[Document] | | Documents to embed.

Outputs

Parameter | Type | Default | Description
--- | --- | --- | ---
documents | List[Document] | | Documents with their embeddings added.
meta | Dict[str, Any] | | Metadata regarding the usage statistics.

Overview

DeepsetNvidiaNIMDocumentEmbedder uses an NVIDIA NIM model to embed a list of documents. It then adds the computed embedding to each document's embedding field.

This component runs models provided by deepset on hardware optimized for performance. Unlike models hosted on platforms like Hugging Face, these models are not downloaded at query time. Instead, you choose a model upfront on the component card.

The optimized models are only available on the deepset AI Platform. To run this component on your own hardware, use a Sentence Transformers embedder instead.
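
For example, here is a minimal sketch of a self-hosted alternative using Haystack's SentenceTransformersDocumentEmbedder. The model name is illustrative only:

components:
  embedder:
    type: haystack.components.embedders.sentence_transformers_document_embedder.SentenceTransformersDocumentEmbedder
    init_parameters:
      model: intfloat/e5-base-v2
      batch_size: 32
      normalize_embeddings: true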

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
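
As a sketch, the pairing looks like this. The query-side type path is an assumption based on the document embedder's naming and may differ in your workspace, so verify it in Pipeline Builder:

# Index: embeds documents at indexing time
components:
  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5

# Query pipeline: embeds the query with the same model
components:
  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_text_embedder.DeepsetNvidiaNIMTextEmbedder  # assumed type path
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5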

Usage Example

Initializing the Component

components:
  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters: {}

Using the Component in an Index

This is an example of a DeepsetNvidiaNIMDocumentEmbedder used in an index. It receives a list of documents from DocumentJoiner and then sends the embedded documents to DocumentWriter:

The embedder in an index in Pipeline Builder

Here's the YAML configuration:

components:
  joiner_xlsx: # merge split documents with non-split xlsx documents
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
      sort_by_score: false

  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5
      prefix: ''
      suffix: ''
      batch_size: 32
      meta_fields_to_embed:
      embedding_separator: "\n"
      truncate:
      normalize_embeddings: true
      timeout:
      backend_kwargs:

  writer:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: default
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE

connections:
  - sender: joiner_xlsx.documents
    receiver: DeepsetNvidiaNIMDocumentEmbedder.documents
  - sender: DeepsetNvidiaNIMDocumentEmbedder.documents
    receiver: writer.documents


Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

Parameter | Type | Default | Description
--- | --- | --- | ---
model | DeepsetNvidiaNIMEmbeddingModels | DeepsetNvidiaNIMEmbeddingModels.NVIDIA_NV_EMBEDQA_E5_V5 | The model to use for calculating embeddings. Choose the model from the list.
prefix | str | "" | A string to add at the beginning of each document text. Can be used to prepend the text with an instruction, as required by some embedding models, such as E5 and bge.
suffix | str | "" | A string to add at the end of each document text.
batch_size | int | 32 | The number of documents to embed at once.
meta_fields_to_embed | List[str] \| None | None | List of meta fields to embed along with the document text.
embedding_separator | str | \n | Separator used to concatenate the meta fields to the document text.
truncate | EmbeddingTruncateMode \| None | None | Specifies how to truncate inputs longer than the maximum token length. Possible options are START, END, and NONE. START truncates the input from the start, END truncates it from the end, and NONE returns an error if the input is too long.
normalize_embeddings | bool | True | Whether to normalize the embeddings by dividing each embedding by its L2 norm.
timeout | float \| None | None | Timeout for request calls, in seconds.
backend_kwargs | Dict[str, Any] \| None | None | Keyword arguments to further customize model behavior.
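
For instance, here is a hedged configuration sketch that prepends an E5-style instruction and embeds a meta field along with the document text. The prefix value and the title field name are examples, not requirements of the model:

components:
  DeepsetNvidiaNIMDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_document_embedder.DeepsetNvidiaNIMDocumentEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5
      prefix: "passage: "        # E5-style instruction prepended to each document text
      meta_fields_to_embed:
        - title                  # example meta field; use fields that exist in your documents
      embedding_separator: "\n"  # meta field values and text are joined with this separator
      truncate: END              # truncate overly long inputs from the end instead of returning an error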

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

Parameter | Type | Default | Description
--- | --- | --- | ---
documents | List[Document] | | Documents to embed.