DeepsetNvidiaNIMTextEmbedder

Embed strings of text using NVIDIA NIM embedding models on optimized hardware.

Basic Information

  • Type: deepset_cloud_custom_nodes.embedders.nvidia.nim_text_embedder.DeepsetNvidiaNIMTextEmbedder
  • Components it most often connects with:
    • Query: DeepsetNvidiaNIMTextEmbedder receives the query to embed from Query.
    • Embedding Retrievers: DeepsetNvidiaNIMTextEmbedder can send the embedded query to an Embedding Retriever that uses it to find matching documents.

Inputs

| Name | Type | Description |
| --- | --- | --- |
| text | String | The text to embed. |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embedding | List of floats | The embedding of the text. |
| meta | Dictionary | Metadata regarding the usage statistics. |

Overview

DeepsetNvidiaNIMTextEmbedder uses an NVIDIA NIM model to embed a text string, such as a query.

This component runs on models provided by deepset on hardware optimized for performance. Unlike models hosted on platforms like Hugging Face, these models are not downloaded at query time. Instead, you choose a model upfront on the component card.

The optimized models are only available on the deepset AI Platform. To run this component on your own hardware, use a Sentence Transformers embedder instead.
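For self-hosted pipelines, the Sentence Transformers alternative can be configured like this (a sketch; the model name is an illustrative example, and the type path follows Haystack 2.x conventions):

```yaml
components:
  text_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      # Example model; pick one that matches your index's document embedder.
      model: sentence-transformers/all-MiniLM-L6-v2
```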


ℹ️

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your index must be the same as the embedding model you use to embed the query in your pipeline.

This means the embedders for your indexes and pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
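To illustrate the matching rule with the Cohere example above, the index and the query pipeline would each configure their embedder with the same model (a sketch; the type paths follow Haystack's integration naming and are shown for illustration):

```yaml
# index.yml – embeds documents at indexing time
components:
  document_embedder:
    type: haystack_integrations.components.embedders.cohere.document_embedder.CohereDocumentEmbedder
    init_parameters:
      model: embed-english-v3.0
---
# query_pipeline.yml – must embed queries with the same model
components:
  text_embedder:
    type: haystack_integrations.components.embedders.cohere.text_embedder.CohereTextEmbedder
    init_parameters:
      model: embed-english-v3.0
```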

Usage Example

This is an example of a DeepsetNvidiaNIMTextEmbedder used in a query pipeline. It receives the text to embed from Query and sends the embedded query to OpenSearchEmbeddingRetriever. Here's the YAML configuration:

components:
  DeepsetNvidiaNIMTextEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.nim_text_embedder.DeepsetNvidiaNIMTextEmbedder
    init_parameters:
      model: nvidia/nv-embedqa-e5-v5
      prefix: ''
      suffix: ''
      normalize_embeddings: true

  # OpenSearchEmbeddingRetriever is defined elsewhere in the pipeline
  # and omitted here for brevity.

connections:
  - sender: DeepsetNvidiaNIMTextEmbedder.embedding
    receiver: OpenSearchEmbeddingRetriever.query_embedding

max_runs_per_component: 100

inputs:
  query:
    - DeepsetNvidiaNIMTextEmbedder.text

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Possible values | Description |
| --- | --- | --- | --- |
| model | DeepsetNVIDIAEmbeddingModels | Default: NVIDIA_NV_EMBEDQA_E5_V5 | The model to use for calculating embeddings. Choose the model from the list in Builder. Required. |
| prefix | String | Default: "" | A string to add at the beginning of the string being embedded. Required. |
| suffix | String | Default: "" | A string to add at the end of the string being embedded. Required. |
| truncate | EmbeddingTruncateMode | START, END, NONE. Default: None | Specifies how to truncate inputs longer than the maximum token length. If set to START, the input is truncated from the start. If set to END, the input is truncated from the end. If set to NONE, an error is returned if the input is too long. Optional. |
| normalize_embeddings | Boolean | True, False. Default: True | Whether to normalize the embeddings by dividing the embedding by its L2 norm. Required. |
| timeout | Float | Default: None | Timeout for request calls in seconds. Optional. |
| backend_kwargs | Dictionary | Default: None | Keyword arguments to further customize model behavior. Optional. |
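The interaction of prefix, suffix, and normalize_embeddings can be sketched as follows. This is a minimal illustration, not the component's implementation: fake_model stands in for the NIM backend, and the output dictionary mirrors the shapes in the Outputs table above.

```python
import math


def embed_text(text, prefix="", suffix="", normalize_embeddings=True):
    """Sketch: prefix and suffix wrap the text before embedding;
    normalize_embeddings divides the vector by its L2 norm."""

    def fake_model(s):
        # Stand-in for the NIM backend (assumption); real models
        # return a high-dimensional vector for the full string.
        return [float(len(s)), 1.0, 2.0]

    embedding = fake_model(prefix + text + suffix)

    if normalize_embeddings:
        norm = math.sqrt(sum(x * x for x in embedding))
        embedding = [x / norm for x in embedding]

    # Mirrors the Outputs table: an embedding plus usage metadata.
    return {"embedding": embedding, "meta": {"usage": {}}}
```

With normalization enabled (the default), the returned vector always has unit L2 norm, which is what cosine-similarity retrieval typically expects.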

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

Run() method parameters take precedence over initialization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| text | String | The text to embed. Required. |