NvidiaTextEmbedder
Embed strings, such as user queries, using NVIDIA models.
Basic Information
- Type: haystack_integrations.components.embedders.nvidia.text_embedder.NvidiaTextEmbedder
- Components it can connect with:
  - Input: NvidiaTextEmbedder receives the query to embed from Input.
  - Embedding Retrievers: NvidiaTextEmbedder can send the embedded query to an embedding retriever that uses it to find matching documents.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | The text to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| embedding | List[float] | | The embedding of the text. |
| meta | Dict[str, Any] | | Metadata about the request, including usage statistics. |
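For illustration, a run's output has the following shape. The values below are made up, and the exact meta fields depend on the NIM endpoint; real embeddings from nvidia/nv-embedqa-e5-v5 have 1024 dimensions.

```python
# Illustrative only: a made-up result dict with the same shape as
# NvidiaTextEmbedder's output (embedding truncated for readability).
result = {
    "embedding": [0.017, -0.243, 0.091, 0.358],
    "meta": {"usage": {"prompt_tokens": 6, "total_tokens": 6}},
}

# The embedding is a plain list of floats, ready to pass to a
# retriever's query_embedding input.
assert isinstance(result["embedding"], list)
assert all(isinstance(x, float) for x in result["embedding"])
```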
Overview
NvidiaTextEmbedder uses NVIDIA models to embed strings, such as user queries. Use this component in apps with embedding retrieval to transform your query into a vector.
For models that differentiate between query and document inputs, this component embeds the input string as a query.
You can use it with self-hosted models deployed with NVIDIA NIM or models hosted on the NVIDIA API Catalog.
This component is used in query pipelines to embed user queries. To embed a list of documents in an index, use the NvidiaDocumentEmbedder, which enriches each document with its computed embedding.
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
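A toy sketch of why the models must match: retrieval scores the query vector against document vectors with a similarity measure such as cosine similarity, which is only meaningful when both vectors come from the same model and live in the same vector space. The vectors below are illustrative three-dimensional stand-ins, not real embeddings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embeddings of equal dimension."""
    if len(a) != len(b):
        # Mismatched dimensions are a common symptom of mixing
        # embeddings from different models.
        raise ValueError("Embeddings come from different models/spaces")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_vec = [1.0, 0.0, 0.0]  # toy query embedding
doc_vec = [0.5, 0.5, 0.0]    # toy document embedding, same 3-dim space
score = cosine_similarity(query_vec, doc_vec)
```

Even when two different models happen to produce vectors of the same dimension, their scores are not comparable, so pairing the same model in both pipelines is the only safe choice.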
Authorization
You need an NVIDIA API key to use this component. Connect Haystack Enterprise Platform to NVIDIA on the Integrations page. For detailed instructions, see Use NVIDIA Models.
Usage Example
Using the Component in a Pipeline
This is an example of a query pipeline with NvidiaTextEmbedder that receives a query to embed and then sends the embedded query to OpenSearchEmbeddingRetriever to find matching documents.
```yaml
components:
  NvidiaTextEmbedder:
    type: haystack_integrations.components.embedders.nvidia.text_embedder.NvidiaTextEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - NVIDIA_API_KEY
        strict: true
      model: nvidia/nv-embedqa-e5-v5
      api_url: https://integrate.api.nvidia.com/v1
      prefix: ''
      suffix: ''
      truncate:
      timeout:
  OpenSearchEmbeddingRetriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      filters:
      top_k: 10
      filter_policy: replace
      custom_query:
      raise_on_failure: true
      efficient_filtering: true
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: nvidia-embeddings-index
          max_chunk_bytes: 104857600
          embedding_dim: 1024
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
          similarity: cosine

connections:
  - sender: NvidiaTextEmbedder.embedding
    receiver: OpenSearchEmbeddingRetriever.query_embedding

max_runs_per_component: 100

metadata: {}

inputs:
  query:
    - NvidiaTextEmbedder.text

outputs:
  documents: OpenSearchEmbeddingRetriever.documents
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | Optional[str] | None | Embedding model to use. If a locally hosted API URL is provided without a model, the component defaults to the model reported by the endpoint's /models API. |
| api_key | Optional[Secret] | Secret.from_env_var('NVIDIA_API_KEY') | API key for the NVIDIA NIM. |
| api_url | str | os.getenv('NVIDIA_API_URL', DEFAULT_API_URL) | Custom API URL for the NVIDIA NIM. Format for API URL is http://host:port |
| prefix | str | "" | A string to add to the beginning of each text. |
| suffix | str | "" | A string to add to the end of each text. |
| truncate | Optional[Union[EmbeddingTruncateMode, str]] | None | Specifies how inputs longer than the maximum token length should be truncated. If None, the behavior is model-dependent; see the official documentation for more information. |
| timeout | Optional[float] | None | Timeout for request calls. If not set, it is inferred from the NVIDIA_TIMEOUT environment variable or defaults to 60. |
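When running outside the platform, the api_key and timeout defaults are read from environment variables, so you can configure both without touching the pipeline definition. A minimal sketch (the key below is a placeholder, not a real credential):

```python
import os

# Example only: replace with your real key from the NVIDIA API Catalog.
os.environ["NVIDIA_API_KEY"] = "nvapi-example-key"

# Optional: request timeout in seconds (defaults to 60 when unset).
os.environ["NVIDIA_TIMEOUT"] = "30"
```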
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | The text to embed. |