NvidiaDocumentEmbedder
Calculate document embeddings using NVIDIA models. Document embedders are used to embed documents in your indexes.
Basic Information
- Type: haystack_integrations.components.embedders.nvidia.document_embedder.NvidiaDocumentEmbedder
- Components it can connect with:
  - Converters and preprocessors: NvidiaDocumentEmbedder can receive documents to embed from a converter, such as TextFileToDocument, or a preprocessor, such as DocumentSplitter.
  - DocumentWriter: NvidiaDocumentEmbedder sends embedded documents to DocumentWriter, which writes them into a document store.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] |  | A list of Documents to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] |  | Documents with their embeddings added to the embedding field. |
| meta | Dict[str, Any] |  | Metadata related to the embedding process, including usage statistics. |
Overview
NvidiaDocumentEmbedder uses NVIDIA models to embed a list of documents. It stores the computed embedding in each document's embedding field. It's used in indexing pipelines to embed documents and pass them to DocumentWriter.
You can use it with self-hosted models deployed with NVIDIA NIM or models hosted on the NVIDIA API Catalog.
This component embeds documents. To embed a string (like a query), use NvidiaTextEmbedder instead.
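A minimal standalone sketch of what this looks like, assuming the nvidia-haystack package is installed and NVIDIA_API_KEY is set in the environment; the document texts are made up for illustration:

```python
from haystack import Document
from haystack_integrations.components.embedders.nvidia import NvidiaDocumentEmbedder

# Assumes the nvidia-haystack package is installed and NVIDIA_API_KEY is set.
embedder = NvidiaDocumentEmbedder(model="nvidia/nv-embedqa-e5-v5")
embedder.warm_up()  # resolve and initialize the model before the first call

docs = [
    Document(content="NVIDIA NIM provides self-hosted inference microservices."),
    Document(content="Haystack pipelines connect components such as embedders and writers."),
]

result = embedder.run(documents=docs)

# Each returned Document now carries a vector in its embedding field,
# and meta contains usage statistics from the embedding call.
print(len(result["documents"][0].embedding))
print(result["meta"])
```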
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
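As a sketch of that rule with the NVIDIA embedders, keeping the model name in a single constant makes it hard for the indexing and query pipelines to drift apart:

```python
from haystack_integrations.components.embedders.nvidia import (
    NvidiaDocumentEmbedder,
    NvidiaTextEmbedder,
)

# One constant shared by both pipelines keeps the models in sync.
EMBEDDING_MODEL = "nvidia/nv-embedqa-e5-v5"

document_embedder = NvidiaDocumentEmbedder(model=EMBEDDING_MODEL)  # indexing pipeline
text_embedder = NvidiaTextEmbedder(model=EMBEDDING_MODEL)          # query pipeline
```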
Authorization
You need an NVIDIA API key to use this component. Connect Haystack Enterprise Platform to NVIDIA on the Integrations page. For detailed instructions, see Use NVIDIA Models.
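If you run the component outside the platform, it reads the key from the NVIDIA_API_KEY environment variable by default. A minimal sketch of passing it explicitly with Haystack's Secret helper:

```python
from haystack.utils import Secret
from haystack_integrations.components.embedders.nvidia import NvidiaDocumentEmbedder

# The default is Secret.from_env_var("NVIDIA_API_KEY"); passing it explicitly
# makes the requirement visible in code. Never hard-code the key itself.
embedder = NvidiaDocumentEmbedder(
    model="nvidia/nv-embedqa-e5-v5",
    api_key=Secret.from_env_var("NVIDIA_API_KEY"),
)
```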
Usage Example
Using the Component in an Index
In this index, NvidiaDocumentEmbedder receives documents from DocumentSplitter and embeds them. It then sends the embedded documents to DocumentWriter, which writes them into a document store. The index is configured to use the nvidia/nv-embedqa-e5-v5 model, which means the NvidiaTextEmbedder in your query pipeline must use the same model (see the query-side sketch after the configuration).
components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 200
      split_overlap: 0
      split_threshold: 0
      splitting_function:
  NvidiaDocumentEmbedder:
    type: haystack_integrations.components.embedders.nvidia.document_embedder.NvidiaDocumentEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - NVIDIA_API_KEY
        strict: true
      model: nvidia/nv-embedqa-e5-v5
      api_url: https://integrate.api.nvidia.com/v1
      prefix: ''
      suffix: ''
      batch_size: 32
      progress_bar: true
      meta_fields_to_embed:
      embedding_separator: "\n"
      truncate:
      timeout:
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: nvidia-embeddings-index
          max_chunk_bytes: 104857600
          embedding_dim: 1024
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
          similarity: cosine
      policy: NONE
connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: NvidiaDocumentEmbedder.documents
  - sender: NvidiaDocumentEmbedder.documents
    receiver: DocumentWriter.documents
max_runs_per_component: 100
metadata: {}
inputs:
  files:
    - TextFileToDocument.sources
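The matching query side is not part of this index. As a sketch, assuming the nvidia-haystack and opensearch-haystack packages are installed and an OpenSearch instance is reachable at the hypothetical host below, NvidiaTextEmbedder with the same model feeds the query embedding into a retriever over the nvidia-embeddings-index index:

```python
from haystack import Pipeline
from haystack_integrations.components.embedders.nvidia import NvidiaTextEmbedder
from haystack_integrations.components.retrievers.opensearch import OpenSearchEmbeddingRetriever
from haystack_integrations.document_stores.opensearch import OpenSearchDocumentStore

# Hypothetical OpenSearch host; the index name matches the one written by the
# indexing pipeline above.
document_store = OpenSearchDocumentStore(
    hosts=["http://localhost:9200"],
    index="nvidia-embeddings-index",
    embedding_dim=1024,
)

query_pipeline = Pipeline()
query_pipeline.add_component(
    "text_embedder", NvidiaTextEmbedder(model="nvidia/nv-embedqa-e5-v5")
)
query_pipeline.add_component(
    "retriever", OpenSearchEmbeddingRetriever(document_store=document_store)
)
# The text embedder outputs a single embedding that the retriever consumes.
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = query_pipeline.run({"text_embedder": {"text": "What does NVIDIA NIM provide?"}})
print(result["retriever"]["documents"])
```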
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | Optional[str] | None | Embedding model to use. If no model is specified and a locally hosted API URL is used, the component defaults to a model discovered through the /models API. |
| api_key | Optional[Secret] | Secret.from_env_var('NVIDIA_API_KEY') | API key for the NVIDIA NIM. |
| api_url | str | os.getenv('NVIDIA_API_URL', DEFAULT_API_URL) | Custom API URL for the NVIDIA NIM. The expected format is http://host:port. |
| prefix | str | "" | A string to add to the beginning of each text. |
| suffix | str | "" | A string to add to the end of each text. |
| batch_size | int | 32 | Number of Documents to encode at once. Cannot be greater than 50. |
| progress_bar | bool | True | Whether to show a progress bar or not. |
| meta_fields_to_embed | Optional[List[str]] | None | List of meta fields that should be embedded along with the Document text (see the sketch after this table). |
| embedding_separator | str | \n | Separator used to concatenate the meta fields to the Document text. |
| truncate | Optional[Union[EmbeddingTruncateMode, str]] | None | Specifies how inputs longer than the maximum token length should be truncated. If None the behavior is model-dependent, see the official documentation for more information. |
| timeout | Optional[float] | None | Timeout for request calls, if not set it is inferred from the NVIDIA_TIMEOUT environment variable or set to 60 by default. |
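A small sketch of how meta_fields_to_embed and embedding_separator combine document metadata with the content before embedding; the "title" field is hypothetical and only used for illustration:

```python
from haystack import Document
from haystack_integrations.components.embedders.nvidia import NvidiaDocumentEmbedder

# "title" is a hypothetical metadata field; its value is joined to the document
# content with embedding_separator before the text is sent for embedding.
embedder = NvidiaDocumentEmbedder(
    model="nvidia/nv-embedqa-e5-v5",
    meta_fields_to_embed=["title"],
    embedding_separator="\n",
    batch_size=32,
)
embedder.warm_up()

doc = Document(
    content="NV-Embed-QA models are tuned for question-answering retrieval.",
    meta={"title": "NVIDIA embedding models"},
)
result = embedder.run(documents=[doc])
print(len(result["documents"][0].embedding))
```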
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] |  | A list of Documents to embed. |