DeepsetNvidiaDocumentEmbedder
Embed documents using embedding models served by NVIDIA Triton.
Basic Information
- Pipeline type: Indexing
- Type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
- Components it most often connects with:
  - PreProcessors: DeepsetNvidiaDocumentEmbedder can receive documents to embed from a PreProcessor, like DocumentSplitter.
  - DocumentWriter: DeepsetNvidiaDocumentEmbedder can send embedded documents to DocumentWriter, which writes them into the document store.
Inputs
Name | Type | Description |
---|---|---|
documents | List of Document objects | The documents to embed. |
Outputs
Name | Type | Description |
---|---|---|
documents | List of Document objects | Documents with their computed embeddings added to the embedding field. |
meta | Dictionary | Metadata regarding the usage statistics. |
Overview
DeepsetNvidiaDocumentEmbedder uses NVIDIA Triton models to embed a list of documents. It then adds the computed embedding to each document's embedding field.
This component runs on optimized hardware in deepset Cloud, which means it doesn't work if you export it to a local Python file. If you're planning to export, use SentenceTransformersDocumentEmbedder instead.
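If you plan to export the pipeline, you can swap in SentenceTransformersDocumentEmbedder as a drop-in replacement for this component. Here's a minimal sketch of such a component definition; the model name is illustrative and must match the model your other pipelines use:
```yaml
DocumentEmbedder:
  type: haystack.components.embedders.sentence_transformers_document_embedder.SentenceTransformersDocumentEmbedder
  init_parameters:
    model: intfloat/multilingual-e5-base
```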
Embedding Models in Query and Indexing Pipelines
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
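The same applies here: if your indexing pipeline uses DeepsetNvidiaDocumentEmbedder, embed the query with the matching NVIDIA text embedder and the same model. The following is a minimal sketch of the query-side component; the type path is assumed to mirror the document embedder's and may differ in your workspace:
```yaml
DeepsetNvidiaTextEmbedder:
  type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
  init_parameters:
    model: intfloat/multilingual-e5-base
```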
Usage Example
This is an example of a DeepsetNvidiaDocumentEmbedder used in an indexing pipeline. It receives a list of documents from DocumentSplitter and then sends the embedded documents to DocumentWriter.
Here's the YAML configuration:
```yaml
components:
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 200
      split_overlap: 0
      split_threshold: 0
      splitting_function: null
  DeepsetNvidiaDocumentEmbedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
    init_parameters:
      model: intfloat/multilingual-e5-base
      prefix: ''
      suffix: ''
      batch_size: 32
      meta_fields_to_embed: null
      embedding_separator: "\n"
      truncate: null
      normalize_embeddings: true
      timeout: null
      backend_kwargs: null
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          embedding_dim: 1024
          similarity: cosine
      policy: NONE
connections:
  - sender: DocumentSplitter.documents
    receiver: DeepsetNvidiaDocumentEmbedder.documents
  - sender: DeepsetNvidiaDocumentEmbedder.documents
    receiver: DocumentWriter.documents
max_runs_per_component: 100
metadata: {}
```
Init Parameters
Parameter | Type | Possible values | Description |
---|---|---|---|
model | DeepsetNVIDIAEmbeddingModels | Default: DeepsetNVIDIAEmbeddingModels.INTFLOAT_MULTILINGUAL_E5_BASE | The model to use for calculating embeddings. Can be a specific model path like intfloat/multilingual-e5-base. Choose the model from the list. Required. |
prefix | String | Default: "" | A string to add at the beginning of each document text, useful for instructions required by some embedding models. Required. |
suffix | String | Default: "" | A string to add at the end of each document text. Required. |
batch_size | Integer | Default: 32 | The number of documents to embed at once. Required. |
meta_fields_to_embed | List of strings | Default: None | A list of metadata fields to embed along with the document text. Required. |
embedding_separator | String | Default: "\n" | The separator used to concatenate the metadata fields to the document text. Required. |
truncate | EmbeddingTruncateMode | START, END, NONE. Default: None | Specifies how to truncate inputs longer than the maximum token length. If set to START, the input is truncated from the start. If set to END, the input is truncated from the end. If set to NONE, an error is returned if the input is too long. Required. |
normalize_embeddings | Boolean | True, False. Default: False | Whether to normalize the embeddings by dividing each embedding by its L2 norm. Required. |
timeout | Float | Default: None | Timeout for request calls, in seconds. Required. |
backend_kwargs | Dictionary | Default: None | Keyword arguments to further customize the model behavior. Required. |
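For reference, here's a hedged sketch of the embedder's init_parameters with a few non-default values set; the prefix and metadata field are illustrative assumptions, not recommendations from this reference:
```yaml
DeepsetNvidiaDocumentEmbedder:
  type: deepset_cloud_custom_nodes.embedders.nvidia.document_embedder.DeepsetNvidiaDocumentEmbedder
  init_parameters:
    model: intfloat/multilingual-e5-base
    prefix: 'passage: '          # some models, such as E5 variants, expect an instruction prefix
    meta_fields_to_embed:
      - title                    # illustrative metadata field; use fields that exist in your documents
    embedding_separator: "\n"
    truncate: END                # truncate overly long inputs from the end instead of failing
    normalize_embeddings: true
```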