VertexAIDocumentEmbedder
Embed documents using Vertex AI Embeddings API.
Basic Information
- Type: haystack_integrations.components.embedders.google_vertex.document_embedder.VertexAIDocumentEmbedder
- Components it can connect with:
  - Preprocessors: Receives documents from Converters or DocumentSplitter in an index.
  - DocumentWriter: Sends embedded documents to DocumentWriter for storage.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | A list of documents to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | A list of documents with embeddings. |
Overview
VertexAIDocumentEmbedder embeds documents using the Vertex AI Embeddings API. Use this component in an index to embed documents before storing them in a document store.
Compatible Models
You can find the supported models in the official Google documentation.
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
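In this case, pair VertexAIDocumentEmbedder in your index with VertexAITextEmbedder in your query pipeline, configured with the same model. The sketch below is illustrative only: the VertexAITextEmbedder type path mirrors this component's path and is an assumption, and the retriever name and connection are placeholders to adapt to your own pipeline.

```yaml
components:
  VertexAITextEmbedder:
    type: haystack_integrations.components.embedders.google_vertex.text_embedder.VertexAITextEmbedder
    init_parameters:
      model: text-embedding-005   # must match the model set in VertexAIDocumentEmbedder

connections:
  - sender: VertexAITextEmbedder.embedding
    receiver: OpenSearchEmbeddingRetriever.query_embedding   # placeholder retriever
```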
Authorization
This component authenticates using Google Cloud Application Default Credentials (ADC). Create secrets with the keys GCP_PROJECT_ID and GCP_DEFAULT_REGION. For detailed instructions on creating secrets, see Create Secrets.
Usage Example
This index uses VertexAIDocumentEmbedder to embed documents before storing them:
```yaml
components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false

  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: sentence
      split_length: 5
      split_overlap: 1

  VertexAIDocumentEmbedder:
    type: haystack_integrations.components.embedders.google_vertex.document_embedder.VertexAIDocumentEmbedder
    init_parameters:
      model: text-embedding-005
      task_type: RETRIEVAL_DOCUMENT
      gcp_region_name:
        type: env_var
        env_vars:
          - GCP_DEFAULT_REGION
        strict: false
      gcp_project_id:
        type: env_var
        env_vars:
          - GCP_PROJECT_ID
        strict: false
      batch_size: 32
      max_tokens_total: 20000
      time_sleep: 30
      retries: 3
      progress_bar: true
      truncate_dim:
      meta_fields_to_embed:
      embedding_separator: "\n"

  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'vertex-embeddings'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE

connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: VertexAIDocumentEmbedder.documents
  - sender: VertexAIDocumentEmbedder.documents
    receiver: DocumentWriter.documents

inputs:
  files:
    - TextFileToDocument.sources

max_runs_per_component: 100

metadata: {}
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | Literal['text-embedding-004', 'text-embedding-005', ...] | | Name of the model to use. |
| task_type | Literal['RETRIEVAL_DOCUMENT', 'RETRIEVAL_QUERY', ...] | RETRIEVAL_DOCUMENT | The type of task for which the embeddings are being generated. See Google documentation. |
| gcp_region_name | Optional[Secret] | Secret.from_env_var('GCP_DEFAULT_REGION', strict=False) | The default location to use when making API calls. If not set, uses us-central1. |
| gcp_project_id | Optional[Secret] | Secret.from_env_var('GCP_PROJECT_ID', strict=False) | ID of the GCP project to use. |
| batch_size | int | 32 | The number of documents to process in a single batch. |
| max_tokens_total | int | 20000 | The maximum number of tokens to process in total. |
| time_sleep | int | 30 | The time to sleep between retries in seconds. |
| retries | int | 3 | The number of retries in case of failure. |
| progress_bar | bool | True | Whether to display a progress bar during processing. |
| truncate_dim | Optional[int] | None | The dimension to truncate the embeddings to, if specified. |
| meta_fields_to_embed | Optional[List[str]] | None | A list of metadata fields to embed along with the document content. |
| embedding_separator | str | \n | The separator used to join the metadata fields and the document content before embedding. |
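For example, to embed selected metadata alongside the document content, list the fields in meta_fields_to_embed; the embedder joins them with embedding_separator before computing the embedding. This is a minimal sketch, and the title and category field names are hypothetical:

```yaml
VertexAIDocumentEmbedder:
  type: haystack_integrations.components.embedders.google_vertex.document_embedder.VertexAIDocumentEmbedder
  init_parameters:
    model: text-embedding-005
    meta_fields_to_embed:
      - title      # hypothetical metadata field
      - category   # hypothetical metadata field
    embedding_separator: "\n"
```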
Run Method Parameters
These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | A list of documents to embed. |