VertexAITextEmbedder

Embed text using Vertex AI Text Embeddings API.

Basic Information

  • Type: haystack_integrations.components.embedders.google_vertex.text_embedder.VertexAITextEmbedder
  • Components it can connect with:
    • Input: Receives a query string as input in a query pipeline.
    • Retrievers: Sends the computed embedding to an embedding Retriever.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `text` | `Union[List[Document], List[str], str]` | | The text to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `embedding` | `List[float]` | | The embedding of the input text. |

Overview

VertexAITextEmbedder embeds text using the Vertex AI Text Embeddings API. Use this component in query pipelines to embed the user's query before passing it to a retriever for semantic search.

Make sure to use the same embedding model as the one used to embed the documents in the document store.
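The reason the models must match: the retriever compares the query embedding against stored document embeddings with a vector similarity measure, which is only meaningful when both vectors come from the same model (same embedding space and dimension). As a minimal illustration, here is a plain-Python cosine similarity, the kind of comparison a retriever performs internally. This sketch is not part of the component; retrievers such as OpenSearchEmbeddingRetriever compute similarity inside the document store.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A query embedding is only comparable to document embeddings produced
# by the same model: mixing models gives vectors whose dimensions and
# geometry don't line up, so the score is meaningless.
query_embedding = [0.1, 0.3, 0.5]   # toy 3-dim stand-in for a real embedding
doc_embedding = [0.2, 0.4, 0.4]
score = cosine_similarity(query_embedding, doc_embedding)
```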

Compatible Models

You can find the supported models in the official Google documentation.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.

Authorization

This component authenticates using Google Cloud Application Default Credentials (ADC). Create secrets for GCP_PROJECT_ID and GCP_DEFAULT_REGION. For detailed instructions on creating secrets, see Create Secrets.
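When running locally rather than on the platform, the component reads the same values from environment variables. A minimal sketch (the project ID and region below are placeholders, substitute your own):

```shell
# Hypothetical values -- replace with your actual GCP project and region.
export GCP_PROJECT_ID="my-gcp-project"
export GCP_DEFAULT_REGION="us-central1"
echo "$GCP_PROJECT_ID $GCP_DEFAULT_REGION"
```

Because the pipeline example below declares both secrets with `strict: false`, the pipeline still loads if a variable is unset; the component then falls back to its defaults.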

Usage Example

This query pipeline uses VertexAITextEmbedder to embed queries for semantic search:

components:
  VertexAITextEmbedder:
    type: haystack_integrations.components.embedders.google_vertex.text_embedder.VertexAITextEmbedder
    init_parameters:
      model: text-embedding-005
      task_type: RETRIEVAL_QUERY
      gcp_region_name:
        type: env_var
        env_vars:
          - GCP_DEFAULT_REGION
        strict: false
      gcp_project_id:
        type: env_var
        env_vars:
          - GCP_PROJECT_ID
        strict: false
      progress_bar: true
      truncate_dim:

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'vertex-embeddings'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 10

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - role: system
          content: "You are a helpful assistant answering questions based on the provided documents."
        - role: user
          content: "Documents:\n{% for doc in documents %}\n{{ doc.content }}\n{% endfor %}\n\nQuestion: {{ query }}"

  OpenAIChatGenerator:
    type: haystack.components.generators.chat.openai.OpenAIChatGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: false
      model: gpt-4o-mini

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

connections:
  - sender: VertexAITextEmbedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: ChatPromptBuilder.documents
  - sender: ChatPromptBuilder.prompt
    receiver: OpenAIChatGenerator.messages
  - sender: OpenAIChatGenerator.replies
    receiver: OutputAdapter.replies
  - sender: embedding_retriever.documents
    receiver: answer_builder.documents
  - sender: OutputAdapter.output
    receiver: answer_builder.replies

inputs:
  query:
    - VertexAITextEmbedder.text
    - ChatPromptBuilder.query
    - answer_builder.query
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents
  answers: answer_builder.answers

max_runs_per_component: 100

metadata: {}

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `Literal['text-embedding-004', 'text-embedding-005', ...]` | | Name of the model to use. |
| `task_type` | `Literal['RETRIEVAL_DOCUMENT', 'RETRIEVAL_QUERY', ...]` | `RETRIEVAL_QUERY` | The type of task for which the embeddings are generated. See the Google documentation. |
| `gcp_region_name` | `Optional[Secret]` | `Secret.from_env_var('GCP_DEFAULT_REGION', strict=False)` | The default location to use when making API calls. If not set, defaults to us-central1. |
| `gcp_project_id` | `Optional[Secret]` | `Secret.from_env_var('GCP_PROJECT_ID', strict=False)` | ID of the GCP project to use. |
| `progress_bar` | `bool` | `True` | Whether to display a progress bar during processing. |
| `truncate_dim` | `Optional[int]` | `None` | The dimension to truncate the embeddings to, if specified. |
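To make `truncate_dim` concrete: with a dimension cutoff, only the first `truncate_dim` values of each embedding vector are kept, which trades some accuracy for a smaller index. The helper below is a hypothetical sketch of that cutoff, not the component's implementation; in particular, whether the component renormalizes the truncated vector is not specified here.

```python
from typing import Optional

def truncate_embedding(embedding: list[float], truncate_dim: Optional[int]) -> list[float]:
    """Keep only the first truncate_dim values of an embedding vector.

    Assumed behavior for illustration: a plain prefix slice, mirroring
    what a dimension cutoff like truncate_dim does to each vector.
    """
    if truncate_dim is None:
        return embedding
    return embedding[:truncate_dim]

full = [0.12, -0.05, 0.33, 0.08]   # toy stand-in for a 768-dim embedding
short = truncate_embedding(full, truncate_dim=2)
```

If you truncate query embeddings, remember that the `embedding_dim` of your document store index and the stored document embeddings must be truncated to the same size.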

Run Method Parameters

These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `text` | `Union[List[Document], List[str], str]` | | The text to embed. |