
GoogleGenAITextEmbedder

Embed strings using Google AI models.

Key Features

  • Embeds query text using Google AI embedding models for semantic search.
  • Supports both the Gemini Developer API and Vertex AI through a single component.
  • Use it with the same model as GoogleGenAIDocumentEmbedder to ensure compatible embeddings.
  • Outputs a float vector embedding for use with embedding retrievers.

Configuration

Authentication

Gemini Developer API: Create a secret with your Google API key. Use GOOGLE_API_KEY or GEMINI_API_KEY as the secret key. Get your API key from Google AI Studio.

Vertex AI: Create secrets for GCP_PROJECT_ID and GCP_DEFAULT_REGION, or use Application Default Credentials.
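As a sketch, a component configured for Vertex AI with Application Default Credentials could look like this (the project ID and region values are placeholders you replace with your own):

```yaml
GoogleGenAITextEmbedder:
  type: haystack_integrations.components.embedders.google_genai.text_embedder.GoogleGenAITextEmbedder
  init_parameters:
    api: vertex
    vertex_ai_project: my-gcp-project   # placeholder: your GCP project ID
    vertex_ai_location: us-central1     # placeholder: your GCP region
    model: text-embedding-004
```

With `api: vertex` and Application Default Credentials, no API key secret is needed.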

For detailed instructions on creating secrets, see Create Secrets.

  1. Drag the GoogleGenAITextEmbedder component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Enter the model name (for example, text-embedding-004). For supported models, see the Google AI documentation.
  4. Go to the Advanced tab to configure the API type, API key, Vertex AI project and location, prefix, suffix, and config.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use GoogleGenAIDocumentEmbedder to embed your documents, use GoogleGenAITextEmbedder with the same model to embed your queries.
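For instance, the indexing-side counterpart of the query embedder could be configured like this (a sketch; the type path for GoogleGenAIDocumentEmbedder is assumed to follow the same naming pattern as the text embedder):

```yaml
GoogleGenAIDocumentEmbedder:
  type: haystack_integrations.components.embedders.google_genai.document_embedder.GoogleGenAIDocumentEmbedder
  init_parameters:
    model: text-embedding-004   # must match the model set in GoogleGenAITextEmbedder
```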

Connections

GoogleGenAITextEmbedder accepts a text string as input (text). It outputs the computed embedding (embedding) and usage metadata (meta).

In a query pipeline, connect the pipeline's query input to text, then connect embedding to an embedding retriever's query_embedding input.

Usage Example

This query pipeline uses GoogleGenAITextEmbedder to embed queries for semantic search:

components:
  GoogleGenAITextEmbedder:
    type: haystack_integrations.components.embedders.google_genai.text_embedder.GoogleGenAITextEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - GOOGLE_API_KEY
          - GEMINI_API_KEY
        strict: false
      api: gemini
      vertex_ai_project:
      vertex_ai_location:
      model: text-embedding-004
      prefix: ""
      suffix: ""
      config:

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'google-embeddings'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 10

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - role: system
          content: "You are a helpful assistant answering questions based on the provided documents."
        - role: user
          content: "Documents:\n{% for doc in documents %}\n{{ doc.content }}\n{% endfor %}\n\nQuestion: {{ query }}"

  OpenAIChatGenerator:
    type: haystack.components.generators.chat.openai.OpenAIChatGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: false
      model: gpt-4o-mini

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

connections:
  - sender: GoogleGenAITextEmbedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: ChatPromptBuilder.documents
  - sender: ChatPromptBuilder.prompt
    receiver: OpenAIChatGenerator.messages
  - sender: OpenAIChatGenerator.replies
    receiver: OutputAdapter.replies
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: embedding_retriever.documents
    receiver: answer_builder.documents

inputs:
  query:
    - GoogleGenAITextEmbedder.text
    - ChatPromptBuilder.query
    - answer_builder.query
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents
  answers: answer_builder.answers

max_runs_per_component: 100

metadata: {}

Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| embedding | List[float] | | The embedding of the input text. |
| meta | Dict[str, Any] | | Information about the usage of the model. |

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | Secret | Secret.from_env_var(['GOOGLE_API_KEY', 'GEMINI_API_KEY'], strict=False) | Google API key. Not needed if you use Vertex AI with Application Default Credentials. |
| api | Literal['gemini', 'vertex'] | gemini | Which API to use: "gemini" for the Gemini Developer API or "vertex" for Vertex AI. |
| vertex_ai_project | Optional[str] | None | Google Cloud project ID for Vertex AI. Required when using Vertex AI with Application Default Credentials. |
| vertex_ai_location | Optional[str] | None | Google Cloud location for Vertex AI (for example, "us-central1" or "europe-west1"). Required when using Vertex AI with Application Default Credentials. |
| model | str | text-embedding-004 | The name of the model to use for calculating embeddings. |
| prefix | str | "" | A string to add at the beginning of each text to embed. |
| suffix | str | "" | A string to add at the end of each text to embed. |
| config | Optional[Dict[str, Any]] | None | A dictionary of additional embedding configuration options. Defaults to {"task_type": "SEMANTIC_SIMILARITY"}. See Google AI Task types. |
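For retrieval use cases, you can override the default task type through config. For example, the following fragment sets the task type to RETRIEVAL_QUERY (task type values come from the Google AI embeddings documentation):

```yaml
config:
  task_type: RETRIEVAL_QUERY   # optimize query embeddings for document retrieval
```

On the indexing side, the corresponding document embedder would typically use RETRIEVAL_DOCUMENT.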

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to embed. |