GoogleGenAIDocumentEmbedder
Compute document embeddings using Google AI models.
Basic Information
- Type: haystack_integrations.components.embedders.google_genai.document_embedder.GoogleGenAIDocumentEmbedder
- Components it can connect with:
  - Preprocessors: Receives documents from Converters or DocumentSplitter in an index.
  - DocumentWriter: Sends embedded documents to DocumentWriter for storage.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | A list of documents to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | A list of documents with embeddings. |
| meta | Dict[str, Any] | | Information about the usage of the model. |
Overview
GoogleGenAIDocumentEmbedder computes document embeddings using Google AI models. It supports both the Gemini Developer API and Vertex AI.
The embedding of each document is stored in its embedding field. Use this component in an index to embed documents before writing them to a document store.
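For example, here's a minimal standalone sketch, assuming the Google GenAI integration package is installed and a GOOGLE_API_KEY environment variable is set:

```python
from haystack import Document
from haystack_integrations.components.embedders.google_genai import GoogleGenAIDocumentEmbedder

# Reads GOOGLE_API_KEY or GEMINI_API_KEY from the environment by default
embedder = GoogleGenAIDocumentEmbedder(model="text-embedding-004")

docs = [
    Document(content="Berlin is the capital of Germany."),
    Document(content="Paris is the capital of France."),
]

result = embedder.run(documents=docs)

# Each returned document now carries its vector in the embedding field
print(len(result["documents"][0].embedding))
```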
Compatible Models
You can find the supported models in the Google AI documentation.
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
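For this component, that means pairing GoogleGenAIDocumentEmbedder in the index with the text embedder from the same integration in the query pipeline, both configured with the same model. A minimal sketch, assuming the integration's GoogleGenAITextEmbedder is used on the query side:

```python
from haystack_integrations.components.embedders.google_genai import (
    GoogleGenAIDocumentEmbedder,
    GoogleGenAITextEmbedder,
)

# Indexing side: embeds whole documents
document_embedder = GoogleGenAIDocumentEmbedder(model="text-embedding-004")

# Query side: embeds the query string with the same model,
# so document and query vectors live in the same space
text_embedder = GoogleGenAITextEmbedder(model="text-embedding-004")
```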
Authorization
Gemini Developer API: Create a secret with your Google API key. Type GOOGLE_API_KEY or GEMINI_API_KEY as the secret key. Get your API key from Google AI Studio.
Vertex AI: Create secrets for GCP_PROJECT_ID and GCP_DEFAULT_REGION, or use Application Default Credentials.
For detailed instructions on creating secrets, see Create Secrets.
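Outside the platform, the same credentials can be supplied through environment variables or Haystack's Secret helper. A sketch of both authorization modes; the project ID and region below are placeholders:

```python
from haystack.utils import Secret
from haystack_integrations.components.embedders.google_genai import GoogleGenAIDocumentEmbedder

# Gemini Developer API: the key is read from GOOGLE_API_KEY or GEMINI_API_KEY
gemini_embedder = GoogleGenAIDocumentEmbedder(
    api="gemini",
    api_key=Secret.from_env_var(["GOOGLE_API_KEY", "GEMINI_API_KEY"], strict=False),
)

# Vertex AI with Application Default Credentials: no API key,
# but project and location are required
vertex_embedder = GoogleGenAIDocumentEmbedder(
    api="vertex",
    vertex_ai_project="my-gcp-project",   # placeholder project ID
    vertex_ai_location="us-central1",
)
```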
Usage Example
This index uses GoogleGenAIDocumentEmbedder to embed documents before storing them:
components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: sentence
      split_length: 5
      split_overlap: 1
  GoogleGenAIDocumentEmbedder:
    type: haystack_integrations.components.embedders.google_genai.document_embedder.GoogleGenAIDocumentEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - GOOGLE_API_KEY
          - GEMINI_API_KEY
        strict: false
      api: gemini
      vertex_ai_project:
      vertex_ai_location:
      model: text-embedding-004
      prefix: ""
      suffix: ""
      batch_size: 32
      progress_bar: true
      meta_fields_to_embed:
      embedding_separator: "\n"
      config:
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'google-embeddings'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE

connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: GoogleGenAIDocumentEmbedder.documents
  - sender: GoogleGenAIDocumentEmbedder.documents
    receiver: DocumentWriter.documents

inputs:
  files:
    - TextFileToDocument.sources

max_runs_per_component: 100

metadata: {}
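The same index can also be built programmatically. The sketch below mirrors the YAML above but swaps the OpenSearch store for an in-memory store so it runs without extra infrastructure; the input file path is a placeholder:

```python
from haystack import Pipeline
from haystack.components.converters import TextFileToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.document_stores.types import DuplicatePolicy
from haystack_integrations.components.embedders.google_genai import GoogleGenAIDocumentEmbedder

document_store = InMemoryDocumentStore()

indexing = Pipeline()
indexing.add_component("converter", TextFileToDocument())
indexing.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=5, split_overlap=1))
indexing.add_component("embedder", GoogleGenAIDocumentEmbedder(model="text-embedding-004"))
indexing.add_component("writer", DocumentWriter(document_store=document_store, policy=DuplicatePolicy.OVERWRITE))

indexing.connect("converter.documents", "splitter.documents")
indexing.connect("splitter.documents", "embedder.documents")
indexing.connect("embedder.documents", "writer.documents")

indexing.run({"converter": {"sources": ["my_file.txt"]}})  # placeholder file path
```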
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var(['GOOGLE_API_KEY', 'GEMINI_API_KEY'], strict=False) | Google API key. Not needed if using Vertex AI with Application Default Credentials. |
| api | Literal['gemini', 'vertex'] | gemini | Which API to use. Either "gemini" for the Gemini Developer API or "vertex" for Vertex AI. |
| vertex_ai_project | Optional[str] | None | Google Cloud project ID for Vertex AI. Required when using Vertex AI with Application Default Credentials. |
| vertex_ai_location | Optional[str] | None | Google Cloud location for Vertex AI (for example, "us-central1", "europe-west1"). Required when using Vertex AI with Application Default Credentials. |
| model | str | text-embedding-004 | The name of the model to use for calculating embeddings. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| batch_size | int | 32 | Number of documents to embed at once. |
| progress_bar | bool | True | If True, shows a progress bar when running. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed along with the document text. |
| embedding_separator | str | \n | Separator used to concatenate the metadata fields to the document text. |
| config | Optional[Dict[str, Any]] | None | A dictionary of additional configuration for the embedding request. Defaults to {"task_type": "SEMANTIC_SIMILARITY"}. See Google AI Task types. |
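For example, meta_fields_to_embed and config shape what is sent to the model. A short sketch, assuming a title metadata field and one of the task types listed in the Google AI documentation:

```python
from haystack import Document
from haystack_integrations.components.embedders.google_genai import GoogleGenAIDocumentEmbedder

embedder = GoogleGenAIDocumentEmbedder(
    model="text-embedding-004",
    # Prepend the listed metadata fields to each document's text before embedding
    meta_fields_to_embed=["title"],
    embedding_separator="\n",
    # Override the default {"task_type": "SEMANTIC_SIMILARITY"}
    config={"task_type": "RETRIEVAL_DOCUMENT"},
)

doc = Document(content="Haystack is an open source AI framework.", meta={"title": "About Haystack"})
result = embedder.run(documents=[doc])
```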
Run Method Parameters
These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | A list of documents to embed. |
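When the component is invoked directly rather than through a pipeline, documents is the only run-time input, and the output dictionary matches the Outputs table above. A brief sketch:

```python
from haystack import Document
from haystack_integrations.components.embedders.google_genai import GoogleGenAIDocumentEmbedder

embedder = GoogleGenAIDocumentEmbedder()

result = embedder.run(documents=[Document(content="Hello, embeddings!")])

embedded_docs = result["documents"]  # documents with their embedding field populated
usage = result["meta"]               # information about the usage of the model
```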