VertexAIGeminiGenerator
Generate text using Google Gemini models through Vertex AI.
Basic Information
- Type: `haystack_integrations.components.generators.google_vertex.gemini.VertexAIGeminiGenerator`
- Components it can connect with:
  - PromptBuilder: Receives a prompt from PromptBuilder.
  - AnswerBuilder: Sends generated replies to AnswerBuilder.
- Supports multimodal inputs (text and images).
This integration will be deprecated soon. We recommend using GoogleGenAIChatGenerator instead, which provides unified access to both the Gemini Developer API and Vertex AI.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | Prompt for the model. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | A list of generated content. |
Overview
VertexAIGeminiGenerator generates text using Google Gemini models through Vertex AI. It supports multimodal inputs including text and images.
This component is designed for text generation, not for chat. If you want chat capabilities, use GoogleGenAIChatGenerator instead.
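As a rough sketch of the component's `run()` contract, the stand-in below mimics only the input/output shape: variadic `parts` in, a dict with a `replies` list of strings out. `FakeGeminiGenerator` and its canned reply are illustrative stand-ins, not part of the library; the real component sends `parts` to Gemini on Vertex AI and requires GCP credentials.

```python
# Stand-in that mirrors VertexAIGeminiGenerator's run() shape:
# variadic `parts` in, {"replies": [str, ...]} out.
from typing import Any, Dict, List, Union


class FakeGeminiGenerator:
    """Illustrative stub; not part of the Haystack integration."""

    def run(self, parts: List[Union[str, bytes]]) -> Dict[str, Any]:
        # Echo a canned reply so the contract is visible without credentials.
        return {"replies": [f"(model reply to: {parts[0]})"]}


result = FakeGeminiGenerator().run(parts=["What is Vertex AI?"])
print(result["replies"])
```

Downstream components such as AnswerBuilder consume the `replies` list from this output dict.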
Authorization
This component authenticates using Google Cloud Application Default Credentials (ADC). For more information, see the official Google documentation.
Create secrets for GCP_PROJECT_ID and optionally GCP_DEFAULT_REGION. For detailed instructions on creating secrets, see Create Secrets.
Usage Example
This query pipeline uses VertexAIGeminiGenerator to generate text responses:
```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'default'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 10
      fuzziness: 0
  PromptBuilder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |
        Given the following information, answer the question.
        Context:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ query }}
      required_variables:
      variables:
  VertexAIGeminiGenerator:
    type: haystack_integrations.components.generators.google_vertex.gemini.VertexAIGeminiGenerator
    init_parameters:
      project_id:
      model: gemini-2.0-flash
      location:
      generation_config:
      safety_settings:
      system_instruction:
      streaming_callback:
  AnswerBuilder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters:
      pattern:
      reference_pattern:

connections:
  - sender: bm25_retriever.documents
    receiver: PromptBuilder.documents
  - sender: PromptBuilder.prompt
    receiver: VertexAIGeminiGenerator.parts
  - sender: VertexAIGeminiGenerator.replies
    receiver: AnswerBuilder.replies
  - sender: bm25_retriever.documents
    receiver: AnswerBuilder.documents

inputs:
  query:
    - bm25_retriever.query
    - PromptBuilder.query
    - AnswerBuilder.query

outputs:
  answers: AnswerBuilder.answers

max_runs_per_component: 100

metadata: {}
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| project_id | Optional[str] | None | ID of the GCP project to use. By default, it is set during Google Cloud authentication. |
| model | str | gemini-2.0-flash | Name of the model to use. For available models, see Vertex AI models. |
| location | Optional[str] | None | The default location to use when making API calls. If not set, uses us-central1. |
| generation_config | Optional[Union[GenerationConfig, Dict[str, Any]]] | None | The generation config to use. Accepted fields: temperature, top_p, top_k, candidate_count, max_output_tokens, stop_sequences. |
| safety_settings | Optional[Dict[HarmCategory, HarmBlockThreshold]] | None | The safety settings to use. |
| system_instruction | Optional[Union[str, ByteStream, Part]] | None | Default system instruction to use for generating content. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
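The generation config can be passed as a plain dict using the accepted fields listed above. A minimal sketch (the specific values are illustrative, not recommendations):

```python
# Build a generation_config dict with the fields the component accepts.
generation_config = {
    "temperature": 0.2,          # lower = more deterministic output
    "top_p": 0.95,               # nucleus sampling cutoff
    "top_k": 40,                 # sample from the 40 most likely tokens
    "candidate_count": 1,        # number of replies to generate
    "max_output_tokens": 512,    # hard cap on reply length
    "stop_sequences": ["\n\n"],  # stop generating at a blank line
}

# At init time this dict would be passed as, e.g.:
# VertexAIGeminiGenerator(model="gemini-2.0-flash",
#                         generation_config=generation_config)
print(sorted(generation_config))
```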
Run Method Parameters
These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | Prompt for the model. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
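To sketch the `streaming_callback` contract, the example below uses a stand-in `StreamingChunk` dataclass mirroring Haystack's (so it runs without the haystack package installed); the real callback receives `haystack.dataclasses.StreamingChunk` objects as tokens arrive from the stream.

```python
from dataclasses import dataclass


@dataclass
class StreamingChunk:
    """Stand-in for haystack.dataclasses.StreamingChunk (text payload only)."""
    content: str


collected: list = []


def streaming_callback(chunk: StreamingChunk) -> None:
    # Called once per streamed token: print it and keep it for later.
    print(chunk.content, end="", flush=True)
    collected.append(chunk.content)


# Simulate the generator streaming three chunks:
for token in ("Vertex ", "AI ", "streams."):
    streaming_callback(StreamingChunk(content=token))
```

The same function could then be passed as `streaming_callback=streaming_callback` either at init time or at run time.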