GoogleAIGeminiGenerator
Generate text using multimodal Gemini models through Google AI Studio.
Basic Information
- Type: haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
- Components it can connect with:
  - PromptBuilder: receives a prompt from PromptBuilder.
  - AnswerBuilder: sends generated replies to AnswerBuilder.
- Supports multimodal inputs (text and images).
This integration will be deprecated soon. We recommend using GoogleGenAIChatGenerator instead, which provides unified access to both Gemini Developer API and Vertex AI.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | A heterogeneous list of strings, ByteStream, or Part objects. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | A list of strings containing the generated responses. |
Overview
GoogleAIGeminiGenerator generates text using multimodal Gemini models through Google AI Studio. It supports both text and image inputs, making it suitable for vision-based tasks.
This component is designed for text generation, not for chat. If you want chat capabilities, use GoogleGenAIChatGenerator instead.
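For orientation, here is a minimal standalone sketch of the component used directly in Python. It assumes the google-ai-haystack integration package is installed and that a valid key is available in the GOOGLE_API_KEY environment variable; the prompt text is illustrative:

```python
import os

from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

# The component reads the API key from GOOGLE_API_KEY by default.
os.environ.setdefault("GOOGLE_API_KEY", "your-api-key")

gemini = GoogleAIGeminiGenerator(model="gemini-2.0-flash")

# `parts` is variadic: pass one or more strings, ByteStream, or Part objects.
result = gemini.run(parts=["Name the largest planet in the solar system."])
print(result["replies"][0])
```

The component returns a dictionary with a `replies` key holding the generated strings, matching the Outputs table below.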
Authorization
To use this component, connect Haystack Platform to your Google AI Studio account on the Integrations page. Get your API key from Google AI Studio.
For detailed instructions, see Use Google AI Models.
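In the pipeline YAML below, the API key is resolved from the GOOGLE_API_KEY environment variable with `strict: false`, meaning a missing variable does not raise an error at load time. A minimal stdlib sketch of that lookup behavior (the helper name is illustrative, not part of the Haystack API):

```python
import os


def resolve_api_key(name: str = "GOOGLE_API_KEY", strict: bool = False):
    """Mirrors Secret.from_env_var(name, strict=...): return the value
    if the variable is set, otherwise None (or raise when strict)."""
    value = os.environ.get(name)
    if value is None and strict:
        raise ValueError(f"Environment variable {name} is not set.")
    return value


os.environ["GOOGLE_API_KEY"] = "dummy-key-for-demo"
print(resolve_api_key())
```

With `strict: true`, the same lookup would fail fast when the variable is absent, which is often preferable in production deployments.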
Usage Example
This query pipeline uses GoogleAIGeminiGenerator to generate text responses:
```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'default'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 10
      fuzziness: 0
  PromptBuilder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |
        Given the following information, answer the question.
        Context:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ query }}
      required_variables:
      variables:
  GoogleAIGeminiGenerator:
    type: haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - GOOGLE_API_KEY
        strict: false
      model: gemini-2.0-flash
      generation_config:
      safety_settings:
      streaming_callback:
  AnswerBuilder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters:
      pattern:
      reference_pattern:

connections:
  - sender: bm25_retriever.documents
    receiver: PromptBuilder.documents
  - sender: PromptBuilder.prompt
    receiver: GoogleAIGeminiGenerator.parts
  - sender: GoogleAIGeminiGenerator.replies
    receiver: AnswerBuilder.replies
  - sender: bm25_retriever.documents
    receiver: AnswerBuilder.documents

inputs:
  query:
    - bm25_retriever.query
    - PromptBuilder.query
    - AnswerBuilder.query

outputs:
  answers: AnswerBuilder.answers

max_runs_per_component: 100

metadata: {}
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('GOOGLE_API_KEY') | Google AI Studio API key. |
| model | str | gemini-2.0-flash | Name of the model to use. For available models, see Google AI models. |
| generation_config | Optional[Union[GenerationConfig, Dict[str, Any]]] | None | The generation configuration to use. Can be a GenerationConfig object or a dictionary of parameters. |
| safety_settings | Optional[Dict[HarmCategory, HarmBlockThreshold]] | None | The safety settings to use. A dictionary with HarmCategory as keys and HarmBlockThreshold as values. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
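The generation_config parameter can be passed as a plain dictionary instead of a GenerationConfig object. A sketch with common Gemini generation parameters follows; the field names come from the Google AI GenerationConfig schema, and the values are illustrative, not recommendations:

```python
# generation_config accepts a plain dict of Gemini generation parameters.
generation_config = {
    "temperature": 0.7,        # sampling temperature
    "top_p": 0.95,             # nucleus sampling cutoff
    "top_k": 40,               # top-k sampling cutoff
    "max_output_tokens": 512,  # cap on generated tokens
    "candidate_count": 1,      # number of replies to generate
}

# The dict would then be passed at init time, for example:
# gemini = GoogleAIGeminiGenerator(model="gemini-2.0-flash",
#                                  generation_config=generation_config)
```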
Run Method Parameters
These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | A heterogeneous list of strings, ByteStream, or Part objects. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
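The streaming_callback is invoked once per streamed token with a StreamingChunk. Below is a minimal sketch of a compatible callback, exercised with a stand-in chunk class rather than a live stream; only the `content` attribute of Haystack's real StreamingChunk is assumed here:

```python
from dataclasses import dataclass


# Stand-in for haystack.dataclasses.StreamingChunk, which carries the
# streamed text in its `content` attribute.
@dataclass
class StreamingChunk:
    content: str


collected = []


def print_streaming_chunk(chunk: StreamingChunk) -> None:
    """Matches the Callable[[StreamingChunk], None] signature."""
    collected.append(chunk.content)
    print(chunk.content, end="", flush=True)


# Simulate a stream of three tokens arriving one at a time.
for token in ["Jupiter", " is", " largest."]:
    print_streaming_chunk(StreamingChunk(content=token))
```

Passing such a callback to the component (at init or run time) lets you display partial output as it arrives instead of waiting for the full reply.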