
GoogleAIGeminiGenerator

Generates text using multimodal Gemini models through Google AI Studio.

Basic Information

  • Type: haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `parts` | `Variadic[Union[str, ByteStream, Part]]` | | A heterogeneous list of strings, `ByteStream`, or `Part` objects. |
| `streaming_callback` | `Optional[Callable[[StreamingChunk], None]]` | `None` | A callback function that is called when a new token is received from the stream. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| `replies` | `List[str]` | A list of strings containing the generated responses. |

Overview

Work in Progress

Bear with us while we work on adding pipeline examples and the most common component connections.

Generates text using multimodal Gemini models through Google AI Studio.

#### Multimodal example

```python
import requests
from haystack.utils import Secret
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

BASE_URL = (
    "https://raw.githubusercontent.com/deepset-ai/haystack-core-integrations"
    "/main/integrations/google_ai/example_assets"
)

URLS = [
    f"{BASE_URL}/robot1.jpg",
    f"{BASE_URL}/robot2.jpg",
    f"{BASE_URL}/robot3.jpg",
    f"{BASE_URL}/robot4.jpg",
]
# Download each image and wrap it in a ByteStream so it can be passed as a part
images = [
    ByteStream(data=requests.get(url).content, mime_type="image/jpeg")
    for url in URLS
]

gemini = GoogleAIGeminiGenerator(model="gemini-2.0-flash", api_key=Secret.from_token("<MY_API_KEY>"))
result = gemini.run(parts=["What can you tell me about these robots?", *images])
for answer in result["replies"]:
    print(answer)
```

Usage Example

```yaml
components:
  GoogleAIGeminiGenerator:
    type: google_ai.src.haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
    init_parameters:
```
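As a hedged illustration, here is the same declaration with a few `init_parameters` filled in. The values are examples only, not defaults; by default the component reads the API key from the `GOOGLE_API_KEY` environment variable, so it is omitted here.

```yaml
components:
  GoogleAIGeminiGenerator:
    type: google_ai.src.haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
    init_parameters:
      model: gemini-2.0-flash
      generation_config:
        temperature: 0.2
        max_output_tokens: 256
```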

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `Secret` | `Secret.from_env_var('GOOGLE_API_KEY')` | Google AI Studio API key. |
| `model` | `str` | `gemini-2.0-flash` | Name of the model to use. For available models, see https://ai.google.dev/gemini-api/docs/models/gemini. |
| `generation_config` | `Optional[Union[GenerationConfig, Dict[str, Any]]]` | `None` | The generation configuration to use. This can be either a `GenerationConfig` object or a dictionary of parameters. For available parameters, see the `GenerationConfig` API reference. |
| `safety_settings` | `Optional[Dict[HarmCategory, HarmBlockThreshold]]` | `None` | The safety settings to use. A dictionary with `HarmCategory` as keys and `HarmBlockThreshold` as values. For more information, see the API reference. |
| `streaming_callback` | `Optional[Callable[[StreamingChunk], None]]` | `None` | A callback function that is called when a new token is received from the stream. The callback function accepts `StreamingChunk` as an argument. |
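For illustration, `generation_config` can be passed as a plain dictionary. The keys shown here (`temperature`, `top_p`, `max_output_tokens`) follow Google's `GenerationConfig` fields; the values are examples, not defaults:

```python
# Illustrative generation parameters passed as a plain dict
# (keys mirror Google's GenerationConfig fields).
generation_config = {
    "temperature": 0.2,        # lower values give more deterministic output
    "top_p": 0.95,             # nucleus-sampling threshold
    "max_output_tokens": 256,  # cap on the length of each reply
}

# The dict is then handed to the component at init time, e.g.:
# gemini = GoogleAIGeminiGenerator(
#     model="gemini-2.0-flash",
#     generation_config=generation_config,
# )
```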

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `parts` | `Variadic[Union[str, ByteStream, Part]]` | | A heterogeneous list of strings, `ByteStream`, or `Part` objects. |
| `streaming_callback` | `Optional[Callable[[StreamingChunk], None]]` | `None` | A callback function that is called when a new token is received from the stream. |