GoogleAIGeminiGenerator
Generates text using multimodal Gemini models through Google AI Studio.
Basic Information
- Type:
haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | A heterogeneous list of strings, ByteStream, or Part objects. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | A list of strings containing the generated responses. |
Overview
Work in Progress
Bear with us while we're working on adding pipeline examples and the most common component connections.
Multimodal example
```python
import requests
from haystack.utils import Secret
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

BASE_URL = (
    "https://raw.githubusercontent.com/deepset-ai/haystack-core-integrations"
    "/main/integrations/google_ai/example_assets"
)

URLS = [
    f"{BASE_URL}/robot1.jpg",
    f"{BASE_URL}/robot2.jpg",
    f"{BASE_URL}/robot3.jpg",
    f"{BASE_URL}/robot4.jpg",
]

# Download each image and wrap it in a ByteStream with the right MIME type
images = [
    ByteStream(data=requests.get(url).content, mime_type="image/jpeg")
    for url in URLS
]

gemini = GoogleAIGeminiGenerator(model="gemini-2.0-flash", api_key=Secret.from_token("<MY_API_KEY>"))
result = gemini.run(parts=["What can you tell me about these robots?", *images])
for answer in result["replies"]:
    print(answer)
```
Usage Example
```yaml
components:
  GoogleAIGeminiGenerator:
    type: google_ai.src.haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
    init_parameters:
```
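A filled-in version of the snippet above might look like the sketch below. The values are illustrative; the parameter names come from the Init Parameters table in this page, and `api_key` is omitted here because it defaults to the `GOOGLE_API_KEY` environment variable.

```yaml
components:
  GoogleAIGeminiGenerator:
    type: google_ai.src.haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
    init_parameters:
      model: gemini-2.0-flash
      generation_config:
        temperature: 0.2
        max_output_tokens: 512
```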
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('GOOGLE_API_KEY') | Google AI Studio API key. |
| model | str | gemini-2.0-flash | Name of the model to use. For available models, see https://ai.google.dev/gemini-api/docs/models/gemini |
| generation_config | Optional[Union[GenerationConfig, Dict[str, Any]]] | None | The generation configuration to use. This can either be a GenerationConfig object or a dictionary of parameters. For available parameters, see the GenerationConfig API reference. |
| safety_settings | Optional[Dict[HarmCategory, HarmBlockThreshold]] | None | The safety settings to use. A dictionary with HarmCategory as keys and HarmBlockThreshold as values. For more information, see the API reference. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
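As the table notes, `generation_config` can be passed as a plain dictionary instead of a `GenerationConfig` object. The sketch below builds such a dictionary; the keys shown (`temperature`, `max_output_tokens`, `top_p`) are common `GenerationConfig` fields, and the specific values are only illustrative.

```python
# Generation settings expressed as a plain dict (keys follow the
# GenerationConfig fields; adjust values to your use case).
generation_config = {
    "temperature": 0.2,        # lower values -> more deterministic replies
    "max_output_tokens": 512,  # cap on the length of each reply
    "top_p": 0.95,             # nucleus-sampling threshold
}

# The dict (or an equivalent GenerationConfig object) is passed at init time:
# gemini = GoogleAIGeminiGenerator(
#     model="gemini-2.0-flash",
#     generation_config=generation_config,
# )
print(sorted(generation_config))
```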
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | A heterogeneous list of strings, ByteStream or Part objects. | |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
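A `streaming_callback` passed to `run()` is invoked once per received chunk, which is useful for printing tokens as they arrive. The sketch below uses a minimal stand-in for `StreamingChunk` (the real class lives in `haystack.dataclasses` and exposes a `content` attribute) so the callback logic can be shown without a live API call.

```python
# Stand-in for haystack.dataclasses.StreamingChunk, which carries the
# newly streamed text in its `content` attribute.
class StreamingChunk:
    def __init__(self, content: str):
        self.content = content

collected = []

def on_chunk(chunk: StreamingChunk) -> None:
    # Print each token fragment as it arrives and keep a copy.
    collected.append(chunk.content)
    print(chunk.content, end="", flush=True)

# The callback would be passed at query time, e.g.:
# result = gemini.run(parts=["Tell me about robots"], streaming_callback=on_chunk)

# Simulate two chunks arriving from the stream:
for piece in (StreamingChunk("Hello, "), StreamingChunk("world!")):
    on_chunk(piece)
print()
print("".join(collected))
```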