VertexAIGeminiGenerator
VertexAIGeminiGenerator enables text generation using Google Gemini models.
Basic Information
- Type: haystack_integrations.components.generators.google_vertex.gemini.VertexAIGeminiGenerator
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | Prompt for the model. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | A list of generated text replies. |
Overview
Work in Progress
Bear with us while we're working on adding pipeline examples and the most common component connections.
VertexAIGeminiGenerator enables text generation using Google Gemini models.
Usage example:
```python
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator

gemini = VertexAIGeminiGenerator()
result = gemini.run(parts=["What is the most interesting thing you know?"])
for answer in result["replies"]:
    print(answer)

>>> 1. **The Origin of Life:** How and where did life begin? The answers to this ...
>>> 2. **The Unseen Universe:** The vast majority of the universe is ...
>>> 3. **Quantum Entanglement:** This eerie phenomenon in quantum mechanics allows ...
>>> 4. **Time Dilation:** Einstein's theory of relativity revealed that time can ...
>>> 5. **The Fermi Paradox:** Despite the vastness of the universe and the ...
>>> 6. **Biological Evolution:** The idea that life evolves over time through natural ...
>>> 7. **Neuroplasticity:** The brain's ability to adapt and change throughout life, ...
>>> 8. **The Goldilocks Zone:** The concept of the habitable zone, or the Goldilocks zone, ...
>>> 9. **String Theory:** This theoretical framework in physics aims to unify all ...
>>> 10. **Consciousness:** The nature of human consciousness and how it arises ...
```
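The component can also stream tokens as they are generated. The following is a minimal sketch, assuming Haystack's StreamingChunk dataclass and a simple callback that prints each chunk's content as it arrives; the prompt text is a placeholder:

```python
from haystack.dataclasses import StreamingChunk
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator


def print_chunk(chunk: StreamingChunk) -> None:
    # Print each piece of generated text as soon as it is received from the stream
    print(chunk.content, end="", flush=True)


# Pass the callback at initialization to enable streaming for every run
gemini = VertexAIGeminiGenerator(streaming_callback=print_chunk)
gemini.run(parts=["Give me a one-sentence fun fact about the universe."])
```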
Usage Example
```yaml
components:
  VertexAIGeminiGenerator:
    type: google_vertex.src.haystack_integrations.components.generators.google_vertex.gemini.VertexAIGeminiGenerator
    init_parameters:
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| project_id | Optional[str] | None | ID of the GCP project to use. By default, it is set during Google Cloud authentication. |
| model | str | gemini-2.0-flash | Name of the model to use. For available models, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models. |
| location | Optional[str] | None | The default location to use when making API calls. If not set, defaults to us-central1. |
| generation_config | Optional[Union[GenerationConfig, Dict[str, Any]]] | None | The generation config to use. Can be either a GenerationConfig object or a dictionary of parameters. Accepted fields are: temperature, top_p, top_k, candidate_count, max_output_tokens, stop_sequences. |
| safety_settings | Optional[Dict[HarmCategory, HarmBlockThreshold]] | None | The safety settings to use. See the documentation for HarmBlockThreshold and HarmCategory for more details. |
| system_instruction | Optional[Union[str, ByteStream, Part]] | None | Default system instruction to use for generating content. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
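For illustration, here is a minimal sketch of configuring the component in Python with a dictionary-based generation_config using the accepted fields listed above. The project_id, location, and config values are placeholders, not required settings:

```python
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator

# generation_config can be passed as a plain dictionary of the accepted fields
gemini = VertexAIGeminiGenerator(
    model="gemini-2.0-flash",
    project_id="my-gcp-project",  # placeholder; defaults to the project set during authentication
    location="us-central1",       # placeholder; defaults to us-central1 when unset
    generation_config={
        "temperature": 0.2,
        "top_p": 0.95,
        "max_output_tokens": 512,
        "stop_sequences": ["###"],
    },
)
```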
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| parts | Variadic[Union[str, ByteStream, Part]] | | Prompt for the model. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
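As a reference, this is a minimal sketch of supplying both run() parameters per call from Python rather than at initialization; the prompt text is a placeholder:

```python
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator

gemini = VertexAIGeminiGenerator()

# Both run() parameters can be set at query time: the prompt parts and an optional streaming callback
result = gemini.run(
    parts=["Summarize the theory of relativity in two sentences."],  # placeholder prompt
    streaming_callback=lambda chunk: print(chunk.content, end="", flush=True),
)
print(result["replies"])
```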