Use Google Gemini Models
Use the multimodal Gemini models in your pipelines through the Gemini API.
You can use the gemini-pro models in your pipelines.
Prerequisites
You need an active Google AI Studio API key to use Gemini models.
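If you want to confirm that your key works before adding it to deepset Cloud, you can call the Gemini API directly. This is an optional sketch using Google's google-generativeai Python SDK (an assumption on our side; deepset Cloud does not require it), with the key read from a GOOGLE_API_KEY environment variable:

import os
import google.generativeai as genai

# Assumes your Google AI Studio key is exported as GOOGLE_API_KEY.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Send a minimal request to confirm the key is active.
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Say hello.")
print(response.text)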
Use Gemini Models
First, connect deepset Cloud to Google AI through the Connections page:
- Click your initials in the top right corner and select Connections.
- Click Connect next to the provider.
- Enter your API key and submit it.
Then, add a component that uses a Gemini LLM to your pipeline:
- GoogleAIGeminiGenerator: Generates text using a Gemini model. It's often used in RAG pipelines.
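Before wiring the generator into a pipeline, you can try it on its own. The following is a minimal sketch, assuming the google-ai-haystack integration is installed and your Google AI Studio key is available in the GOOGLE_API_KEY environment variable (the generator's default):

from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

# Reads the API key from the GOOGLE_API_KEY environment variable by default.
generator = GoogleAIGeminiGenerator(model="gemini-pro")

# The generator takes prompt parts as input and returns a list of replies.
result = generator.run(parts=["Briefly explain what a RAG pipeline is."])
for reply in result["replies"]:
    print(reply)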
Usage Examples
This is an example of how to use a Gemini model in a query pipeline:
components:
  ...
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a technical expert.
        You answer questions truthfully based on provided documents.
        For each document check whether it is related to the question.
        Only use documents that are related to the question to answer it.
        Ignore documents that are not related to the question.
        If the answer exists in several documents, summarize them.
        Only answer based on the documents provided. Don't make things up.
        If the documents can't answer the question or you are unsure say: 'The answer can't be found in the text'.
        These are the documents:
        {% for document in documents %}
        Document[{{ loop.index }}]:
        {{ document.content }}
        {% endfor %}
        Question: {{question}}
        Answer:
  generator:
    type: haystack_integrations.components.generators.google_ai.gemini.GoogleAIGeminiGenerator
    init_parameters:
      model: gemini-pro
  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}
...
connections:
  ...
  - sender: prompt_builder.prompt
    receiver: generator.parts
  - sender: generator.replies
    receiver: answer_builder.replies
  ...
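In this configuration, the prompt built by prompt_builder is sent to the generator, and the generated replies are passed to answer_builder, which turns them into answers. If you want to prototype the same flow locally with Haystack before deploying the configuration above, here is a rough sketch under the same assumptions (google-ai-haystack installed, GOOGLE_API_KEY set); the deepset Cloud pipeline itself only needs the YAML configuration:

from haystack import Document, Pipeline
from haystack.components.builders import AnswerBuilder, PromptBuilder
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

# A shortened version of the prompt template from the configuration above.
template = """Answer the question based only on these documents:
{% for document in documents %}
Document[{{ loop.index }}]: {{ document.content }}
{% endfor %}
Question: {{ question }}
Answer:"""

pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(template=template))
pipeline.add_component("generator", GoogleAIGeminiGenerator(model="gemini-pro"))
pipeline.add_component("answer_builder", AnswerBuilder())

# Same connections as above: the prompt feeds the generator,
# and the generator's replies feed the answer builder.
pipeline.connect("prompt_builder.prompt", "generator.parts")
pipeline.connect("generator.replies", "answer_builder.replies")

question = "What does the warranty cover?"
documents = [Document(content="The warranty covers manufacturing defects for two years.")]
result = pipeline.run({
    "prompt_builder": {"documents": documents, "question": question},
    "answer_builder": {"query": question},
})
print(result["answer_builder"]["answers"][0].data)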