DeepsetAmazonBedrockVisionGenerator
Generate text using prompts containing text and images with large language models hosted on Amazon Bedrock.
Basic Information
- Pipeline type: Query
- Type: deepset_cloud_custom_nodes.generators.deepset_amazon_bedrock_vision_generator.DeepsetAmazonBedrockVisionGenerator
- Components it can connect with:
  - PromptBuilder: DeepsetAmazonBedrockVisionGenerator receives the prompt from PromptBuilder.
  - AnswerBuilder: DeepsetAmazonBedrockVisionGenerator sends the generated replies to DeepsetAnswerBuilder, which uses them to build GeneratedAnswer objects.
Inputs
Required Inputs
Name | Type | Description |
---|---|---|
prompt | String | The prompt with instructions for the model. |
images | List of Base64Image | A list of Base64Image objects representing the image content of the message. The base64-encoded images are passed to the large language model and used as images for text generation. |
Optional Inputs
Name | Type | Default | Description |
---|---|---|---|
generation_kwargs | Dictionary of string and any | None | Additional keyword arguments you want to pass to the generator. For details on supported parameters, check the documentation of the model. |
streaming_callback | Callable[StreamingChunk] | None | A callback function to handle streaming chunks. |
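For example, streaming can be enabled by setting streaming_callback in the component's init_parameters in the pipeline YAML. This is a minimal sketch; the callback path is deepset's built-in streaming handler referenced in the Parameters section:

```yaml
llm:
  type: deepset_cloud_custom_nodes.generators.deepset_amazon_bedrock_vision_generator.DeepsetAmazonBedrockVisionGenerator
  init_parameters:
    model: anthropic.claude-3-5-sonnet-20241022-v2:0
    # Enables streaming through deepset's built-in callback
    streaming_callback: deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback
```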
Outputs
Name | Type | Description |
---|---|---|
replies | List of strings | Generated responses. |
Overview
DeepsetAmazonBedrockVisionGenerator makes it possible to use models hosted on Amazon Bedrock through deepset's Amazon Bedrock account. This component only works with models that support multimodal inputs. For a full list of models, see the Amazon Bedrock documentation.
You can pass any text generation parameters valid for the underlying model directly to DeepsetAmazonBedrockVisionGenerator using the generation_kwargs parameter. For details on the parameters the Amazon Bedrock API supports, see the Amazon Bedrock documentation.
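As a sketch, model parameters can be passed through generation_kwargs in the pipeline YAML. The parameter names shown here (max_tokens, temperature) are common examples; the set of supported parameters depends on the model you use, so check the model's documentation:

```yaml
llm:
  type: deepset_cloud_custom_nodes.generators.deepset_amazon_bedrock_vision_generator.DeepsetAmazonBedrockVisionGenerator
  init_parameters:
    model: anthropic.claude-3-5-sonnet-20241022-v2:0
    generation_kwargs:
      max_tokens: 1024  # Maximum length of the generated reply
      temperature: 0    # Lower values make the output more deterministic
```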
Authentication
DeepsetAmazonBedrockVisionGenerator connects to deepset's Bedrock account without requiring you to pass any credentials. You can use models hosted on Bedrock right away.
Usage Example
This example uses the Claude 3.5 Sonnet model hosted on Amazon Bedrock to generate answers. It receives the prompt with documents from PromptBuilder and sends the generated replies to AnswerBuilder:
components:
bm25_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
embedding_dim: 1024
top_k: 20 # The number of results to return
fuzziness: 0
query_embedder:
type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
init_parameters:
normalize_embeddings: true
model: "BAAI/bge-m3"
embedding_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
embedding_dim: 1024
top_k: 20 # The number of results to return
document_joiner:
type: haystack.components.joiners.document_joiner.DocumentJoiner
init_parameters:
join_mode: concatenate
ranker:
type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
init_parameters:
model: "BAAI/bge-reranker-v2-m3"
top_k: 5
meta_field_grouping_ranker:
type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
init_parameters:
group_by: file_id
subgroup_by: null
sort_docs_by: split_id
image_downloader:
type: deepset_cloud_custom_nodes.augmenters.deepset_file_downloader.DeepsetFileDownloader
init_parameters:
file_extensions:
- ".pdf"
pdf_to_image:
type: deepset_cloud_custom_nodes.converters.pdf_to_image.DeepsetPDFDocumentToBase64Image
init_parameters:
detail: "high"
prompt_builder:
type: haystack.components.builders.prompt_builder.PromptBuilder
init_parameters:
template: |-
Answer the question briefly and precisely based on the pictures.
Give reasons for your answer.
When answering the question only provide references within the answer text.
Only use references in the form [NUMBER OF IMAGE] if you are using information from a image.
For example, if the first image is used in the answer add [1] and if the second image is used then use [2], etc.
Never name the images, but always enter a number in square brackets as a reference.
Question: {{ question }}
Answer:
required_variables: "*"
llm:
type: deepset_cloud_custom_nodes.generators.deepset_amazon_bedrock_vision_generator.DeepsetAmazonBedrockVisionGenerator
init_parameters:
model: anthropic.claude-3-5-sonnet-20241022-v2:0
aws_region_name: us-west-2
max_length: 10000
model_max_length: 200000
temperature: 0
answer_builder:
type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
init_parameters:
reference_pattern: acm
connections: # Defines how the components are connected
- sender: bm25_retriever.documents
receiver: document_joiner.documents
- sender: query_embedder.embedding
receiver: embedding_retriever.query_embedding
- sender: embedding_retriever.documents
receiver: document_joiner.documents
- sender: document_joiner.documents
receiver: ranker.documents
- sender: ranker.documents
receiver: meta_field_grouping_ranker.documents
- sender: meta_field_grouping_ranker.documents
receiver: image_downloader.documents
- sender: image_downloader.documents
receiver: pdf_to_image.documents
- sender: pdf_to_image.base64_images
receiver: llm.images
- sender: prompt_builder.prompt
receiver: llm.prompt
- sender: image_downloader.documents
receiver: answer_builder.documents
- sender: prompt_builder.prompt
receiver: answer_builder.prompt
- sender: llm.replies
receiver: answer_builder.replies
inputs: # Define the inputs for your pipeline
query: # These components will receive the query as input
- "bm25_retriever.query"
- "query_embedder.text"
- "ranker.query"
- "prompt_builder.question"
- "answer_builder.query"
filters: # These components will receive a potential query filter as input
- "bm25_retriever.filters"
- "embedding_retriever.filters"
outputs: # Defines the output of your pipeline
documents: "pdf_to_image.documents" # The output of the pipeline is the retrieved documents
answers: "answer_builder.answers" # The output of the pipeline is the generated answers
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
Parameter | Type | Possible values | Description |
---|---|---|---|
model | String | Default: None | The ID of the model to use. For model IDs, check Amazon Bedrock documentation. Required. |
system_prompt | String | Default: None | The system prompt to use. Required. |
streaming_callback | Callable[StreamingChunk] | deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback Default: None | A callback function to handle streaming chunks. It specifies if a Generator should stream. Required. |
generation_kwargs | Dictionary of string and any | Default: None | Additional keyword arguments to be passed to the model. Optional. |
Run Method Parameters
These are the parameters you can configure for the component's run()
method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
Parameters | Type | Possible Values | Description |
---|---|---|---|
prompt | String | | Instructions for the model. Required. |
images | List of Base64Image objects | | A list of Base64Image objects representing the image content of the message. The base64-encoded images are passed to the large language model and used as images for text generation. Required. |
streaming_callback | Callable[StreamingChunk] | Default: None | A callback function to handle streaming chunks. It specifies if a Generator should stream. To enable streaming, set streaming_callback to deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback. Optional. |
generation_kwargs | Dictionary | Default: None | Additional keyword arguments for text generation. These parameters override the parameters in pipeline configuration. For details on supported parameters, check the documentation of the model you're using. Optional. |
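As a sketch, run-method parameters such as generation_kwargs can be overridden at query time by addressing the component by its name in the pipeline (here llm, as in the usage example above). This is a hypothetical request body; for the exact request shape, see Modify Pipeline Parameters at Query Time:

```json
{
  "queries": ["What does the chart on page 3 show?"],
  "params": {
    "llm": {
      "generation_kwargs": {
        "temperature": 0.2
      }
    }
  }
}
```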