AmazonBedrockChatGenerator
Use chat completion models hosted on Amazon Bedrock.
Basic Information
- Type: haystack_integrations.components.generators.amazon_bedrock.chat.chat_generator.AmazonBedrockChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: AmazonBedrockChatGenerator receives a list of chat messages with instructions from ChatPromptBuilder.
  - AnswerBuilder: AmazonBedrockChatGenerator sends the generated replies to AnswerBuilder, which uses them to build the final output (see the connection sketch below).
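In pipeline YAML, these connections typically look like the following minimal sketch. Component names are illustrative; the full pipeline example later on this page uses the same wiring, including an OutputAdapter between the generator and the answer builder:

```yaml
connections:
  - sender: ChatPromptBuilder.prompt
    receiver: AmazonBedrockChatGenerator.messages
  - sender: AmazonBedrockChatGenerator.replies
    receiver: OutputAdapter.replies   # converts the ChatMessage replies for the answer builder
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
```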
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage objects that form the chat history. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function to invoke when the model starts streaming responses. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments passed to the model. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for the model to call. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | Responses generated by the model. |
Overview
Amazon Bedrock is a fully managed service that makes state-of-the-art language models available for use through a unified API. To learn more, see Amazon Bedrock documentation.
With AmazonBedrockChatGenerator, you can use chat completion models from providers such as AI21 Labs, Anthropic, Cohere, Meta, and Amazon.
Authentication
To use this component, connect deepset with Amazon Bedrock first. You'll need the following credentials, which are wired into the pipeline as shown in the sketch after this list:
- The region name
- Access key ID
- Secret access key
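Once the connection exists, the component typically reads these credentials from environment variables wired in as secrets. A minimal sketch of the relevant init_parameters, matching the full example later on this page:

```yaml
init_parameters:
  aws_access_key_id:
    type: env_var
    env_vars:
      - AWS_ACCESS_KEY_ID
    strict: false
  aws_secret_access_key:
    type: env_var
    env_vars:
      - AWS_SECRET_ACCESS_KEY
    strict: false
  aws_region_name:
    type: env_var
    env_vars:
      - AWS_DEFAULT_REGION
    strict: false
```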
Connection Instructions
- Click your profile icon in the top right corner and choose Integrations.
- Click Connect next to the provider.
- Enter your credentials and submit them.
For a detailed explanation, see Use Amazon Bedrock and SageMaker Models.
Usage Example
Initializing the Component
```yaml
components:
  AmazonBedrockChatGenerator:
    type: haystack_integrations.components.generators.amazon_bedrock.chat.chat_generator.AmazonBedrockChatGenerator
    init_parameters:
```
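A filled-in configuration might look like this. The model ID and generation settings are illustrative: generation_kwargs is passed through to the model, and for Bedrock's Converse API the inference parameters use names such as maxTokens and temperature (check your model's documentation for the exact keys it supports):

```yaml
components:
  AmazonBedrockChatGenerator:
    type: haystack_integrations.components.generators.amazon_bedrock.chat.chat_generator.AmazonBedrockChatGenerator
    init_parameters:
      model: amazon.nova-pro-v1:0   # example model ID, also used in the pipeline below
      generation_kwargs:
        maxTokens: 512     # illustrative: cap on generated tokens
        temperature: 0.2   # illustrative: lower values give more deterministic output
```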
Using the Component in a Pipeline
This is an example of a RAG chat pipeline with AmazonBedrockChatGenerator. Note that it receives instructions from ChatPromptBuilder and needs an OutputAdapter to pass the generated replies to DeepsetAnswerBuilder:
```yaml
components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0
  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2
  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8
  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id
  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm
  AmazonBedrockChatGenerator:
    type: haystack_integrations.components.generators.amazon_bedrock.chat.chat_generator.AmazonBedrockChatGenerator
    init_parameters:
      model: amazon.nova-pro-v1:0
      aws_access_key_id:
        type: env_var
        env_vars:
          - AWS_ACCESS_KEY_ID
        strict: false
      aws_secret_access_key:
        type: env_var
        env_vars:
          - AWS_SECRET_ACCESS_KEY
        strict: false
      aws_session_token:
        type: env_var
        env_vars:
          - AWS_SESSION_TOKEN
        strict: false
      aws_region_name:
        type: env_var
        env_vars:
          - AWS_DEFAULT_REGION
        strict: false
      aws_profile_name:
        type: env_var
        env_vars:
          - AWS_PROFILE
        strict: false
      generation_kwargs:
      stop_words:
      streaming_callback:
      boto3_config:
      tools:
  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:
  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false
connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: ChatPromptBuilder.prompt
    receiver: AmazonBedrockChatGenerator.messages
  - sender: AmazonBedrockChatGenerator.replies
    receiver: OutputAdapter.replies
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"
outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers
max_runs_per_component: 100
metadata: {}
```
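A closer look at the OutputAdapter step from the example above: AmazonBedrockChatGenerator outputs its replies as ChatMessage objects, while DeepsetAnswerBuilder's replies input expects List[str], so the adapter converts the first generated reply into that shape:

```yaml
OutputAdapter:
  type: haystack.components.converters.output_adapter.OutputAdapter
  init_parameters:
    template: '{{ replies[0] }}'   # take the first generated reply
    output_type: List[str]         # DeepsetAnswerBuilder expects a list of strings
```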
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | | The name of the model to use. |
| aws_access_key_id | Optional[Secret] | Secret.from_env_var('AWS_ACCESS_KEY_ID', strict=False) | The AWS access key ID. |
| aws_secret_access_key | Optional[Secret] | Secret.from_env_var('AWS_SECRET_ACCESS_KEY', strict=False) | The AWS secret access key. |
| aws_session_token | Optional[Secret] | Secret.from_env_var('AWS_SESSION_TOKEN', strict=False) | The AWS session token. |
| aws_region_name | Optional[Secret] | Secret.from_env_var('AWS_DEFAULT_REGION', strict=False) | The AWS region name. Make sure the region you set supports Amazon Bedrock. |
| aws_profile_name | Optional[Secret] | Secret.from_env_var('AWS_PROFILE', strict=False) | The AWS profile name. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments passed to the model. You can find the model-specific arguments in Amazon Bedrock's documentation. |
| stop_words | Optional[List[str]] | None | A list of stop words that halt text generation when encountered. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| boto3_config | Optional[Dict[str, Any]] | None | The configuration for the boto3 client. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for the model to call. |
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage objects that form the chat history. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments passed to the model. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for the model to call. |
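For example, to override generation settings for a single query, you can pass run parameters in the request. This is a hedged sketch of such a params payload, shown as YAML for readability (the deepset API accepts the JSON equivalent; the exact payload shape depends on your API version, so treat this as illustrative):

```yaml
params:
  AmazonBedrockChatGenerator:     # component name as defined in the pipeline
    generation_kwargs:
      temperature: 0.7   # illustrative override, applies to this query only
      maxTokens: 256     # illustrative
```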