
AmazonBedrockChatGenerator

Use chat completion models hosted on Amazon Bedrock. This component supports models from providers such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Amazon.

Key Features

  • Supports chat completion models from multiple providers on Amazon Bedrock.
  • Accepts a list of ChatMessage objects as input for multi-turn conversations.
  • Supports streaming responses through a configurable callback function.
  • Passes custom generation parameters to the underlying model.
  • Supports tool use (function calling) for compatible models.
  • Works with Haystack Platform's Amazon Bedrock integration.
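
To illustrate the streaming-callback contract: the callback receives one chunk at a time and can print or accumulate its text. This is a minimal sketch; `SimpleChunk` is a stand-in for Haystack's `StreamingChunk`, used here only so the example is self-contained:

```python
from dataclasses import dataclass

# Stand-in for haystack.dataclasses.StreamingChunk (illustration only).
@dataclass
class SimpleChunk:
    content: str

collected: list[str] = []

def streaming_callback(chunk: SimpleChunk) -> None:
    """Called once per streamed chunk; accumulate the partial text."""
    collected.append(chunk.content)

# Simulate a model streaming three chunks.
for piece in ("Am", "azon ", "Bedrock"):
    streaming_callback(SimpleChunk(content=piece))

print("".join(collected))  # → Amazon Bedrock
```

The real callback has the same shape: a callable taking a single `StreamingChunk` and returning `None`.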

Configuration

Authentication

To use this component, first connect Haystack Platform to Amazon Bedrock. You need the AWS region name, access key ID, and secret access key. For connection instructions, see Use Amazon Bedrock and SageMaker Models.

  1. Drag the AmazonBedrockChatGenerator component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Enter the model name, for example amazon.nova-pro-v1:0.
  4. Go to the Advanced tab to configure AWS credentials, generation parameters, streaming callback, and tools.

Connections

AmazonBedrockChatGenerator accepts a list of ChatMessage objects (messages) as input, along with an optional streaming_callback and generation_kwargs.

Connect ChatPromptBuilder to the messages input to provide formatted chat prompts. Connect the replies output to AnswerBuilder or an OutputAdapter to build the final response.
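
In pipeline YAML, the two most common connections look like this (a minimal excerpt; the component names match the usage example below):

```yaml
connections:
  - sender: ChatPromptBuilder.prompt
    receiver: AmazonBedrockChatGenerator.messages
  - sender: AmazonBedrockChatGenerator.replies
    receiver: OutputAdapter.replies
```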

Usage Example

This example shows a RAG chat pipeline with AmazonBedrockChatGenerator. The generator receives its prompt messages from ChatPromptBuilder, and an OutputAdapter converts the generated replies into the format DeepsetAnswerBuilder expects:

```yaml
components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0

  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2

  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return

  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

  AmazonBedrockChatGenerator:
    type: haystack_integrations.components.generators.amazon_bedrock.chat.chat_generator.AmazonBedrockChatGenerator
    init_parameters:
      model: amazon.nova-pro-v1:0
      aws_access_key_id:
        type: env_var
        env_vars:
          - AWS_ACCESS_KEY_ID
        strict: false
      aws_secret_access_key:
        type: env_var
        env_vars:
          - AWS_SECRET_ACCESS_KEY
        strict: false
      aws_session_token:
        type: env_var
        env_vars:
          - AWS_SESSION_TOKEN
        strict: false
      aws_region_name:
        type: env_var
        env_vars:
          - AWS_DEFAULT_REGION
        strict: false
      aws_profile_name:
        type: env_var
        env_vars:
          - AWS_PROFILE
        strict: false
      generation_kwargs:
      stop_words:
      streaming_callback:
      boto3_config:
      tools:

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false

connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: ChatPromptBuilder.prompt
    receiver: AmazonBedrockChatGenerator.messages
  - sender: AmazonBedrockChatGenerator.replies
    receiver: OutputAdapter.replies
  - sender: OutputAdapter.output
    receiver: answer_builder.replies

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}
```

Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | | A list of ChatMessage objects that form the chat history. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function to invoke when the model starts streaming responses. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments passed to the model. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for the model to call. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| replies | List[ChatMessage] | | Responses generated by the model. |
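
To show the shape of this output: `replies` is a list of chat messages, and downstream components typically read the first one. The sketch below uses a minimal stand-in class rather than the real `haystack.dataclasses.ChatMessage`:

```python
from dataclasses import dataclass

# Minimal stand-in for haystack's ChatMessage (illustration only).
@dataclass
class StubChatMessage:
    text: str

# A generator's output: a list with one or more reply messages.
replies = [StubChatMessage(text="The answer, grounded in the retrieved documents.")]

# An OutputAdapter template such as '{{ replies[0] }}' reads the first reply;
# its text is what reaches the answer builder.
first_reply_text = replies[0].text
print(first_reply_text)
```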

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | | The name of the model to use. |
| aws_access_key_id | Optional[Secret] | Secret.from_env_var('AWS_ACCESS_KEY_ID', strict=False) | The AWS access key ID. |
| aws_secret_access_key | Optional[Secret] | Secret.from_env_var('AWS_SECRET_ACCESS_KEY', strict=False) | The AWS secret access key. |
| aws_session_token | Optional[Secret] | Secret.from_env_var('AWS_SESSION_TOKEN', strict=False) | The AWS session token. |
| aws_region_name | Optional[Secret] | Secret.from_env_var('AWS_DEFAULT_REGION', strict=False) | The AWS region name. Make sure the region you set supports Amazon Bedrock. |
| aws_profile_name | Optional[Secret] | Secret.from_env_var('AWS_PROFILE', strict=False) | The AWS profile name. |
| max_length | Optional[int] | None | The maximum length of the generated text. You can also set this in kwargs using the model-specific parameter name. |
| truncate | Optional[bool] | None | Deprecated. This parameter no longer has any effect. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function called when a new token is received from the stream. It accepts StreamingChunk as an argument. |
| boto3_config | Optional[Dict[str, Any]] | None | The configuration for the boto3 client. |
| model_family | Optional[MODEL_FAMILIES] | None | The model family to use. If not provided, the model adapter is selected based on the model name. |
| kwargs | Any | | Additional model-specific keyword arguments passed to the model. You can find them in the Amazon Bedrock documentation for your model. |
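
As a sketch of what generation parameters can look like: with Converse-style Bedrock models, `generation_kwargs` typically carries inference settings such as temperature or a token cap. The keys below are illustrative assumptions, not a definitive list; valid keys depend on the model, so check the Amazon Bedrock documentation for your model before relying on them:

```python
# Hypothetical values; valid keys depend on the Bedrock model you use.
generation_kwargs = {
    "temperature": 0.2,                 # lower = more deterministic output
    "maxTokens": 512,                   # cap on generated tokens
    "topP": 0.9,                        # nucleus sampling threshold
    "stopSequences": ["Observation:"],  # stop generation at these strings
}

# The component forwards this dictionary to the underlying model at run time.
print(sorted(generation_kwargs))
```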

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | | The list of ChatMessage objects to generate a response for. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments passed to the generator. |