# DeepsetAmazonBedrockChatGenerator

Generate chat responses using Amazon Bedrock models through the deepset integration.
## Basic Information

- Type: `deepset_cloud_custom_nodes.generators.chat.deepset_amazon_bedrock_chat_generator.DeepsetAmazonBedrockChatGenerator`
- Components it can connect with:
  - ChatPromptBuilder: DeepsetAmazonBedrockChatGenerator receives chat messages from a prompt builder.
  - AnswerBuilder: DeepsetAmazonBedrockChatGenerator sends replies to an answer builder.
## Inputs

| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | Required | A list of chat messages to send to the model. |
## Outputs

| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | Generated chat message responses from the model. |
## Overview

Use DeepsetAmazonBedrockChatGenerator to generate chat responses with Amazon Bedrock models hosted on deepset's Amazon Bedrock account. The component provides access to the foundation models available through Amazon Bedrock, including the Claude, Llama, and Titan model families.
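The component's contract is simple: chat messages in, chat replies out. The sketch below illustrates that shape with a minimal stand-in dataclass (a hypothetical placeholder, not Haystack's actual `ChatMessage` class):

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    """Hypothetical stand-in for Haystack's ChatMessage dataclass."""
    role: str
    text: str


# Input: a list of chat messages, typically produced by a ChatPromptBuilder
messages = [
    ChatMessage(role="system", text="You are a helpful assistant."),
    ChatMessage(role="user", text="What is Amazon Bedrock?"),
]

# Output: the generator returns a dict with a "replies" list of assistant messages
result = {
    "replies": [ChatMessage(role="assistant", text="Amazon Bedrock is a managed service for foundation models.")]
}
```

In a pipeline, you rarely construct these objects yourself: the prompt builder assembles `messages`, and the answer builder consumes `replies`.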
## Usage Example

This example RAG pipeline uses DeepsetAmazonBedrockChatGenerator with an Amazon Bedrock model:
```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'default'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20
      fuzziness: 0
  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2
  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'default'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20
  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8
  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id
  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm
  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _role: system
          _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents. Do not use your own knowledge."
        - _role: user
          _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}]:\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\nAnswer:"
      variables:
      required_variables:
  generator:
    type: deepset_cloud_custom_nodes.generators.chat.deepset_amazon_bedrock_chat_generator.DeepsetAmazonBedrockChatGenerator
    init_parameters:
      model: anthropic.claude-3-sonnet-20240229-v1:0
      aws_region_name:
        type: env_var
        env_vars:
          - AWS_DEFAULT_REGION
        strict: false
      generation_kwargs:
        max_tokens: 1000
        temperature: 0.7
      streaming_callback:
      boto3_config:
      tools:
connections:
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: ChatPromptBuilder.prompt
    receiver: generator.messages
  - sender: generator.replies
    receiver: answer_builder.replies
inputs:
  query:
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters:
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"
outputs:
  documents: "meta_field_grouping_ranker.documents"
  answers: "answer_builder.answers"
max_runs_per_component: 100
metadata: {}
```
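At query time, the ChatPromptBuilder in this pipeline renders its Jinja template into the user message that the generator receives. As a rough stdlib-only illustration of that rendering (a hand-written approximation, not Haystack's actual Jinja engine):

```python
def render_user_prompt(documents: list[str], query: str) -> str:
    """Approximate the Jinja template from the ChatPromptBuilder config above."""
    # {% for document in documents %} ... {% endfor %}
    body = "".join(
        f"\nDocument [{i}]:\n{content}\n"
        for i, content in enumerate(documents, start=1)
    )
    return f"Provided documents:\n{body}\nQuestion: {query}\nAnswer:"


prompt = render_user_prompt(
    ["Amazon Bedrock is a managed service."],
    "What is Amazon Bedrock?",
)
```

Each retrieved document is numbered via `loop.index`, which is what lets the answer builder's `reference_pattern: acm` match citations like `[1]` back to their source documents.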
## Parameters

### Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | Required | The Amazon Bedrock model identifier to use for generation. |
| aws_region_name | Optional[str] | None | The AWS Region where the Bedrock service is accessed. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional generation parameters, such as max_tokens, temperature, and top_p. |
| streaming_callback | Optional[Callable] | None | A callback function invoked for each new token received during streaming. |
| boto3_config | Optional[Dict[str, Any]] | None | Configuration for the boto3 client. |
| tools | Optional[List[Tool]] | None | A list of tools the model can use for function calling. |
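Bedrock chat models are typically invoked through the Converse API, where options like those in `generation_kwargs` map onto `inferenceConfig` fields. The helper below is a hypothetical sketch of that mapping (the camelCase field names follow the public Converse API; the function itself is not part of the component):

```python
def build_inference_config(generation_kwargs: dict) -> dict:
    """Map common generation_kwargs names onto Converse-style inferenceConfig keys."""
    key_map = {
        "max_tokens": "maxTokens",
        "temperature": "temperature",
        "top_p": "topP",
    }
    return {key_map[k]: v for k, v in generation_kwargs.items() if k in key_map}


# The generation_kwargs used in the pipeline example above
config = build_inference_config({"max_tokens": 1000, "temperature": 0.7})
```

This is only meant to show why `generation_kwargs` uses snake_case names even though the underlying API expects camelCase; the component handles the translation for you.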
### Run Method Parameters

These are the parameters you can configure for the component's run() method. You can pass them at query time through the API, in Playground, or when running a job.

| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | Required | A list of chat messages to send to the model. |
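When calling the pipeline over the API, the query and optional filters travel in the request body and are routed to the components listed under `inputs` in the pipeline YAML. The sketch below assembles such a request body; the endpoint path and body shape here are assumptions to illustrate the idea, so verify them against the deepset Cloud API reference for your workspace:

```python
import json


def build_search_request(workspace: str, pipeline: str, query: str, filters=None):
    """Hypothetical helper: assemble a query-time search request."""
    # Assumed endpoint shape; confirm against the deepset Cloud API reference.
    url = f"/api/v1/workspaces/{workspace}/pipelines/{pipeline}/search"
    body = {"queries": [query]}
    if filters is not None:
        body["filters"] = filters
    return url, json.dumps(body)


url, body = build_search_request("my-workspace", "rag-pipeline", "What is Amazon Bedrock?")
```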