
AnthropicVertexChatGenerator

Generate text using state-of-the-art Claude LLMs through the Anthropic Vertex AI API.

Basic Information

  • Type: anthropic.src.haystack_integrations.components.generators.anthropic.chat.vertex_chat_generator.AnthropicVertexChatGenerator
  • Components it can connect with:
    • ChatPromptBuilder: AnthropicVertexChatGenerator receives a rendered prompt from ChatPromptBuilder.
    • DeepsetAnswerBuilder: AnthropicVertexChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter (see Usage Examples below).

Inputs

  • messages (List[ChatMessage]): A list of ChatMessage instances representing the input messages.
  • streaming_callback (Optional[StreamingCallbackT], default: None): A callback function that is called when a new token is received from the stream.
  • generation_kwargs (Optional[Dict[str, Any]], default: None): Optional arguments to pass to the Anthropic generation endpoint.
  • tools (Optional[Union[List[Tool], Toolset]], default: None): A list of Tool objects or a Toolset that the model can use. Each tool should have a unique name. If set, it overrides the tools parameter set during component initialization.
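When streaming is enabled, the generator invokes streaming_callback once per chunk as output arrives. The sketch below shows a minimal token-accumulating callback; the StreamingChunk class here is a stand-in defined only so the example is self-contained (in a real pipeline, use the actual class from haystack.dataclasses):

```python
from dataclasses import dataclass


# Stand-in for haystack.dataclasses.StreamingChunk, defined here only to keep
# the sketch self-contained. In a real pipeline, import the real class.
@dataclass
class StreamingChunk:
    content: str


class TokenCollector:
    """Accumulates streamed tokens so the full reply can be read afterwards."""

    def __init__(self) -> None:
        self.tokens: list[str] = []

    def __call__(self, chunk: StreamingChunk) -> None:
        # The generator calls this once per streamed chunk.
        self.tokens.append(chunk.content)

    @property
    def text(self) -> str:
        return "".join(self.tokens)


collector = TokenCollector()
# Simulate the generator streaming three chunks:
for piece in ["Hel", "lo", ", world"]:
    collector(StreamingChunk(content=piece))
print(collector.text)  # → Hello, world
```

An instance like this (or any callable with the same signature) can be passed as the streaming_callback argument.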

Outputs

  • replies (List[ChatMessage]): The responses from the model.

Overview

AnthropicVertexChatGenerator supports models such as Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku, accessible through the Vertex AI API endpoint.

To use AnthropicVertexChatGenerator, you must have a GCP project with Vertex AI enabled. Additionally, ensure that the desired Anthropic model is activated in the Vertex AI Model Garden.

Before making requests, you may need to authenticate with GCP using gcloud auth login. For more details, refer to the [Anthropic guide](https://docs.anthropic.com/en/api/claude-on-vertex-ai).
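A typical local authentication flow looks like this (the project ID is illustrative; use the GCP project that has Vertex AI enabled):

```shell
# Authenticate your user account with Google Cloud
gcloud auth login

# Set application-default credentials, which client libraries pick up
gcloud auth application-default login

# Point gcloud at your project (illustrative project ID)
gcloud config set project my-gcp-project
```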

You can pass any valid text generation parameters of the Anthropic Messages API to the AnthropicVertex API. Provide them directly to the component through the generation_kwargs parameter. For the list of supported parameters, refer to the Anthropic Messages API documentation.
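For example, sampling parameters can be set through generation_kwargs in the component's init_parameters. The values below are illustrative, not recommendations:

```yaml
AnthropicVertexChatGenerator:
  type: haystack_integrations.components.generators.anthropic.chat.vertex_chat_generator.AnthropicVertexChatGenerator
  init_parameters:
    model: claude-3-5-sonnet@20240620
    generation_kwargs:
      max_tokens: 1024
      temperature: 0.2
      top_p: 0.9
      stop_sequences:
        - "###"
```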

Authentication

To use this component, connect deepset to Anthropic on the Integrations page first:

Connection Instructions

  1. Click your profile icon in the top right corner and choose Integrations.
  2. Click Connect next to the provider.
  3. Enter your API key and submit it.

You also need a GCP project with Vertex AI enabled. Create two secrets, named "REGION" and "PROJECT_ID", that hold your GCP region and project ID, respectively. For details on secrets, see Add a secret.

Usage Example

Initializing the Component

components:
  AnthropicVertexChatGenerator:
    type: anthropic.src.haystack_integrations.components.generators.anthropic.chat.vertex_chat_generator.AnthropicVertexChatGenerator
    init_parameters:
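A more complete initialization might look like the following. The model, region, and project ID values are illustrative; region and project_id typically come from the REGION and PROJECT_ID secrets described above:

```yaml
components:
  AnthropicVertexChatGenerator:
    type: anthropic.src.haystack_integrations.components.generators.anthropic.chat.vertex_chat_generator.AnthropicVertexChatGenerator
    init_parameters:
      region: us-central1            # illustrative
      project_id: my-gcp-project     # illustrative
      model: claude-3-5-sonnet@20240620
      ignore_tools_thinking_messages: true
```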

Using the Component in a Pipeline

This is an example of a RAG pipeline with AnthropicVertexChatGenerator:

components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0

  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2

  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return

  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false

  AnthropicVertexChatGenerator:
    type: haystack_integrations.components.generators.anthropic.chat.vertex_chat_generator.AnthropicVertexChatGenerator
    init_parameters:
      region:
      project_id:
      model: claude-3-5-sonnet@20240620
      streaming_callback:
      generation_kwargs:
      ignore_tools_thinking_messages: true
      tools:

connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: AnthropicVertexChatGenerator.replies
    receiver: OutputAdapter.replies
  - sender: ChatPromptBuilder.prompt
    receiver: AnthropicVertexChatGenerator.messages

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

  • region (str, default: "us-central1"): The region where the Anthropic model is deployed.
  • project_id (str): The GCP project ID where the Anthropic model is deployed.
  • model (str, default: claude-sonnet-4@20250514): The name of the model to use.
  • streaming_callback (Optional[Callable[[StreamingChunk], None]], default: None): A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument.
  • generation_kwargs (Optional[Dict[str, Any]], default: None): Other parameters to use for the model. These parameters are all sent directly to the AnthropicVertex endpoint. See the Anthropic documentation for more details. Supported generation_kwargs parameters are:
    • system: The system message to be passed to the model.
    • max_tokens: The maximum number of tokens to generate.
    • metadata: A dictionary of metadata to be passed to the model.
    • stop_sequences: A list of strings at which the model should stop generating.
    • temperature: The temperature to use for sampling.
    • top_p: The top_p value to use for nucleus sampling.
    • top_k: The top_k value to use for top-k sampling.
    • extra_headers: A dictionary of extra headers to be passed to the model (for example, for beta features).
  • ignore_tools_thinking_messages (bool, default: True): Anthropic's approach to tools (function calling) resolution involves "chain of thought" messages before returning the actual function names and parameters in a message. If ignore_tools_thinking_messages is True, the generator drops these so-called thinking messages when tool use is detected. See the Anthropic tools documentation for more details.
  • tools (Optional[List[Tool]], default: None): A list of Tool objects that the model can use. Each tool should have a unique name.
  • timeout (Optional[float], default: None): Timeout for Anthropic client calls. If not set, the Anthropic client's default is used.
  • max_retries (Optional[int], default: None): Maximum number of retries to attempt for failed requests. If not set, the Anthropic client's default is used.

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

ParameterTypeDefaultDescription