
AnthropicGenerator

Generate text using large language models (LLMs) by Anthropic.

Basic Information

  • Type: haystack_integrations.components.generators.anthropic.generator.AnthropicGenerator
  • Components it can connect with:
    • PromptBuilder: AnthropicGenerator can receive instructions from PromptBuilder.
    • DeepsetAnswerBuilder: AnthropicGenerator can send generated replies to DeepsetAnswerBuilder, which uses them to return answers with references.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | | The instructions for the model. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for text generation. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | An optional callback function to handle streaming chunks. |
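
A `streaming_callback` is invoked once per chunk as the model streams its reply. Below is a minimal sketch of such a callback; `StreamingChunk` here is a hypothetical stand-in for `haystack.dataclasses.StreamingChunk` (the real class carries more fields than `content`), and the loop simulates a streaming model:

```python
from dataclasses import dataclass
from typing import Callable, List


# Hypothetical stand-in for haystack.dataclasses.StreamingChunk;
# `content` is the field a callback typically consumes.
@dataclass
class StreamingChunk:
    content: str


def collect_chunks(sink: List[str]) -> Callable[[StreamingChunk], None]:
    """Build a callback that appends each chunk's text to `sink`."""
    def on_chunk(chunk: StreamingChunk) -> None:
        sink.append(chunk.content)
    return on_chunk


# Simulate a model streaming a reply in three pieces.
pieces: List[str] = []
callback = collect_chunks(pieces)
for text in ["Claude ", "models ", "stream tokens."]:
    callback(StreamingChunk(content=text))

full_reply = "".join(pieces)
```

Joining the collected pieces reconstructs the full reply, which is what the component returns in `replies` when streaming completes.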

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| replies | List[str] | A list of generated replies. |
| meta | List[Dict[str, Any]] | A list of metadata dictionaries, one for each reply. |

Overview

For a complete list of models that work with this generator, see the Anthropic documentation. Although Anthropic natively supports a much richer messaging API, this component intentionally simplifies it so that the main input and output interface is string-based. For more complete support of that API, consider using AnthropicChatGenerator.
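
The string-based contract can be illustrated with a small stub. This is a hypothetical stand-in, not the real component (which calls the Anthropic API and therefore needs an API key); it only mirrors the shape of the inputs and outputs documented above:

```python
from typing import Any, Dict, List


class FakeAnthropicGenerator:
    """Stub with the same string-based contract as AnthropicGenerator:
    run() takes a prompt string and returns `replies` and `meta` lists."""

    def __init__(self, model: str = "claude-sonnet-4-20250514"):
        self.model = model

    def run(self, prompt: str) -> Dict[str, List[Any]]:
        # A real call would go to the Anthropic API; here we just echo.
        reply = f"(reply to: {prompt})"
        meta = {"model": self.model}  # illustrative metadata only
        return {"replies": [reply], "meta": [meta]}


result = FakeAnthropicGenerator().run(prompt="What is RAG?")
```

Whatever the model returns, `replies` is always a list of plain strings and `meta` a parallel list of dictionaries, which is why the component pairs naturally with PromptBuilder upstream and DeepsetAnswerBuilder downstream.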

Authentication

To use this component, first connect deepset with Anthropic. You'll need an Anthropic API key to do this.

Add Workspace-Level Integration

  1. Click your profile icon and choose Settings.
  2. Go to Workspace>Integrations.
  3. Find the provider you want to connect and click Connect next to it.
  4. Enter the API key and any other required details.
  5. Click Connect. You can use this integration in pipelines and indexes in the current workspace.

Add Organization-Level Integration

  1. Click your profile icon and choose Settings.
  2. Go to Organization>Integrations.
  3. Find the provider you want to connect and click Connect next to it.
  4. Enter the API key and any other required details.
  5. Click Connect. You can use this integration in pipelines and indexes in all workspaces in the current organization.
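
In pipeline YAML, the key is then referenced as an environment-variable secret (a `type: env_var` block, as in the usage example below) rather than stored inline. The resolution logic behind such a block can be sketched in plain Python; `resolve_env_secret` is a simplified, hypothetical stand-in for Haystack's `Secret.from_env_var`:

```python
import os
from typing import List, Optional


def resolve_env_secret(env_vars: List[str], strict: bool = True) -> Optional[str]:
    """Return the value of the first environment variable that is set.
    With strict=False, a missing variable yields None instead of an error."""
    for name in env_vars:
        value = os.environ.get(name)
        if value is not None:
            return value
    if strict:
        raise ValueError(f"None of {env_vars} are set")
    return None


os.environ["ANTHROPIC_API_KEY"] = "sk-ant-example"  # for demonstration only
key = resolve_env_secret(["ANTHROPIC_API_KEY"], strict=False)
```

Setting `strict: false` is what lets a pipeline validate even when the variable is absent at design time.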

Usage Example

Initializing the Component

components:
  AnthropicGenerator:
    type: haystack_integrations.components.generators.anthropic.generator.AnthropicGenerator
    init_parameters:

Using the Component in a Pipeline

This is a RAG pipeline that uses Claude Sonnet 4:

components:
  retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.open_search_hybrid_retriever.OpenSearchHybridRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: ''
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0

  embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/multilingual-e5-base

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: svalabs/cross-electra-ms-marco-german-uncased
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      required_variables: "*"
      template: |-
        You are a technical expert.
        You answer questions truthfully based on the documents provided.
        If the answer is contained in several documents, summarize them.
        Ignore documents that do not contain an answer to the question.
        Answer only on the basis of the documents provided. Do not make up facts.
        If the document contains no information on the question, say so.
        Always use references in the form [DOCUMENT NUMBER] when you use information from a document, for example [3] for document [3].
        Never name the documents; give only a number in square brackets as a reference.
        The reference may only refer to the number that appears in square brackets after the passage.
        Otherwise, do not use brackets in your answer and give ONLY the number of the document, without mentioning the word document.
        Give a precise, exact, and structured answer without repeating the question.

        Here are the documents:
        {%- if documents|length > 0 %}
        {%- for document in documents %}
        Document [{{ loop.index }}]:
        Source file name: {{ document.meta.file_name }}
        {{ document.content }}
        {% endfor -%}
        {%- else %}
        No documents found.
        Say "Unfortunately, no matching documents were found. Please adjust the filters or try a rephrased question."
        {% endif %}

        Question: {{ question }}
        Answer:

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm
      # extract_xml_tags: # uncomment to move the thinking part into the answer's meta
      #   - thinking

  attachments_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
      weights:
      top_k:
      sort_by_score: true

  AnthropicGenerator:
    type: haystack_integrations.components.generators.anthropic.generator.AnthropicGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - ANTHROPIC_API_KEY
        strict: false
      model: claude-sonnet-4-20250514
      streaming_callback:
      system_prompt:
      generation_kwargs:

  S3Downloader:
    type: haystack_integrations.components.downloaders.s3.s3_downloader.S3Downloader
    init_parameters:
      aws_access_key_id:
        type: env_var
        env_vars:
          - AWS_ACCESS_KEY_ID
        strict: false
      aws_secret_access_key:
        type: env_var
        env_vars:
          - AWS_SECRET_ACCESS_KEY
        strict: false
      aws_session_token:
        type: env_var
        env_vars:
          - AWS_SESSION_TOKEN
        strict: false
      aws_region_name:
        type: env_var
        env_vars:
          - AWS_DEFAULT_REGION
        strict: false
      aws_profile_name:
        type: env_var
        env_vars:
          - AWS_PROFILE
        strict: false
      boto3_config:
      file_root_path:
      file_extensions:
      file_name_meta_key: file_name
      max_workers: 32
      max_cache_size: 100
      s3_key_generation_function: deepset_cloud_custom_nodes.utils.storage.get_s3_key

connections: # Defines how the components are connected
  - sender: retriever.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: prompt_builder.prompt
    receiver: answer_builder.prompt
  - sender: meta_field_grouping_ranker.documents
    receiver: attachments_joiner.documents
  - sender: attachments_joiner.documents
    receiver: answer_builder.documents
  - sender: attachments_joiner.documents
    receiver: prompt_builder.documents
  - sender: prompt_builder.prompt
    receiver: AnthropicGenerator.prompt
  - sender: AnthropicGenerator.replies
    receiver: answer_builder.replies
  - sender: retriever.documents
    receiver: S3Downloader.documents
  - sender: S3Downloader.documents
    receiver: attachments_joiner.documents

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "retriever.query"
    - "ranker.query"
    - "prompt_builder.question"
    - "answer_builder.query"
  filters: # These components will receive a potential query filter as input
    - "retriever.filters_bm25"
    - "retriever.filters_embedding"

outputs: # Defines the output of your pipeline
  documents: "attachments_joiner.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}
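
The prompt template above numbers each document with Jinja's 1-based `loop.index`, which is what the model cites as [1], [2], and so on. That numbering logic can be sketched in plain Python; `Doc` is a minimal, hypothetical stand-in for Haystack's `Document` class:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Doc:
    """Minimal stand-in for haystack.Document."""
    content: str
    meta: Dict[str, str] = field(default_factory=dict)


def build_context(documents: List[Doc]) -> str:
    """Render documents the way the template does: numbered from 1,
    each with its source file name, or a fallback message if empty."""
    if not documents:
        return "No matching documents were found."
    parts = []
    for i, doc in enumerate(documents, start=1):  # Jinja's loop.index is 1-based
        name = doc.meta.get("file_name", "")
        parts.append(f"Document [{i}]:\nSource file name: {name}\n{doc.content}")
    return "\n".join(parts)


context = build_context([Doc("Claude supports streaming.", {"file_name": "a.md"})])
```

Because the answer builder's `reference_pattern: acm` looks for bracketed numbers like [1] in the reply, the template and the answer builder must agree on this numbering scheme.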


Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | Secret | Secret.from_env_var('ANTHROPIC_API_KEY') | The Anthropic API key. |
| model | str | claude-sonnet-4-20250514 | The name of the Anthropic model to use. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | An optional callback function to handle streaming chunks. |
| system_prompt | Optional[str] | None | An optional system prompt to use for generation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for generation. |
| timeout | Optional[float] | None | The timeout for requests. |
| max_retries | Optional[int] | None | The maximum number of retries if a request fails. |
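
The `max_retries` parameter bounds how often a failed request is retried before the error is raised; the actual retrying happens inside the Anthropic client. A simplified sketch of that behavior, using a hypothetical `call_with_retries` helper and a flaky function standing in for the API call:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_retries(fn: Callable[[], T], max_retries: int = 2,
                      backoff_s: float = 0.0) -> T:
    """Retry transient failures, re-raising once the budget is spent."""
    attempt = 0
    while True:
        try:
            return fn()
        except ConnectionError:
            attempt += 1
            if attempt > max_retries:
                raise
            time.sleep(backoff_s)


calls = {"n": 0}

def flaky() -> str:
    """Stand-in for an API call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


result = call_with_retries(flaky, max_retries=3)
```

With `max_retries=3` the two transient failures are absorbed; with `max_retries=1` the second failure would propagate to the caller.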

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | | The prompt with instructions for the model. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for generation. For a complete list, see the Anthropic API documentation. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | An optional callback function to handle streaming chunks. |
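
Haystack generators conventionally merge the `generation_kwargs` passed to `run()` over those set at init time, with the per-call values taking precedence. A sketch of that convention (not the component's exact code):

```python
from typing import Any, Dict, Optional


def merge_generation_kwargs(init_kwargs: Optional[Dict[str, Any]],
                            runtime_kwargs: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Per-call kwargs override init-time kwargs; keys that are not
    overridden keep their init-time values."""
    return {**(init_kwargs or {}), **(runtime_kwargs or {})}


# max_tokens set at init time survives; temperature is overridden per call.
merged = merge_generation_kwargs({"max_tokens": 512, "temperature": 0.2},
                                 {"temperature": 0.7})
```

This is what lets you fix stable settings such as `max_tokens` once at init time and still vary settings like `temperature` per query.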