Agent
Add reasoning and external tools to your pipelines through the Agent component.
Basic Information
- Type: haystack.components.agents.agent.Agent
- Components it can connect with:
  - To pass a query to an Agent, use OutputAdapter, DeepsetChatHistoryParser, or ChatPromptBuilder. ChatPromptBuilder also lets you pass dynamic content to the Agent.
  - To make the Agent's response the final output of your pipeline, use an OutputAdapter to convert the Agent's messages into a list of strings and send them on to DeepsetAnswerBuilder.
  - The connections also depend on the output and input types configured in the Agent's state_schema. A typical layout is sketched below.
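For orientation, this is a typical connection layout for a chat pipeline built around an Agent. This is a sketch: the component names are illustrative, and the complete configuration is shown in the Usage Examples section below.
connections:
  - sender: history_parser.messages # DeepsetChatHistoryParser converts the query into ChatMessages
    receiver: agent.messages
  - sender: agent.messages
    receiver: adapter.messages # OutputAdapter extracts the last message as a list of strings
  - sender: adapter.output
    receiver: answer_builder.replies # DeepsetAnswerBuilder turns it into the final answer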
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | List of Haystack ChatMessage objects to process. If a list of dictionaries is provided, each dictionary is converted into a ChatMessage object. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function to invoke when a response is streamed from the LLM. You can configure the same callback function for emitting tool results when the agent calls a tool. |
| system_prompt | Optional[str] | None | System prompt for this specific run. If provided, it overrides the default system prompt configured during initialization. This allows you to dynamically adjust the Agent's behavior for different queries. |
| tools | Optional[Union[List[Tool], Toolset, List[str]]] | None | Optional list of Tool objects, a Toolset, or list of tool names to use for this run. When passing tool names, tools are selected from the Agent's originally configured tools. This allows you to dynamically select which tools the Agent uses at query time. |
| kwargs | Any | | Additional inputs forwarded to the Agent's state. The keys must match the schema defined in the Agent's state_schema. |
Outputs
| Output | Type | Description |
|---|---|---|
| messages | List[ChatMessage] | Complete conversation history including all messages from the Agent run: user messages, LLM responses, tool calls, and tool results. |
| last_message | ChatMessage | The final message from the Agent run, typically containing the LLM's final response. This is useful when you only need the final result. |
| Additional state outputs | Varies | Any fields defined in state_schema are also returned as outputs (for example, documents, repository). |
Unlike a ChatGenerator, which returns only the final message, the Agent returns all messages generated during the run, including the messages provided as input.
Overview
Agent is designed to help AI systems use tools to accomplish tasks. You can think of it like a coordinator that manages conversations and knows when and how to use different tools.
Key features:
- Works with different language models. You configure the model through an underlying ChatGenerator.
- Can use external tools. Tools can be pipeline components, entire pipelines, or custom functions.
- Lets you define custom exit conditions. Exit conditions specify when the Agent should stop, for example, after generating text or using a specific tool.
- Maintains conversation history. The Agent keeps track of events during a single interaction. To maintain context across multiple queries (like in a real chat), use the deepset AI Platform Chat endpoint or DeepsetChatHistoryParser.
- Allows real-time streaming responses. Streaming is supported in both synchronous and asynchronous modes.
- Supports asynchronous execution. The Agent provides both run() and run_async() methods for synchronous and asynchronous pipeline execution.
- Has tracing support. Connect a tracer like Langfuse or Weights & Biases Weave to monitor the Agent's execution in depth.
How It Works
- The Agent receives a message from the user.
- It sends the message to its chat model (ChatGenerator), along with the list of tools.
- The model decides whether it needs to call tools or can answer right away. It responds with either plain text (the final response) or a tool call.
- If the response is just text, the Agent returns the current conversation and the response.
- If the response includes a tool call:
  - The Agent calls the tool and collects the result.
  - If the tool has outputs_to_state defined, the Agent adds the specified tool result to the state.
  - The Agent adds the result to the conversation history.
  - Then:
    - If the tool name matches an item of the exit_conditions list AND the tool executed successfully without errors, the Agent exits and returns the conversation. If the tool produced an error, the Agent continues and sends the error message back to the LLM.
    - If it doesn't match, the Agent continues: it sends the updated conversation back to the model, which may decide to call another tool or the same one again. This loop continues until one of the exit_conditions is met or max_agent_steps is reached.
Configuration
To configure an Agent, you need to provide:
- A chat model: Supplied through an underlying ChatGenerator, which processes and generates text. The Agent is provider-agnostic, so it can work with any model. Make sure you choose a ChatGenerator that works with the desired model and supports tool calls.
- A list of tools or a Toolset: These can be custom tools for a specific use case, pipeline components, entire pipelines, or MCP tools. You can provide tools as a list of individual Tool objects or organize them into a Toolset.
- Exit conditions: Defined using exit_conditions. The Agent runs iteratively, calling tools and feeding their outputs back to the model, until one of these conditions is met. For example, you can configure it to stop after a tool is used or once the model returns a text response.
You configure the Agent in YAML.
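A minimal sketch of an Agent with these three settings in place. The chat generator and the tool shown here are placeholders taken from the examples below; see Usage Examples for a complete, working pipeline.
agent:
  type: haystack.components.agents.agent.Agent
  init_parameters:
    chat_generator: # any ChatGenerator that supports tool calls
      type: haystack_integrations.components.generators.anthropic.chat.chat_generator.AnthropicChatGenerator
      init_parameters:
        model: claude-3-7-sonnet-latest
    tools:
      - type: haystack.tools.component_tool.ComponentTool
        data:
          component:
            type: haystack.components.websearch.serper_dev.SerperDevWebSearch
            init_parameters: {}
          name: web_search
          description: Search the web for current information on any topic
    exit_conditions: ['text'] # stop as soon as the LLM answers without calling a tool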
Initialization and Warm-up
Before using an Agent in a pipeline:
- Configure the Agent with its chat generator and tools
- Ensure the Agent is warmed up (this happens automatically when the pipeline is warmed up)
The warm-up process initializes the underlying chat generator. If you're using the Agent standalone (not in a pipeline), call agent.warm_up() before running it.
Agent Without Tools
When you initialize an Agent without providing tools, it behaves like a standard ChatGenerator:
- It produces one response from the LLM
- It immediately exits after generating text
- It cannot perform iterative reasoning or tool calling
This mode is useful for simple conversational tasks that don't require external tools.
State Schema
The state_schema Agent parameter lets you define extra inputs for tools—beyond what the LLM generates—and collect additional outputs from them. This is useful when some tool parameters come from outside the LLM (for example, user credentials) or when you need to accumulate tool results (such as a growing list of retrieved documents).
To define the state schema, use the following format:
state_key_name:
  type: <type>
For example, this is a state schema with two items, retrieved_documents and repository:
retrieved_documents:
  type: typing.List[haystack.dataclasses.Document]
repository:
  type: str
If a tool's output is incompatible with the type you specified in state_schema, you may encounter an error or unexpected behavior. Make sure the types align.
Messages from a single Agent execution are automatically part of the Agent's state schema.
Using Schema to Pass Arguments to Tools
Tools can automatically receive arguments from the Agent's state if their run() method defines parameters with matching names.
For example, if you define a repository field of type str in your state_schema and the tool's run() method includes a repository: str argument, the Agent automatically fills that parameter from the state.
You can also explicitly map state attributes to tool parameters by using the inputs_from_state setting when configuring the tool. This mapping uses the format: state_attribute_name: tool_parameter_name.
For example, the following configuration passes the repository value from the Agent's state to the tool's repository_name parameter:
- data:
    component:
      init_parameters:
        parameter: value
      type: component_type
    description: Here comes the description of the tool
    name: tool_name
    inputs_from_state:
      repository: repository_name # the tool's `repository_name` parameter is filled with the state's `repository` attribute
...
When you explicitly map tool inputs using inputs_from_state, the tool only receives the attributes you specify.
Even if the state_schema contains additional attributes, the tool will not have access to them unless they are explicitly mapped.
If the tool needs other arguments at runtime, those arguments must come from the language model's output. Otherwise, the tool will not receive them.
If both the language model and the state contain a value for the same parameter, the language model's value takes precedence.
Using Schema to Accumulate Outputs
By default, when an Agent uses a tool, all its outputs are converted to strings and appended as a single ChatMessage with the tool role. This approach ensures that every tool result is recorded in the conversation history, making it available for the language model in the next turn.
When a tool runs multiple times, only the outputs from the last run are included. To accumulate outputs across runs, you can define them in the state schema, specifying the output names, their types, and optionally, a handler function.
By default, outputs are merged based on their declared types:
- List: If the tool output is a list, it's extended using list.extend. If the output is not a list, the value is appended to the existing list.
- Other types: The existing value is replaced with the new one.
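For example, with the schema below, documents returned by a tool accumulate across runs, while the string value is overwritten on every update. This sketch reuses the schema from above to illustrate the default merge behavior.
state_schema:
  retrieved_documents:
    type: typing.List[haystack.dataclasses.Document] # list type: each run's output is appended
  repository:
    type: str # non-list type: each update replaces the previous value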
State Handlers
Each field in state_schema can have a handler function that controls how values are merged when tools update the state:
Default Handlers:
- List types: Values are merged using merge_lists (concatenates lists).
- Other types: Values are replaced using replace_values (overwrites the existing value).
Custom Handlers: You can provide custom handlers to control merge behavior. The handler function receives two arguments: the current value and the new value, and returns the merged result.
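As a sketch, and assuming the state schema accepts a handler given as an import path, a custom handler could be attached to a field like this. The handler name is hypothetical: it stands for a Python function that takes the current value and the new value and returns the merged result, for example a deduplicated list of documents.
state_schema:
  documents:
    type: typing.List[haystack.dataclasses.Document]
    handler: my_project.state_handlers.merge_unique_documents # hypothetical custom handler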
You can also explicitly specify which tool outputs to add to the schema using the outputs_to_state parameter. Each entry should include:
- state_key: The name under which the data will be stored in the Agent's state.
- source: The specific output field from the tool's result you want to store. If you don't specify a source, the entire output from the tool is added to the state.
Here's an example of how to add documents resulting from a tool call to the Agent's schema:
- data:
    component:
      init_parameters:
        parameter: value
      type: component_type
    description: Here comes the description of the tool
    name: tool_name
    outputs_to_state:
      retrieved_documents: # key of the attribute in state_schema
        source: documents # defines the output to store
System Prompt
You can optionally configure a system prompt for the Agent to provide fixed instructions that guide its behavior, tone, or knowledge throughout the conversation.
The system prompt supports only static, plain-text content.
Adding Dynamic Content to the System Prompt
To include dynamic content, such as variables, in the system prompt, use a ChatPromptBuilder and connect it to the Agent. The template parameter accepts a list of ChatMessage objects.
Each ChatMessage includes a content field, which can contain a text key. This text value supports Jinja2 templating, allowing you to dynamically insert variables and logic into the message content.
For detailed instructions and examples of how to write prompts, see Writing Prompts in deepset AI Platform.
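For example, a ChatPromptBuilder configured like the sketch below renders a system message with a Jinja2 variable (topic is illustrative) and sends the resulting messages on to the Agent:
prompt_builder:
  type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
  init_parameters:
    template:
      - role: system
        content:
          - text: You are a helpful assistant specializing in {{ topic }}.
      - role: user
        content:
          - text: '{{ query }}'
connections:
  - sender: prompt_builder.prompt # ChatPromptBuilder outputs a list of ChatMessages
    receiver: agent.messages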
Exit Conditions
Exit conditions determine when the Agent stops processing. You can specify multiple exit conditions:
"text": The Agent stops when the LLM generates a response without tool calls- Tool names: The Agent stops after successfully executing the named tool
Examples:
# Stop only when LLM returns text
exit_conditions: ["text"]
# Stop after the search tool runs successfully
exit_conditions: ["search"]
# Stop when text is returned OR after search runs
exit_conditions: ["text", "search"]
When multiple exit conditions are specified, the Agent stops as soon as any one of them is met. If a tool in the exit conditions produces an error, the Agent does not stop and instead continues processing, sending the error message back to the LLM.
Agent and Tools
Agents can use individual pipeline components—or even entire pipelines—as tools. This section explains how to configure both options.
Using Pipeline Components as Tools with ComponentTool
To make a component available to an Agent, wrap it with haystack.tools.component_tool.ComponentTool. This makes a component callable by the Agent.
When configuring a ComponentTool, you can control how it interacts with the Agent using the following settings:
- name: The name of the tool. You can refer to this name in the Agent's prompt.
- description: The description that helps the Agent decide when to use the tool.
- inputs_from_state: A mapping of keys from the Agent's state to the tool's input parameter names. This tells the tool where to retrieve its inputs from in the Agent's memory. For details, see Using Schema to Pass Arguments to Tools.
- outputs_to_state: A mapping of the tool's output fields to keys in the Agent's state. This determines how the tool's results are stored for later use. For details, see Using Schema to Accumulate Outputs.
- parameters: A JSON schema of the inputs the tool expects. If not provided, the Agent infers the tool's inputs from its run() method. See Usage Examples for examples of how to use this parameter.
For detailed explanation of parameters, see Component Tool Parameters.
Using Pipelines as Tools with SuperComponent
To use an entire pipeline as a tool, wrap it in a SuperComponent and expose that SuperComponent to the Agent through ComponentTool. In YAML, the tool's type is haystack.tools.component_tool.ComponentTool, and the component inside its data object has the type haystack.core.super_component.super_component.SuperComponent, with the pipeline defined in its init parameters, as the skeleton below shows. SuperComponent runs the pipeline internally and manages input and output mappings: it automatically matches received arguments to the inputs of the pipeline's components and collects outputs from the final components.
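Stripped to its skeleton, the nesting looks like this (see Adding a Pipeline as a Tool under Usage Examples for a complete configuration):
- type: haystack.tools.component_tool.ComponentTool
  data:
    component:
      type: haystack.core.super_component.super_component.SuperComponent
      init_parameters:
        pipeline:
          components: {} # the pipeline's components go here
          connections: [] # and its connections here
        input_mapping: {} # optional, explained below
        output_mapping: {} # optional, explained below
    name: my_pipeline_tool
    description: What the tool does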
You can also explicitly configure input and output mappings:
- input_mapping: Maps the Agent's input fields to specific component inputs within the pipeline used as a tool. For example, you can indicate that the query_embedder's text input should receive the query input as follows:
  input_mapping:
    query:
      - query_embedder.text
- output_mapping: Specifies which outputs from the pipeline should be returned to the Agent, and under what names. For example, this configuration returns the retriever's documents output under the label retrieved_documents; the label is the name under which the documents are stored in the Agent's state:
  output_mapping:
    retriever.documents: retrieved_documents
Check the Usage Examples section below for more examples.
Usage Examples
Agent Configuration
In this example, the Agent:
- Uses AnthropicChatGenerator. Note that you pass the generator's init parameters and type in the configuration.
- Stops when the LLM generates text without tool calls (exit_conditions: ['text']).
- Has streaming enabled using streaming_callback: deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback. Note that you enable streaming for the Agent, not the ChatGenerator it uses.
- Includes a system prompt with instructions on how to answer queries and use tools.
- Defines a state_schema with a list of Document objects. Tools can read from and write to these documents. For instance, if a web_search tool retrieves documents, they are saved under documents in the Agent's state. These documents are then available as the Agent's output and can be sent to the AnswerBuilder.
- Uses the SerperDevWebSearch component as a tool. To use a pipeline component as a tool, set its type to haystack.tools.component_tool.ComponentTool and wrap the tool init parameters in the data object. You configure the tools for the Agent the same way as you do for ToolInvoker. For details and examples of how to configure tools, see the ToolInvoker documentation.
- Is connected to other components in the pipeline through its messages input and its documents output (defined in state_schema). The Agent also sends its messages output to the Adapter.
components:
agent:
type: haystack.components.agents.agent.Agent
init_parameters:
chat_generator:
init_parameters: # here you configure the ChatGenerator
api_key:
env_vars:
- ANTHROPIC_API_KEY
strict: false
type: env_var
generation_kwargs:
max_tokens: 8000
ignore_tools_thinking_messages: true
model: claude-3-7-sonnet-latest
type: haystack_integrations.components.generators.anthropic.chat.chat_generator.AnthropicChatGenerator
exit_conditions: ['text'] # this tells the Agent to stop once it receives text from the LLM, without tool calls
max_agent_steps: 100
raise_on_tool_invocation_failure: false
streaming_callback: deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback # enables streaming for the Agent
system_prompt: |-
You are a deep research assistant.
You create comprehensive research reports to answer the user's questions.
You use the 'search'-tool to answer any questions.
You perform multiple searches until you have the information you need to answer the question.
Make sure you research different aspects of the question.
Use markdown to format your response.
When you use information from the websearch results, cite your sources using markdown links.
It is important that you cite accurately.
state_schema: # define the data the components will have access to
documents: # here, we're giving the web_search component access to the documents
type: typing.List[haystack.dataclasses.Document]
tools:
- type: haystack.tools.component_tool.ComponentTool # this is the type you use to configure pipeline components as tools
data: # wrap the component configuration in the data object
component: # specify the tool type, for pipeline components, it's `component`
type: haystack.components.websearch.serper_dev.SerperDevWebSearch # this is the component import path or type
init_parameters: # pass the component configuration here
api_key:
type: env_var
env_vars:
- SERPERDEV_API_KEY
strict: false
top_k: 10
name: web_search # give the tool a name, you can use this name as the exit condition
description: Search the web for current information on any topic # describe what the tool does, this can help the model to decide when and if to use the tool
answer_builder:
init_parameters:
pattern:
reference_pattern:
type: haystack.components.builders.answer_builder.AnswerBuilder
history_parser:
init_parameters: {}
type: dc_custom_component.components.parsers.chat_history_parser.DeepsetChatHistoryParser
adapter:
init_parameters:
custom_filters: {}
output_type: typing.List[str]
template: '{{ [(messages|last).text] }}'
unsafe: false
type: haystack.components.converters.output_adapter.OutputAdapter
connections:
- receiver: agent.messages # agent's input is always `messages`
sender: history_parser.messages
- receiver: adapter.messages
sender: agent.messages
- receiver: answer_builder.replies
sender: adapter.output
- sender: agent.documents # this is because the Agent has `documents` defined in its state_schema
receiver: answer_builder.documents
inputs:
query:
- answer_builder.query
- history_parser.history_and_query
outputs:
answers: answer_builder.answers
documents: agent.documents
pipeline_output_type: chat
max_runs_per_component: 100
metadata: {}
Adding a Pipeline as a Tool
This is an example of an Agent with two tools: an internal search tool and a weather tool. Both are pipelines wrapped in a SuperComponent. Note that the "weather_tool" also defines parameters, a JSON schema describing the inputs the tool expects.
components:
agent:
type: haystack.components.agents.agent.Agent
init_parameters:
chat_generator:
init_parameters: # here you configure the ChatGenerator
api_key:
env_vars:
- ANTHROPIC_API_KEY
strict: false
type: env_var
generation_kwargs:
max_tokens: 8000
ignore_tools_thinking_messages: true
model: claude-3-7-sonnet-latest
type: haystack_integrations.components.generators.anthropic.chat.chat_generator.AnthropicChatGenerator
exit_conditions: ['text'] # this tells the Agent to stop once it receives text from the LLM, without tool calls
max_agent_steps: 100
raise_on_tool_invocation_failure: false
streaming_callback: deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback # enables streaming for the Agent
system_prompt: |-
You are a professional trip planner.
You perform comprehensive research to help users plan their travels.
Use the 'travel_guide_search' tool to find reliable information and advice about the destination.
Use the 'weather_tool' to check the current weather forecast for the city.
Research different aspects of the question (culture, safety, transport, events, weather).
Use markdown to format your response.
When you use information from the travel guide results, cite your sources using markdown links.
It is important that you cite accurately.
state_schema: # define the data the components will have access to
documents: # here, we're giving the tools access to the documents
type: typing.List[haystack.dataclasses.Document]
tools:
- type: haystack.tools.component_tool.ComponentTool # we wrap the SuperComponent in ComponentTool
data: # this is a necessary object always added for ComponentTool
component: # we indicate this is a component
init_parameters: #we're configuring the component
input_mapping: # here we list the components that will receive query and filters as input together with their input connections
query:
- query_embedder.text
- OpenSearchBM25Retriever.query
- ranker.query
filters:
- OpenSearchBM25Retriever.filters
- OpenSearchEmbeddingRetriever.filters
output_mapping: # here we're saying the ranker's documents output will have the label documents
ranker.documents: documents
pipeline: # this is the pipeline configuration
components:
query_embedder:
init_parameters:
model: intfloat/e5-base-v2
truncate: END
type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
OpenSearchBM25Retriever:
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: default_index
embedding_dim: 768
return_embedding: false
max_chunk_bytes: 104857600
create_index: true
filters:
fuzziness: AUTO
top_k: 20
type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
OpenSearchEmbeddingRetriever:
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: default_index
embedding_dim: 768
return_embedding: false
max_chunk_bytes: 104857600
create_index: true
filters:
top_k: 20
efficient_filtering: false
type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
document_joiner:
init_parameters:
join_mode: concatenate
type: haystack.components.joiners.document_joiner.DocumentJoiner
ranker:
init_parameters:
model: intfloat/simlm-msmarco-reranker
top_k: 2
type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
connection_type_validation: true
connections:
- receiver: OpenSearchEmbeddingRetriever.query_embedding
sender: query_embedder.embedding
- receiver: document_joiner.documents
sender: OpenSearchBM25Retriever.documents
- receiver: document_joiner.documents
sender: OpenSearchEmbeddingRetriever.documents
- receiver: ranker.documents
sender: document_joiner.documents
max_runs_per_component: 100
metadata: {}
type: haystack.core.super_component.super_component.SuperComponent # this indicates the pipeline is wrapped in a SuperComponent
description: A tool to search travel guides, tips, and advice for specific destinations or travel topics.
outputs_to_state: # here we're listing the outputs of the tool to be added to the Agent's state
documents:
source: documents
name: travel_guide_search
parameters: # this gives the LLM a schema of the inputs the tool expects: it must receive the query of type string, no additional properties are allowed.
type: object
properties:
query:
type: string
description: The search query
required:
- query
additionalProperties: false
- type: haystack.tools.component_tool.ComponentTool
data:
component:
init_parameters:
input_mapping:
city:
- weather_retriever.city
output_mapping:
weather_formatter.prompt: result
pipeline:
components:
weather_retriever:
init_parameters:
api_url: https://api.weatherapi.com/v1/current.json
lang: en
timeout: 10
type: dc_custom_component.components.retrievers.weather_data_retriever.DeepsetCurrentWeatherRetriever
weather_formatter:
init_parameters:
template: "This is the weather information for the named location:\n{{ weather_data }}\nUse these weather conditions when answering."
type: haystack.components.builders.prompt_builder.PromptBuilder
connection_type_validation: true
connections:
- receiver: weather_formatter.weather_data
sender: weather_retriever.weather
max_runs_per_component: 100
metadata: {}
type: haystack.core.super_component.super_component.SuperComponent
description: A tool to get the weather at the specified location. It takes a city name as input and returns the current weather for that city as a formatted string in English.
name: weather_tool
parameters:
type: object
properties:
city:
type: string
description: The city name to get the weather for.
required:
- city
additionalProperties: false
Passing a Query to the Agent
The Agent expects a list of messages as input. However, the Query component outputs plain text. To bridge this gap, you can use the DeepsetChatHistoryParser component.
DeepsetChatHistoryParser takes the text from Query and converts it into a list of ChatMessage objects. Simply connect Query to DeepsetChatHistoryParser, and then connect its output to the Agent.

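A sketch of the relevant configuration and connection (the component names are illustrative and match the Agent Configuration example above):
components:
  history_parser:
    type: dc_custom_component.components.parsers.chat_history_parser.DeepsetChatHistoryParser
    init_parameters: {}
connections:
  - sender: history_parser.messages
    receiver: agent.messages
inputs:
  query:
    - history_parser.history_and_query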
Displaying the Agent Results
The Agent returns a list of ChatMessages, but in most cases, you only need the last message as the final output of your pipeline. To extract just the last message, use the OutputAdapter component. Configure it to:
- Take the Agent's output (a list of ChatMessages)
- Convert only the last message into a list of strings
This format is compatible with downstream components like the AnswerBuilder. Simply connect the Agent to the OutputAdapter, and then connect the adapter's output to the AnswerBuilder.

This is how to configure OutputAdapter for this scenario:
adapter:
  init_parameters:
    output_type: typing.List[str] # this is the output type an AnswerBuilder accepts
    template: '{{ [(messages|last).text] }}' # here you're pointing to the last message the Agent returned
  type: haystack.components.converters.output_adapter.OutputAdapter
Adding Sources to Agent Results
To show the sources or documents the Agent used to generate its answer, configure the Agent to output those documents. You do this by adding documents to the Agent's state_schema:
state_schema:
  documents: # this becomes the name of the output connection
    type: typing.List[haystack.dataclasses.Document] # here you define what's sent there

The Agent then outputs the documents through its output connection called documents. You can connect the Agent's documents output to an AnswerBuilder to include the documents in the final answer:
connections:
- receiver: agent.messages
sender: history_parser.messages
- receiver: adapter.messages
sender: agent.messages
- receiver: answer_builder.replies
sender: adapter.output
- sender: agent.documents # we're sending the documents to DeepsetAnswerBuilder
receiver: answer_builder.documents
inputs:
query:
- answer_builder.query
- history_parser.history_and_query
outputs:
answers: answer_builder.answers
documents: agent.documents # the final output also includes the Agent's documents
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| chat_generator | ChatGenerator | | The chat generator that the Agent uses. You configure the generator by passing its init parameters and type to the Agent. Check the Usage Examples section for details. The chat generator must support tools. |
| tools | Optional[Union[List[Tool], Toolset]] | None | External tools or toolsets the Agent can use. You can provide individual Tool objects as a list, or organize related tools into a Toolset. |
| system_prompt | Optional[str] | None | System prompt to guide the Agent's behavior. This can be overridden at runtime by passing a system_prompt parameter to the run method. |
| exit_conditions | Optional[List[str]] | ["text"] | Defines when the agent stops processing messages. Pass "text" to stop the Agent when it generates a message without tool calls. Pass the name of a tool to stop the Agent after it successfully runs this tool. Multiple exit conditions can be specified, and the Agent stops when any one is met. |
| state_schema | Optional[Dict[str, Any]] | None | Optional schema for managing the runtime state used by tools. It defines extra information—such as documents or context—that tools can read from or write to during execution. You can use this schema to pass parameters that tools can both produce and consume during a call. This means that when a pipeline runs, tools can read from the Agent's state (for example, the current set of retrieved documents) and write into or update this state as they run. |
| max_agent_steps | int | 100 | Maximum number of steps (LLM calls) the Agent runs before stopping. Defaults to 100. If the Agent reaches this limit, it stops execution and returns all messages and state accumulated up to that point. A warning is logged when this limit is reached. Increase this value for complex tasks that require many tool calls. |
| streaming_callback | Optional[StreamingCallbackT] | None | Function invoked for streaming responses. To enable streaming, set streaming_callback to deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback. To learn more about streaming, see Enable Streaming. |
| raise_on_tool_invocation_failure | bool | False | Whether to raise an error when a tool call fails. If set to False, the exception is turned into a chat message and passed to the LLM. |
| tool_invoker_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments to pass to the ToolInvoker. |
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | List of Haystack ChatMessage objects to process. If a list of dictionaries is provided, each dictionary will be converted to a ChatMessage object. |
| streaming_callback | Optional[StreamingCallbackT] | None | A function to handle streamed responses. You can configure the same callback function to emit tool results when a tool is called. |
| system_prompt | Optional[str] | None | System prompt for this specific run. If provided, it overrides the default system prompt configured during initialization. This allows you to dynamically adjust the Agent's behavior for different queries. |
| tools | Optional[Union[List[Tool], Toolset, List[str]]] | None | Optional list of Tool objects, a Toolset, or list of tool names to use for this run. When passing tool names, tools are selected from the Agent's originally configured tools. This allows you to dynamically select which tools the Agent uses at query time. |
| break_point | Optional[AgentBreakpoint] | None | An AgentBreakpoint: either a Breakpoint for the chat_generator or a ToolBreakpoint for the tool_invoker. Used for debugging and monitoring Agent execution. |
| snapshot | Optional[AgentSnapshot] | None | A snapshot of a previously saved Agent execution. It contains the information needed to restart the Agent from where it left off. |
| kwargs | Any | | Additional data forwarded to the Agent's state. The keys must match the schema defined in the Agent's state_schema. |