Configure an Agent: Advanced Settings
Configure the Agent's exit conditions to define when it stops, define what's stored in the Agent's state, and more.
About This Task
Advanced Settings Visual Configuration Coming Soon
Currently, you can configure the Agent's advanced settings only in the YAML editor. Bear with us while we work on bringing you the visual configuration.
- For full Agent component documentation, see Agent.
- For an explanation of Agent's workflow, see AI Agent.
Prerequisites
Complete the following task before you start: Configure an Agent: Model, System Prompt, and Tools.
Access Agent's Advanced Settings
- In Builder, click the Agent component to open its configuration panel.
- At the bottom of the panel, click Advanced Settings.
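Until visual configuration is available, you set these parameters directly in the Agent's YAML. The outline below is a rough sketch, not a complete configuration: it mirrors the shape of the example later in this guide and only indicates where the advanced parameters live under the Agent's init_parameters. The values shown are placeholders.
agent:
  init_parameters:
    chat_generator: ... # optional: configure a different model (see Configure a Different Model below)
    exit_conditions: ["text"] # when the Agent stops
    state_schema: {} # extra tool inputs and collected tool outputs
    tools: [] # the tools configured on the Agent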
Configure the Agent's Exit Conditions
The Agent runs iteratively, calling tools and feeding their outputs back to the model, until one of the exit conditions is met. For example, you can configure the Agent to stop after a tool is used or once the model returns a text response.
- To stop the Agent when the LLM generates a text response, set the exit_conditions parameter to ["text"].
- To stop the Agent after a specific tool is used, set the exit_conditions parameter to the tool's name. For example, to stop the Agent after the search tool is used, set the exit_conditions parameter to ["search"].
- To stop the Agent when either a text response is generated or a specific tool is used, list both conditions, for example ["text", "search"]. See the sketch after this list for a full example.
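For example, a minimal sketch of the last case, assuming the Agent has a tool named search configured, might look like this in the YAML editor:
agent:
  init_parameters:
    exit_conditions: ["text", "search"] # stop on a text reply or right after the search tool runs
    tools:
      - ... # the search tool and any other tools stay configured as before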
Define the Agent's State Schema
Use the state_schema parameter to define extra inputs for tools and collect additional outputs from them. Tools can read from the schema and write to it.
The state schema is useful when some tool parameters come from outside the LLM (for example, user credentials) or when you need to accumulate tool results (such as a growing list of retrieved documents).
To define the state schema, use the following format:
state_key: # this is the label for storing information
  type: <type> # defines the type of the information to be stored
For example, this is a state schema with two items, retrieved_documents and repository:
retrieved_documents:
  type: List[Document]
repository:
  type: str
Type Mismatch
If a tool's output is incompatible with the type you specified in state_schema, you may encounter an error or unexpected behavior. Make sure the types align.
Pass Arguments to Tools from the Schema
Tools can automatically receive arguments from the Agent's state if their input parameters match the schema keys.
For example, if you define a repository key of type string in your state_schema and the tool's input parameters include repository: str, the Agent automatically fills that parameter from the state_schema.
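As a sketch, assuming a tool whose run parameters include repository: str, the matching requires nothing more than the key being present in the schema:
state_schema:
  repository:
    type: str
# Any tool with a `repository: str` parameter receives this value automatically,
# with no inputs_from_state mapping required.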
You can also explicitly map state keys to tool parameters by using the inputs_from_state setting when configuring the tool. This mapping uses the format: state_key: tool_parameter.
For example, the following configuration passes the repository value from the Agent's state to the tool's repository_name parameter:
inputs_from_state:
  repository: repository_name # the tool's `repository_name` parameter is filled with the value of the `repository` state key
...
Explicit Mapping Limits
When you explicitly map tool inputs using inputs_from_state, the tool only receives the attributes you specify. Even if the state_schema contains additional attributes, the tool will not have access to them unless they are explicitly mapped.
Accumulate Tool Results in the Schema
By default, when an Agent uses a tool, all its outputs are converted to strings and appended as a single ChatMessage with the tool role. This approach ensures that every tool result is recorded in the conversation history, making it available for the LLM in the next turn.
You can explicitly specify which tool outputs to add to the schema using the outputs_to_state parameter. This is a dictionary where:
- state_key is the name under which the data is stored in the state.
- source is the output field from the tool's result that you want to store. If you leave it out, the whole tool result is stored.
For example, to add documents resulting from a tool call to the state schema under the retrieved_documents key, you can use the following configuration:
- outputs_to_state:
    retrieved_documents: # key in state schema
      source: documents # the tool output to store
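Putting the two mappings together, a hypothetical tool entry that reads the repository key and writes its documents output back to retrieved_documents could be sketched like this (the tool's own fields, such as its type and name, are omitted):
agent:
  init_parameters:
    state_schema:
      repository:
        type: str
      retrieved_documents:
        type: List[Document]
    tools:
      - # the tool's own configuration goes here
        inputs_from_state:
          repository: repository_name # state key -> tool parameter
        outputs_to_state:
          retrieved_documents:
            source: documents # tool output field stored in the state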
Configure a Different Model
You can choose a model other than those available in the Model list. To do this, use the chat_generator parameter. Make sure there is a ChatGenerator component for the model you want to use and that deepset is connected to the model provider.
For example, to use a model hosted on Together AI, pass the DeepsetTogetherAIChatGenerator configuration in the Agent's chat_generator parameter:
agent:
  init_parameters:
    chat_generator:
      type: deepset_cloud_custom_nodes.generators.togetherai.DeepsetTogetherAIChatGenerator
      init_parameters:
        api_key: {"type": "env_var", "env_vars": ["TOGETHERAI_API_KEY"], "strict": false}
        model: deepseek-ai/DeepSeek-R1
        generation_kwargs:
          max_tokens: 650
          temperature: 0
          seed: 0