Agent Tools
Tools extend the Agent's capabilities beyond what the LLM alone can do. They are external services the Agent can call to gather additional information or perform actions. The Agent relies on tools to resolve complex queries that require multiple steps or extra information.
Available Tools
A tool can be:
- A deepset pipeline (like a RAG pipeline on local data)
- A deepset component (like a web search component)
- A custom function
- An MCP server that lets the Agent access an external service
Components as Tools
You can use any deepset component, including your own custom components, as a tool. The Agent automatically generates tool schemas from the component inputs.
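To illustrate the idea of automatic schema generation (a simplified sketch, not the actual deepset implementation), a tool schema can be derived from a callable's typed inputs. The web_search function and the schema shape below are invented for illustration:

```python
import inspect

# Map Python annotations to JSON-schema-style type names.
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_tool_schema(func):
    """Sketch: derive a tool schema from a callable's signature,
    similar in spirit to how the Agent builds schemas from component inputs."""
    schema = {"name": func.__name__, "parameters": {}}
    for name, param in inspect.signature(func).parameters.items():
        schema["parameters"][name] = {
            "type": TYPE_MAP.get(param.annotation, "string"),
            # Parameters without a default are treated as required.
            "required": param.default is inspect.Parameter.empty,
        }
    return schema

def web_search(query: str, top_k: int = 5):
    """Search the web for a query."""

print(build_tool_schema(web_search))
```

The same mechanism is why clear parameter names and type hints matter: they become the interface the LLM sees.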
Pipelines as Tools
The Agent accepts whole deepset pipelines as tools. It builds the tool parameter schema from the pipeline inputs. You can choose the pipeline inputs and outputs you want to expose.
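The exposure of chosen inputs and outputs can be sketched as a thin wrapper (run_rag_pipeline is a stand-in for a deployed pipeline; the helper and its names are hypothetical):

```python
def make_pipeline_tool(run_pipeline, expose_inputs, expose_outputs):
    """Sketch: expose only selected pipeline inputs and outputs
    as the tool's interface."""
    def tool(**kwargs):
        # Drop any arguments that aren't exposed pipeline inputs.
        accepted = {k: v for k, v in kwargs.items() if k in expose_inputs}
        result = run_pipeline(**accepted)
        # Return only the exposed outputs.
        return {k: v for k, v in result.items() if k in expose_outputs}
    return tool

def run_rag_pipeline(query: str = ""):
    # Placeholder for a real RAG pipeline over local data.
    return {"answers": [f"answer to: {query}"], "documents": ["internal doc"]}

rag_tool = make_pipeline_tool(
    run_rag_pipeline, expose_inputs={"query"}, expose_outputs={"answers"}
)
# The extra argument is dropped and the documents output stays hidden.
print(rag_tool(query="What is MCP?", debug=True))
```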
MCP Servers as Tools
MCP servers make it possible to integrate external services into the Agent through the Model Context Protocol (MCP). MCP is a protocol that standardizes how AI applications communicate with external tools and services. The MCP tool supports two transport options:
- Streamable HTTP for connecting to HTTP servers
- Server-Sent Events (SSE) for connecting to servers that support SSE
Streamable HTTP sends data in pieces rather than all at once. It's like streaming a video file: you start watching it while the rest is still loading.
SSE is one-way communication from the server to deepset. The server pushes real-time updates to the Agent. You can compare it to text message alerts from your bank: once you subscribe, you get a notification whenever there's activity. The connection stays open and the server pushes updates to you.
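The wire format behind SSE is simple: the server keeps the connection open and pushes events as blocks of "data:" lines separated by blank lines. A minimal parsing sketch (not the MCP tool's actual client code):

```python
def parse_sse(stream_text):
    """Sketch: extract event payloads from a Server-Sent Events stream.
    Events are separated by blank lines; payload lines start with 'data:'."""
    events = []
    for block in stream_text.split("\n\n"):
        data = [
            line[len("data:"):].lstrip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        if data:
            events.append("\n".join(data))
    return events

raw = "data: update 1\n\ndata: update 2\n\n"
print(parse_sse(raw))  # ['update 1', 'update 2']
```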
Some MCP servers require an authentication token to access the service.
You can connect only remote MCP servers. To use a local server, you must first deploy it to a remote server.
Tool Naming and Descriptions
Your tool names and descriptions should be clear and concise, and should describe the tool's capabilities and the data it expects and produces. This helps the Agent understand what the tool does and how to use it. Make sure it's easy for the LLM to differentiate between tools and their functions.
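A hypothetical illustration of the difference (the names and descriptions below are invented): a good description states what the tool does, what it expects, and what it returns, while a vague one gives the LLM nothing to choose by.

```python
# Clear: the LLM can tell what this tool is for and how to call it.
good_tool = {
    "name": "search_product_docs",
    "description": (
        "Searches the internal product documentation. "
        "Input: a natural-language question (string). "
        "Output: the most relevant documentation passages."
    ),
}

# Vague: indistinguishable from any other tool.
vague_tool = {"name": "tool1", "description": "does stuff"}
```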
Tools and Agent State
Tools can read from and write to the Agent's state. To learn about Agent state, see the Agent State documentation.
Reading from Agent State
Tools can automatically receive arguments from the Agent's state if the inputs they take match the names of the state keys. For example, if you define a repository key of type string in the Agent's state_schema and the tool's input parameters include repository: str, the Agent automatically fills that tool's parameter with the value stored in the state.
You can also explicitly map state keys to tool parameters to indicate which keys should be passed to the tool. To do so, use the inputs_from_state parameter when configuring the tool.
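The name-matching behavior can be sketched in plain Python (the helper and the list_issues tool are hypothetical, not deepset API):

```python
import inspect

def fill_from_state(tool_func, state, llm_args):
    """Sketch: parameters whose names match state keys are filled from
    state; the LLM only needs to supply the remaining arguments."""
    params = inspect.signature(tool_func).parameters
    filled = {name: state[name] for name in params if name in state}
    return {**filled, **llm_args}

def list_issues(repository: str, label: str):
    return f"issues in {repository} with label {label}"

state = {"repository": "my-org/my-repo"}        # value stored in Agent state
args = fill_from_state(list_issues, state, {"label": "bug"})
print(list_issues(**args))  # issues in my-org/my-repo with label bug
```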
Explicit Mapping Limits
When you explicitly map tool inputs using inputs_from_state, the tool only receives the inputs you specify. Even if the state_schema contains additional keys, the tool won't have access to them unless they're explicitly mapped.
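This filtering effect can be sketched as follows (the select_inputs helper is hypothetical; it mimics an inputs_from_state mapping of state keys to tool parameters):

```python
def select_inputs(state, inputs_from_state):
    """Sketch: with an explicit mapping, only the listed state keys
    reach the tool, even if the state holds more."""
    # inputs_from_state maps state key -> tool parameter name.
    return {param: state[key] for key, param in inputs_from_state.items() if key in state}

state = {"repository": "my-org/my-repo", "user_id": "u-42"}
tool_inputs = select_inputs(state, {"repository": "repository"})
print(tool_inputs)  # user_id never reaches the tool
```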
Writing to Agent State
By default, when an Agent uses a tool, all its outputs are converted to strings and appended as a single ChatMessage with the tool role. This approach ensures that every tool result is recorded in the conversation history, making it available for the LLM in the next turn.
Additionally, you can store specific tool outputs in custom state fields using the outputs_to_state parameter. When you do this, outputs are merged into state based on their declared types in the state schema:
- List types: New values are concatenated to the existing list. If the new value isn't a list, it's converted to one first.
- Other types: The existing value is replaced with the new one.
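The two merge rules above can be sketched like this (a simplified model, assuming the schema records each key's declared Python type):

```python
def merge_into_state(state, schema, key, value):
    """Sketch of outputs_to_state merging: list-typed keys are
    concatenated, everything else is replaced."""
    if schema.get(key) is list:
        existing = state.get(key, [])
        # A non-list value is wrapped in a list before concatenation.
        new = value if isinstance(value, list) else [value]
        state[key] = existing + new
    else:
        state[key] = value
    return state

schema = {"documents": list, "summary": str}
state = {"documents": ["doc1"], "summary": "old"}
merge_into_state(state, schema, "documents", "doc2")
merge_into_state(state, schema, "summary", "new")
print(state)  # {'documents': ['doc1', 'doc2'], 'summary': 'new'}
```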