AI Agents
An AI Agent uses a large language model (LLM) to generate accurate responses to complex queries. When you initialize an Agent, you provide it with tools—these can be pipelines, components, or custom functions. The Agent then uses these tools iteratively to arrive at the best possible answer. Agents are ideal for tasks that require multiple steps or gathering additional information before producing a final response.
Beta Component
Agent is part of the Haystack experimental package, which means it may change in future releases. It’s intended for testing and exploration only—do not use it in production. Updates to the Agent component may break any pipelines that rely on it.
Currently, you can't view the Agent's prompt in Playground or Prompt Explorer.
What Are AI Agents?
AI Agents are intelligent systems that perform tasks autonomously by reasoning, making decisions, and interacting with external tools. At their core, they rely on LLMs to determine what actions to take. By working with tools like components, pipelines, or custom functions, Agents can handle tasks far more complex than what traditional chatbots or generative question-answering systems can manage.
Agents are particularly useful when tasks require autonomous decision-making and interactions with multiple tools. When given a query, an Agent:
- Evaluates whether it can resolve the query directly.
- If not, selects and calls a tool that can help.
- Uses the tool's output either to answer the query or as input for another tool.
- Repeats this process until it meets the defined exit condition or finds the best answer.
How Agents Work
Agents follow this general workflow:
- An Agent receives a user message and sends it to the LLM together with a list of tools.
- The LLM decides whether to respond directly or call a tool.
- If the LLM returns an answer, the Agent stops and returns the result.
- If the LLM returns a tool call, the Agent runs the tool.
- The Agent receives the tool call result and checks if it matches the exit condition.
- If it does, the Agent stops and returns all the messages.
- If it doesn't, the Agent sends the conversation history, including the tool output, back to the LLM and the loop starts over.
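The loop above can be sketched in plain Python. This is an illustrative stand-in, not the actual Haystack implementation: `call_llm` is a placeholder for the chat model, and `tools` is a plain dict of callables.

```python
def run_agent(user_message, call_llm, tools, exit_condition, max_steps=10):
    """Minimal sketch of the Agent loop described above (illustrative only).

    call_llm(messages, tools) stands in for the chat model: it returns either
    {"text": ...} for a direct answer or {"tool": name, "args": {...}} for a
    tool call. `tools` maps tool names to plain Python callables.
    """
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_llm(messages, tools)
        if "text" in reply:
            # Direct answer: the Agent stops and returns the result.
            messages.append({"role": "assistant", "content": reply["text"]})
            return messages
        # Tool call: run the tool and append its result to the conversation.
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
        if exit_condition(reply["tool"], result):
            # Exit condition met: stop and return all the messages.
            return messages
        # Otherwise the conversation history, including the tool output,
        # goes back to the LLM and the loop starts over.
    return messages  # step budget exhausted
```

In a real Agent, the exit condition and step budget are configuration, not arguments, but the control flow is the same.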

Implementation
Agents are available as pipeline components in the deepset AI Platform. You can find them in the Component Library within Pipeline Builder.
To use an Agent:
- Drag the Agent component onto your canvas.
- Configure it using YAML.
- Connect it to other components in your pipeline.
You can use multiple Agents in a single pipeline.
Note: The deepset Agent is currently in beta. It may change in future releases, so we don't recommend using it in production scenarios.
For configuration details and usage examples, see the Agent documentation.
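In YAML, an Agent is declared like any other pipeline component. The snippet below is purely illustrative: the exact `type` path and parameter names depend on your platform version, so treat every field here as an assumption and check the Agent reference for the authoritative schema.

```yaml
components:
  agent:
    type: haystack_experimental.components.agents.agent.Agent  # illustrative path
    init_parameters:
      chat_generator:
        type: haystack.components.generators.chat.OpenAIChatGenerator
        init_parameters:
          model: gpt-4o-mini
      tools: []                   # tool definitions go here
      exit_conditions: ["text"]   # stop once the LLM returns a plain text answer
```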
Available Pipeline Templates
You can find the following Agent-based pipeline templates under the Agents group:
- GitHub Issue Resolver Agent: Helps resolve GitHub issues by analyzing the repository content and issue comments.
- Intelligent Research Agent (RAG + Web): Answers questions using a local knowledge base and falls back on web search if needed.
- Intelligent Research Agent (Web): Searches the web, gathers results, and generates a single consolidated answer.
To use these templates, make sure your deepset AI Platform is connected to the required model provider, as described in each template.
Tools
Tools are external services or components the Agent uses to gather more information or perform tasks. The Agent relies on these tools to resolve complex queries.
A tool can be:
- A pipeline (like a RAG pipeline on local data)
- A component (such as SerperDevWebSearch)
- A calculator, API, or other service
- A custom function
The Agent uses an internal ToolInvoker to call these tools, gather their results, and feed them back into the conversation. Agents can use tools multiple times during a run.
You can configure:
- How many times a tool can be used
- When the Agent should stop, based on tool usage
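To make the usage limit concrete, here is a small sketch of the bookkeeping involved. Agent frameworks enforce this internally; the wrapper below is a hypothetical illustration, not a Haystack API.

```python
def with_call_limit(tool_fn, max_calls):
    """Wrap a tool so it can be invoked at most max_calls times per run.

    Illustrative sketch: real agents track tool usage internally, but the
    accounting looks roughly like this.
    """
    state = {"calls": 0}

    def limited(*args, **kwargs):
        if state["calls"] >= max_calls:
            raise RuntimeError(f"tool exceeded its budget of {max_calls} calls")
        state["calls"] += 1
        return tool_fn(*args, **kwargs)

    return limited
```

An agent could treat the raised error as a signal to stop calling that tool and either answer with what it has or try a different tool.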
Defining Tools
When defining a tool, you must:
- Give it a name
- Provide a description that helps the LLM understand when and how to use it. The LLM uses this description to decide when a tool is relevant to a query.
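A tool definition pairs a callable with the metadata the LLM sees. The sketch below mimics the shape of such a definition without depending on any particular library; the `ToolSpec` class, its fields, and the `web_search` function are all illustrative stand-ins, not the Haystack API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolSpec:
    """Illustrative stand-in for a tool definition (not the Haystack API)."""
    name: str           # the name the LLM uses to call the tool
    description: str    # tells the LLM when and how the tool is relevant
    parameters: dict    # JSON-schema-style description of the arguments
    function: Callable  # the callable that actually runs

def web_search(query: str) -> str:
    # Placeholder body; a real tool would call a search service here.
    return f"results for: {query}"

search_tool = ToolSpec(
    name="web_search",
    description="Search the web for up-to-date information. "
                "Use when the answer is not in the local knowledge base.",
    parameters={
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
    function=web_search,
)
```

Note how the description does double duty: it is the only signal the LLM has for deciding whether this tool fits the current query, so it should state both what the tool does and when to prefer it.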
For details on how to configure and use tools, see Agent and ToolInvoker documentation.