Agentic Pipelines

Agentic pipelines extend traditional pipelines with conditional branches and tools the LLM can call. Despite these added capabilities, the pipelines remain predictable and repeatable.


Overview

Unlike AI Agents, agentic pipelines contain a predefined sequence of steps executed in a deterministic way. You can think of them as flowcharts: you define exactly what happens in each step, and the pipeline follows that path every time it runs. Even if there are conditional branches or tool calls, you define when and how they happen. Agentic pipelines are best for use cases where you know the optimal path and want to avoid the complexity of an AI Agent.

In deepset AI Platform, you can build agentic pipelines in two main ways:

  • Using conditional routing
  • Using an LLM with tool calling

You can also use combinations of these approaches to build more complex pipelines.

Conditional Routing

In this setup, a Generator (an LLM) is paired with a ConditionalRouter component. The LLM analyzes incoming data and classifies it, routing the input to the most appropriate pipeline branch.

How It Works

  1. The LLM receives the input data and, based on it, decides which branch to take.
  2. The LLM passes the data to the ConditionalRouter, which forwards it along the appropriate route.

Example Use Case

The LLM decides whether a query can be answered directly or needs additional data from a local database. It then sends the query to the ConditionalRouter, which forwards it along the route the LLM indicated.
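Here's a minimal sketch of this use case in Python, assuming Haystack's OpenAIGenerator and ConditionalRouter components. The model name, classification prompt, and route labels are illustrative placeholders, and the downstream branches (a retriever, an answering Generator) are omitted:

```python
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from haystack.components.routers import ConditionalRouter

# The classifier LLM is prompted to reply with a single label:
# "DIRECT" if it can answer on its own, "RETRIEVE" if it needs the database.
routes = [
    {
        "condition": "{{ 'RETRIEVE' in replies[0] }}",
        "output": "{{ query }}",
        "output_name": "needs_retrieval",  # connect this edge to a retrieval branch
        "output_type": str,
    },
    {
        "condition": "{{ 'RETRIEVE' not in replies[0] }}",
        "output": "{{ query }}",
        "output_name": "answer_directly",  # connect this edge to an answering Generator
        "output_type": str,
    },
]

pipeline = Pipeline()
pipeline.add_component("classifier", OpenAIGenerator(model="gpt-4o-mini"))  # assumes OPENAI_API_KEY is set
pipeline.add_component("router", ConditionalRouter(routes=routes))
pipeline.connect("classifier.replies", "router.replies")

query = "What does our internal style guide say about headings?"
result = pipeline.run({
    "classifier": {"prompt": f"Reply with DIRECT or RETRIEVE only. Query: {query}"},
    "router": {"query": query},
})
# result["router"] contains either "needs_retrieval" or "answer_directly"
```

Because the routes and their conditions are defined up front, the same classification always takes the same path, which is what keeps this design predictable.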

When To Use It

These types of systems are best if:

  • You need to handle different input types with specialized processing for each type.
  • You want predictable, transparent behavior with minimal complexity.

Pros

Systems with conditional routing are:

  • Clear
  • Controllable
  • Transparent
  • Predictable
  • Easy to debug

Cons

  • Limited flexibility compared to agent-based systems
  • No decision loop or iterative improvements

LLM with Tool Calling

This approach pairs an LLM with tools that a ToolInvoker component executes on its behalf. The LLM chooses the tool and generates a structured tool call, but ToolInvoker executes it. One limitation of this setup is that it can’t send the tool’s output back to the LLM, preventing the LLM from making follow-up decisions based on the result.

While functional, this setup is considered a predecessor to the more advanced Agent-based design. It’s less flexible and more complex to configure.

How It Works

  1. The LLM receives input and generates a tool call.
  2. ToolInvoker validates and executes the tool call.
  3. The result is returned as the final output.
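The following is a minimal sketch of these steps in Python, assuming Haystack's OpenAIChatGenerator and ToolInvoker components; the lookup_order function and its schema are hypothetical placeholders:

```python
from haystack import Pipeline
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.tools import ToolInvoker
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool

# Hypothetical stand-in tool; replace with your own function.
def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"

order_tool = Tool(
    name="lookup_order",
    description="Look up the status of an order by its ID.",
    parameters={
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
    function=lookup_order,
)

pipeline = Pipeline()
# The LLM sees the tool schema and generates a structured tool call...
pipeline.add_component("llm", OpenAIChatGenerator(model="gpt-4o-mini", tools=[order_tool]))
# ...which ToolInvoker validates and executes.
pipeline.add_component("invoker", ToolInvoker(tools=[order_tool]))
pipeline.connect("llm.replies", "invoker.messages")

result = pipeline.run({
    "llm": {"messages": [ChatMessage.from_user("What's the status of order 42?")]},
})
# The tool result is the pipeline's final output; it does not flow back to the LLM.
print(result["invoker"]["tool_messages"])
```

Note that the only connection runs forward, from llm.replies to invoker.messages; there is no edge carrying tool_messages back to the LLM, which is the limitation described below.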

When To Use It

  • Rare cases where a simple tool call is needed without the full complexity of an AI Agent

Limitations

Compared to pipelines using the Agent component, LLM with tool calling pipelines:

  • Are harder to set up
  • Cannot return the tool's output to the LLM
  • Cannot use the tool's output to make follow-up decisions
  • Are limited to a single tool call