
LLMMessagesRouter

Use LLMMessagesRouter to route chat messages to different pipeline branches by using a language model to classify them. The component sends messages to an LLM and matches the model's response against patterns you define to determine which output to use.

Key Features

  • Uses an LLM to classify chat messages and route them to different branches
  • Works with general-purpose LLMs and specialized moderation models such as Llama Guard
  • Configurable output names and regex patterns to match the model's classification response
  • Optional system prompt to customize model behavior or use model-specific moderation formats
  • Evaluates patterns in order — the first matching pattern determines the output
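The first-match rule above can be sketched in plain Python. This is an illustrative sketch, not the component's actual API: the `route` function and its `"unmatched"` fallback are hypothetical.

```python
import re

# Illustrative first-match routing: the first pattern that matches the
# LLM's classification reply selects the output. The names mirror the
# configuration example on this page.
output_names = ["safe", "unsafe"]
output_patterns = ["^safe$", "^unsafe"]

def route(llm_reply: str) -> str:
    """Return the first output name whose pattern matches the reply."""
    for name, pattern in zip(output_names, output_patterns):
        if re.search(pattern, llm_reply):
            return name
    return "unmatched"  # hypothetical fallback when no pattern matches

route("safe")        # -> "safe"
route("unsafe\nS1")  # -> "unsafe" (Llama Guard-style reply)
```

Because evaluation stops at the first match, broader patterns placed earlier in the list can shadow more specific ones listed later.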

Configuration

  1. Drag the LLMMessagesRouter component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Set chat_generator to the LLM instance to use for classification. Specify it by type and init parameters.
    2. Set output_names to the list of output connection names (for example, ["safe", "unsafe"]).
    3. Set output_patterns to the list of regular expressions matched against the LLM's output. Each pattern corresponds to the output name at the same index (for example, ["^safe$", "^unsafe"]). Refer to the model card for moderation models to understand their expected output format.
  4. Optionally, go to the Advanced tab to configure system_prompt.
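To see why pattern anchoring matters in step 3, here is a sketch of matching the example patterns against a Llama Guard-style reply. The reply string is illustrative; check your model's card for its exact output format.

```python
import re

# Llama Guard replies "safe", or "unsafe" followed by the violated
# category codes (e.g. "unsafe\nS1"). The anchored patterns from the
# configuration example distinguish the two cases.
output_patterns = ["^safe$", "^unsafe"]
reply = "unsafe\nS1"  # illustrative moderation-model output

matches = [bool(re.search(p, reply)) for p in output_patterns]
# "^safe$" does not match, "^unsafe" does, so the messages are routed
# to the second output
```

Note that `"^safe$"` alone would also match the start of `"unsafe..."` if written without the `^` anchor, which is why both anchors are used here.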

Connections

LLMMessagesRouter accepts a list of ChatMessage objects — only user and assistant messages are supported. It creates one output connection per name in output_names. Connect each output to the appropriate downstream component for that classification result.

Usage Example

The values below are illustrative; chat_generator must also be set to the ChatGenerator of your choice, as described in the configuration steps above.

components:
  LLMMessagesRouter:
    type: components.routers.llm_messages_router.LLMMessagesRouter
    init_parameters:
      output_names: ["safe", "unsafe"]
      output_patterns: ["^safe$", "^unsafe"]

Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | Required | A list of chat messages to route. Only user and assistant messages are supported. |

Outputs

The component creates one output per name in output_names. Each output is a List[ChatMessage] containing the messages routed to that branch.

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| chat_generator | ChatGenerator | Required | A ChatGenerator instance representing the LLM to use for classification. |
| output_names | List[str] | Required | A list of output connection names. Use these to connect the router to downstream components. |
| output_patterns | List[str] | Required | A list of regular expressions matched against the LLM's output. Each pattern corresponds to the output name at the same index. Patterns are evaluated in order. For moderation models, refer to the model card to understand the expected output format. |
| system_prompt | Optional[str] | None | An optional system prompt to customize the LLM's behavior. For moderation models, refer to the model card for supported customization options. |

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | Required | A list of chat messages to route. Only user and assistant messages are supported. |
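Since only user and assistant messages are supported, a caller might filter a conversation before passing it as messages. A minimal sketch, using plain dicts as illustrative stand-ins for ChatMessage objects:

```python
# Filter a conversation down to the roles the router accepts. The dicts
# here are stand-ins for ChatMessage objects, not the real class.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Open Settings and choose Account."},
]
messages = [m for m in conversation if m["role"] in ("user", "assistant")]
# messages keeps only the user and assistant entries
```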