LLMMessagesRouter
Use LLMMessagesRouter to route chat messages to different pipeline branches by using a language model to classify them. The component sends messages to an LLM and matches the model's response against patterns you define to determine which output to use.
Key Features
- Uses an LLM to classify chat messages and route them to different branches
- Works with general-purpose LLMs and specialized moderation models such as Llama Guard
- Configurable output names and regex patterns to match the model's classification response
- Optional system prompt to customize model behavior or use model-specific moderation formats
- Evaluates patterns in order — the first matching pattern determines the output
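The first-match behavior listed above can be pictured with a minimal sketch in plain Python. The `route` helper is hypothetical and only mimics the documented ordering; it is not the component's actual implementation.

```python
import re

def route(llm_reply, output_names, output_patterns):
    """Return the first output name whose pattern matches the LLM's reply.

    Patterns are tried in the order given; the first match wins.
    Returns None when no pattern matches. (Illustrative sketch only.)
    """
    for name, pattern in zip(output_names, output_patterns):
        if re.search(pattern, llm_reply):
            return name
    return None

# Example configuration from this page
names = ["safe", "unsafe"]
patterns = ["^safe$", "^unsafe"]

print(route("safe", names, patterns))        # prints: safe
print(route("unsafe\nS1", names, patterns))  # prints: unsafe
```

Note that `"^unsafe"` is deliberately not anchored at the end, so it still matches a multi-line reply such as `"unsafe\nS1"`.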
Configuration
- Drag the `LLMMessagesRouter` component onto the canvas from the Component Library.
- Click the component to open the configuration panel.
- On the General tab:
  - Set `chat_generator` to the LLM instance to use for classification. Specify it by type and init parameters.
  - Set `output_names` to the list of output connection names (for example, `["safe", "unsafe"]`).
  - Set `output_patterns` to the list of regular expressions matched against the LLM's output. Each pattern corresponds to the output name at the same index (for example, `["^safe$", "^unsafe"]`). For moderation models, refer to the model card to understand their expected output format.
- Go to the Advanced tab to configure `system_prompt`.
Connections
LLMMessagesRouter accepts a list of ChatMessage objects — only user and assistant messages are supported. It creates one output connection per name in output_names. Connect each output to the appropriate downstream component for that classification result.
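The fan-out described here can be sketched as follows. Both the `ChatMessage` stand-in and the `fan_out` helper are hypothetical simplifications, assuming the matched branch receives the full message list while the other branches stay empty.

```python
import re
from dataclasses import dataclass

@dataclass
class ChatMessage:
    """Minimal stand-in for the framework's ChatMessage (illustrative only)."""
    role: str
    text: str

def fan_out(messages, llm_reply, output_names, output_patterns):
    """Send the message list to the first matching branch; others stay empty."""
    outputs = {name: [] for name in output_names}
    for name, pattern in zip(output_names, output_patterns):
        if re.search(pattern, llm_reply):
            outputs[name] = list(messages)
            break
    return outputs

msgs = [ChatMessage("user", "How do I reset my password?")]
branches = fan_out(msgs, "safe", ["safe", "unsafe"], ["^safe$", "^unsafe"])
# branches["safe"] now holds the messages; branches["unsafe"] is empty
```

Downstream components connected to the `unsafe` output would receive nothing for this request, so only the matched branch runs.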
Usage Example
```yaml
components:
  LLMMessagesRouter:
    type: components.routers.llm_messages_router.LLMMessagesRouter
    init_parameters:
      # Illustrative values from this page; also configure chat_generator
      # with the type and init parameters of your LLM.
      output_names: ["safe", "unsafe"]
      output_patterns: ["^safe$", "^unsafe"]
```
Parameters
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| `messages` | `List[ChatMessage]` | Required | A list of chat messages to route. Only user and assistant messages are supported. |
Outputs
The component creates one output per name in output_names. Each output is a List[ChatMessage] containing the messages routed to that branch.
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `chat_generator` | `ChatGenerator` | Required | A ChatGenerator instance representing the LLM to use for classification. |
| `output_names` | `List[str]` | Required | A list of output connection names. Use these to connect the router to downstream components. |
| `output_patterns` | `List[str]` | Required | A list of regular expressions matched against the LLM's output. Each pattern corresponds to an output name at the same index. Patterns are evaluated in order. For moderation models, refer to the model card to understand the expected output format. |
| `system_prompt` | `Optional[str]` | `None` | An optional system prompt to customize the LLM's behavior. For moderation models, refer to the model card for supported customization options. |
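Because patterns are evaluated in order as regular expressions, an unanchored pattern can shadow a later one, which is why the example patterns on this page are anchored with `^`. A quick sketch (the `first_match` helper is hypothetical):

```python
import re

def first_match(reply, patterns):
    """Index of the first pattern that matches the reply, else None."""
    for i, pattern in enumerate(patterns):
        if re.search(pattern, reply):
            return i
    return None

# Unanchored "safe" also matches inside "unsafe", so it wins by order:
print(first_match("unsafe", ["safe", "unsafe"]))     # prints: 0
# Anchored patterns avoid the shadowing:
print(first_match("unsafe", ["^safe$", "^unsafe"]))  # prints: 1
```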
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `messages` | `List[ChatMessage]` | Required | A list of chat messages to route. Only user and assistant messages are supported. |