
TransformersTextRouter

Use TransformersTextRouter to route text strings to different pipeline branches based on a category label predicted by a Hugging Face text classification model.

Key Features

  • Routes text to different outputs based on labels from a Hugging Face text classification model
  • Automatically fetches available labels from the model's configuration on Hugging Face Hub
  • Supports custom label overrides
  • Configurable device for model loading (CPU, GPU, or auto-detect)
  • Supports private Hugging Face models with API token authentication

Configuration

  1. Drag the TransformersTextRouter component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Set model to the name or path of a Hugging Face text classification model (for example, cross-encoder/nli-deberta-v3-small).
  4. Go to the Advanced tab to configure labels, device, token, and huggingface_pipeline_kwargs.

Connections

TransformersTextRouter accepts a single text string as input. It creates one output per label the model can predict. Connect each output to the appropriate downstream component for that category.

The available labels are specific to the model you use. Check the model card on Hugging Face to find the expected label names.
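The routing behavior described above can be sketched with a minimal stand-in. This is an illustrative mock, not the real component: the `classify` stub below replaces the Hugging Face text classification model, and `route` mimics the one-output-per-label contract.

```python
# Illustrative mock of label-based routing. A real TransformersTextRouter
# calls a Hugging Face text classification pipeline; here classify() is a
# hypothetical stand-in that always returns one of the known labels.
def classify(text: str) -> str:
    # Fake model prediction for demonstration purposes only.
    return "positive" if "great" in text else "negative"


def route(text: str, labels: list[str]) -> dict[str, str]:
    label = classify(text)
    if label not in labels:
        raise ValueError(f"Unexpected label: {label}")
    # The component exposes one output per label; only the output matching
    # the predicted label carries the input text.
    return {label: text}


print(route("This is great!", ["positive", "negative"]))
# {'positive': 'This is great!'}
```

Downstream components connect to exactly one of these labeled outputs, so only the branch matching the predicted category receives the text.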

Usage Example

components:
  TransformersTextRouter:
    type: components.routers.transformers_text_router.TransformersTextRouter
    init_parameters:
Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | A string of text to route. |

Outputs

The component creates one output per label the model can predict. Each output contains the text string if the model assigns that label.

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | | The name or path of a Hugging Face model for text classification. |
| labels | Optional[List[str]] | None | The list of labels. If not provided, the component fetches the labels from the model configuration on Hugging Face Hub using transformers.AutoConfig.from_pretrained. |
| device | Optional[ComponentDevice] | None | The device for loading the model. If None, automatically selects the default device. If a device or device map is specified in huggingface_pipeline_kwargs, it overrides this parameter. |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The API token for downloading private models from Hugging Face. If True, uses either the HF_API_TOKEN or HF_TOKEN environment variable. |
| huggingface_pipeline_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments for initializing the Hugging Face text classification pipeline. |
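The labels fallback can be sketched as follows. This is an illustrative mock: the real component reads labels via transformers.AutoConfig.from_pretrained, which requires a network call, so a plain dict stands in for the model config here, and `resolve_labels` is a hypothetical helper name.

```python
# Illustrative sketch of the `labels` fallback logic. Hugging Face model
# configs map class indices to label names in an id2label mapping; if the
# user supplies no labels, they are derived from that mapping.
def resolve_labels(labels, model_config):
    if labels is not None:
        return list(labels)
    id2label = model_config["id2label"]
    # Sort by class index so output order matches the model's label order.
    return [id2label[i] for i in sorted(id2label)]


config = {"id2label": {0: "contradiction", 1: "entailment", 2: "neutral"}}
print(resolve_labels(None, config))
# ['contradiction', 'entailment', 'neutral']
print(resolve_labels(["spam", "ham"], config))
# ['spam', 'ham']
```

An explicit labels list therefore always takes precedence over whatever the model config declares.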

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | A string of text to route. |