TransformersTextRouter
Use TransformersTextRouter to route text strings to different pipeline branches based on a category label predicted by a Hugging Face text classification model.
Key Features
- Routes text to different outputs based on labels from a Hugging Face text classification model
- Automatically fetches available labels from the model's configuration on Hugging Face Hub
- Supports custom label overrides
- Configurable device for model loading (CPU, GPU, or auto-detect)
- Supports private Hugging Face models with API token authentication
Configuration
- Drag the `TransformersTextRouter` component onto the canvas from the Component Library.
- Click the component to open the configuration panel.
- On the General tab, set `model` to the name or path of a Hugging Face text classification model (for example, `cross-encoder/nli-deberta-v3-small`).
- Go to the Advanced tab to configure `labels`, `device`, `token`, and `huggingface_pipeline_kwargs`.
Connections
TransformersTextRouter accepts a single text string as input. It creates one output per label the model can predict. Connect each output to the appropriate downstream component for that category.
The available labels are specific to the model you use. Check the model card on Hugging Face to find the expected label names.
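As an illustration of the output shape, here is a plain-Python sketch of dispatching on the router's output. No model is loaded; the labels `entailment`, `contradiction`, and `neutral` are those used by the `cross-encoder/nli-deberta-v3-small` example model, and the handler functions are hypothetical stand-ins for downstream components.

```python
# Hypothetical sketch: the router emits the input text under exactly one
# label key, so downstream wiring amounts to dispatching on that key.
def dispatch(router_output: dict) -> str:
    handlers = {
        "entailment": lambda text: f"entailment branch: {text}",
        "contradiction": lambda text: f"contradiction branch: {text}",
        "neutral": lambda text: f"neutral branch: {text}",
    }
    # Only one label key is present in the output dictionary.
    label, text = next(iter(router_output.items()))
    return handlers[label](text)

print(dispatch({"neutral": "The sky is blue."}))
# neutral branch: The sky is blue.
```

In Pipeline Builder, this dispatch happens implicitly: each label output is a separate connection point, and only the branch matching the predicted label receives the text.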
Usage Example
```yaml
components:
  TransformersTextRouter:
    type: components.routers.transformers_text_router.TransformersTextRouter
    init_parameters:
```
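A filled-in configuration that sets the required `model` parameter might look like this (the model name is illustrative; substitute any Hugging Face text classification model):

```yaml
components:
  TransformersTextRouter:
    type: components.routers.transformers_text_router.TransformersTextRouter
    init_parameters:
      model: cross-encoder/nli-deberta-v3-small
```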
Parameters
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | A string of text to route. |
Outputs
The component creates one output per label the model can predict. Each output contains the text string if the model assigns that label.
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | | The name or path of a Hugging Face model for text classification. |
| labels | Optional[List[str]] | None | The list of labels. If not provided, the component fetches the labels from the model configuration on Hugging Face Hub using transformers.AutoConfig.from_pretrained. |
| device | Optional[ComponentDevice] | None | The device for loading the model. If None, the default device is selected automatically. A device or device map specified in huggingface_pipeline_kwargs overrides this parameter. |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The API token for downloading private models from Hugging Face. By default, the token is read from the HF_API_TOKEN or HF_TOKEN environment variable. |
| huggingface_pipeline_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments for initializing the Hugging Face text classification pipeline. |
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | A string of text to route. |