SentenceTransformersSimilarityRanker
Ranks documents based on their semantic similarity to the query.
Basic Information
- Type: haystack_integrations.rankers.sentence_transformers_similarity.SentenceTransformersSimilarityRanker
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| query | str | | The input query to compare the documents to. |
| documents | List[Document] | | A list of documents to be ranked. |
| top_k | Optional[int] | None | The maximum number of documents to return. |
| scale_score | Optional[bool] | None | If True, scales the raw logit predictions using a Sigmoid activation function. If False, disables scaling of the raw logit predictions. If set, overrides the value set at initialization. |
| score_threshold | Optional[float] | None | Return only documents with a score above this threshold. If set, overrides the value set at initialization. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| documents | List[Document] | A list of documents closest to the query, sorted from most similar to least similar. |
Overview
Work in Progress
Bear with us while we work on adding pipeline examples and the most common component connections.
Ranks documents based on their semantic similarity to the query.
It uses a pre-trained cross-encoder model from Hugging Face, which scores the query and each document together as a pair rather than embedding them separately.
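The overall ranking behavior can be sketched in plain Python. This is an illustrative approximation, not the component's actual implementation: the hypothetical `score_pair` callable stands in for the cross-encoder model, while the sigmoid scaling, threshold filtering, and top-k truncation mirror the `scale_score`, `score_threshold`, and `top_k` parameters described below.

```python
import math

def rank(query, documents, score_pair, top_k=10, scale_score=True, score_threshold=None):
    """Sketch of cross-encoder reranking: score each (query, document) pair,
    optionally squash raw logits with a sigmoid, filter by threshold,
    and return the highest-scoring documents first."""
    scored = []
    for doc in documents:
        score = score_pair(query, doc)  # raw logit from the cross-encoder
        if scale_score:
            score = 1.0 / (1.0 + math.exp(-score))  # sigmoid activation
        if score_threshold is None or score > score_threshold:
            scored.append((doc, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy scorer: word overlap between query and document (stands in for the model).
def toy_scorer(query, doc):
    return float(len(set(query.split()) & set(doc.split())))

docs = ["the cat sat", "dogs bark loudly", "a cat and a dog"]
ranked = rank("cat dog", docs, toy_scorer, top_k=2)
```

With `top_k=2`, only the two best-scoring documents are returned, each paired with its sigmoid-scaled score.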
Usage Example
```yaml
components:
  SentenceTransformersSimilarityRanker:
    type: components.rankers.sentence_transformers_similarity.SentenceTransformersSimilarityRanker
    init_parameters:
      model: cross-encoder/ms-marco-MiniLM-L-6-v2
      top_k: 10
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | Union[str, Path] | cross-encoder/ms-marco-MiniLM-L-6-v2 | The ranking model. Pass a local path or the Hugging Face model name of a cross-encoder model. |
| device | Optional[ComponentDevice] | None | The device on which the model is loaded. If None, the default device is automatically selected. |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The API token to download private models from Hugging Face. |
| top_k | int | 10 | The maximum number of documents to return per query. |
| query_prefix | str | "" | A string to add at the beginning of the query text before ranking. Use it to prepend the text with an instruction, as required by reranking models like bge. |
| document_prefix | str | "" | A string to add at the beginning of each document before ranking. You can use it to prepend the document with an instruction, as required by embedding models like bge. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed with the document. |
| embedding_separator | str | \n | Separator to concatenate metadata fields to the document. |
| scale_score | bool | True | If True, scales the raw logit predictions using a Sigmoid activation function. If False, disables scaling of the raw logit predictions. |
| score_threshold | Optional[float] | None | Return only documents with a score above this threshold. |
| trust_remote_code | bool | False | If False, allows only Hugging Face verified model architectures. If True, allows custom models and scripts. |
| model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoModelForSequenceClassification.from_pretrained when loading the model. Refer to specific model documentation for available kwargs. |
| tokenizer_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoTokenizer.from_pretrained when loading the tokenizer. Refer to specific model documentation for available kwargs. |
| config_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoConfig.from_pretrained when loading the model configuration. |
| backend | Literal['torch', 'onnx', 'openvino'] | torch | The backend to use for the Sentence Transformers model. Choose from "torch", "onnx", or "openvino". Refer to the Sentence Transformers documentation for more information on acceleration and quantization options. |
| batch_size | int | 16 | The batch size to use for inference. The higher the batch size, the more memory is required. If you run into memory issues, reduce the batch size. |
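The `meta_fields_to_embed` and `embedding_separator` parameters control what text the model actually sees for each document. A minimal sketch of that concatenation, using a plain dict in place of a Haystack `Document` (the helper name `build_ranking_text` is hypothetical):

```python
def build_ranking_text(content, meta, meta_fields_to_embed=None, embedding_separator="\n"):
    """Prepend the selected metadata fields to the document content,
    joined by the separator, so the cross-encoder can score them too."""
    fields = [str(meta[key]) for key in (meta_fields_to_embed or []) if meta.get(key) is not None]
    return embedding_separator.join(fields + [content])

text = build_ranking_text(
    "Haystack is an open-source framework.",
    {"title": "About Haystack", "year": 2024},
    meta_fields_to_embed=["title"],
)
# → "About Haystack\nHaystack is an open-source framework."
```

Only the listed fields are included, in the order given; fields missing from a document's metadata are skipped.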
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| query | str | | The input query to compare the documents to. |
| documents | List[Document] | | A list of documents to be ranked. |
| top_k | Optional[int] | None | The maximum number of documents to return. |
| scale_score | Optional[bool] | None | If True, scales the raw logit predictions using a Sigmoid activation function. If False, disables scaling of the raw logit predictions. If set, overrides the value set at initialization. |
| score_threshold | Optional[float] | None | Return only documents with a score above this threshold. If set, overrides the value set at initialization. |
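The override semantics described above can be sketched as a single rule (an assumption consistent with the parameter descriptions: a run-time value applies only when explicitly set, otherwise the init-time value is kept):

```python
def resolve(run_value, init_value):
    """Run-time value overrides the init value only when explicitly set."""
    return init_value if run_value is None else run_value

# Init: top_k=10, scale_score=True; run() called with top_k=3 only.
assert resolve(3, 10) == 3          # run-time override applies
assert resolve(None, True) is True  # unset at run time → init value kept
```

Note that because the check is against `None`, passing `scale_score=False` at run time does override an init-time `True`.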