TransformersSimilarityRanker

Ranks documents based on their semantic similarity to the query.

Basic Information

  • Type: haystack_integrations.rankers.transformers_similarity.TransformersSimilarityRanker

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | | The input query to compare the documents to. |
| documents | List[Document] | | A list of documents to be ranked. |
| top_k | Optional[int] | None | The maximum number of documents to return. |
| scale_score | Optional[bool] | None | If True, scales the raw logit predictions using a sigmoid activation function. If False, disables scaling of the raw logit predictions. |
| calibration_factor | Optional[float] | None | Use this factor to calibrate probabilities with sigmoid(logits * calibration_factor). Used only if scale_score is True. |
| score_threshold | Optional[float] | None | Returns only documents with a score above this threshold. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | A list of documents closest to the query, sorted from most similar to least similar. |

Overview

Work in Progress

Bear with us while we work on adding pipeline examples and the most common component connections.

Ranks documents based on their semantic similarity to the query.

It uses a pre-trained cross-encoder model from Hugging Face, which scores each query-document pair jointly rather than embedding the query and documents separately.

Note: This component is considered legacy and will no longer receive updates. It may be deprecated in a future release, with removal following after a deprecation period. Consider using SentenceTransformersSimilarityRanker instead, which provides the same functionality along with additional features.
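To illustrate the idea, here is a minimal, hypothetical sketch of cross-encoder-style reranking: each (query, document) pair is scored jointly, then documents are sorted by score. The `rerank` and `overlap_scorer` names are illustrative only; the toy term-overlap scorer stands in for the actual Hugging Face cross-encoder model.

```python
def rerank(query, documents, scorer, top_k=10):
    """Sketch of cross-encoder reranking: score each (query, document)
    pair jointly, then sort documents by score, descending."""
    scored = [(doc, scorer(query, doc)) for doc in documents]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in scored[:top_k]]

def overlap_scorer(query, doc):
    # Toy stand-in for the model: counts query terms found in the document.
    return sum(term in doc.lower() for term in query.lower().split())

docs = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
    "The capital of France is Paris.",
]
print(rerank("capital of France", docs, overlap_scorer, top_k=2))
```

The real component performs the pair scoring with a transformer model in batches, but the sort-and-truncate flow is the same.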

Usage Example

```yaml
components:
  TransformersSimilarityRanker:
    type: components.rankers.transformers_similarity.TransformersSimilarityRanker
    init_parameters: {}
```

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | Union[str, Path] | cross-encoder/ms-marco-MiniLM-L-6-v2 | The ranking model. Pass a local path or the Hugging Face model name of a cross-encoder model. |
| device | Optional[ComponentDevice] | None | The device on which the model is loaded. If None, the default device is used. |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The API token to download private models from Hugging Face. |
| top_k | int | 10 | The maximum number of documents to return per query. |
| query_prefix | str | "" | A string to add at the beginning of the query text before ranking. Use it to prepend the text with an instruction, as required by reranking models like bge. |
| document_prefix | str | "" | A string to add at the beginning of each document before ranking. You can use it to prepend the document with an instruction, as required by embedding models like bge. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed with the document. |
| embedding_separator | str | \n | Separator used to concatenate the metadata fields to the document. |
| scale_score | bool | True | If True, scales the raw logit predictions using a sigmoid activation function. If False, disables scaling of the raw logit predictions. |
| calibration_factor | Optional[float] | 1.0 | Use this factor to calibrate probabilities with sigmoid(logits * calibration_factor). Used only if scale_score is True. |
| score_threshold | Optional[float] | None | Returns only documents with a score above this threshold. |
| model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoModelForSequenceClassification.from_pretrained when loading the model. Refer to specific model documentation for available kwargs. |
| tokenizer_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for AutoTokenizer.from_pretrained when loading the tokenizer. Refer to specific model documentation for available kwargs. |
| batch_size | int | 16 | The batch size to use for inference. The higher the batch size, the more memory is required. If you run into memory issues, reduce the batch size. |
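The interaction between `meta_fields_to_embed` and `embedding_separator` can be sketched as follows: selected metadata fields are prepended to the document content, joined by the separator, before the pair is scored. This is a simplified sketch based on the parameter descriptions above; the function name `text_to_rank` is illustrative, not part of the component's API.

```python
def text_to_rank(content, meta, meta_fields_to_embed=None, embedding_separator="\n"):
    """Build the text scored against the query: selected metadata fields
    are prepended to the document content, joined by the separator."""
    fields = [
        str(meta[f])
        for f in (meta_fields_to_embed or [])
        if meta.get(f) is not None
    ]
    return embedding_separator.join(fields + [content])

print(text_to_rank(
    "Haystack is an LLM framework.",
    {"title": "Haystack", "year": 2024},
    meta_fields_to_embed=["title"],
))
```

With `meta_fields_to_embed=["title"]` and the default separator, the model would see the title on its own line, followed by the document content.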

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | | The input query to compare the documents to. |
| documents | List[Document] | | A list of documents to be ranked. |
| top_k | Optional[int] | None | The maximum number of documents to return. |
| scale_score | Optional[bool] | None | If True, scales the raw logit predictions using a sigmoid activation function. If False, disables scaling of the raw logit predictions. |
| calibration_factor | Optional[float] | None | Use this factor to calibrate probabilities with sigmoid(logits * calibration_factor). Used only if scale_score is True. |
| score_threshold | Optional[float] | None | Returns only documents with a score above this threshold. |
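How `scale_score`, `calibration_factor`, `score_threshold`, and `top_k` combine at run time can be sketched in pure Python. This is a simplified model under the assumption that the threshold is applied to the scaled scores and `top_k` is applied last; `postprocess` is a hypothetical name, not part of the component's API.

```python
import math

def postprocess(raw_logits, scale_score=True, calibration_factor=1.0,
                score_threshold=None, top_k=10):
    """Sketch of run-time score handling: optionally squash raw logits
    with sigmoid(logit * calibration_factor), sort highest first, drop
    scores at or below the threshold, and keep at most top_k results."""
    if scale_score:
        scores = [1.0 / (1.0 + math.exp(-s * calibration_factor))
                  for s in raw_logits]
    else:
        scores = list(raw_logits)
    scores.sort(reverse=True)
    if score_threshold is not None:
        scores = [s for s in scores if s > score_threshold]
    return scores[:top_k]

print(postprocess([2.0, -1.0, 0.5], score_threshold=0.5, top_k=2))
```

With scaling enabled, raw logits 2.0, -1.0, and 0.5 map to sigmoid values of roughly 0.88, 0.27, and 0.62; the threshold of 0.5 then drops the middle one, and the two remaining scores are returned in descending order.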