HuggingFaceAPITextEmbedder

Embed strings using Hugging Face APIs.

Basic Information

  • Type: haystack.components.embedders.hugging_face_api_text_embedder.HuggingFaceAPITextEmbedder
  • Components it can connect with:
    • Retrievers: HuggingFaceAPITextEmbedder can send embeddings to retrievers for semantic search.
    • Input: HuggingFaceAPITextEmbedder can receive query text to embed from the Input component.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | Required | Text to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| embedding | List[float] | | The embedding of the input text. |

Overview

HuggingFaceAPITextEmbedder embeds strings using Hugging Face APIs. Use it to embed queries for semantic search in retrieval pipelines.

info

This component embeds plain text. To embed a list of documents, use HuggingFaceAPIDocumentEmbedder.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the model you use to embed the query in your query pipeline, so that document and query embeddings live in the same vector space. For example, if you use CohereDocumentEmbedder to embed your documents, use CohereTextEmbedder with the same model to embed your queries.

You can use it with the following Hugging Face APIs:

  • Serverless Inference API
  • Inference Endpoints
  • Self-hosted Text Embeddings Inference

Authorization

Connect Haystack Platform to your Hugging Face account on the Integrations page. For detailed instructions, see Use Hugging Face Models. A token is required for the Serverless Inference API and Inference Endpoints.

Usage Example

Using the component in a pipeline

This query pipeline uses HuggingFaceAPITextEmbedder to embed a query and retrieve documents using semantic search:

```yaml
components:
  query_embedder:
    type: haystack.components.embedders.hugging_face_api_text_embedder.HuggingFaceAPITextEmbedder
    init_parameters:
      api_type: serverless_inference_api
      api_params:
        model: BAAI/bge-small-en-v1.5
      token:
        type: env_var
        env_vars:
          - HF_API_TOKEN
          - HF_TOKEN
        strict: false
      prefix:
      suffix:
      truncate: true
      normalize: false

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
      top_k: 20

connections:
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding

inputs:
  query:
    - query_embedder.text
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents
```

Parameters

Init parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_type | Union[HFEmbeddingAPIType, str] | | The type of Hugging Face API to use. Options: serverless_inference_api, inference_endpoints, text_embeddings_inference. |
| api_params | Dict[str, str] | | A dictionary containing either model (Hugging Face model ID, required for serverless_inference_api) or url (URL of the inference endpoint, required for inference_endpoints or text_embeddings_inference). |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The Hugging Face token to use as HTTP bearer authorization. Check your HF token in your account settings. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| truncate | Optional[bool] | True | Truncates the input text to the maximum length supported by the model. Applicable when api_type is text_embeddings_inference, or inference_endpoints if the backend uses Text Embeddings Inference. Ignored for serverless_inference_api. |
| normalize | Optional[bool] | False | Normalizes the embeddings to unit length. Applicable when api_type is text_embeddings_inference, or inference_endpoints if the backend uses Text Embeddings Inference. Ignored for serverless_inference_api. |

Run method parameters

These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | Required | Text to embed. |