HuggingFaceAPITextEmbedder
Embed strings using Hugging Face APIs.
Key Features
- Embeds query text using the Hugging Face Serverless Inference API, Inference Endpoints, or self-hosted Text Embeddings Inference.
- Use with the same model as HuggingFaceAPIDocumentEmbedder to ensure compatible embeddings.
- Outputs a float vector embedding for use with embedding retrievers.
- Optional truncation and normalization for TEI and Inference Endpoints backends.
This component embeds plain text. To embed a list of documents, use HuggingFaceAPIDocumentEmbedder.
Configuration
Connect Haystack Platform to your Hugging Face account on the Integrations page. For detailed instructions, see Use Hugging Face Models. A token is required for the Serverless Inference API and Inference Endpoints.
- Drag the HuggingFaceAPITextEmbedder component onto the canvas from the Component Library.
- Click the component to open the configuration panel.
- On the General tab:
- Set api_type to one of:
  - serverless_inference_api: Uses the free Hugging Face Serverless Inference API.
  - inference_endpoints: Uses a paid Hugging Face Inference Endpoint.
  - text_embeddings_inference: Uses a self-hosted Text Embeddings Inference service.
- Set api_params. For serverless_inference_api, provide {"model": "BAAI/bge-small-en-v1.5"}. For inference_endpoints or text_embeddings_inference, provide {"url": "<your-endpoint-url>"}.
- Go to the Advanced tab to configure the token, prefix, suffix, truncate, and normalize.
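The prefix and suffix settings simply wrap the query text before it is sent to the model; some embedding models are trained to expect such markers on queries. A minimal sketch of the idea (the `apply_affixes` helper and the `"query: "` prefix used by E5-style models are illustrative, not part of the component's API):

```python
def apply_affixes(text: str, prefix: str = "", suffix: str = "") -> str:
    # Mirrors what the embedder does with its prefix/suffix settings:
    # both strings are attached verbatim to the query text.
    return f"{prefix}{text}{suffix}"

# E5-family models, for example, expect queries to start with "query: ".
print(apply_affixes("What is semantic search?", prefix="query: "))
# → query: What is semantic search?
```

If your model requires no markers, leave prefix and suffix empty; the text is then embedded unchanged.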
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
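To see why the models must match: retrieval compares the query embedding against the stored document embeddings, which is only meaningful when both come from the same model and therefore live in the same vector space. A toy illustration (the `cosine_similarity` helper and the 3-dimensional vectors are made up for this sketch; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Query and document embeddings can only be compared when they share
    # a dimensionality -- i.e. when they come from the same model.
    if len(a) != len(b):
        raise ValueError("embeddings come from different models/dimensions")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

doc_embedding = [0.1, 0.3, 0.5]    # produced at indexing time
query_embedding = [0.2, 0.1, 0.4]  # must come from the same model
print(round(cosine_similarity(doc_embedding, query_embedding), 3))  # → 0.922
```

Even when two different models happen to produce vectors of the same length, their similarity scores are meaningless across models, so always pair the document and text embedders by model ID.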
Connections
HuggingFaceAPITextEmbedder accepts a text string as input (text). It outputs the computed embedding (embedding).
In a query pipeline, connect the pipeline's query input to text, then connect embedding to an embedding retriever's query_embedding input.
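The data flow can be pictured as two plain functions chained together (stubs only, to show the shapes of the inputs and outputs; the real components are HuggingFaceAPITextEmbedder and your embedding retriever):

```python
def query_embedder(text: str) -> list[float]:
    # Stand-in for the embedder: takes the pipeline's query string via its
    # `text` input and produces a float vector on its `embedding` output.
    return [float(ord(c)) for c in text]

def embedding_retriever(query_embedding: list[float]) -> list[str]:
    # Stand-in for a retriever: receives the vector on `query_embedding`
    # and returns matching documents.
    return [f"doc matched against {len(query_embedding)}-dim query vector"]

embedding = query_embedder("query")
print(embedding_retriever(embedding))
# → ['doc matched against 5-dim query vector']
```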
Usage Example
Using the component in a pipeline
This query pipeline uses HuggingFaceAPITextEmbedder to embed a query and retrieve documents using semantic search:
```yaml
components:
  query_embedder:
    type: haystack.components.embedders.hugging_face_api_text_embedder.HuggingFaceAPITextEmbedder
    init_parameters:
      api_type: serverless_inference_api
      api_params:
        model: BAAI/bge-small-en-v1.5
      token:
        type: env_var
        env_vars:
          - HF_API_TOKEN
          - HF_TOKEN
        strict: false
      prefix: ""
      suffix: ""
      truncate: true
      normalize: false

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
      top_k: 20

connections:
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding

inputs:
  query:
    - query_embedder.text
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents
```
Parameters
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| embedding | List[float] | | The embedding of the input text. |
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_type | Union[HFEmbeddingAPIType, str] | | The type of Hugging Face API to use. Options: serverless_inference_api, inference_endpoints, text_embeddings_inference. |
| api_params | Dict[str, str] | | A dictionary containing either model (Hugging Face model ID, required for serverless_inference_api) or url (URL of the inference endpoint, required for inference_endpoints or text_embeddings_inference). |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The Hugging Face token to use as HTTP bearer authorization. Check your HF token in your account settings. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| truncate | Optional[bool] | True | Truncates the input text to the maximum length supported by the model. Applicable when api_type is text_embeddings_inference or inference_endpoints if the backend uses Text Embeddings Inference. Ignored for serverless_inference_api. |
| normalize | Optional[bool] | False | Normalizes the embeddings to unit length. Applicable when api_type is text_embeddings_inference or inference_endpoints if the backend uses Text Embeddings Inference. Ignored for serverless_inference_api. |
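Setting normalize to true rescales each embedding to unit L2 length, which makes dot-product and cosine similarity rank documents identically. A quick sketch of the math (the `l2_normalize` helper is illustrative; the actual normalization happens in the Text Embeddings Inference backend):

```python
import math

def l2_normalize(embedding: list[float]) -> list[float]:
    # Divide by the vector's Euclidean (L2) norm so the result has length 1.
    norm = math.sqrt(sum(x * x for x in embedding))
    return [x / norm for x in embedding]

vec = l2_normalize([3.0, 4.0])
print(vec)               # → [0.6, 0.8]
print(math.hypot(*vec))  # → 1.0
```

Enable this when your document store or retriever scores by dot product and you want cosine-equivalent behavior; make sure the indexing-side embedder uses the same setting.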
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |