HuggingFaceAPITextEmbedder
Embed strings using Hugging Face APIs.
Basic Information
- Type: haystack.components.embedders.hugging_face_api_text_embedder.HuggingFaceAPITextEmbedder
- Components it can connect with:
  - Retrievers: HuggingFaceAPITextEmbedder can send embeddings to retrievers for semantic search.
  - Input: HuggingFaceAPITextEmbedder can receive the query text to embed from the Input component.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| embedding | List[float] | | The embedding of the input text. |
Overview
HuggingFaceAPITextEmbedder embeds strings using Hugging Face APIs. Use it to embed queries for semantic search in retrieval pipelines.
This component embeds plain text. To embed a list of documents, use HuggingFaceAPIDocumentEmbedder.
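For a quick sense of the interface, here is a minimal standalone sketch. The model ID and query text are assumptions for the example; the token falls back to the HF_API_TOKEN or HF_TOKEN environment variable:

```python
from haystack.components.embedders import HuggingFaceAPITextEmbedder
from haystack.utils import Secret

# Minimal sketch: embed a single query string with a Hub-hosted model.
# The model ID below is an example; any sentence-embedding model works.
embedder = HuggingFaceAPITextEmbedder(
    api_type="serverless_inference_api",
    api_params={"model": "BAAI/bge-small-en-v1.5"},
    token=Secret.from_env_var(["HF_API_TOKEN", "HF_TOKEN"], strict=False),
)

result = embedder.run(text="What is the capital of France?")
print(len(result["embedding"]))  # dimensionality of the embedding vector
```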
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline. For example, if you use CohereDocumentEmbedder to embed your documents, use CohereTextEmbedder with the same model to embed your queries.
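A minimal sketch of this rule, assuming BAAI/bge-small-en-v1.5 as the shared model:

```python
from haystack.components.embedders import (
    HuggingFaceAPIDocumentEmbedder,
    HuggingFaceAPITextEmbedder,
)

MODEL = "BAAI/bge-small-en-v1.5"  # example model; use the same ID in both pipelines

# Indexing pipeline side: embeds whole Documents.
doc_embedder = HuggingFaceAPIDocumentEmbedder(
    api_type="serverless_inference_api",
    api_params={"model": MODEL},
)

# Query pipeline side: embeds the query text with the same model, so
# query vectors and document vectors share one embedding space.
query_embedder = HuggingFaceAPITextEmbedder(
    api_type="serverless_inference_api",
    api_params={"model": MODEL},
)
```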
You can use it with the following Hugging Face APIs:
- Free Serverless Inference API: Experiment with models hosted on Hugging Face Hub. It's rate-limited and not meant for production.
- Paid Inference Endpoints: A private instance of the model deployed by Hugging Face.
- Self-hosted Text Embeddings Inference: A toolkit for efficiently deploying and serving text embedding models on-premise.
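The API type determines what api_params expects. The sketch below shows all three configurations; the endpoint URLs are placeholders, not real services:

```python
from haystack.components.embedders import HuggingFaceAPITextEmbedder

# Serverless Inference API: identify the model by its Hub ID.
serverless = HuggingFaceAPITextEmbedder(
    api_type="serverless_inference_api",
    api_params={"model": "BAAI/bge-small-en-v1.5"},
)

# Paid Inference Endpoints: point at your private endpoint URL.
endpoint = HuggingFaceAPITextEmbedder(
    api_type="inference_endpoints",
    api_params={"url": "https://<your-endpoint>.endpoints.huggingface.cloud"},  # placeholder
)

# Self-hosted Text Embeddings Inference: point at your TEI server.
tei = HuggingFaceAPITextEmbedder(
    api_type="text_embeddings_inference",
    api_params={"url": "http://localhost:8080"},  # assumed local TEI address
)
```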
Authorization
Connect Haystack Platform to your Hugging Face account on the Integrations page. For detailed instructions, see Use Hugging Face Models. A token is required for the Serverless Inference API and Inference Endpoints.
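The component resolves the token from the environment by default. The sketch below spells out that default; strict=False means no error is raised when neither variable is set, which suits a self-hosted TEI server that needs no authentication:

```python
from haystack.utils import Secret

# Default token resolution: try HF_API_TOKEN first, then HF_TOKEN.
# strict=False: missing variables don't raise; the token is simply absent.
token = Secret.from_env_var(["HF_API_TOKEN", "HF_TOKEN"], strict=False)
```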
Usage Example
Using the component in a pipeline
This query pipeline uses HuggingFaceAPITextEmbedder to embed a query and retrieve documents using semantic search:
```yaml
components:
  query_embedder:
    type: haystack.components.embedders.hugging_face_api_text_embedder.HuggingFaceAPITextEmbedder
    init_parameters:
      api_type: serverless_inference_api
      api_params:
        model: BAAI/bge-small-en-v1.5
      token:
        type: env_var
        env_vars:
          - HF_API_TOKEN
          - HF_TOKEN
        strict: false
      prefix: ""
      suffix: ""
      truncate: true
      normalize: false

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
      top_k: 20

connections:
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding

inputs:
  query:
    - query_embedder.text
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents
```
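If you prefer to build the same query pipeline in Python rather than YAML, a rough equivalent looks like this. The OpenSearch host and query string are assumptions for the example:

```python
from haystack import Pipeline
from haystack.components.embedders import HuggingFaceAPITextEmbedder
from haystack_integrations.components.retrievers.opensearch import OpenSearchEmbeddingRetriever
from haystack_integrations.document_stores.opensearch import OpenSearchDocumentStore

# Assumed local OpenSearch instance; mirror the YAML's hosts/auth settings.
document_store = OpenSearchDocumentStore(hosts=["http://localhost:9200"])

pipeline = Pipeline()
pipeline.add_component(
    "query_embedder",
    HuggingFaceAPITextEmbedder(
        api_type="serverless_inference_api",
        api_params={"model": "BAAI/bge-small-en-v1.5"},
    ),
)
pipeline.add_component(
    "embedding_retriever",
    OpenSearchEmbeddingRetriever(document_store=document_store, top_k=20),
)
pipeline.connect("query_embedder.embedding", "embedding_retriever.query_embedding")

result = pipeline.run({"query_embedder": {"text": "What is semantic search?"}})
print(result["embedding_retriever"]["documents"])
```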
Parameters
Init parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_type | Union[HFEmbeddingAPIType, str] | | The type of Hugging Face API to use. Options: serverless_inference_api, inference_endpoints, text_embeddings_inference. |
| api_params | Dict[str, str] | | A dictionary containing either model (Hugging Face model ID, required for serverless_inference_api) or url (URL of the inference endpoint, required for inference_endpoints or text_embeddings_inference). |
| token | Optional[Secret] | Secret.from_env_var(['HF_API_TOKEN', 'HF_TOKEN'], strict=False) | The Hugging Face token to use as HTTP bearer authorization. Check your HF token in your account settings. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| truncate | Optional[bool] | True | Truncates the input text to the maximum length supported by the model. Applicable when api_type is text_embeddings_inference or inference_endpoints if the backend uses Text Embeddings Inference. Ignored for serverless_inference_api. |
| normalize | Optional[bool] | False | Normalizes the embeddings to unit length. Applicable when api_type is text_embeddings_inference or inference_endpoints if the backend uses Text Embeddings Inference. Ignored for serverless_inference_api. |
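The prefix and suffix parameters matter for models trained with instruction prefixes. For instance, the E5 family expects queries to start with "query: ". A sketch, assuming intfloat/e5-base-v2:

```python
from haystack.components.embedders import HuggingFaceAPITextEmbedder

# E5-style models are trained with "query: " / "passage: " prefixes;
# setting `prefix` adds it to every query automatically.
embedder = HuggingFaceAPITextEmbedder(
    api_type="serverless_inference_api",
    api_params={"model": "intfloat/e5-base-v2"},
    prefix="query: ",
)
```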
Run method parameters
These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |