FastembedDocumentEmbedder

Compute document embeddings using Fastembed embedding models.

Basic Information

  • Type: haystack_integrations.components.embedders.fastembed.fastembed_document_embedder.FastembedDocumentEmbedder
  • Components it can connect with:
    • Receives documents from Converters or DocumentSplitter in an index.
    • Sends embedded documents to DocumentWriter for storage.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | List of Documents to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | List of Documents with each Document's embedding field set to the computed embedding. |

Overview

FastembedDocumentEmbedder computes document embeddings using Fastembed embedding models. Fastembed is a lightweight, fast Python library built for embedding generation with support for most state-of-the-art embedding models.

The embedding of each document is stored in the embedding field of the Document. Use this component in an index to embed documents before writing them to a document store.
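If you want to try the component outside of a pipeline, the following is a minimal sketch in Python. It assumes the fastembed-haystack integration is installed and that the model can be downloaded (or is already cached) locally:

from haystack import Document
from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder

# Create the embedder and load the model (downloaded on first use)
embedder = FastembedDocumentEmbedder(model="BAAI/bge-small-en-v1.5")
embedder.warm_up()

docs = [Document(content="FastEmbed computes embeddings locally using ONNX Runtime.")]
result = embedder.run(documents=docs)

# Each returned Document now has its embedding field populated
# (384 dimensions for BAAI/bge-small-en-v1.5)
print(len(result["documents"][0].embedding))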

Compatible Models

You can find the supported models in the FastEmbed documentation.
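If you have the fastembed package installed, you can also list the supported models programmatically. This is a sketch that assumes fastembed's TextEmbedding.list_supported_models() helper; each entry is a dictionary describing one model:

from fastembed import TextEmbedding

# Print the model name and embedding dimension for every supported dense text model
for entry in TextEmbedding.list_supported_models():
    print(entry["model"], entry["dim"])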

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
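For Fastembed, that pairing looks like the sketch below: FastembedTextEmbedder is the query-side counterpart from the same integration, and the model name is only an example.

from haystack_integrations.components.embedders.fastembed import (
    FastembedDocumentEmbedder,
    FastembedTextEmbedder,
)

MODEL = "BAAI/bge-small-en-v1.5"

# Indexing pipeline: embeds whole Documents before they are written to the store
document_embedder = FastembedDocumentEmbedder(model=MODEL)

# Query pipeline: embeds the query string with the same model
text_embedder = FastembedTextEmbedder(model=MODEL)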

Usage Example

This index uses FastembedDocumentEmbedder to embed documents before storing them:

components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false

  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: sentence
      split_length: 5
      split_overlap: 1

  FastembedDocumentEmbedder:
    type: haystack_integrations.components.embedders.fastembed.fastembed_document_embedder.FastembedDocumentEmbedder
    init_parameters:
      model: BAAI/bge-small-en-v1.5
      cache_dir:
      threads:
      prefix: ""
      suffix: ""
      batch_size: 256
      progress_bar: true
      parallel:
      local_files_only: false
      meta_fields_to_embed:
      embedding_separator: "\n"

  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'default'
          max_chunk_bytes: 104857600
          embedding_dim: 384
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      policy: OVERWRITE

connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: FastembedDocumentEmbedder.documents
  - sender: FastembedDocumentEmbedder.documents
    receiver: DocumentWriter.documents

inputs:
  files:
    - TextFileToDocument.sources

max_runs_per_component: 100

metadata: {}

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | BAAI/bge-small-en-v1.5 | Local path or name of the model in Hugging Face's model hub, such as BAAI/bge-small-en-v1.5. |
| cache_dir | Optional[str] | None | The path to the cache directory. Can be set using the FASTEMBED_CACHE_PATH environment variable. Defaults to fastembed_cache in the system's temp directory. |
| threads | Optional[int] | None | The number of threads a single onnxruntime session can use. |
| prefix | str | "" | A string to add to the beginning of each text. |
| suffix | str | "" | A string to add to the end of each text. |
| batch_size | int | 256 | Number of strings to encode at once. |
| progress_bar | bool | True | If True, displays a progress bar during embedding. |
| parallel | Optional[int] | None | If > 1, uses data-parallel encoding, recommended for offline encoding of large datasets. If 0, uses all available cores. If None, doesn't use data-parallel processing and falls back to the default onnxruntime threading instead. |
| local_files_only | bool | False | If True, only uses the model files in cache_dir. |
| meta_fields_to_embed | Optional[List[str]] | None | List of meta fields that should be embedded along with the Document content. |
| embedding_separator | str | \n | Separator used to concatenate the meta fields to the Document content. |
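As an illustration of the init parameters, the sketch below configures the embedder to include a hypothetical title meta field in the embedded text, joined to the Document content with the default separator:

from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder

# "title" is a hypothetical meta key; use whichever meta fields your Documents actually carry
embedder = FastembedDocumentEmbedder(
    model="BAAI/bge-small-en-v1.5",
    meta_fields_to_embed=["title"],
    embedding_separator="\n",
    batch_size=256,
    progress_bar=True,
)
embedder.warm_up()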

Run Method Parameters

These are the parameters you can configure for the run() method. You can pass these parameters at query time through the API, in Playground, or when running a job.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | List of Documents to embed. |