OllamaDocumentEmbedder

Calculate document embeddings using Ollama models. Document embedders are used to embed documents in your indexes.

Basic Information

  • Type: haystack_integrations.components.embedders.ollama.document_embedder.OllamaDocumentEmbedder
  • Components it can connect with:
    • Converters and Preprocessors: OllamaDocumentEmbedder can receive documents to embed from a converter, such as TextFileToDocument or a preprocessor, such as DocumentSplitter.
    • DocumentWriter: OllamaDocumentEmbedder sends the embedded documents to DocumentWriter, which writes them into a document store.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents with their embeddings added to the embedding field. |
| meta | Dict[str, Any] | | Metadata about the request, including the model name. |

Overview

OllamaDocumentEmbedder computes the embeddings of a list of documents and stores the obtained vectors in the embedding field of each document. It uses embedding models compatible with the Ollama Library.

Ollama is a project focused on running LLMs locally. This means you can run embedding models on your own infrastructure without relying on external API services.

This component embeds documents. To embed a string (like a query), use the OllamaTextEmbedder.
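As a quick sketch, the component can also be used on its own, outside a pipeline. This assumes the ollama-haystack integration is installed and an Ollama server is running locally with the nomic-embed-text model pulled, so it will not run without that setup:

```python
from haystack import Document
from haystack_integrations.components.embedders.ollama import OllamaDocumentEmbedder

embedder = OllamaDocumentEmbedder(model="nomic-embed-text", url="http://localhost:11434")

docs = [
    Document(content="Ollama runs LLMs on local infrastructure."),
    Document(content="Document embedders populate the embedding field."),
]

result = embedder.run(documents=docs)

# Each document now carries its vector in the embedding field;
# nomic-embed-text produces 768-dimensional vectors.
print(len(result["documents"][0].embedding))
```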

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same embedding model you use to embed the query in your query pipeline.

For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
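For this component, that means pairing OllamaDocumentEmbedder with OllamaTextEmbedder. A minimal sketch, assuming the ollama-haystack integration is installed, is to share a single model name between both embedders:

```python
from haystack_integrations.components.embedders.ollama import (
    OllamaDocumentEmbedder,
    OllamaTextEmbedder,
)

# One model name for both pipelines keeps the embedding spaces aligned.
EMBEDDING_MODEL = "nomic-embed-text"

# Used in the indexing pipeline:
document_embedder = OllamaDocumentEmbedder(model=EMBEDDING_MODEL)

# Used in the query pipeline:
text_embedder = OllamaTextEmbedder(model=EMBEDDING_MODEL)
```

Defining the model name once, as a shared constant or configuration value, makes it harder for the two pipelines to drift apart.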

Compatible Models

The default embedding model is nomic-embed-text. See other pre-built models in Ollama's library. To load your own custom model, follow the instructions from Ollama.

Prerequisites

You need a running Ollama instance with the embedding model pulled. The component uses http://localhost:11434 as the default URL.
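For example, assuming Ollama is installed, you can pull the default model and check that the server responds before wiring up the pipeline (the curl call uses Ollama's embeddings endpoint as a sanity check):

```shell
# Download the embedding model
ollama pull nomic-embed-text

# Start the server if it isn't already running (listens on http://localhost:11434)
ollama serve

# Optional sanity check against the embeddings endpoint
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello"}'
```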

Usage Example

Using the Component in an Index

In this index, OllamaDocumentEmbedder receives documents from DocumentSplitter and embeds them. It then sends the embedded documents to DocumentWriter, which writes them into a document store. The index is configured to use the nomic-embed-text model, so the OllamaTextEmbedder in the query pipeline must use the same model.

```yaml
components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 200
      split_overlap: 0
      split_threshold: 0
      splitting_function:
  OllamaDocumentEmbedder:
    type: haystack_integrations.components.embedders.ollama.document_embedder.OllamaDocumentEmbedder
    init_parameters:
      model: nomic-embed-text
      url: http://localhost:11434
      generation_kwargs:
      timeout: 120
      prefix: ''
      suffix: ''
      progress_bar: true
      meta_fields_to_embed:
      embedding_separator: "\n"
      batch_size: 32
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: ollama-embeddings-index
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
          similarity: cosine
      policy: NONE

connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: OllamaDocumentEmbedder.documents
  - sender: OllamaDocumentEmbedder.documents
    receiver: DocumentWriter.documents

max_runs_per_component: 100

metadata: {}

inputs:
  files:
    - TextFileToDocument.sources
```
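The same index can also be sketched in Python. This is an assumption-laden equivalent rather than a drop-in replacement: it swaps the OpenSearch store for InMemoryDocumentStore to stay self-contained, uses docs/sample.txt as a placeholder path, and still requires a running Ollama instance:

```python
from haystack import Pipeline
from haystack.components.converters import TextFileToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.embedders.ollama import OllamaDocumentEmbedder

indexing = Pipeline()
indexing.add_component("converter", TextFileToDocument())
indexing.add_component("splitter", DocumentSplitter(split_by="word", split_length=200))
indexing.add_component("embedder", OllamaDocumentEmbedder(model="nomic-embed-text"))
indexing.add_component("writer", DocumentWriter(document_store=InMemoryDocumentStore()))

indexing.connect("converter.documents", "splitter.documents")
indexing.connect("splitter.documents", "embedder.documents")
indexing.connect("embedder.documents", "writer.documents")

# docs/sample.txt is a placeholder; point this at your own text files.
indexing.run({"converter": {"sources": ["docs/sample.txt"]}})
```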

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | nomic-embed-text | The name of the model to use. The model should be available in the running Ollama instance. |
| url | str | http://localhost:11434 | The URL of a running Ollama instance. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in Ollama docs. |
| timeout | int | 120 | The number of seconds before throwing a timeout error from the Ollama API. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| progress_bar | bool | True | If True, shows a progress bar when running. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed along with the document text. |
| embedding_separator | str | "\n" | Separator used to concatenate the metadata fields to the document text. |
| batch_size | int | 32 | Number of documents to process at once. |
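To see how meta_fields_to_embed and embedding_separator interact, here is a minimal sketch of the text that actually gets embedded. The text_to_embed helper is hypothetical (the integration's internal code may differ), but it mirrors the documented behavior of prepending selected metadata fields to the document content:

```python
def text_to_embed(content, meta, meta_fields_to_embed, embedding_separator="\n"):
    """Prepend selected metadata fields to the document text before embedding."""
    parts = [str(meta[field]) for field in meta_fields_to_embed if meta.get(field) is not None]
    return embedding_separator.join(parts + [content])


meta = {"title": "Local Embeddings", "author": "Jane"}
print(text_to_embed("Ollama runs models locally.", meta, ["title"]))
# Local Embeddings
# Ollama runs models locally.
```

Fields listed in meta_fields_to_embed but absent from a document's metadata are simply skipped, so documents with sparse metadata still embed cleanly.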

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Documents to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |