OllamaDocumentEmbedder
Calculate document embeddings using Ollama models. Document embedders are used to embed documents in your indexes.
Basic Information
- Type: haystack_integrations.components.embedders.ollama.document_embedder.OllamaDocumentEmbedder
- Components it can connect with:
  - Converters and Preprocessors: OllamaDocumentEmbedder can receive documents to embed from a converter, such as TextFileToDocument, or a preprocessor, such as DocumentSplitter.
  - DocumentWriter: OllamaDocumentEmbedder sends embedded documents to DocumentWriter, which writes them into a document store.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | Documents to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| documents | List[Document] | Documents with their embeddings added to the embedding field. |
| meta | Dict[str, Any] | Metadata about the request, including the model name. |
Overview
OllamaDocumentEmbedder computes the embeddings of a list of documents and stores the obtained vectors in the embedding field of each document. It uses embedding models compatible with the Ollama Library.
Ollama is a project focused on running LLMs locally. This means you can run embedding models on your own infrastructure without relying on external API services.
This component embeds documents. To embed a string (like a query), use the OllamaTextEmbedder.
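Under the hood, the component calls a locally running Ollama instance over HTTP. As a rough illustration of the request shape, here is a minimal sketch that builds the JSON body for Ollama's documented batch embeddings endpoint (`/api/embed`); this approximates what an embedding request looks like and is not the component's actual implementation:

```python
import json

def build_embed_request(model: str, texts: list[str]) -> str:
    """Build the JSON body for a POST to Ollama's /api/embed endpoint."""
    # The batch embeddings endpoint accepts a model name and a list of
    # input strings; the response carries one vector per input string.
    return json.dumps({"model": model, "input": texts})

body = build_embed_request("nomic-embed-text", ["first chunk", "second chunk"])
print(body)
```

One request can carry a whole batch of document texts, which is why the component exposes a `batch_size` parameter rather than sending one request per document.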
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same model you use to embed the query in your query pipeline. For example, if you use CohereDocumentEmbedder to embed your documents, use CohereTextEmbedder with the same model to embed your queries.
Compatible Models
Unless specified otherwise, the default embedding model is nomic-embed-text. See other pre-built models in Ollama's library. To load your own custom model, follow the instructions from Ollama.
Prerequisites
You need a running Ollama instance with the embedding model pulled. The component uses http://localhost:11434 as the default URL.
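Before running a pipeline, you can check that an Ollama instance is reachable and has the model pulled. This is a minimal standard-library sketch that probes Ollama's `/api/tags` endpoint, which lists locally available models; the helper name and tag-matching logic are illustrative assumptions, not part of the component:

```python
import json
import urllib.error
import urllib.request

def ollama_model_available(model: str, url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama instance answers at `url` and has `model` pulled."""
    try:
        with urllib.request.urlopen(f"{url}/api/tags", timeout=5) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError):
        return False  # no Ollama instance reachable at this URL
    # Models are reported as "name:tag", e.g. "nomic-embed-text:latest".
    names = [m.get("name", "") for m in tags.get("models", [])]
    return any(n == model or n.split(":")[0] == model for n in names)

print(ollama_model_available("nomic-embed-text"))
```

If the model is missing, pull it first with `ollama pull nomic-embed-text`.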
Usage Example
Using the Component in an Index
In this index, OllamaDocumentEmbedder receives documents from DocumentSplitter and embeds them. It then sends the embedded documents to DocumentWriter, which writes them into a document store. The index is configured to use the nomic-embed-text model, which means the OllamaTextEmbedder used in the query pipeline must also use the same model.
```yaml
components:
  TextFileToDocument:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters:
      encoding: utf-8
      store_full_path: false
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 200
      split_overlap: 0
      split_threshold: 0
      splitting_function:
  OllamaDocumentEmbedder:
    type: haystack_integrations.components.embedders.ollama.document_embedder.OllamaDocumentEmbedder
    init_parameters:
      model: nomic-embed-text
      url: http://localhost:11434
      generation_kwargs:
      timeout: 120
      prefix: ''
      suffix: ''
      progress_bar: true
      meta_fields_to_embed:
      embedding_separator: "\n"
      batch_size: 32
  DocumentWriter:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: ollama-embeddings-index
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
          similarity: cosine
      policy: NONE

connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: OllamaDocumentEmbedder.documents
  - sender: OllamaDocumentEmbedder.documents
    receiver: DocumentWriter.documents

max_runs_per_component: 100

metadata: {}

inputs:
  files:
    - TextFileToDocument.sources
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | nomic-embed-text | The name of the model to use. The model should be available in the running Ollama instance. |
| url | str | http://localhost:11434 | The URL of a running Ollama instance. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in Ollama docs. |
| timeout | int | 120 | The number of seconds before throwing a timeout error from the Ollama API. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| progress_bar | bool | True | If True, shows a progress bar when running. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed along with the document text. |
| embedding_separator | str | \n | Separator used to concatenate the metadata fields to the document text. |
| batch_size | int | 32 | Number of documents to process at once. |
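To illustrate how several of these parameters interact, here is a hedged sketch of the text each document contributes to the embedding request: the metadata fields listed in `meta_fields_to_embed` are joined to the document content with `embedding_separator`, and `prefix`/`suffix` wrap the result. The helper below approximates the behavior these parameters describe and is not the component's actual code:

```python
def prepare_text(content: str,
                 meta: dict,
                 meta_fields_to_embed: list[str],
                 embedding_separator: str = "\n",
                 prefix: str = "",
                 suffix: str = "") -> str:
    """Approximate the text a single document contributes to the embedding call."""
    # Only metadata fields explicitly listed are embedded with the content.
    parts = [str(meta[f]) for f in meta_fields_to_embed if meta.get(f) is not None]
    return prefix + embedding_separator.join(parts + [content]) + suffix

text = prepare_text("Ollama runs models locally.",
                    {"title": "Intro", "author": "n/a"},
                    meta_fields_to_embed=["title"])
print(text)  # "Intro\nOllama runs models locally."

# batch_size then controls how many such texts are sent per request:
docs = [f"chunk {i}" for i in range(5)]
batch_size = 2
batches = [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]
print(len(batches))  # 3 batches of sizes 2, 2, 1
```

Embedding metadata such as titles alongside the content can improve retrieval when the body text alone lacks context.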
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| documents | List[Document] | | Documents to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |