OllamaTextEmbedder

Embed strings, such as user queries, using Ollama models.

Basic Information

  • Type: haystack_integrations.components.embedders.ollama.text_embedder.OllamaTextEmbedder
  • Components it can connect with:
    • Input: OllamaTextEmbedder receives the query to embed from Input.
    • Embedding Retrievers: OllamaTextEmbedder can send the embedded query to an embedding retriever that uses it to find matching documents.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| embedding | List[float] | The embedding of the text. |
| meta | Dict[str, Any] | Metadata about the request, including the model name. |

Overview

OllamaTextEmbedder uses Ollama models to embed strings, such as user queries. Use this component in apps with embedding retrieval to transform your query into a vector.

Ollama is a project focused on running LLMs locally. This means you can run embedding models on your own infrastructure without relying on external API services.

This component is used in query pipelines to embed user queries. To embed a list of documents in an index, use the OllamaDocumentEmbedder, which enriches the document with the computed embedding.
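
For example, here is a minimal Python sketch, assuming a local Ollama instance at the default URL with the nomic-embed-text model already pulled:

from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder

# Assumes Ollama is running locally with nomic-embed-text pulled.
embedder = OllamaTextEmbedder(model="nomic-embed-text", url="http://localhost:11434")

result = embedder.run(text="What is the capital of France?")
print(result["embedding"][:5])  # first few floats of the embedding vector
print(result["meta"])           # request metadata, including the model name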

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
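
The following sketch shows the same pairing with the Ollama embedders. The model name is an assumption; any embedding model available in your Ollama instance works, as long as it is the same on both sides:

from haystack import Document
from haystack_integrations.components.embedders.ollama import (
    OllamaDocumentEmbedder,
    OllamaTextEmbedder,
)

# Using the same model on both sides keeps document and query vectors
# in the same embedding space.
doc_embedder = OllamaDocumentEmbedder(model="nomic-embed-text")
query_embedder = OllamaTextEmbedder(model="nomic-embed-text")

docs = doc_embedder.run(documents=[Document(content="Ollama runs models locally.")])["documents"]
query_embedding = query_embedder.run(text="Where do the models run?")["embedding"]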

Compatible Models

If you don't specify a model, the component uses nomic-embed-text by default. See other pre-built models in Ollama's library. To load your own custom model, follow the instructions from Ollama.

Prerequisites

You need a running Ollama instance with the embedding model pulled (for example, by running ollama pull nomic-embed-text). The component uses http://localhost:11434 as the default URL.

Usage Example

Using the Component in a Pipeline

This is an example of a query pipeline with OllamaTextEmbedder that receives a query to embed and then sends the embedded query to OpenSearchEmbeddingRetriever to find matching documents.

components:
  OllamaTextEmbedder:
    type: haystack_integrations.components.embedders.ollama.text_embedder.OllamaTextEmbedder
    init_parameters:
      model: nomic-embed-text
      url: http://localhost:11434
      generation_kwargs:
      timeout: 120
  OpenSearchEmbeddingRetriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      filters:
      top_k: 10
      filter_policy: replace
      custom_query:
      raise_on_failure: true
      efficient_filtering: true
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: ollama-embeddings-index
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
          similarity: cosine

connections:
  - sender: OllamaTextEmbedder.embedding
    receiver: OpenSearchEmbeddingRetriever.query_embedding

max_runs_per_component: 100

metadata: {}

inputs:
  query:
    - OllamaTextEmbedder.text

outputs:
  documents: OpenSearchEmbeddingRetriever.documents
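
The same pipeline can be sketched in Python. This is a minimal example under stated assumptions: it expects an OpenSearch instance at http://localhost:9200 whose index already holds 768-dimensional document embeddings produced with the same model:

from haystack import Pipeline
from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder
from haystack_integrations.components.retrievers.opensearch import OpenSearchEmbeddingRetriever
from haystack_integrations.document_stores.opensearch import OpenSearchDocumentStore

# Assumes OpenSearch is reachable at this host and the index already
# contains documents embedded with nomic-embed-text (768 dimensions).
document_store = OpenSearchDocumentStore(
    hosts="http://localhost:9200",
    index="ollama-embeddings-index",
    embedding_dim=768,
)

pipeline = Pipeline()
pipeline.add_component("OllamaTextEmbedder", OllamaTextEmbedder(model="nomic-embed-text"))
pipeline.add_component(
    "OpenSearchEmbeddingRetriever",
    OpenSearchEmbeddingRetriever(document_store=document_store, top_k=10),
)
pipeline.connect("OllamaTextEmbedder.embedding", "OpenSearchEmbeddingRetriever.query_embedding")

result = pipeline.run({"OllamaTextEmbedder": {"text": "Why run embedding models locally?"}})
print(result["OpenSearchEmbeddingRetriever"]["documents"])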

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | nomic-embed-text | The name of the model to use. The model must be available in the running Ollama instance. |
| url | str | http://localhost:11434 | The URL of a running Ollama instance. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in the Ollama docs. |
| timeout | int | 120 | The number of seconds after which a request to the Ollama API times out. |
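
As a sketch, the same init parameters set explicitly in Python. The values shown are the defaults, except generation_kwargs, whose value is illustrative:

from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder

# Defaults written out explicitly; the generation_kwargs value is illustrative.
embedder = OllamaTextEmbedder(
    model="nomic-embed-text",
    url="http://localhost:11434",
    generation_kwargs={"temperature": 0.0},
    timeout=120,
)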

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| text | str | | Text to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |
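
For instance, a minimal sketch of overriding generation_kwargs for a single call (the temperature value is illustrative):

from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder

embedder = OllamaTextEmbedder(model="nomic-embed-text")

# generation_kwargs passed to run() apply to this call only; the value is illustrative.
result = embedder.run(
    text="What do llamas eat?",
    generation_kwargs={"temperature": 0.0},
)
print(len(result["embedding"]))  # 768 dimensions for nomic-embed-text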