OllamaTextEmbedder
Embed strings, such as user queries, using Ollama models.
Basic Information
- Type: haystack_integrations.components.embedders.ollama.text_embedder.OllamaTextEmbedder
- Components it can connect with:
  - Input: OllamaTextEmbedder receives the query to embed from Input.
  - Embedding Retrievers: OllamaTextEmbedder can send the embedded query to an embedding retriever that uses it to find matching documents.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| embedding | List[float] | The embedding of the text. |
| meta | Dict[str, Any] | Metadata about the request, including the model name. |
Overview
OllamaTextEmbedder uses Ollama models to embed strings, such as user queries. Use this component in apps with embedding retrieval to transform your query into a vector.
Ollama is a project focused on running LLMs locally. This means you can run embedding models on your own infrastructure without relying on external API services.
This component is used in query pipelines to embed user queries. To embed a list of documents in an indexing pipeline, use OllamaDocumentEmbedder, which enriches each document with its computed embedding.
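As a minimal standalone sketch, assuming a local Ollama instance with the nomic-embed-text model already pulled:

```python
from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder

# Assumes Ollama is running locally and `ollama pull nomic-embed-text` has been run.
embedder = OllamaTextEmbedder(model="nomic-embed-text")

result = embedder.run(text="What is the capital of France?")
print(len(result["embedding"]))  # vector dimensionality (768 for nomic-embed-text)
print(result["meta"])            # request metadata, including the model name
```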
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
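As a quick sketch of what matching embedders look like in Python (the model name here is the default; any Ollama embedding model works as long as both sides use the same one):

```python
from haystack_integrations.components.embedders.ollama import (
    OllamaDocumentEmbedder,
    OllamaTextEmbedder,
)

MODEL = "nomic-embed-text"

# Indexing pipeline: embeds documents with the chosen model.
document_embedder = OllamaDocumentEmbedder(model=MODEL)

# Query pipeline: embeds queries with the same model, so documents
# and queries end up in the same vector space.
text_embedder = OllamaTextEmbedder(model=MODEL)
```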
Compatible Models
Unless specified otherwise, the default embedding model is nomic-embed-text. See other pre-built models in Ollama's library. To load your own custom model, follow the instructions from Ollama.
Prerequisites
You need a running Ollama instance with the embedding model pulled, for example by running ollama pull nomic-embed-text. The component uses http://localhost:11434 as the default URL.
Usage Example
Using the Component in a Pipeline
This is an example of a query pipeline with OllamaTextEmbedder that receives a query to embed and then sends the embedded query to OpenSearchEmbeddingRetriever to find matching documents.
```yaml
components:
  OllamaTextEmbedder:
    type: haystack_integrations.components.embedders.ollama.text_embedder.OllamaTextEmbedder
    init_parameters:
      model: nomic-embed-text
      url: http://localhost:11434
      generation_kwargs:
      timeout: 120
  OpenSearchEmbeddingRetriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      filters:
      top_k: 10
      filter_policy: replace
      custom_query:
      raise_on_failure: true
      efficient_filtering: true
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: ollama-embeddings-index
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
          similarity: cosine

connections:
  - sender: OllamaTextEmbedder.embedding
    receiver: OpenSearchEmbeddingRetriever.query_embedding

max_runs_per_component: 100

metadata: {}

inputs:
  query:
    - OllamaTextEmbedder.text

outputs:
  documents: OpenSearchEmbeddingRetriever.documents
```
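For comparison, here is a rough Python equivalent of the YAML above: a sketch that assumes the Ollama and OpenSearch Haystack integrations are installed and OpenSearch is reachable at a placeholder host.

```python
from haystack import Pipeline
from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder
from haystack_integrations.components.retrievers.opensearch import OpenSearchEmbeddingRetriever
from haystack_integrations.document_stores.opensearch import OpenSearchDocumentStore

# Placeholder host; point this at your OpenSearch instance.
document_store = OpenSearchDocumentStore(
    hosts="http://localhost:9200",
    index="ollama-embeddings-index",
    embedding_dim=768,
    similarity="cosine",
)

pipeline = Pipeline()
pipeline.add_component(
    "OllamaTextEmbedder",
    OllamaTextEmbedder(model="nomic-embed-text", url="http://localhost:11434", timeout=120),
)
pipeline.add_component(
    "OpenSearchEmbeddingRetriever",
    OpenSearchEmbeddingRetriever(document_store=document_store, top_k=10),
)

# The embedded query flows from the embedder into the retriever.
pipeline.connect("OllamaTextEmbedder.embedding", "OpenSearchEmbeddingRetriever.query_embedding")

result = pipeline.run({"OllamaTextEmbedder": {"text": "Who lives in Berlin?"}})
print(result["OpenSearchEmbeddingRetriever"]["documents"])
```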
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | nomic-embed-text | The name of the model to use. The model should be available in the running Ollama instance. |
| url | str | http://localhost:11434 | The URL of a running Ollama instance. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in Ollama docs. |
| timeout | int | 120 | The number of seconds before throwing a timeout error from the Ollama API. |
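For example, a sketch with non-default init values (the remote host below is hypothetical):

```python
from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder

embedder = OllamaTextEmbedder(
    model="nomic-embed-text",
    url="http://ollama.internal:11434",  # hypothetical remote Ollama instance
    generation_kwargs={"temperature": 0.0},
    timeout=60,  # fail faster than the 120-second default
)
```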
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to be converted to an embedding. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, etc. See the Ollama docs. |
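As a sketch, the run-time parameters let you adjust a single call without re-initializing the component:

```python
from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder

embedder = OllamaTextEmbedder(model="nomic-embed-text")

# generation_kwargs passed here apply to this request only.
result = embedder.run(
    text="What is the capital of France?",
    generation_kwargs={"temperature": 0.0},
)
print(result["embedding"][:5])  # first few values of the query vector
```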