OllamaChatGenerator
Generate text using models running on Ollama.
Basic Information
- Type: haystack_integrations.components.generators.ollama.chat.chat_generator.OllamaChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: OllamaChatGenerator receives a rendered prompt from ChatPromptBuilder.
  - DeepsetAnswerBuilder: OllamaChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] |  | A list of ChatMessage instances representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Per-call overrides for Ollama inference options, such as temperature and top_p. These are merged on top of the instance-level generation_kwargs. For a full list of available options, see the Ollama documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a list of Tool objects or a Toolset instance. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callable to receive StreamingChunk objects as they arrive. Supplying a callback switches the component into streaming mode. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | A list of ChatMessages containing the model's response. |
Overview
Use OllamaChatGenerator to generate text with models served by Ollama. For a full list of supported models, see Ollama's documentation.
Usage Example
Initializing the Component
components:
OllamaChatGenerator:
type: haystack_integrations.components.generators.ollama.chat.chat_generator.OllamaChatGenerator
init_parameters:
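The init_parameters block is empty by default. As an illustrative sketch (the model name and generation values below are placeholders, not recommendations), a populated configuration could look like this:

```yaml
components:
  OllamaChatGenerator:
    type: haystack_integrations.components.generators.ollama.chat.chat_generator.OllamaChatGenerator
    init_parameters:
      model: orca-mini              # must already be pulled in the running Ollama instance
      url: http://localhost:11434   # base URL of the Ollama server
      timeout: 120
      generation_kwargs:            # example Ollama inference options
        temperature: 0.7
        top_p: 0.9
```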
Using the Component in a Pipeline
This is an example RAG pipeline with OllamaChatGenerator and DeepsetAnswerBuilder connected through OutputAdapter:
components:
bm25_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: 'Standard-Index-English'
max_chunk_bytes: 104857600
embedding_dim: 768
return_embedding: false
method:
mappings:
settings:
create_index: true
http_auth:
use_ssl:
verify_certs:
timeout:
top_k: 20 # The number of results to return
fuzziness: 0
query_embedder:
type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
init_parameters:
normalize_embeddings: true
model: intfloat/e5-base-v2
embedding_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: 'Standard-Index-English'
max_chunk_bytes: 104857600
embedding_dim: 768
return_embedding: false
method:
mappings:
settings:
create_index: true
http_auth:
use_ssl:
verify_certs:
timeout:
top_k: 20 # The number of results to return
document_joiner:
type: haystack.components.joiners.document_joiner.DocumentJoiner
init_parameters:
join_mode: concatenate
ranker:
type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
init_parameters:
model: intfloat/simlm-msmarco-reranker
top_k: 8
meta_field_grouping_ranker:
type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
init_parameters:
group_by: file_id
subgroup_by:
sort_docs_by: split_id
answer_builder:
type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
init_parameters:
reference_pattern: acm
ChatPromptBuilder:
type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
init_parameters:
template:
- _content:
- text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
_role: system
- _content:
- text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
_role: user
required_variables:
variables:
OutputAdapter:
type: haystack.components.converters.output_adapter.OutputAdapter
init_parameters:
template: '{{ replies[0] }}'
output_type: List[str]
custom_filters:
unsafe: false
OllamaChatGenerator:
type: haystack_integrations.components.generators.ollama.chat.chat_generator.OllamaChatGenerator
init_parameters:
model: orca-mini
url: http://localhost:11434
generation_kwargs:
timeout: 120
keep_alive:
streaming_callback:
tools:
response_format:
connections: # Defines how the components are connected
- sender: bm25_retriever.documents
receiver: document_joiner.documents
- sender: query_embedder.embedding
receiver: embedding_retriever.query_embedding
- sender: embedding_retriever.documents
receiver: document_joiner.documents
- sender: document_joiner.documents
receiver: ranker.documents
- sender: ranker.documents
receiver: meta_field_grouping_ranker.documents
- sender: meta_field_grouping_ranker.documents
receiver: answer_builder.documents
- sender: meta_field_grouping_ranker.documents
receiver: ChatPromptBuilder.documents
- sender: OutputAdapter.output
receiver: answer_builder.replies
- sender: OllamaChatGenerator.replies
receiver: OutputAdapter.replies
- sender: ChatPromptBuilder.prompt
receiver: OllamaChatGenerator.messages
inputs: # Define the inputs for your pipeline
query: # These components will receive the query as input
- "bm25_retriever.query"
- "query_embedder.text"
- "ranker.query"
- "answer_builder.query"
- "ChatPromptBuilder.query"
filters: # These components will receive a potential query filter as input
- "bm25_retriever.filters"
- "embedding_retriever.filters"
outputs: # Defines the output of your pipeline
documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
answers: "answer_builder.answers" # The output of the pipeline is the generated answers
max_runs_per_component: 100
metadata: {}
Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | orca-mini | The name of the model to use. The model must already be present (pulled) in the running Ollama instance. |
| url | str | http://localhost:11434 | The base URL of the Ollama server. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature and top_p. For the available arguments, see the Ollama documentation. |
| timeout | int | 120 | The number of seconds before throwing a timeout error from the Ollama API. |
| keep_alive | Optional[Union[float, str]] | None | Controls how long the model stays loaded in memory after the request. If not set, the Ollama default (5 minutes) is used. The value can be: - a duration string (such as "10m" or "24h") - a number of seconds (such as 3600) - any negative number to keep the model loaded indefinitely (such as -1 or "-1m") - 0 to unload the model immediately after generating a response. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of haystack.tools.Tool or a haystack.tools.Toolset. Duplicate tool names raise a ValueError. Not all models support tools. For a list of models compatible with tools, see the models page. |
| response_format | Optional[Union[None, Literal['json'], JsonSchemaValue]] | None | The format for structured model outputs. The value can be: - None: No specific structure or format is applied to the response. The response is returned as-is. - "json": The response is formatted as a JSON object. - JSON Schema: The response is formatted as a JSON object that adheres to the specified JSON Schema. (needs Ollama ≥ 0.1.34) |
| think | bool | False | If True, the model reasons ("thinks") before producing its final response. Only models with thinking capability support this option; see the Ollama documentation for supported models. |
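For example, to request structured output, you can set response_format to a JSON Schema in the component's init_parameters (requires Ollama 0.1.34 or later). The schema and keep_alive value below are hypothetical placeholders; adapt them to your own output structure:

```yaml
components:
  OllamaChatGenerator:
    type: haystack_integrations.components.generators.ollama.chat.chat_generator.OllamaChatGenerator
    init_parameters:
      model: orca-mini
      url: http://localhost:11434
      keep_alive: "10m"          # keep the model loaded for 10 minutes after the request
      response_format:           # JSON Schema the response must follow
        type: object
        properties:
          answer:
            type: string
          sources:
            type: array
            items:
              type: string
        required:
          - answer
```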
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] |  | A list of ChatMessage instances representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Per-call overrides for Ollama inference options, such as temperature and top_p. These are merged on top of the instance-level generation_kwargs. For a complete list of arguments, see the Ollama documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a list of Tool objects or a Toolset instance. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callable to receive StreamingChunk objects as they arrive. Supplying a callback switches the component into streaming mode. |
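For instance, to lower the temperature for a single request, you can override generation_kwargs at query time. This is a sketch assuming the per-component params format described in Modify Pipeline Parameters at Query Time; the values are placeholders:

```yaml
params:
  OllamaChatGenerator:
    generation_kwargs:
      temperature: 0.2
      top_p: 0.8
```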