OllamaGenerator
Provides an interface to generate text using an LLM running on Ollama.
Basic Information
- Type: haystack_integrations.components.generators.ollama.generator.OllamaGenerator
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | | The prompt to generate a response for. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in Ollama docs. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | The responses from the model. |
| meta | List[Dict[str, Any]] | The metadata collected during the run. |
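For orientation, here is a minimal sketch of what run() returns, assuming a local Ollama instance with the default model already pulled:

```python
from haystack_integrations.components.generators.ollama import OllamaGenerator

# Defaults: model="orca-mini", url="http://localhost:11434"
generator = OllamaGenerator()

result = generator.run(prompt="Why is the sky blue?")

# run() returns a dictionary with two keys:
print(result["replies"])  # List[str]: the responses from the model
print(result["meta"])     # List[Dict[str, Any]]: metadata collected during the run
```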
Overview
Work in Progress
Bear with us while we work on adding pipeline examples and the most common component connections.
Provides an interface to generate text using an LLM running on Ollama.
Usage example:
```python
from haystack_integrations.components.generators.ollama import OllamaGenerator

generator = OllamaGenerator(
    model="zephyr",
    url="http://localhost:11434",
    generation_kwargs={
        "num_predict": 100,
        "temperature": 0.9,
    },
)

print(generator.run("Who is the best American actor?"))
```
Usage Example
```yaml
components:
  OllamaGenerator:
    type: haystack_integrations.components.generators.ollama.generator.OllamaGenerator
    init_parameters:
      # Illustrative values, taken from the Python example above
      model: "zephyr"
      url: "http://localhost:11434"
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | orca-mini | The name of the model to use. The model should be available in the running Ollama instance. |
| url | str | http://localhost:11434 | The URL of a running Ollama instance. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in Ollama docs. |
| system_prompt | Optional[str] | None | Optional system message (overrides what is defined in the Ollama Modelfile). |
| template | Optional[str] | None | The full prompt template (overrides what is defined in the Ollama Modelfile). |
| raw | bool | False | If True, no formatting will be applied to the prompt. You may choose to use the raw parameter if you are specifying a full templated prompt in your API request. |
| timeout | int | 120 | The number of seconds before throwing a timeout error from the Ollama API. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| keep_alive | Optional[Union[float, str]] | None | Controls how long the model stays loaded in memory after the request. If not set, Ollama's default (5 minutes) is used. The value can be: a duration string (such as "10m" or "24h"), a number of seconds (such as 3600), any negative number to keep the model loaded indefinitely (e.g. -1 or "-1m"), or 0 to unload the model immediately after generating a response. |
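As an illustration of how several of these parameters combine, here is a hedged sketch; the print_chunk helper is our own, and it assumes only that StreamingChunk exposes a content attribute:

```python
from haystack.dataclasses import StreamingChunk
from haystack_integrations.components.generators.ollama import OllamaGenerator

def print_chunk(chunk: StreamingChunk) -> None:
    # Print each streamed token as it arrives, without a trailing newline
    print(chunk.content, end="", flush=True)

generator = OllamaGenerator(
    model="zephyr",                                  # must be available in the Ollama instance
    url="http://localhost:11434",
    system_prompt="Answer in one short paragraph.",  # overrides the Modelfile system message
    timeout=120,
    keep_alive="10m",                                # keep the model loaded for 10 minutes
    streaming_callback=print_chunk,
)
```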
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | | The prompt to generate a response for. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the Ollama generation endpoint, such as temperature, top_p, and others. See the available arguments in Ollama docs. |
| streaming_callback | Optional[Callable[[StreamingChunk], None]] | None | A callback function that is called when a new token is received from the stream. |
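For example, a minimal sketch of overriding generation settings at query time; the specific values are illustrative, and the full list of options is in the Ollama docs:

```python
from haystack_integrations.components.generators.ollama import OllamaGenerator

generator = OllamaGenerator(model="zephyr", url="http://localhost:11434")

# Override generation settings for this call only
result = generator.run(
    prompt="Summarize the plot of Hamlet in one sentence.",
    generation_kwargs={"temperature": 0.2, "top_p": 0.9},
)
print(result["replies"][0])
```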