LlamaCppGenerator
Provides an interface to generate text using an LLM via llama.cpp.
Basic Information
- Type: haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | | The prompt to be sent to the generative model. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see llama.cpp documentation. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | The list of replies generated by the model. |
| meta | List[Dict[str, Any]] | Metadata about the request. |
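As a minimal sketch of how these two lists line up (the model path and prompt are illustrative placeholders, and the model file is assumed to be available locally):
```python
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(model="zephyr-7b-beta.Q4_0.gguf")
generator.warm_up()  # load the GGUF model before calling run()

result = generator.run(prompt="Summarize llama.cpp in one sentence.")
# replies and meta are parallel lists: meta[i] describes how replies[i] was produced
for reply, meta in zip(result["replies"], result["meta"]):
    print(reply)
    print(meta)
```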
Overview
Bear with us while we're working on adding pipeline examples and the most common component connections.
Provides an interface to generate text using an LLM via llama.cpp.
llama.cpp is a project written in C/C++ for efficient inference of LLMs. It employs the quantized GGUF format, suitable for running these models on standard machines (even without GPUs).
Usage example:
```python
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(model="zephyr-7b-beta.Q4_0.gguf", n_ctx=2048, n_batch=512)
generator.warm_up()  # load the GGUF model before the first call to run()

print(generator.run("Who is the best American actor?", generation_kwargs={"max_tokens": 128}))
# {'replies': ['John Cusack'], 'meta': [{"object": "text_completion", ...}]}
```
Usage Example
```yaml
components:
  LlamaCppGenerator:
    type: haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator
    init_parameters:
```
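To go beyond the bare component entry above, here is a sketch of an equivalent Python pipeline that feeds the generator from a PromptBuilder; the component names, template text, and model path are illustrative placeholders:
```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(template="Answer briefly: {{ question }}"))
pipeline.add_component("llm", LlamaCppGenerator(model="zephyr-7b-beta.Q4_0.gguf", n_ctx=2048, n_batch=512))
# The builder's rendered prompt becomes the generator's prompt input
pipeline.connect("prompt_builder.prompt", "llm.prompt")

result = pipeline.run({
    "prompt_builder": {"question": "Who is the best American actor?"},
    "llm": {"generation_kwargs": {"max_tokens": 128}},
})
print(result["llm"]["replies"])
```
The pipeline warms up its components when it runs, so no explicit warm_up() call should be needed here.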
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | | The path of a quantized model for text generation, for example, "zephyr-7b-beta.Q4_0.gguf". If the model path is also specified in the model_kwargs, this parameter will be ignored. |
| n_ctx | Optional[int] | 0 | The number of tokens in the context. When set to 0, the context will be taken from the model. |
| n_batch | Optional[int] | 512 | Prompt processing maximum batch size. |
| model_kwargs | Optional[Dict[str, Any]] | None | Dictionary containing keyword arguments used to initialize the LLM for text generation. These keyword arguments provide fine-grained control over the model loading. In case of duplication, these kwargs override model, n_ctx, and n_batch init parameters. For more information on the available kwargs, see llama.cpp documentation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see llama.cpp documentation. |
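For instance, a hedged sketch of passing extra loading options through model_kwargs; n_gpu_layers is a llama.cpp loading option that offloads layers to the GPU, and the model path is a placeholder:
```python
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

# model_kwargs are forwarded to the llama.cpp model loader and, on duplication,
# override the model, n_ctx, and n_batch init parameters
generator = LlamaCppGenerator(
    model="zephyr-7b-beta.Q4_0.gguf",
    n_ctx=2048,
    n_batch=512,
    model_kwargs={"n_gpu_layers": -1},  # offload all layers to the GPU when one is available
)
generator.warm_up()
```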
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | | The prompt to be sent to the generative model. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see llama.cpp documentation. |
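As an example, a small sketch of overriding sampling settings for a single call; max_tokens, temperature, and top_p are common llama.cpp generation kwargs, and the model path is a placeholder:
```python
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(model="zephyr-7b-beta.Q4_0.gguf")
generator.warm_up()

# generation_kwargs passed to run() apply only to this call
result = generator.run(
    prompt="Write a haiku about quantized models.",
    generation_kwargs={"max_tokens": 64, "temperature": 0.2, "top_p": 0.9},
)
print(result["replies"][0])
```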