LlamaCppGenerator

Provides an interface to generate text using an LLM via llama.cpp.

Basic Information

  • Type: haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | | The prompt to be sent to the generative model. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to customize text generation. For the available kwargs, see the llama.cpp documentation. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| replies | List[str] | The list of replies generated by the model. |
| meta | List[Dict[str, Any]] | Metadata about the request. |

Overview

Work in Progress

Bear with us while we work on adding pipeline examples and the most common component connections.

Provides an interface to generate text using an LLM via llama.cpp.

llama.cpp is a project written in C/C++ for efficient inference of LLMs. It employs the quantized GGUF format, suitable for running these models on standard machines (even without GPUs).

Usage example:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

# Load a local GGUF model with a 2048-token context window.
generator = LlamaCppGenerator(model="zephyr-7b-beta.Q4_0.gguf", n_ctx=2048, n_batch=512)
generator.warm_up()  # loads the model; run() raises an error if it hasn't been called

print(generator.run("Who is the best American actor?", generation_kwargs={"max_tokens": 128}))
# {'replies': ['John Cusack'], 'meta': [{"object": "text_completion", ...}]}

Usage Example

components:
  LlamaCppGenerator:
    type: llama_cpp.src.haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator
    init_parameters:
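
As a sketch, a filled-in configuration might look like the following; the model path and values are illustrative placeholders, and the parameter names come from the Init Parameters table below.

components:
  LlamaCppGenerator:
    type: llama_cpp.src.haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator
    init_parameters:
      # Illustrative values; see the Init Parameters table for each option.
      model: zephyr-7b-beta.Q4_0.gguf
      n_ctx: 2048
      n_batch: 512
      generation_kwargs:
        max_tokens: 128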

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | | The path of a quantized model for text generation, for example, "zephyr-7b-beta.Q4_0.gguf". If the model path is also specified in model_kwargs, this parameter is ignored. |
| n_ctx | Optional[int] | 0 | The number of tokens in the context. When set to 0, the context is taken from the model. |
| n_batch | Optional[int] | 512 | Prompt processing maximum batch size. |
| model_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments used to initialize the LLM for text generation. These provide fine-grained control over model loading. In case of duplication, these kwargs override the model, n_ctx, and n_batch init parameters. For the available kwargs, see the llama.cpp documentation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to customize text generation. For the available kwargs, see the llama.cpp documentation. |
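
For example, model_kwargs are forwarded to llama.cpp when the model is loaded. The sketch below uses n_gpu_layers, a llama-cpp-python loading option, to offload all layers to the GPU; the exact set of supported options depends on your llama.cpp version, so treat this as illustrative.

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

# model_kwargs entries override the model, n_ctx, and n_batch init parameters.
# n_gpu_layers=-1 offloads all layers to the GPU (a llama-cpp-python option).
generator = LlamaCppGenerator(
    model="zephyr-7b-beta.Q4_0.gguf",
    model_kwargs={"n_gpu_layers": -1, "n_ctx": 2048},
)
generator.warm_up()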

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job; a sketch follows the table below. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | | The prompt to be sent to the generative model. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to customize text generation. For the available kwargs, see the llama.cpp documentation. |
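
A minimal sketch of overriding generation_kwargs at query time in a pipeline; the component names ("prompt_builder", "generator") and the prompt template are illustrative, not part of this component's API.

from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(template="Answer briefly: {{ question }}"))
pipeline.add_component("generator", LlamaCppGenerator(model="zephyr-7b-beta.Q4_0.gguf"))
pipeline.connect("prompt_builder", "generator")

# generation_kwargs are passed per request, so each query can use different settings.
result = pipeline.run({
    "prompt_builder": {"question": "Who is the best American actor?"},
    "generator": {"generation_kwargs": {"max_tokens": 128, "temperature": 0.2}},
})
print(result["generator"]["replies"])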