LlamaCppGenerator

Generate text using LLMs through llama.cpp.

Basic Information

  • Type: haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator
  • Components it can connect with:
    • Prompt Builders: Receives prompts from components like PromptBuilder.
    • Answer Builders: Sends replies to AnswerBuilder for formatting answers.

Inputs

Parameter | Type | Default | Description
prompt | str | | The prompt to be sent to the generative model.
generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. These kwargs are merged with any generation_kwargs set during initialization. For more information on the available kwargs, see llama.cpp documentation.

Outputs

Parameter | Type | Description
replies | List[str] | The list of string replies generated by the model.
meta | List[Dict[str, Any]] | Metadata about the request, including completion details from llama.cpp.

Overview

llama.cpp is a project written in C/C++ for efficient inference of LLMs. It uses the quantized GGUF format, which makes it possible to run these models on standard machines, even without GPUs. LlamaCppGenerator provides an interface for generating text with an LLM running through llama.cpp.
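For quick experiments outside of a pipeline, you can also run the component on its own. Below is a minimal sketch, assuming the llama.cpp integration package is installed and a quantized GGUF file exists at the illustrative path:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

# Illustrative path to a local quantized GGUF model; adjust to your setup.
generator = LlamaCppGenerator(
    model="/models/zephyr-7b-beta.Q4_0.gguf",
    n_ctx=2048,
    n_batch=512,
    generation_kwargs={"max_tokens": 128, "temperature": 0.1},
)
generator.warm_up()  # loads the model into memory

result = generator.run(prompt="Explain the GGUF format in one sentence.")
print(result["replies"][0])  # generated text
print(result["meta"][0])     # completion details from llama.cpp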

Usage Example

This example shows a simple question-answering pipeline using LlamaCppGenerator with a locally hosted GGUF model. The generator outputs string replies that can be used with AnswerBuilder.

components:
  PromptBuilder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |
        Answer the following question concisely and accurately.

        Question: {{ question }}
        Answer:
  LlamaCppGenerator:
    type: haystack_integrations.components.generators.llama_cpp.generator.LlamaCppGenerator
    init_parameters:
      model: /models/zephyr-7b-beta.Q4_0.gguf
      n_ctx: 2048
      n_batch: 512
      generation_kwargs:
        max_tokens: 256
        temperature: 0.7
        top_p: 0.9
  AnswerBuilder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters:
      pattern:
      reference_pattern:

connections:
  - sender: PromptBuilder.prompt
    receiver: LlamaCppGenerator.prompt
  - sender: LlamaCppGenerator.replies
    receiver: AnswerBuilder.replies

max_runs_per_component: 100

metadata: {}

inputs:
  question:
    - PromptBuilder.question
    - AnswerBuilder.query

outputs:
  answers: AnswerBuilder.answers
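The same pipeline can be assembled in Python. The following is a sketch under the same assumptions as the YAML above; the example question is illustrative:

from haystack import Pipeline
from haystack.components.builders import AnswerBuilder, PromptBuilder
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

template = """Answer the following question concisely and accurately.

Question: {{ question }}
Answer:"""

pipeline = Pipeline(max_runs_per_component=100)
pipeline.add_component("PromptBuilder", PromptBuilder(template=template))
pipeline.add_component(
    "LlamaCppGenerator",
    LlamaCppGenerator(
        model="/models/zephyr-7b-beta.Q4_0.gguf",
        n_ctx=2048,
        n_batch=512,
        generation_kwargs={"max_tokens": 256, "temperature": 0.7, "top_p": 0.9},
    ),
)
pipeline.add_component("AnswerBuilder", AnswerBuilder())

# Same connections as in the YAML definition above.
pipeline.connect("PromptBuilder.prompt", "LlamaCppGenerator.prompt")
pipeline.connect("LlamaCppGenerator.replies", "AnswerBuilder.replies")

question = "Which planet in our solar system has the most moons?"
result = pipeline.run({"PromptBuilder": {"question": question}, "AnswerBuilder": {"query": question}})
print(result["AnswerBuilder"]["answers"][0].data)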

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

Parameter | Type | Default | Description
model | str | | The path of a quantized model for text generation, for example, "zephyr-7b-beta.Q4_0.gguf". If the model path is also specified in model_kwargs, this parameter is ignored.
n_ctx | Optional[int] | 0 | The number of tokens in the context. When set to 0, the context is taken from the model.
n_batch | Optional[int] | 512 | Maximum batch size for prompt processing.
model_kwargs | Optional[Dict[str, Any]] | None | Dictionary containing keyword arguments used to initialize the LLM for text generation. These keyword arguments provide fine-grained control over model loading. In case of duplication, these kwargs override the model, n_ctx, and n_batch init parameters. For more information on the available kwargs, see llama.cpp documentation.
generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see llama.cpp documentation.
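To illustrate how these parameters interact, here is a hedged initialization sketch: model_kwargs is forwarded to the llama.cpp model loader, so load-time options such as n_gpu_layers go there, while generation_kwargs sets default sampling options. The path and values are illustrative, and n_gpu_layers assumes a GPU-enabled llama.cpp build:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(
    model="/models/zephyr-7b-beta.Q4_0.gguf",  # illustrative path
    n_ctx=2048,
    n_batch=512,
    # Load-time options forwarded to llama.cpp. If keys duplicate model,
    # n_ctx, or n_batch, the values given here take precedence.
    model_kwargs={"n_gpu_layers": -1},  # offload all layers; assumes a GPU build
    # Default sampling options, applied to every call unless overridden in run().
    generation_kwargs={"max_tokens": 256, "temperature": 0.7},
)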

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

Parameter | Type | Default | Description
prompt | str | | The prompt to be sent to the generative model.
generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see llama.cpp documentation.
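For example, passing generation_kwargs to run() lets you adjust sampling per call without re-initializing the component; per-call values are merged with, and typically override, the init-time defaults. A minimal sketch with illustrative values:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(
    model="/models/zephyr-7b-beta.Q4_0.gguf",  # illustrative path
    generation_kwargs={"max_tokens": 256, "temperature": 0.7},
)
generator.warm_up()

# Per-call kwargs are merged with the init-time generation_kwargs.
result = generator.run(
    prompt="List three advantages of quantized GGUF models.",
    generation_kwargs={"max_tokens": 64, "temperature": 0.2},
)
print(result["replies"][0])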