
LlamaCppChatGenerator

Complete chats using large language models running on llama.cpp.

Basic Information

  • Type: haystack_integrations.components.generators.llama_cpp.chat.chat_generator.LlamaCppChatGenerator
  • Components it can connect with:
    • ChatPromptBuilder: LlamaCppChatGenerator receives a rendered prompt from ChatPromptBuilder.
    • DeepsetAnswerBuilder: LlamaCppChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | | A list of ChatMessage objects representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the arguments you can use, see the llama.cpp documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| replies | List[ChatMessage] | | The responses from the model. |
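
For reference, here is a minimal sketch of how these inputs and outputs map onto a direct run() call in Python; the model path is a placeholder for a locally saved GGUF file:

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

generator = LlamaCppChatGenerator(model="/models/zephyr-7b-beta.Q4_0.gguf")  # placeholder path
generator.warm_up()  # loads the GGUF file before the first run

# messages in, replies out
result = generator.run(
    messages=[ChatMessage.from_user("Summarize llama.cpp in one sentence.")],
    generation_kwargs={"max_tokens": 128},
)
print(result["replies"][0].text)  # replies is a List[ChatMessage]
```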

Overview

llama.cpp is a library written in C/C++ for efficient inference of LLMs. It uses the quantized GGUF format, which reduces memory requirements and accelerates inference, making it possible to run these models on standard machines, even without GPUs.

LlamaCppChatGenerator supports models running on llama.cpp. llama.cpp loads the model from its quantized binary file in GGUF format, which you can download from Hugging Face.

Prerequisites

To use LlamaCppChatGenerator, download the GGUF version of the model you want to use from Hugging Face. Then, pass the path to the locally saved GGUF file as the model parameter.
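
For example, one way to download a GGUF file is with the huggingface_hub package; the repository and file name below are just examples:

```python
from huggingface_hub import hf_hub_download

# Example repository and file name; substitute the GGUF model you want to use.
model_path = hf_hub_download(
    repo_id="TheBloke/zephyr-7B-beta-GGUF",
    filename="zephyr-7b-beta.Q4_0.gguf",
)
print(model_path)  # pass this local path as the model parameter
```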

Usage Example

Initializing the Component

```yaml
components:
  LlamaCppChatGenerator:
    type: haystack_integrations.components.generators.llama_cpp.chat.chat_generator.LlamaCppChatGenerator
    init_parameters:
```
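
In Python, the equivalent initialization looks roughly like this; the model path is a placeholder for a locally saved GGUF file:

```python
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

generator = LlamaCppChatGenerator(
    model="/models/zephyr-7b-beta.Q4_0.gguf",  # placeholder path to a local GGUF file
    n_ctx=0,      # take the context size from the model
    n_batch=512,  # maximum batch size for prompt processing
)
generator.warm_up()  # loads the model before the first run
```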

Using the Component in a Pipeline

This is an example RAG pipeline with LlamaCppChatGenerator and DeepsetAnswerBuilder connected through OutputAdapter:

```yaml
components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0

  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2

  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return

  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false

  LlamaCppChatGenerator:
    type: haystack_integrations.components.generators.llama_cpp.chat.chat_generator.LlamaCppChatGenerator
    init_parameters:
      model: /Downloads/gemma-3-1b-it-q4_0.gguf
      n_ctx: 0
      n_batch: 512
      model_kwargs:
      generation_kwargs:

connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: ChatPromptBuilder.prompt
    receiver: LlamaCppChatGenerator.messages
  - sender: LlamaCppChatGenerator.replies
    receiver: OutputAdapter.replies

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}
```
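
To make the core of this pipeline easier to follow, here is a minimal Python sketch that wires ChatPromptBuilder to LlamaCppChatGenerator; retrieval and answer building are omitted, and the model path is a placeholder:

```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage, Document
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

# A simplified version of the chat template from the YAML above.
template = [
    ChatMessage.from_system("Answer the question based on the provided documents."),
    ChatMessage.from_user(
        "Documents:\n{% for document in documents %}{{ document.content }}\n{% endfor %}\n"
        "Question: {{ query }}"
    ),
]

pipeline = Pipeline()
pipeline.add_component("ChatPromptBuilder", ChatPromptBuilder(template=template))
pipeline.add_component(
    "LlamaCppChatGenerator",
    LlamaCppChatGenerator(model="/models/zephyr-7b-beta.Q4_0.gguf"),  # placeholder path
)
# The rendered prompt flows from the builder into the generator's messages input.
pipeline.connect("ChatPromptBuilder.prompt", "LlamaCppChatGenerator.messages")

result = pipeline.run({
    "ChatPromptBuilder": {
        "documents": [Document(content="llama.cpp is a C/C++ library for LLM inference.")],
        "query": "What is llama.cpp?",
    }
})
print(result["LlamaCppChatGenerator"]["replies"][0].text)
```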

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | | The path to a quantized model for text generation, for example, "zephyr-7b-beta.Q4_0.gguf". If the model path is also specified in model_kwargs, this parameter is ignored. |
| n_ctx | Optional[int] | 0 | The number of tokens in the context. When set to 0, the context is taken from the model. |
| n_batch | Optional[int] | 512 | Maximum batch size for prompt processing. |
| model_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments used to initialize the LLM for text generation. These arguments provide fine-grained control over model loading and, in case of duplication, override the model, n_ctx, and n_batch init parameters. For more information on the available kwargs, see the llama.cpp documentation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see the llama.cpp documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset instance for which the model can prepare calls. |
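
As an illustration of model_kwargs and generation_kwargs, the sketch below forwards a GPU offloading option at load time and sampling options at generation time; the kwargs shown are standard llama-cpp-python options, and the path is a placeholder:

```python
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

generator = LlamaCppChatGenerator(
    model="/models/zephyr-7b-beta.Q4_0.gguf",  # placeholder path
    n_ctx=8192,                                # context window of 8192 tokens
    model_kwargs={"n_gpu_layers": -1},         # offload all layers to the GPU at load time
    generation_kwargs={"max_tokens": 256, "temperature": 0.1},
)
```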

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the available kwargs, see the llama.cpp documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set in the pipeline configuration. |
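
As a sketch of how tools can be passed at query time, the example below defines a hypothetical get_weather tool; whether the model actually prepares a tool call depends on its function-calling support:

```python
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

def get_weather(city: str) -> str:
    # Hypothetical helper for illustration only.
    return f"Sunny in {city}"

weather_tool = Tool(
    name="get_weather",
    description="Get the current weather for a city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    function=get_weather,
)

generator = LlamaCppChatGenerator(model="/models/zephyr-7b-beta.Q4_0.gguf")  # placeholder path
generator.warm_up()

result = generator.run(
    messages=[ChatMessage.from_user("What's the weather in Berlin?")],
    tools=[weather_tool],  # overrides any tools set in the pipeline configuration
)
reply = result["replies"][0]
print(reply.tool_calls or reply.text)  # a prepared tool call, or a plain text answer
```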