MetaLlamaChatGenerator

Generate text using models available on Meta's Llama API.

Basic Information

  • Type: haystack_integrations.components.generators.meta_llama.chat.chat_generator.MetaLlamaChatGenerator
  • Components it can connect with:
    • ChatPromptBuilder: MetaLlamaChatGenerator receives a rendered prompt from ChatPromptBuilder.
    • DeepsetAnswerBuilder: MetaLlamaChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.

Inputs

  • messages (List[ChatMessage], required): A list of ChatMessage instances representing the input messages.
  • streaming_callback (Optional[StreamingCallbackT], default None): A callback function called when a new token is received from the stream.
  • generation_kwargs (Optional[Dict[str, Any]], default None): Additional keyword arguments for the model. For details, see the model documentation.
  • tools (Optional[Union[List[Tool], Toolset]], default None): A list of Tool objects or a Toolset that the model can use.

Outputs

  • replies (List[ChatMessage]): A list containing the generated ChatMessage responses.

Overview

Use MetaLlamaChatGenerator to generate text using models available on Meta's Llama API. For supported models, see Llama API Docs.

You can pass any text generation parameters valid for the Llama Chat Completion API directly to this component using the generation_kwargs parameter.

Use this component to:

  • Work seamlessly with the Llama API Chat Completion endpoint.
  • Stream responses from the Llama API Chat Completion endpoint.
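
For quick experiments outside a pipeline, you can also run the component on its own. A minimal sketch, assuming the LLAMA_API_KEY environment variable is set and the Meta Llama integration package is installed (the sampling values below are illustrative):

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# The component reads LLAMA_API_KEY from the environment by default.
generator = MetaLlamaChatGenerator(
    model="Llama-4-Scout-17B-16E-Instruct-FP8",
    generation_kwargs={"max_tokens": 512, "temperature": 0.2},
)

result = generator.run(messages=[ChatMessage.from_user("Explain BM25 in two sentences.")])
print(result["replies"][0].text)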

Response Format

MetaLlamaChatGenerator currently supports only the json_schema response format.
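
Because the component targets the OpenAI-compatible /compat/v1/ endpoint, a structured response is typically requested through the response_format key in generation_kwargs. A hedged sketch, assuming the OpenAI-style json_schema payload shape (the schema name and fields are hypothetical):

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# Hypothetical schema for illustration; the payload shape follows the
# OpenAI-compatible response_format convention, which is an assumption here.
person_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "person",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    },
}

generator = MetaLlamaChatGenerator(generation_kwargs={"response_format": person_schema})
result = generator.run(messages=[ChatMessage.from_user("Extract: Ada Lovelace, age 36.")])
print(result["replies"][0].text)  # a JSON string matching the schema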

Usage Example

Using the Component in a Pipeline

This is an example RAG pipeline with MetaLlamaChatGenerator and DeepsetAnswerBuilder:

components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0

  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2

  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return

  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false

  MetaLlamaChatGenerator:
    type: haystack_integrations.components.generators.meta_llama.chat.chat_generator.MetaLlamaChatGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - LLAMA_API_KEY
        strict: false
      model: Llama-4-Scout-17B-16E-Instruct-FP8
      api_base_url: https://api.llama.com/compat/v1/
      generation_kwargs:
      streaming_callback:
      tools:

connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: ChatPromptBuilder.prompt
    receiver: MetaLlamaChatGenerator.messages
  - sender: MetaLlamaChatGenerator.replies
    receiver: OutputAdapter.replies

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}
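
Outside of Pipeline Builder, the ChatPromptBuilder-to-MetaLlamaChatGenerator wiring at the heart of this pipeline can be reproduced in a few lines of Python. A minimal sketch (the prompt template and document contents are illustrative):

from haystack import Document, Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

template = [
    ChatMessage.from_user(
        "Answer based only on the documents:\n"
        "{% for doc in documents %}{{ doc.content }}\n{% endfor %}\n"
        "Question: {{ query }}"
    )
]

pipeline = Pipeline()
pipeline.add_component("prompt_builder", ChatPromptBuilder(template=template, required_variables=["query", "documents"]))
pipeline.add_component("llm", MetaLlamaChatGenerator())  # reads LLAMA_API_KEY from the environment
pipeline.connect("prompt_builder.prompt", "llm.messages")

result = pipeline.run({
    "prompt_builder": {
        "query": "Where is the Eiffel Tower?",
        "documents": [Document(content="The Eiffel Tower is in Paris.")],
    }
})
print(result["llm"]["replies"][0].text)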

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

  • api_key (Secret, default Secret.from_env_var('LLAMA_API_KEY')): The Llama API key.
  • model (str, default Llama-4-Scout-17B-16E-Instruct-FP8): The name of the Llama chat completion model to use.
  • streaming_callback (Optional[StreamingCallbackT], default None): A callback function called when a new token is received from the stream. The callback function accepts a StreamingChunk as an argument.
  • api_base_url (Optional[str], default https://api.llama.com/compat/v1/): The Llama API base URL. For more details, see the Llama API docs.
  • generation_kwargs (Optional[Dict[str, Any]], default None): Other parameters to use for the model. These parameters are sent directly to the Llama API endpoint; see the Llama API docs for more details. Some of the supported parameters:
    • max_tokens: The maximum number of tokens the output text can have.
    • temperature: The sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer.
    • top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens in the top 10% probability mass are considered.
    • stream: Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
    • safe_prompt: Whether to inject a safety prompt before all conversations.
    • random_seed: The seed to use for random sampling.
  • tools (Optional[Union[List[Tool], Toolset]], default None): A list of tools for which the model can prepare calls.
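
To surface tokens as they arrive, pass a streaming_callback at init time. A minimal sketch using Haystack's built-in print_streaming_chunk helper:

from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# print_streaming_chunk writes each StreamingChunk's content to stdout as it arrives.
generator = MetaLlamaChatGenerator(streaming_callback=print_streaming_chunk)
generator.run(messages=[ChatMessage.from_user("Write a haiku about retrieval.")])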

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

  • messages (List[ChatMessage], required): A list of ChatMessage instances representing the input messages.
  • streaming_callback (Optional[StreamingCallbackT], default None): A callback function called when a new token is received from the stream.
  • generation_kwargs (Optional[Dict[str, Any]], default None): Additional keyword arguments for the model. For details, see the model documentation.
  • tools (Optional[Union[List[Tool], Toolset]], default None): A list of Tool objects or a Toolset that the model can use.
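
Parameters passed to run() override the init-time configuration for that single call. For example, to force deterministic, short output on one request (values illustrative):

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

generator = MetaLlamaChatGenerator()  # init-time defaults

# Per-call override: this request runs with temperature 0 and a short token cap,
# regardless of any generation_kwargs set when the component was created.
result = generator.run(
    messages=[ChatMessage.from_user("List three uses of BM25 retrieval.")],
    generation_kwargs={"temperature": 0.0, "max_tokens": 128},
)
print(result["replies"][0].text)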