MetaLlamaChatGenerator
Generates text using Meta's Llama models via the Llama API.
Key Features
- Generates text using models available on Meta's Llama API.
- Supports streaming responses for real-time token delivery.
- Accepts tool definitions for function-calling workflows.
- Configurable generation parameters such as temperature, max_tokens, and top_p.
- Compatible with ChatPromptBuilder for structured prompt construction.
- Currently supports only the json_schema response format.
Configuration
Get your API key from Meta's Llama API. Then create a secret with it, using LLAMA_API_KEY as the secret key. For instructions, see Create Secrets.
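For reference, this is what the api_key setting corresponds to in Python; a minimal sketch, assuming the haystack package is installed:

```python
from haystack.utils import Secret

# Resolves the key from the LLAMA_API_KEY environment variable at runtime,
# so the key itself never appears in the pipeline YAML.
api_key = Secret.from_env_var("LLAMA_API_KEY")
```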
- Drag the MetaLlamaChatGenerator component onto the canvas from the Component Library.
- Click the component to open the configuration panel.
- On the General tab, enter the name of the Llama model to use, such as Llama-4-Scout-17B-16E-Instruct-FP8. For supported models, see the Llama API docs.
- Go to the Advanced tab to configure the API key, API base URL, timeout, max retries, generation parameters, and streaming callback.
Connections
MetaLlamaChatGenerator accepts a list of messages (ChatMessage objects) and optional streaming_callback, generation_kwargs, and tools as inputs. It outputs a list of replies (ChatMessage objects).
Connect ChatPromptBuilder to its messages input. Connect its replies output to OutputAdapter or DeepsetAnswerBuilder for further processing.
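A minimal standalone sketch of this input/output contract in Python, assuming the Meta Llama Haystack integration package is installed and LLAMA_API_KEY is set in the environment:

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

generator = MetaLlamaChatGenerator(model="Llama-4-Scout-17B-16E-Instruct-FP8")

# messages in, replies out: both are lists of ChatMessage objects
result = generator.run(messages=[ChatMessage.from_user("What is natural language processing?")])
print(result["replies"][0].text)
```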
Usage Example
This is an example RAG pipeline with MetaLlamaChatGenerator and DeepsetAnswerBuilder:
```yaml
components:
bm25_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: 'Standard-Index-English'
max_chunk_bytes: 104857600
embedding_dim: 768
return_embedding: false
method:
mappings:
settings:
create_index: true
http_auth:
use_ssl:
verify_certs:
timeout:
top_k: 20 # The number of results to return
fuzziness: 0
query_embedder:
type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
init_parameters:
normalize_embeddings: true
model: intfloat/e5-base-v2
embedding_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: 'Standard-Index-English'
max_chunk_bytes: 104857600
embedding_dim: 768
return_embedding: false
method:
mappings:
settings:
create_index: true
http_auth:
use_ssl:
verify_certs:
timeout:
top_k: 20 # The number of results to return
document_joiner:
type: haystack.components.joiners.document_joiner.DocumentJoiner
init_parameters:
join_mode: concatenate
ranker:
type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
init_parameters:
model: intfloat/simlm-msmarco-reranker
top_k: 8
meta_field_grouping_ranker:
type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
init_parameters:
group_by: file_id
subgroup_by:
sort_docs_by: split_id
answer_builder:
type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
init_parameters:
reference_pattern: acm
ChatPromptBuilder:
type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
init_parameters:
template:
- _content:
- text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
_role: system
- _content:
- text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
_role: user
required_variables:
variables:
OutputAdapter:
type: haystack.components.converters.output_adapter.OutputAdapter
init_parameters:
template: '{{ replies[0] }}'
output_type: List[str]
custom_filters:
unsafe: false
MetaLlamaChatGenerator:
type: haystack_integrations.components.generators.meta_llama.chat.chat_generator.MetaLlamaChatGenerator
init_parameters:
api_key:
type: env_var
env_vars:
- LLAMA_API_KEY
strict: false
model: Llama-4-Scout-17B-16E-Instruct-FP8
api_base_url: https://api.llama.com/compat/v1/
generation_kwargs:
streaming_callback:
tools:
connections: # Defines how the components are connected
- sender: bm25_retriever.documents
receiver: document_joiner.documents
- sender: query_embedder.embedding
receiver: embedding_retriever.query_embedding
- sender: embedding_retriever.documents
receiver: document_joiner.documents
- sender: document_joiner.documents
receiver: ranker.documents
- sender: ranker.documents
receiver: meta_field_grouping_ranker.documents
- sender: meta_field_grouping_ranker.documents
receiver: answer_builder.documents
- sender: meta_field_grouping_ranker.documents
receiver: ChatPromptBuilder.documents
- sender: OutputAdapter.output
receiver: answer_builder.replies
- sender: ChatPromptBuilder.prompt
receiver: MetaLlamaChatGenerator.messages
- sender: MetaLlamaChatGenerator.replies
receiver: OutputAdapter.replies
inputs: # Define the inputs for your pipeline
query: # These components will receive the query as input
- "bm25_retriever.query"
- "query_embedder.text"
- "ranker.query"
- "answer_builder.query"
- "ChatPromptBuilder.query"
filters: # These components will receive a potential query filter as input
- "bm25_retriever.filters"
- "embedding_retriever.filters"
outputs: # Defines the output of your pipeline
documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
answers: "answer_builder.answers" # The output of the pipeline is the generated answers
max_runs_per_component: 100
metadata: {}
```
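Note that the system prompt in this pipeline tells the model to rely on a web_search tool, but the YAML leaves tools empty. Below is a hedged sketch of how such a tool could be defined and passed to the generator in Python; the web_search function body is a hypothetical stand-in, not part of this pipeline:

```python
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

def web_search(query: str) -> str:
    # Hypothetical placeholder: swap in a real search client here.
    return f"Top results for: {query}"

web_search_tool = Tool(
    name="web_search",
    description="Search the web for information missing from the documents.",
    parameters={  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {"query": {"type": "string", "description": "The search query."}},
        "required": ["query"],
    },
    function=web_search,
)

generator = MetaLlamaChatGenerator(tools=[web_search_tool])
reply = generator.run(messages=[ChatMessage.from_user("What changed in this week's release?")])["replies"][0]

# If the model decides to call the tool, the reply carries tool calls instead of text.
if reply.tool_calls:
    print(reply.tool_calls[0].tool_name, reply.tool_calls[0].arguments)
```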
Parameters
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for the model. For details, see model documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset that the model can use. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | A list containing the generated ChatMessage responses. |
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('LLAMA_API_KEY') | The Llama API key. |
| model | str | Llama-4-Scout-17B-16E-Instruct-FP8 | The name of the Llama chat completion model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | https://api.llama.com/compat/v1/ | The base URL of the Llama API. For more details, see the Llama API docs. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model. These parameters are all sent directly to the Llama API endpoint. See Llama API docs for more details. Some of the supported parameters: - max_tokens: The maximum number of tokens the output text can have. - temperature: What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. - stream: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. - safe_prompt: Whether to inject a safety prompt before all conversations. - random_seed: The seed to use for random sampling. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for which the model can prepare calls. |
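As an illustration, a short sketch of the same init parameters set in Python; the values are examples, not recommendations:

```python
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

generator = MetaLlamaChatGenerator(
    model="Llama-4-Scout-17B-16E-Instruct-FP8",
    generation_kwargs={
        "max_tokens": 512,   # cap the length of the generated reply
        "temperature": 0.2,  # low temperature for well-defined answers
        "top_p": 0.9,        # nucleus sampling probability mass
    },
)
```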
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for the model. For details, see model documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset that the model can use. |
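For instance, streaming_callback and generation_kwargs can be overridden for a single call. A minimal sketch, assuming Haystack's built-in print_streaming_chunk helper:

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

generator = MetaLlamaChatGenerator()

# Run-time arguments override the component's init-time settings for this call only.
result = generator.run(
    messages=[ChatMessage.from_user("Summarize the history of the transistor.")],
    streaming_callback=print_streaming_chunk,  # prints tokens as they arrive
    generation_kwargs={"temperature": 0.7},
)
```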