MetaLlamaChatGenerator
Complete chats using models available on Meta's Llama API.
Basic Information
- Type: haystack_integrations.components.generators.meta_llama.chat.chat_generator.MetaLlamaChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: MetaLlamaChatGenerator receives a rendered prompt from ChatPromptBuilder.
  - DeepsetAnswerBuilder: MetaLlamaChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | The list of chat messages to send to the model. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| replies | List[ChatMessage] | | The generated chat messages. |
Overview
Enables chat completion using Llama generative models. For supported models, see the Llama API docs.
You can pass any text generation parameters valid for the Llama Chat Completion API directly to this component through the generation_kwargs parameter in __init__ or the generation_kwargs parameter of the run method.
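For instance, here is a minimal sketch that sets generation_kwargs at initialization and overrides one of them at query time. The parameter values are illustrative, and LLAMA_API_KEY is assumed to be set in the environment:

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# defaults applied to every call
client = MetaLlamaChatGenerator(generation_kwargs={"max_tokens": 512, "temperature": 0.2})

# kwargs passed to run() override the __init__ ones for this call only
response = client.run(
    messages=[ChatMessage.from_user("Summarize NLP in one sentence.")],
    generation_kwargs={"temperature": 0.9},
)
print(response["replies"][0].text)
```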
Key Features and Compatibility:
- Primary Compatibility: Designed to work seamlessly with the Llama API Chat Completion endpoint.
- Streaming Support: Supports streaming responses from the Llama API Chat Completion endpoint.
- Customizability: Accepts all parameters supported by the Llama API Chat Completion endpoint.
- Response Format: Currently supports only the json_schema response format (see the sketch after this list).
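As a sketch of structured output, the snippet below passes a response_format through generation_kwargs. It assumes the Llama API accepts an OpenAI-style json_schema payload; the person schema is a made-up example, so verify the exact shape against the Llama API docs:

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# hypothetical schema for illustration; the payload shape follows the
# OpenAI-compatible "json_schema" convention (verify against the Llama API docs)
person_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "person",
        "schema": {
            "type": "object",
            "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
            "required": ["name", "age"],
        },
    },
}

client = MetaLlamaChatGenerator(generation_kwargs={"response_format": person_format})
response = client.run([ChatMessage.from_user("Extract the person: Ada Lovelace, 36.")])
print(response["replies"][0].text)  # a JSON string matching the schema
```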
This component uses the ChatMessage format for structuring both input and output, ensuring coherent and contextually relevant responses in chat-based text generation scenarios. Details on the ChatMessage format can be found in the Haystack docs.
For more details on the parameters supported by the Llama API, refer to the Llama API Docs.
Usage example:

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

messages = [ChatMessage.from_user("What's Natural Language Processing?")]

client = MetaLlamaChatGenerator()
response = client.run(messages)
print(response)
```
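Streaming (noted under Key Features above) only needs a callback. A minimal sketch using Haystack's built-in print_streaming_chunk helper, which prints tokens as they arrive:

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# tokens are printed incrementally as the API streams them back
client = MetaLlamaChatGenerator(streaming_callback=print_streaming_chunk)
client.run([ChatMessage.from_user("Write a haiku about rivers.")])
```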
Usage Example

```yaml
components:
  MetaLlamaChatGenerator:
    type: haystack_integrations.components.generators.meta_llama.chat.chat_generator.MetaLlamaChatGenerator
    init_parameters:
```
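In a Python pipeline, the component typically receives its messages from ChatPromptBuilder, as noted under Basic Information. A minimal sketch, assuming LLAMA_API_KEY is set; the template and topic value are illustrative:

```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

pipeline = Pipeline()
pipeline.add_component(
    "prompt_builder",
    ChatPromptBuilder(template=[ChatMessage.from_user("Explain {{ topic }} in two sentences.")]),
)
pipeline.add_component("llm", MetaLlamaChatGenerator())
# ChatPromptBuilder renders the template into chat messages for the generator
pipeline.connect("prompt_builder.prompt", "llm.messages")

result = pipeline.run(data={"prompt_builder": {"topic": "vector search"}})
print(result["llm"]["replies"][0].text)
```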
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('LLAMA_API_KEY') | The Llama API key. |
| model | str | Llama-4-Scout-17B-16E-Instruct-FP8 | The name of the Llama chat completion model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | https://api.llama.com/compat/v1/ | The Llama API base URL. For more details, see the Llama API docs. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model. These parameters are all sent directly to the Llama API endpoint. See Llama API docs for more details. Some of the supported parameters: - max_tokens: The maximum number of tokens the output text can have. - temperature: What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. - stream: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. - safe_prompt: Whether to inject a safety prompt before all conversations. - random_seed: The seed to use for random sampling. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for which the model can prepare calls. See the example after this table. |
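As a hedged sketch of the tools parameter, the snippet below registers a single Python function as a Haystack Tool. The get_weather function and its JSON schema are hypothetical, defined only for illustration:

```python
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool
from haystack_integrations.components.generators.meta_llama import MetaLlamaChatGenerator

# hypothetical function exposed to the model as a tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

weather_tool = Tool(
    name="get_weather",
    description="Get the current weather for a city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    function=get_weather,
)

client = MetaLlamaChatGenerator(tools=[weather_tool])
response = client.run([ChatMessage.from_user("What's the weather in Paris?")])
print(response["replies"][0].tool_calls)  # tool calls prepared by the model, if any
```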
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for text generation. These parameters override the parameters passed during component initialization. For details on supported parameters, see the Llama API docs. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools for which the model can prepare calls. If set, it overrides the tools parameter set during component initialization. |