NvidiaChatGenerator
Complete chats using generative models hosted on NVIDIA's cloud.
Basic Information
- Type: haystack_integrations.components.generators.nvidia.chat.chat_generator.NvidiaChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: NvidiaChatGenerator receives a rendered prompt from ChatPromptBuilder.
  - DeepsetAnswerBuilder: NvidiaChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage objects representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the arguments you can use, see NVIDIA API docs. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set in pipeline configuration. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| tools_strict | Optional[bool] | None | Whether to strictly enforce the tools provided in the tools parameter. If set to True, the model will only use the tools provided in the tools parameter. If set to False, the model can use other tools that are not provided in the tools parameter. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | A list of ChatMessage objects representing the generated responses. |
Overview
Use NvidiaChatGenerator to generate text using NVIDIA generative models. You can use self-hosted models with NVIDIA NIM or models hosted on the NVIDIA API catalog. For supported models, see NVIDIA Docs.
You can pass additional text generation parameters for the NVIDIA Chat Completion API directly to this component using the generation_kwargs parameter. For a comprehensive list of supported parameters, refer to the NVIDIA documentation.
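As a minimal sketch, generation_kwargs can be set in the component's init_parameters; the parameter values below are illustrative choices, not defaults:

```yaml
components:
  NvidiaChatGenerator:
    type: haystack_integrations.components.generators.nvidia.chat.chat_generator.NvidiaChatGenerator
    init_parameters:
      model: meta/llama-3.1-8b-instruct
      generation_kwargs:
        temperature: 0.2   # low temperature for more deterministic answers
        max_tokens: 512    # cap the length of the generated reply
        top_p: 0.9         # nucleus sampling threshold
```

The same generation_kwargs dictionary can also be passed at query time through the run() method, in which case it overrides the values configured here.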
Authorization
You need an NVIDIA API key to use this component. Connect deepset to your NVIDIA account on the Integrations page.
Connection Instructions
- Click your profile icon in the top right corner and choose Integrations.
- Click Connect next to the provider.
- Enter your API key and submit it.
Usage Example
Initializing the Component
```yaml
components:
  NvidiaChatGenerator:
    type: haystack_integrations.components.generators.nvidia.chat.chat_generator.NvidiaChatGenerator
    init_parameters:
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('NVIDIA_API_KEY') | The NVIDIA API key. |
| model | str | meta/llama-3.1-8b-instruct | The name of the NVIDIA chat completion model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | os.getenv('NVIDIA_API_URL', DEFAULT_API_URL) | The NVIDIA API base URL. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model. These parameters are all sent directly to the NVIDIA API endpoint. See NVIDIA API docs for more details. Some of the supported parameters: - max_tokens: The maximum number of tokens the output text can have. - temperature: What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. - stream: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a list of Tool objects or a Toolset instance. |
| timeout | Optional[float] | None | The timeout for the NVIDIA API call. |
| max_retries | Optional[int] | None | Maximum number of retries to contact NVIDIA after an internal error. If not set, it defaults to the NVIDIA_MAX_RETRIES environment variable, or to 5. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
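For example, timeout, max_retries, and http_client_kwargs can be combined to tune connection behavior. This is an illustrative sketch: the proxy URL is a placeholder, and the keys under http_client_kwargs are standard httpx.Client keyword arguments, not values specific to this component:

```yaml
components:
  NvidiaChatGenerator:
    type: haystack_integrations.components.generators.nvidia.chat.chat_generator.NvidiaChatGenerator
    init_parameters:
      timeout: 30.0        # seconds to wait for the NVIDIA API call
      max_retries: 3       # retries after an internal error
      http_client_kwargs:
        proxy: http://proxy.example.com:8080  # placeholder proxy address
```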
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage objects representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary containing keyword arguments to customize text generation. For more information on the arguments you can use, see NVIDIA API docs. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set in pipeline configuration. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| tools_strict | Optional[bool] | None | Whether to strictly enforce the tools provided in the tools parameter. If set to True, the model will only use the tools provided in the tools parameter. If set to False, the model can use other tools that are not provided in the tools parameter. |