OpenRouterChatGenerator
Generate text using large language models hosted on OpenRouter.
Basic Information
- Type: haystack_integrations.components.generators.openrouter.chat.chat_generator.OpenRouterChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: OpenRouterChatGenerator receives a rendered prompt from ChatPromptBuilder.
  - DeepsetAnswerBuilder: OpenRouterChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.
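The connections above could be wired up in pipeline YAML roughly as follows. This is a minimal sketch: the component names (`chat_prompt_builder`, `llm`, `adapter`) and the OutputAdapter template are illustrative assumptions, not taken from a real deployment.

```yaml
# Hypothetical pipeline fragment showing how OpenRouterChatGenerator sits
# between ChatPromptBuilder and an OutputAdapter.
components:
  chat_prompt_builder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters: {}
  llm:
    type: haystack_integrations.components.generators.openrouter.chat.chat_generator.OpenRouterChatGenerator
    init_parameters: {}
  adapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      # Illustrative template: extracts the text of the first reply.
      template: "{{ replies[0].text }}"
      output_type: str

connections:
  - sender: chat_prompt_builder.prompt
    receiver: llm.messages
  - sender: llm.replies
    receiver: adapter.replies
```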
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for text generation. These parameters override the parameters in pipeline configuration. For a list of supported parameters, see OpenRouter documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it will override the tools parameter set during component initialization. This parameter can accept either a list of Tool objects or a Toolset instance. |
| tools_strict | Optional[bool] | None | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. If set, it overrides the tools_strict parameter in pipeline configuration. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | A list containing the generated responses as ChatMessage instances. |
Overview
Use OpenRouterChatGenerator to generate text using models hosted on OpenRouter. For a list of supported models, see the OpenRouter documentation.
You can pass any text generation parameters valid for the OpenRouter chat completion API directly to this component using the generation_kwargs parameter. For the parameters the OpenRouter API supports, refer to the OpenRouter API documentation.
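As a sketch, generation parameters could be set in the component's init_parameters like this. The parameter values are illustrative, not recommendations:

```yaml
components:
  OpenRouterChatGenerator:
    type: haystack_integrations.components.generators.openrouter.chat.chat_generator.OpenRouterChatGenerator
    init_parameters:
      generation_kwargs:
        max_tokens: 512    # cap on output length
        temperature: 0.2   # lower values give more deterministic output
        top_p: 0.9         # nucleus sampling threshold
```

The same keys can also be passed to the run() method at query time, where they override the values set here.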
Authorization
You need an OpenRouter API key and sufficient credits for your OpenRouter subscription to use this component. Connect deepset to your OpenRouter account by creating a secret called OPENROUTER_API_KEY. For more information about secrets, see Secrets.
Usage Example
Initializing the Component
```yaml
components:
  OpenRouterChatGenerator:
    type: haystack_integrations.components.generators.openrouter.chat.chat_generator.OpenRouterChatGenerator
    init_parameters:
```
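A more complete initialization might look like the following sketch. The model, timeout, and serialized api_key form are illustrative; they assume the OPENROUTER_API_KEY secret described in Authorization above:

```yaml
components:
  OpenRouterChatGenerator:
    type: haystack_integrations.components.generators.openrouter.chat.chat_generator.OpenRouterChatGenerator
    init_parameters:
      # Assumed serialized form of Secret.from_env_var("OPENROUTER_API_KEY")
      api_key:
        type: env_var
        env_vars:
          - OPENROUTER_API_KEY
        strict: false
      model: openai/gpt-4o-mini
      timeout: 30
```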
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('OPENROUTER_API_KEY') | The OpenRouter API key. |
| model | str | openai/gpt-4o-mini | The name of the OpenRouter chat completion model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | https://openrouter.ai/api/v1 | The OpenRouter API Base url. For more details, see OpenRouter docs. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model. These parameters are all sent directly to the OpenRouter endpoint. See OpenRouter API docs for more details. Some of the supported parameters: - max_tokens: The maximum number of tokens the output text can have. - temperature: What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. - stream: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. - safe_prompt: Whether to inject a safety prompt before all conversations. - random_seed: The seed to use for random sampling. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a list of Tool objects or a Toolset instance. |
| timeout | Optional[float] | None | The timeout for the OpenRouter API call. |
| extra_headers | Optional[Dict[str, Any]] | None | Additional HTTP headers to include in requests to the OpenRouter API. This can be useful for adding a site URL or title for rankings on openrouter.ai. For more details, see OpenRouter docs. |
| max_retries | Optional[int] | None | Maximum number of retries to contact the OpenRouter API after an internal error. If not set, it defaults to the OPENAI_MAX_RETRIES environment variable, or to 5 if that variable is not set. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
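To illustrate the extra_headers and http_client_kwargs parameters, a hedged sketch follows. The header values and proxy URL are placeholders; OpenRouter's attribution headers (HTTP-Referer and X-Title) are documented in the OpenRouter docs, and the httpx keyword arguments must match your installed httpx version:

```yaml
components:
  OpenRouterChatGenerator:
    type: haystack_integrations.components.generators.openrouter.chat.chat_generator.OpenRouterChatGenerator
    init_parameters:
      # Attribution headers used for rankings on openrouter.ai (values are placeholders)
      extra_headers:
        HTTP-Referer: https://example.com
        X-Title: My App
      # Passed through to the underlying httpx client (illustrative)
      http_client_kwargs:
        proxy: http://localhost:8080
```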
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for text generation. These parameters override the parameters in pipeline configuration. For a list of supported parameters, see OpenRouter documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it will override the tools parameter set during component initialization. This parameter can accept either a list of Tool objects or a Toolset instance. |
| tools_strict | Optional[bool] | None | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. If set, it overrides the tools_strict parameter in pipeline configuration. |
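Overriding these run() parameters at query time amounts to nesting them under the component's name in the request parameters. A minimal sketch, shown in YAML; the exact request format depends on how you call the pipeline (API, Playground, or job), and the values are illustrative:

```yaml
# Illustrative query-time override for the component named OpenRouterChatGenerator
params:
  OpenRouterChatGenerator:
    generation_kwargs:
      temperature: 0.0
      max_tokens: 256
    tools_strict: true
```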