# AzureOpenAIChatGenerator
Generate text using OpenAI's models on Azure.
## Basic Information

- Type: haystack.components.generators.chat.azure.AzureOpenAIChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: AzureOpenAIChatGenerator receives a rendered prompt from ChatPromptBuilder.
  - DeepsetAnswerBuilder: AzureOpenAIChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter (see Usage Example below).
## Inputs

| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] |  | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the OpenAI generation endpoint. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset that the model can use. Each tool must have a unique name. If set, it overrides the tools parameter set during component initialization. |
| tools_strict | Optional[bool] | None | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. If set, it overrides the tools_strict parameter set during component initialization. |
## Outputs

| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | The responses from the model. |
## Overview

AzureOpenAIChatGenerator works with gpt-4-type models and supports streaming responses from the OpenAI API.

You can customize how the text is generated by passing parameters to the OpenAI API. Use the generation_kwargs argument when you configure the component in a pipeline. Any parameter that works with openai.ChatCompletion.create also works here.

For details on OpenAI API parameters, see the OpenAI documentation.
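For example, here is a minimal sketch of setting generation parameters in the component's YAML configuration; the parameter names come from the Init Parameters table below, and the values are illustrative, not recommendations:

```yaml
components:
  AzureOpenAIChatGenerator:
    type: haystack.components.generators.chat.azure.AzureOpenAIChatGenerator
    init_parameters:
      azure_deployment: gpt-4o-mini
      generation_kwargs:
        max_tokens: 650    # upper bound on the length of the generated text
        temperature: 0.2   # lower values make the output more deterministic
        top_p: 0.9         # nucleus sampling threshold
```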
## Authentication

To use this component, connect deepset with Azure OpenAI first:

Connection Instructions

1. Click your profile icon in the top right corner and choose Integrations.
2. Click Connect next to the provider.
3. Enter your API key and submit it.
## Usage Example

### Initializing the Component

```yaml
components:
  AzureOpenAIChatGenerator:
    type: haystack.components.generators.chat.azure.AzureOpenAIChatGenerator
    init_parameters:
```
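The snippet above leaves init_parameters empty. A filled-in sketch, assuming a gpt-4o-mini deployment and the API key stored in the AZURE_OPENAI_API_KEY environment variable (the endpoint is a placeholder):

```yaml
components:
  AzureOpenAIChatGenerator:
    type: haystack.components.generators.chat.azure.AzureOpenAIChatGenerator
    init_parameters:
      azure_endpoint: https://example-resource.azure.openai.com/  # placeholder endpoint
      azure_deployment: gpt-4o-mini
      api_version: '2023-05-15'
      api_key:
        type: env_var
        env_vars:
          - AZURE_OPENAI_API_KEY
        strict: false
```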
### Using the Component in a Pipeline
This is a RAG pipeline where AzureOpenAIChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter:
```yaml
components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0
  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2
  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate
  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8
  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id
  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm
  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:
  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false
  AzureOpenAIChatGenerator:
    type: haystack.components.generators.chat.azure.AzureOpenAIChatGenerator
    init_parameters:
      azure_endpoint:
      api_version: '2023-05-15'
      azure_deployment: gpt-4o-mini
      api_key:
        type: env_var
        env_vars:
          - AZURE_OPENAI_API_KEY
        strict: false
      azure_ad_token:
        type: env_var
        env_vars:
          - AZURE_OPENAI_AD_TOKEN
        strict: false
      organization:
      streaming_callback:
      timeout:
      max_retries:
      generation_kwargs:
      default_headers:
      tools:
      tools_strict: false
      azure_ad_token_provider:
      http_client_kwargs:

connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: ChatPromptBuilder.prompt
    receiver: AzureOpenAIChatGenerator.messages
  - sender: AzureOpenAIChatGenerator.replies
    receiver: OutputAdapter.replies

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
  answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}
```
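Note the role of OutputAdapter in this pipeline: AzureOpenAIChatGenerator returns its replies as a list of ChatMessage objects, while DeepsetAnswerBuilder expects string replies. The OutputAdapter between them renders the first reply into the List[str] format that answer_builder.replies accepts, so the answer builder can pair the generated text with the retrieved documents.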
## Parameters

### Init Parameters

These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| azure_endpoint | Optional[str] | None | The endpoint of the deployed model, for example "https://example-resource.azure.openai.com/". |
| api_version | Optional[str] | 2023-05-15 | The version of the API to use. |
| azure_deployment | Optional[str] | gpt-4o-mini | The deployment of the model, usually the model name. |
| api_key | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_API_KEY', strict=False) | The API key to use for authentication. |
| azure_ad_token | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_AD_TOKEN', strict=False) | Azure Active Directory token. |
| organization | Optional[str] | None | Your organization ID. For help, see Setting up your organization. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. It accepts StreamingChunk as an argument. |
| timeout | Optional[float] | None | Timeout for OpenAI client calls. If not set, it defaults to the OPENAI_TIMEOUT environment variable or 30 seconds. |
| max_retries | Optional[int] | None | Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to the OPENAI_MAX_RETRIES environment variable or 5. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model. These parameters are sent directly to the OpenAI endpoint. For details, see OpenAI documentation. Some of the supported parameters: - max_tokens: The maximum number of tokens the output text can have. - temperature: The sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - top_p: Nucleus sampling is an alternative to sampling with temperature, where the model considers tokens with a top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. - n: The number of completions to generate for each prompt. For example, with 3 prompts and n=2, the LLM will generate two completions per prompt, resulting in 6 completions total. - stop: One or more sequences after which the LLM should stop generating tokens. - presence_penalty: The penalty applied if a token is already present. Higher values make the model less likely to repeat the token. - frequency_penalty: Penalty applied if a token has already been generated. Higher values make the model less likely to repeat the token. - logit_bias: Adds a logit bias to specific tokens. The keys of the dictionary are tokens, and the values are the bias to add to that token. |
| default_headers | Optional[Dict[str, str]] | None | Default headers to use for the AzureOpenAI client. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a list of Tool objects or a Toolset instance. |
| tools_strict | bool | False | Whether to enable strict schema adherence for tool calls. If set to True, the model will follow exactly the schema provided in the parameters field of the tool definition, but this may increase latency. |
| azure_ad_token_provider | Optional[Union[AzureADTokenProvider, AsyncAzureADTokenProvider]] | None | A function that returns an Azure Active Directory token, will be invoked on every request. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
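As a rough sketch, the client-related settings above (timeout, retries, and default headers) can be combined like this; all values are illustrative:

```yaml
components:
  AzureOpenAIChatGenerator:
    type: haystack.components.generators.chat.azure.AzureOpenAIChatGenerator
    init_parameters:
      azure_deployment: gpt-4o-mini
      timeout: 30.0    # overrides the OPENAI_TIMEOUT fallback
      max_retries: 5   # overrides the OPENAI_MAX_RETRIES fallback
      default_headers:
        x-request-source: my-app   # hypothetical custom header
```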
### Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] |  | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Optional arguments to pass to the OpenAI generation endpoint. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset that the model can use. If set, it overrides the tools parameter set during component initialization. |
| tools_strict | Optional[bool] | None | Whether to enable strict schema adherence for tool calls. If set, it overrides the tools_strict parameter set during component initialization. |