
AzureOpenAIGenerator

AzureOpenAIGenerator generates text using OpenAI's large language models (LLMs) hosted on Azure. It works with GPT-4-class models and supports streaming responses.

Key Features

  • Generates text using OpenAI LLMs deployed on Azure.
  • Supports streaming responses for real-time output.
  • Accepts a system prompt for controlling generation behavior.
  • Configurable generation parameters like temperature, max tokens, and stop sequences.
  • Compatible with PromptBuilder for dynamic prompt construction.

Configuration

Authentication

To use this component, connect Haystack Platform with Azure OpenAI first. For detailed instructions, see Use Azure OpenAI Models.

  1. Drag the AzureOpenAIGenerator component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. Configure the parameters as needed. You can set the API key and endpoint through environment variables (AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT) or directly in the configuration panel.
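As an illustrative sketch, the two environment variables named above could be set from Python before the pipeline is constructed. The variable names come from this page; the values are placeholders:

```python
import os

# Placeholder values -- replace with your actual Azure OpenAI credentials.
os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-openai-api-key"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://example-resource.azure.openai.com/"

# The component reads these variables at initialization time,
# so they must be set before the pipeline is built.
print(os.environ["AZURE_OPENAI_ENDPOINT"])
```

In production, prefer setting these variables in your deployment environment or a secrets manager rather than hard-coding them in source.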

Connections

AzureOpenAIGenerator accepts a text prompt as input. It outputs a list of generated text responses and a list of metadata dictionaries.

Connect a PromptBuilder to the prompt input to build dynamic prompts. Connect the replies output to AnswerBuilder for answer extraction.

Usage Example

Here's an example RAG pipeline using AzureOpenAIGenerator:

```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          use_ssl: true
          verify_certs: false
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
      top_k: 10

  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a helpful assistant.
        Answer the question based on the provided documents.
        If the documents don't contain the answer, say so.

        Documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}

        Question: {{question}}
        Answer:

  azure_generator:
    type: haystack.components.generators.azure.AzureOpenAIGenerator
    init_parameters:
      azure_endpoint: ${AZURE_OPENAI_ENDPOINT}
      azure_deployment: gpt-4.1-mini
      api_version: 2024-12-01-preview
      generation_kwargs:
        temperature: 0.7
        max_completion_tokens: 500

  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}

connections:
  - sender: bm25_retriever.documents
    receiver: prompt_builder.documents
  - sender: bm25_retriever.documents
    receiver: answer_builder.documents
  - sender: prompt_builder.prompt
    receiver: azure_generator.prompt
  - sender: azure_generator.replies
    receiver: answer_builder.replies

max_runs_per_component: 100

inputs:
  query:
    - bm25_retriever.query
    - prompt_builder.question
    - answer_builder.query

outputs:
  answers: answer_builder.answers
```

Parameters

Inputs

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | str | The text prompt to generate text from. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| replies | List[str] | A list of generated text responses. |
| meta | List[Dict] | A list of metadata dictionaries for each generated response, including model information, tokens used, and finish reason. |
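To make the output shape concrete, here is a sketch with hard-coded stand-in values for `replies` and `meta`. The real values come from the generator at run time, and the exact keys inside each `meta` dictionary (shown here in a typical OpenAI-style layout) can vary by model:

```python
# Stand-in values illustrating the documented output shape:
# `replies` is List[str]; `meta` is List[Dict] with one entry per reply.
replies = ["Paris is the capital of France."]
meta = [
    {
        "model": "gpt-4.1-mini",  # which deployment produced the reply
        "finish_reason": "stop",  # why generation ended
        "usage": {"prompt_tokens": 42, "completion_tokens": 9},
    }
]

# Pair each reply with its metadata.
for reply, info in zip(replies, meta):
    print(f"{reply!r} (finish_reason={info['finish_reason']})")
```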

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| azure_endpoint | Optional[str] | None | The endpoint of the deployed model, for example https://example-resource.azure.openai.com/. Can also be set via the AZURE_OPENAI_ENDPOINT environment variable. |
| api_version | Optional[str] | 2024-12-01-preview | The version of the Azure OpenAI API to use. |
| azure_deployment | Optional[str] | gpt-4.1-mini | The deployment name of the model, usually the model name. |
| api_key | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_API_KEY', strict=False) | The API key to use for authentication. |
| azure_ad_token | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_AD_TOKEN', strict=False) | Azure Active Directory token for authentication. See Microsoft Entra ID. |
| organization | Optional[str] | None | Your organization ID. For help, see Setting up your organization. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. It accepts StreamingChunk as an argument. |
| system_prompt | Optional[str] | None | The system prompt to use for text generation. If not provided, the generator uses the default system prompt. |
| timeout | Optional[float] | 30.0 | Timeout for the AzureOpenAI client. If not set, it is inferred from the OPENAI_TIMEOUT environment variable or defaults to 30 seconds. |
| max_retries | Optional[int] | 5 | Maximum number of retries to contact AzureOpenAI after an internal error. If not set, it is inferred from the OPENAI_MAX_RETRIES environment variable or defaults to 5. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters for the model, sent directly to the Azure OpenAI endpoint. See the OpenAI documentation for details. Some supported parameters: max_completion_tokens (upper bound for generated tokens), temperature (sampling temperature; higher values make output more random), top_p (nucleus sampling probability mass), n (number of completions per prompt), stop (sequences that stop generation), presence_penalty (penalty for token presence), frequency_penalty (penalty for token frequency), logit_bias (adds bias to specific tokens). |
| default_headers | Optional[Dict[str, str]] | None | Default headers to use for the AzureOpenAI client. |
| azure_ad_token_provider | Optional[AzureADTokenProvider] | None | A function that returns an Azure Active Directory token, invoked on every request. |
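A streaming_callback is invoked once per chunk as tokens arrive. The sketch below uses a minimal stand-in class instead of Haystack's real StreamingChunk (assumed here to expose a `.content` string, as the table above describes) so the accumulation logic can be shown self-contained:

```python
from dataclasses import dataclass


@dataclass
class FakeChunk:
    """Minimal stand-in for Haystack's StreamingChunk (assumed .content attribute)."""
    content: str


collected = []


def streaming_callback(chunk):
    # Called once per chunk: print immediately for real-time output
    # and keep a copy so the full reply can be reassembled.
    collected.append(chunk.content)
    print(chunk.content, end="", flush=True)


# Simulate a stream of chunks as the generator would deliver them.
for piece in ["Hel", "lo, ", "world!"]:
    streaming_callback(FakeChunk(content=piece))

full_text = "".join(collected)  # "Hello, world!"
```

With the real component, you would pass this function as the streaming_callback init parameter; the accumulation pattern stays the same.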

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | Required | The text prompt to generate text from. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional parameters for text generation. These override the parameters set during component initialization. |
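The override behavior described above can be modeled as a plain dictionary merge: keys passed at query time take precedence over those set at initialization, while unspecified keys keep their init-time values. This is a simplified sketch of the documented semantics, not Haystack's actual implementation:

```python
# generation_kwargs set at component initialization.
init_kwargs = {"temperature": 0.7, "max_completion_tokens": 500}

# generation_kwargs passed to run() at query time.
run_kwargs = {"temperature": 0.2}

# Run-time values override init-time values; unspecified keys are kept.
effective = {**init_kwargs, **(run_kwargs or {})}
print(effective)  # {'temperature': 0.2, 'max_completion_tokens': 500}
```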