AzureOpenAIGenerator
Generate text using OpenAI's large language models (LLMs) hosted on Azure. It works with GPT-4-type models and supports streaming responses.
Key Features
- Generates text using OpenAI LLMs deployed on Azure.
- Supports streaming responses for real-time output.
- Accepts a system prompt for controlling generation behavior.
- Configurable generation parameters like temperature, max tokens, and stop sequences.
- Compatible with PromptBuilder for dynamic prompt construction.
Configuration
To use this component, first connect Haystack Platform with Azure OpenAI. For detailed instructions, see Use Azure OpenAI Models.
- Drag the AzureOpenAIGenerator component onto the canvas from the Component Library.
- Click the component to open the configuration panel.
- Configure the parameters as needed. You can set the API key and endpoint through environment variables (AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT) or directly in the configuration panel.
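For example, in a local shell the two environment variables could be set like this (placeholder values; substitute your own key and endpoint):

```shell
# Placeholder credentials for AzureOpenAIGenerator; replace with your own.
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://example-resource.azure.openai.com/"
```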
Connections
AzureOpenAIGenerator accepts a text prompt as input. It outputs a list of generated text responses and a list of metadata dictionaries.
Connect a PromptBuilder to the prompt input to build dynamic prompts. Connect the replies output to AnswerBuilder for answer extraction.
Usage Example
Here's an example RAG pipeline using AzureOpenAIGenerator:
```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          use_ssl: true
          verify_certs: false
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
      top_k: 10
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a helpful assistant.
        Answer the question based on the provided documents.
        If the documents don't contain the answer, say so.

        Documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}

        Question: {{question}}
        Answer:
  azure_generator:
    type: haystack.components.generators.azure.AzureOpenAIGenerator
    init_parameters:
      azure_endpoint: ${AZURE_OPENAI_ENDPOINT}
      azure_deployment: gpt-4.1-mini
      api_version: 2024-12-01-preview
      generation_kwargs:
        temperature: 0.7
        max_completion_tokens: 500
  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}

connections:
  - sender: bm25_retriever.documents
    receiver: prompt_builder.documents
  - sender: bm25_retriever.documents
    receiver: answer_builder.documents
  - sender: prompt_builder.prompt
    receiver: azure_generator.prompt
  - sender: azure_generator.replies
    receiver: answer_builder.replies

max_runs_per_component: 100

inputs:
  query:
    - bm25_retriever.query
    - prompt_builder.question
    - answer_builder.query

outputs:
  answers: answer_builder.answers
```
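To see what the prompt_builder step feeds into azure_generator, here is a plain-Python stand-in for the Jinja rendering above (the real PromptBuilder uses Jinja2; the sample documents below are hypothetical):

```python
# Plain-Python stand-in for the Jinja template in prompt_builder above.
# It shows the final prompt text that azure_generator receives.

def render_prompt(documents, question):
    doc_section = "\n".join(doc["content"] for doc in documents)
    return (
        "You are a helpful assistant.\n"
        "Answer the question based on the provided documents.\n"
        "If the documents don't contain the answer, say so.\n\n"
        "Documents:\n"
        f"{doc_section}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

docs = [{"content": "Azure OpenAI hosts GPT models."}]  # hypothetical retrieved documents
prompt = render_prompt(docs, "Where are the GPT models hosted?")
print(prompt)
```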
Parameters
Inputs
| Parameter | Type | Description |
|---|---|---|
| prompt | str | The text prompt to generate text from. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[str] | A list of generated text responses. |
| meta | List[Dict] | A list of metadata dictionaries, one per generated response, including model information, tokens used, and finish reason. |
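The two output lists are parallel: the nth meta entry describes the nth reply. A sketch of consuming them (the values shown are hypothetical, not real API output):

```python
# Hypothetical shape of the dictionary AzureOpenAIGenerator returns.
# "replies" and "meta" are parallel lists, one entry per generated response.
result = {
    "replies": ["Paris is the capital of France."],
    "meta": [{
        "model": "gpt-4.1-mini",
        "finish_reason": "stop",
        "usage": {"prompt_tokens": 42, "completion_tokens": 9},
    }],
}

for reply, meta in zip(result["replies"], result["meta"]):
    print(reply, "| finished because:", meta["finish_reason"])
```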
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| azure_endpoint | Optional[str] | None | The endpoint of the deployed model, for example https://example-resource.azure.openai.com/. Can also be set through the AZURE_OPENAI_ENDPOINT environment variable. |
| api_version | Optional[str] | 2024-12-01-preview | The version of the Azure OpenAI API to use. |
| azure_deployment | Optional[str] | gpt-4.1-mini | The deployment name of the model, usually the model name. |
| api_key | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_API_KEY', strict=False) | The API key to use for authentication. |
| azure_ad_token | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_AD_TOKEN', strict=False) | Azure Active Directory token for authentication. See Microsoft Entra ID. |
| organization | Optional[str] | None | Your organization ID. For help, see Setting up your organization. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. It accepts a StreamingChunk as an argument. |
| system_prompt | Optional[str] | None | The system prompt to use for text generation. If not provided, the generator uses the default system prompt. |
| timeout | Optional[float] | 30.0 | Timeout for the AzureOpenAI client. If not set, it is inferred from the OPENAI_TIMEOUT environment variable or defaults to 30. |
| max_retries | Optional[int] | 5 | Maximum number of retries to contact AzureOpenAI after it returns an internal error. If not set, it is inferred from the OPENAI_MAX_RETRIES environment variable or defaults to 5. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments for configuring a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters for the model, sent directly to the Azure OpenAI endpoint. See the OpenAI documentation for details. Some supported parameters: max_completion_tokens (upper bound on generated tokens), temperature (sampling temperature; higher values make output more random), top_p (nucleus-sampling probability mass), n (number of completions per prompt), stop (sequences that stop generation), presence_penalty (penalizes tokens that have already appeared), frequency_penalty (penalizes tokens by how often they have appeared), logit_bias (adds a bias to specific tokens). |
| default_headers | Optional[Dict[str, str]] | None | Default headers for the AzureOpenAI client. |
| azure_ad_token_provider | Optional[AzureADTokenProvider] | None | A function that returns an Azure Active Directory token, invoked on every request. |
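A streaming_callback is any callable that takes a single chunk argument. A minimal sketch, where SimpleNamespace stands in for Haystack's StreamingChunk (which exposes a content attribute); in a real pipeline, Haystack invokes the callback as tokens arrive from Azure OpenAI:

```python
from types import SimpleNamespace

received = []

def streaming_callback(chunk):
    # Called once per chunk as the model streams its response;
    # chunk.content holds the newly generated text.
    received.append(chunk.content)
    print(chunk.content, end="", flush=True)

# Simulated stream; stand-in objects replace real StreamingChunk instances.
for piece in ("Hello", ", ", "world", "!"):
    streaming_callback(SimpleNamespace(content=piece))
print()
```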
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | | The text prompt to generate text from. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional parameters for text generation. These override the parameters set during component initialization. |
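The override works like a dictionary merge in which run-time values win. A conceptual sketch (the parameter values are illustrative):

```python
# Sketch of how run-time generation_kwargs override init-time ones.
init_kwargs = {"temperature": 0.7, "max_completion_tokens": 500}  # set at init
runtime_kwargs = {"temperature": 0.2}                             # passed at query time

# Run-time values take precedence; unspecified keys keep their init values.
effective = {**init_kwargs, **(runtime_kwargs or {})}
print(effective)  # → {'temperature': 0.2, 'max_completion_tokens': 500}
```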