AzureOpenAIGenerator

Generate text using OpenAI's large language models (LLMs) hosted on Azure.

Basic Information

  • Type: haystack.components.generators.azure.AzureOpenAIGenerator
  • Components it can connect with:
    • PromptBuilder: Sends formatted prompts to AzureOpenAIGenerator
    • AnswerBuilder: Receives generated text from AzureOpenAIGenerator

Inputs

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | str | The text prompt to generate text from. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| replies | List[str] | A list of generated text responses. |
| meta | List[Dict] | A list of metadata dictionaries, one for each generated response, including model information, tokens used, and finish reason. |

Overview

AzureOpenAIGenerator generates text using OpenAI's large language models (LLMs) hosted on Azure. It works with GPT-4-type models and supports streaming responses from the Azure OpenAI API.

You can customize text generation by passing parameters to the Azure OpenAI API. Use the generation_kwargs argument when you initialize the component or when you run it. Any parameter that works with openai.ChatCompletion.create will work here too.
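The effect of passing generation_kwargs both at initialization and at run time can be sketched as a shallow dictionary merge in which the run-time values win key by key. This is an illustrative sketch with a hypothetical helper, not the component's actual source:

```python
# Hypothetical helper illustrating how run-time generation_kwargs
# override init-time ones: a shallow merge where run-time values win.
def resolve_generation_kwargs(init_kwargs, runtime_kwargs=None):
    return {**(init_kwargs or {}), **(runtime_kwargs or {})}

init_kwargs = {"temperature": 0.7, "max_completion_tokens": 500}
merged = resolve_generation_kwargs(init_kwargs, {"temperature": 0.2})
# temperature is overridden at run time; max_completion_tokens is kept
```

Parameters you set at run time therefore only need to mention the keys you want to change; everything else keeps its init-time value.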

To use this component, you need:

  • An Azure OpenAI endpoint (or set via AZURE_OPENAI_ENDPOINT environment variable)
  • An API key or Azure Active Directory token for authentication
  • A deployed model (azure_deployment parameter)
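The precedence between an explicit endpoint argument and the environment variable can be sketched as follows. The resolve_endpoint helper is hypothetical, added only to illustrate the fallback order:

```python
import os

# Hypothetical sketch of endpoint resolution: an explicit argument wins,
# otherwise the AZURE_OPENAI_ENDPOINT environment variable is used.
def resolve_endpoint(explicit=None):
    endpoint = explicit or os.environ.get("AZURE_OPENAI_ENDPOINT")
    if endpoint is None:
        raise ValueError(
            "Set azure_endpoint or the AZURE_OPENAI_ENDPOINT environment variable."
        )
    return endpoint

os.environ["AZURE_OPENAI_ENDPOINT"] = "https://example-resource.azure.openai.com/"
endpoint = resolve_endpoint()  # falls back to the environment variable
```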

For details on OpenAI API parameters, see OpenAI documentation.

Authentication

To use this component, connect deepset with Azure OpenAI first. For detailed instructions, see Use Azure OpenAI Models.

Usage Example

Here's an example RAG pipeline using AzureOpenAIGenerator:

```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          use_ssl: true
          verify_certs: false
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
      top_k: 10

  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a helpful assistant.
        Answer the question based on the provided documents.
        If the documents don't contain the answer, say so.

        Documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}

        Question: {{question}}
        Answer:

  azure_generator:
    type: haystack.components.generators.azure.AzureOpenAIGenerator
    init_parameters:
      azure_endpoint: ${AZURE_OPENAI_ENDPOINT}
      azure_deployment: gpt-4.1-mini
      api_version: 2024-12-01-preview
      generation_kwargs:
        temperature: 0.7
        max_completion_tokens: 500

  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}

connections:
  - sender: bm25_retriever.documents
    receiver: prompt_builder.documents
  - sender: bm25_retriever.documents
    receiver: answer_builder.documents
  - sender: prompt_builder.prompt
    receiver: azure_generator.prompt
  - sender: azure_generator.replies
    receiver: answer_builder.replies

max_runs_per_component: 100

inputs:
  query:
    - bm25_retriever.query
    - prompt_builder.question
    - answer_builder.query

outputs:
  answers: answer_builder.answers
```

Parameters

Init parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| azure_endpoint | Optional[str] | None | The endpoint of the deployed model, for example https://example-resource.azure.openai.com/. Can also be set via the AZURE_OPENAI_ENDPOINT environment variable. |
| api_version | Optional[str] | 2024-12-01-preview | The version of the Azure OpenAI API to use. |
| azure_deployment | Optional[str] | gpt-4.1-mini | The deployment name of the model, usually the model name. |
| api_key | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_API_KEY', strict=False) | The API key to use for authentication. |
| azure_ad_token | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_AD_TOKEN', strict=False) | Azure Active Directory token for authentication. See Microsoft Entra ID. |
| organization | Optional[str] | None | Your organization ID. For help, see Setting up your organization. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. It accepts StreamingChunk as an argument. |
| system_prompt | Optional[str] | None | The system prompt to use for text generation. If not provided, the generator uses the default system prompt. |
| timeout | Optional[float] | 30.0 | Timeout for the AzureOpenAI client. If not set, it is inferred from the OPENAI_TIMEOUT environment variable or defaults to 30. |
| max_retries | Optional[int] | 5 | Maximum number of retries to contact AzureOpenAI after an internal error. If not set, it is inferred from the OPENAI_MAX_RETRIES environment variable or defaults to 5. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model, sent directly to the Azure OpenAI endpoint. See OpenAI documentation for more details. Some supported parameters: max_completion_tokens (upper bound for generated tokens), temperature (sampling temperature; higher values make output more random), top_p (nucleus sampling probability mass), n (number of completions per prompt), stop (sequences that stop generation), presence_penalty (penalizes tokens that have already appeared), frequency_penalty (penalizes tokens by how often they have appeared), logit_bias (adds bias to specific tokens). |
| default_headers | Optional[Dict[str, str]] | None | Default headers to use for the AzureOpenAI client. |
| azure_ad_token_provider | Optional[AzureADTokenProvider] | None | A function that returns an Azure Active Directory token, invoked on every request. |
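
A streaming_callback only needs to accept each chunk as it arrives. Here is a minimal sketch using a simplified stand-in for Haystack's StreamingChunk (the real class carries more fields):

```python
from dataclasses import dataclass

# Simplified stand-in for Haystack's StreamingChunk, for illustration only.
@dataclass
class StreamingChunk:
    content: str

collected = []

def streaming_callback(chunk: StreamingChunk) -> None:
    # Called once per streamed chunk; here we just accumulate the text.
    collected.append(chunk.content)

# Simulate a stream of chunks as the client would deliver them:
for piece in ["Hel", "lo", "!"]:
    streaming_callback(StreamingChunk(content=piece))

full_text = "".join(collected)
```

In practice you would pass such a function as the streaming_callback init parameter; a common real-world callback simply prints each chunk's content as it arrives.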

Run method parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | Required | The text prompt to generate text from. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional parameters for text generation. These override the parameters set during component initialization. |