OpenAIChatGenerator

Complete chats using OpenAI's large language models (LLMs).

Basic Information

  • Type: haystack.components.generators.chat.openai.OpenAIChatGenerator
  • Components it can connect with:
    • ChatPromptBuilder: Sends rendered chat prompts to OpenAIChatGenerator
    • DeepsetAnswerBuilder: Receives generated replies from OpenAIChatGenerator through OutputAdapter
    • OutputAdapter: Converts chat messages to the format needed by downstream components

Inputs

| Parameter | Type | Description |
| --- | --- | --- |
| messages | List[ChatMessage] | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | A callback function called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | Additional keyword arguments for text generation. These parameters override the parameters in the pipeline configuration. For supported parameters, see OpenAI documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | A list of Tool objects or a Toolset instance for which the model can prepare calls. If set, it overrides the tools parameter set during component initialization. |
| tools_strict | Optional[bool] | Whether to enable strict schema adherence for tool calls. If set to True, the model follows the schema exactly, but this may increase latency. If set, it overrides the tools_strict parameter in the pipeline configuration. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| replies | List[ChatMessage] | A list containing the generated responses as ChatMessage instances. |

Overview

OpenAIChatGenerator works with GPT-4, GPT-5, and o-series models and supports streaming responses from the OpenAI API. It's designed for conversational AI applications where you need to maintain chat history and context.

You can customize text generation by passing parameters to the OpenAI API. Use the generation_kwargs argument when you initialize the component or when you run it. Any parameter that works with openai.ChatCompletion.create will work here too.

For a list of supported OpenAI API parameters, see OpenAI documentation.
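Run-time generation_kwargs take precedence over the values set at initialization. As a minimal sketch in plain Python (these variable names are illustrative, not the component's internals), the override behaves like a dictionary merge in which run-time keys win:

```python
# Illustrative only: OpenAIChatGenerator resolves generation_kwargs in a
# similar way; these names are not the component's internal variables.
init_kwargs = {"temperature": 0.7, "max_tokens": 500}  # set at initialization
run_kwargs = {"temperature": 0.2}                      # passed at query time
merged = {**init_kwargs, **run_kwargs}                 # run-time keys win
print(merged)  # → {'temperature': 0.2, 'max_tokens': 500}
```

Parameters you don't override at run time keep their initialization values.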

Authorization

You need an OpenAI API key to use this component. Connect deepset to your OpenAI account on the Integrations page. For details, see Use OpenAI Models.

Usage Example

This is an example RAG pipeline with OpenAIChatGenerator and DeepsetAnswerBuilder connected through OutputAdapter:

```yaml
components:
  bm25_retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
      top_k: 20

  query_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      model: intfloat/e5-base-v2

  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
      top_k: 20

  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate

  ranker:
    type: haystack.components.rankers.transformers_similarity.TransformersSimilarityRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  chat_prompt_builder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering questions based on the provided documents.\nIf the documents don't contain the answer, say so.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Documents:\n{% for document in documents %}\nDocument [{{ loop.index }}]:\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user

  openai_chat_generator:
    type: haystack.components.generators.chat.openai.OpenAIChatGenerator
    init_parameters:
      model: gpt-5-mini
      generation_kwargs:
        temperature: 0.7
        max_tokens: 500

  output_adapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]

  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}

connections:
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: chat_prompt_builder.documents
  - sender: ranker.documents
    receiver: answer_builder.documents
  - sender: chat_prompt_builder.prompt
    receiver: openai_chat_generator.messages
  - sender: openai_chat_generator.replies
    receiver: output_adapter.replies
  - sender: output_adapter.output
    receiver: answer_builder.replies

max_runs_per_component: 100

inputs:
  query:
    - bm25_retriever.query
    - query_embedder.text
    - ranker.query
    - chat_prompt_builder.query
    - answer_builder.query
  filters:
    - bm25_retriever.filters
    - embedding_retriever.filters

outputs:
  documents: ranker.documents
  answers: answer_builder.answers

metadata: {}
```

Parameters

Init parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | Secret | Secret.from_env_var('OPENAI_API_KEY') | The OpenAI API key. Set it on the Integrations page. |
| model | str | gpt-5-mini | The name of the model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | None | An optional base URL. |
| organization | Optional[str] | None | Your organization ID. See OpenAI's production best practices. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model, sent directly to the OpenAI endpoint. See OpenAI documentation for more details. Some supported parameters: max_tokens (maximum number of tokens in the output), temperature (sampling temperature; higher values make the output more random), top_p (nucleus sampling probability mass), n (number of completions per prompt), stop (sequences at which to stop generation), presence_penalty (penalty for token presence), frequency_penalty (penalty for token frequency), logit_bias (adds bias to specific tokens). |
| timeout | Optional[float] | 30.0 | Timeout for OpenAI client calls. If not set, it defaults to the OPENAI_TIMEOUT environment variable or 30 seconds. |
| max_retries | Optional[int] | 5 | Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to the OPENAI_MAX_RETRIES environment variable or 5. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset instance for which the model can prepare calls. |
| tools_strict | bool | False | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
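The streaming_callback is invoked once per streamed token with a StreamingChunk argument. The sketch below uses a stand-in class rather than calling the real API; the actual StreamingChunk lives in haystack.dataclasses and exposes a content attribute with the new text:

```python
# Stand-in for haystack.dataclasses.StreamingChunk; the real class
# exposes a `content` attribute holding the newly streamed text.
class StreamingChunk:
    def __init__(self, content: str):
        self.content = content

collected = []

def collect_chunk(chunk: StreamingChunk) -> None:
    """A streaming_callback that accumulates streamed tokens."""
    collected.append(chunk.content)

# Simulate the generator invoking the callback once per streamed token.
for piece in ["Hel", "lo", ", world"]:
    collect_chunk(StreamingChunk(piece))

print("".join(collected))  # → Hello, world
```

Pass such a function as streaming_callback to surface partial output (for example, printing tokens as they arrive) instead of waiting for the full reply.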

Run method parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] | Required | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for text generation. These parameters override the parameters in the pipeline configuration. For supported parameters, see OpenAI documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset instance for which the model can prepare calls. If set, it overrides the tools parameter in the pipeline configuration. |
| tools_strict | Optional[bool] | None | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. If set, it overrides the tools_strict parameter in the pipeline configuration. |
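Run method parameters are keyed by component name in the query-time request. The body below is an assumption based on the params pattern described in Modify Pipeline Parameters at Query Time; the "queries" and "params" field names and the component name are illustrative, not a verified schema:

```python
# Illustrative request body for a query-time override: run() parameters
# are nested under the component's name. Field names are assumptions.
body = {
    "queries": ["When was the company founded?"],
    "params": {
        "openai_chat_generator": {
            "generation_kwargs": {"temperature": 0.2},
        }
    },
}
print(body["params"]["openai_chat_generator"]["generation_kwargs"])
```

Only the parameters you include are overridden; everything else keeps its pipeline-configuration value for that run.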