OpenAIChatGenerator
Complete chats using OpenAI's large language models (LLMs).
Basic Information
- Type: haystack.components.generators.chat.openai.OpenAIChatGenerator
- Components it can connect with:
  - ChatPromptBuilder: Sends rendered chat prompts to OpenAIChatGenerator.
  - DeepsetAnswerBuilder: Receives generated replies from OpenAIChatGenerator through OutputAdapter.
  - OutputAdapter: Converts chat messages to the format needed by downstream components.
Inputs
| Parameter | Type | Description |
|---|---|---|
| messages | List[ChatMessage] | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | A callback function called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | Additional keyword arguments for text generation. These parameters override the parameters in the pipeline configuration. For supported parameters, see OpenAI documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set during component initialization. Can accept either a list of Tool objects or a Toolset instance. |
| tools_strict | Optional[bool] | Whether to enable strict schema adherence for tool calls. If set to True, the model follows the schema exactly, but this may increase latency. If set, it overrides the tools_strict parameter in the pipeline configuration. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| replies | List[ChatMessage] | A list containing the generated responses as ChatMessage instances. |
Overview
OpenAIChatGenerator works with GPT-4, GPT-5, and o-series models and supports streaming responses from the OpenAI API. It's designed for conversational AI applications where you need to maintain chat history and context.
You can customize text generation by passing parameters to the OpenAI API. Use the generation_kwargs argument when you initialize the component or when you run it. Any parameter that works with openai.ChatCompletion.create will work here too.
For a list of supported OpenAI API parameters, see OpenAI documentation.
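The override behavior can be pictured as a plain dictionary merge, with run-time values taking precedence over init-time values. This is a sketch of the effect only, not deepset's actual implementation:

```python
# Sketch of how run-time generation_kwargs override init-time ones.
init_kwargs = {"temperature": 0.7, "max_tokens": 500}  # set at initialization
run_kwargs = {"temperature": 0.2}                      # passed to run()

# Run-time values win over init-time values for the same key.
effective = {**init_kwargs, **run_kwargs}
print(effective)  # {'temperature': 0.2, 'max_tokens': 500}
```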
Authorization
You need an OpenAI API key to use this component. Connect deepset to your OpenAI account on the Integrations page. For details, see Use OpenAI Models.
Usage Example
This is an example RAG pipeline with OpenAIChatGenerator and DeepsetAnswerBuilder connected through OutputAdapter:
components:
bm25_retriever:
type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
- ${OPENSEARCH_HOST}
http_auth:
- ${OPENSEARCH_USER}
- ${OPENSEARCH_PASSWORD}
use_ssl: true
verify_certs: false
top_k: 20
query_embedder:
type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
init_parameters:
model: intfloat/e5-base-v2
embedding_retriever:
type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
- ${OPENSEARCH_HOST}
http_auth:
- ${OPENSEARCH_USER}
- ${OPENSEARCH_PASSWORD}
use_ssl: true
verify_certs: false
top_k: 20
document_joiner:
type: haystack.components.joiners.document_joiner.DocumentJoiner
init_parameters:
join_mode: concatenate
ranker:
type: haystack.components.rankers.transformers_similarity.TransformersSimilarityRanker
init_parameters:
model: intfloat/simlm-msmarco-reranker
top_k: 8
chat_prompt_builder:
type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
init_parameters:
template:
- _content:
- text: "You are a helpful assistant answering questions based on the provided documents.\nIf the documents don't contain the answer, say so.\nDo not use your own knowledge.\n"
_role: system
- _content:
- text: "Documents:\n{% for document in documents %}\nDocument [{{ loop.index }}]:\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
_role: user
openai_chat_generator:
type: haystack.components.generators.chat.openai.OpenAIChatGenerator
init_parameters:
model: gpt-5-mini
generation_kwargs:
temperature: 0.7
max_tokens: 500
output_adapter:
type: haystack.components.converters.output_adapter.OutputAdapter
init_parameters:
template: '{{ replies[0] }}'
output_type: List[str]
answer_builder:
type: haystack.components.builders.answer_builder.AnswerBuilder
init_parameters: {}
connections:
- sender: bm25_retriever.documents
receiver: document_joiner.documents
- sender: query_embedder.embedding
receiver: embedding_retriever.query_embedding
- sender: embedding_retriever.documents
receiver: document_joiner.documents
- sender: document_joiner.documents
receiver: ranker.documents
- sender: ranker.documents
receiver: chat_prompt_builder.documents
- sender: ranker.documents
receiver: answer_builder.documents
- sender: chat_prompt_builder.prompt
receiver: openai_chat_generator.messages
- sender: openai_chat_generator.replies
receiver: output_adapter.replies
- sender: output_adapter.output
receiver: answer_builder.replies
max_runs_per_component: 100
inputs:
query:
- bm25_retriever.query
- query_embedder.text
- ranker.query
- chat_prompt_builder.query
- answer_builder.query
filters:
- bm25_retriever.filters
- embedding_retriever.filters
outputs:
documents: ranker.documents
answers: answer_builder.answers
metadata: {}
Parameters
Init parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('OPENAI_API_KEY') | The OpenAI API key. Set it on the Integrations page. |
| model | str | gpt-5-mini | The name of the model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | None | An optional base URL. |
| organization | Optional[str] | None | Your organization ID. See production best practices. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model, sent directly to the OpenAI endpoint. See OpenAI documentation for more details. Some supported parameters: max_tokens (maximum number of tokens in the output), temperature (sampling temperature; higher values make the output more random), top_p (nucleus sampling probability mass), n (number of completions per prompt), stop (sequences at which to stop generation), presence_penalty (penalty for token presence), frequency_penalty (penalty for token frequency), logit_bias (adds bias to specific tokens). |
| timeout | Optional[float] | 30.0 | Timeout for OpenAI client calls. If not set, it defaults to the OPENAI_TIMEOUT environment variable or 30 seconds. |
| max_retries | Optional[int] | 5 | Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to the OPENAI_MAX_RETRIES environment variable or 5. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a list of Tool objects or a Toolset instance. |
| tools_strict | bool | False | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
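A streaming callback is a plain function that receives each chunk as tokens arrive. The sketch below assumes the chunk exposes its token text as a content attribute (as Haystack's StreamingChunk does) and simply prints tokens as they stream in:

```python
# Minimal streaming callback sketch: called once per streamed chunk.
# Assumes the chunk object exposes the token text as `chunk.content`.
def print_streaming_chunk(chunk) -> None:
    # Print each token immediately, without waiting for a full line.
    print(chunk.content, end="", flush=True)
```

You would pass this function as streaming_callback when initializing the component or when calling run().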
Run method parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | List[ChatMessage] | | A list of ChatMessage instances representing the input messages. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for text generation. These parameters override the parameters in the pipeline configuration. For supported parameters, see OpenAI documentation. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter in the pipeline configuration. Can accept either a list of Tool objects or a Toolset instance. |
| tools_strict | Optional[bool] | None | Whether to enable strict schema adherence for tool calls. If set to True, the model follows exactly the schema provided in the parameters field of the tool definition, but this may increase latency. If set, it overrides the tools_strict parameter in the pipeline configuration. |
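For illustration, this is the kind of JSON Schema a tool definition carries in its parameters field; the tool name and fields here are hypothetical. With tools_strict=True, the model's tool-call arguments must conform to the schema exactly, which in OpenAI's strict mode generally means every property is listed under "required" and additionalProperties is false:

```python
# Hypothetical parameter schema for a weather-lookup tool
# (OpenAI function-calling style).
get_weather_params = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},  # name of the city to look up
    },
    # Strict mode expects all properties to be required and no extras.
    "required": ["city"],
    "additionalProperties": False,
}
```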