
OpenAIDocumentEmbedder

Computes document embeddings using OpenAI models and stores them in each document's embedding field for use in indexing pipelines.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.

This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
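As a sketch, the matching query-pipeline component for this embedder could look like the following fragment. The component name `text_embedder` is illustrative; the important part is that `model` matches the one used in the indexing pipeline:

```yaml
components:
  text_embedder:
    type: haystack.components.embedders.openai_text_embedder.OpenAITextEmbedder
    init_parameters:
      model: text-embedding-ada-002   # must match the indexing pipeline's embedding model
```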

Key Features

  • Supports all OpenAI embedding models, including text-embedding-3-small and text-embedding-ada-002.
  • Configurable embedding dimensions for text-embedding-3 and later models.
  • Processes documents in batches with an optional progress bar.
  • Embeds metadata fields alongside document content.
  • Adds optional prefix and suffix strings to each document before embedding.
  • Configurable timeout, retries, and custom HTTP client settings.
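Several of these options can be combined in a single component configuration. The following fragment is a sketch; the model choice, dimension count, prefix string, and metadata field name are illustrative values, not requirements:

```yaml
document_embedder:
  type: haystack.components.embedders.openai_document_embedder.OpenAIDocumentEmbedder
  init_parameters:
    model: text-embedding-3-small
    dimensions: 256              # only text-embedding-3 and later models support this
    prefix: "passage: "          # prepended to each document before embedding
    meta_fields_to_embed:
      - title                    # embed this metadata field alongside the content
    embedding_separator: "\n"
    batch_size: 32
    progress_bar: true
```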

Configuration

Authentication

To use this component, connect Haystack Platform with OpenAI first. For detailed instructions, see Use OpenAI Models.

  1. Drag the OpenAIDocumentEmbedder component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab, enter the name of the OpenAI embedding model to use.
  4. Go to the Advanced tab to configure the API key, API base URL, organization, timeout, maximum retries, HTTP client settings, and other options.

Connections

OpenAIDocumentEmbedder accepts a list of documents as input. It outputs the same documents with embeddings stored in the embedding field.

Use this component in indexing pipelines. Connect a preprocessor like DocumentSplitter to its documents input, and connect its documents output to DocumentWriter.
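Such a wiring can be expressed as connections in the pipeline configuration. This is a sketch; the component names `splitter`, `document_embedder`, and `document_writer` are illustrative:

```yaml
connections:
  - sender: splitter.documents             # preprocessor output
    receiver: document_embedder.documents
  - sender: document_embedder.documents    # documents now carry embeddings
    receiver: document_writer.documents
```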

Usage Example

The following indexing pipeline uses OpenAIDocumentEmbedder to embed documents before writing them to an OpenSearch document store:

```yaml
components:
  document_embedder:
    type: haystack.components.embedders.openai_document_embedder.OpenAIDocumentEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: false
      model: text-embedding-ada-002

  document_writer:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      policy: NONE
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
          index: my_index
          embedding_dim: 1536

connections:
  - sender: document_embedder.documents
    receiver: document_writer.documents

inputs:
  documents:
    - document_embedder.documents

max_runs_per_component: 100

metadata: {}
```

Parameters

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | A list of documents to embed. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | A list of documents with embeddings. |
| meta | Dict[str, Any] | | Information about the usage of the model, including model name and token usage. |

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | Secret | Secret.from_env_var("OPENAI_API_KEY") | The OpenAI API key. You can set it with the OPENAI_API_KEY environment variable, or pass it with this parameter during initialization. |
| model | str | "text-embedding-ada-002" | The name of the model to use for calculating embeddings. |
| dimensions | Optional[int] | None | The number of dimensions of the resulting embeddings. Only text-embedding-3 and later models support this parameter. |
| api_base_url | Optional[str] | None | Overrides the default base URL for all HTTP requests. |
| organization | Optional[str] | None | Your OpenAI organization ID. See OpenAI's Setting Up Your Organization for more information. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| batch_size | int | 32 | Number of documents to embed at once. |
| progress_bar | bool | True | If True, shows a progress bar when running. |
| meta_fields_to_embed | Optional[List[str]] | None | List of metadata fields to embed along with the document text. |
| embedding_separator | str | "\n" | Separator used to concatenate the metadata fields to the document text. |
| timeout | Optional[float] | None | Timeout for OpenAI client calls. If not set, defaults to the OPENAI_TIMEOUT environment variable, or 30 seconds. |
| max_retries | Optional[int] | None | Maximum number of retries to contact OpenAI after an internal error. If not set, defaults to the OPENAI_MAX_RETRIES environment variable, or 5 retries. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
| raise_on_failure | bool | False | Whether to raise an exception if the embedding request fails. If False, the component logs the error and continues processing the remaining documents. If True, it raises an exception on failure. |
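The client-level settings can be tuned together in the component's configuration. This fragment is a sketch; the timeout, retry count, and proxy URL are illustrative values, and the `proxy` keyword is one example of an httpx client argument (check the HTTPX documentation for the options your httpx version supports):

```yaml
init_parameters:
  timeout: 30.0          # seconds per OpenAI client call
  max_retries: 5         # retries after an internal OpenAI error
  http_client_kwargs:
    proxy: "http://localhost:8080"   # forwarded to the custom httpx client
```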

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | A list of documents to embed. |