OpenAITextEmbedder
Embed strings, such as user queries, using OpenAI models.
Basic Information
- Type: haystack.components.embedders.openai_text_embedder.OpenAITextEmbedder
- Components it can connect with:
  - Input: OpenAITextEmbedder can receive a string to embed from the Input component.
  - Retrievers: OpenAITextEmbedder can send the embedded text to Retrievers that use the embedding to retrieve documents from a document store.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |
Outputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| embedding | List[float] | | The embedding of the input text. |
| meta | Dict[str, Any] | | Information about the usage of the model. |
Overview
Use OpenAITextEmbedder to embed strings, such as user queries, using OpenAI models. This component is used in query pipelines when you want to perform semantic search.
You can use it to embed a user query and send the resulting embedding to an embedding Retriever.
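For reference, here is a minimal Python sketch of running the embedder on its own, assuming the haystack-ai package is installed and the OPENAI_API_KEY environment variable is set; the query string is only an illustration.

```python
from haystack.components.embedders import OpenAITextEmbedder

# Reads the API key from the OPENAI_API_KEY environment variable by default
embedder = OpenAITextEmbedder(model="text-embedding-ada-002")

result = embedder.run(text="What is semantic search?")

print(len(result["embedding"]))  # the embedding vector (List[float])
print(result["meta"])            # model and token usage information
```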
Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
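As a sketch of what matching embedders look like in code, the snippet below pairs OpenAIDocumentEmbedder (indexing) with OpenAITextEmbedder (querying) on the same model; the model name is just an example.

```python
from haystack.components.embedders import OpenAIDocumentEmbedder, OpenAITextEmbedder

MODEL = "text-embedding-ada-002"  # must be identical in both pipelines

# Indexing pipeline: embeds documents before writing them to the document store
document_embedder = OpenAIDocumentEmbedder(model=MODEL)

# Query pipeline: embeds the user query with the same model
text_embedder = OpenAITextEmbedder(model=MODEL)
```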
Authorization
You must have an OpenAI API key to use this component. Connect Haystack Platform to your OpenAI account on the Integrations page. For detailed instructions, see Use OpenAI Models.
Usage Example
This is a query pipeline that uses OpenAITextEmbedder to embed a query and retrieve documents:
```yaml
components:
  embedding_retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
            - ${OPENSEARCH_HOST}
          index: ''
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
            - ${OPENSEARCH_USER}
            - ${OPENSEARCH_PASSWORD}
          use_ssl: true
          verify_certs: false
          timeout:
      top_k: 20
  OpenAITextEmbedder:
    type: haystack.components.embedders.openai_text_embedder.OpenAITextEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: false
      model: text-embedding-ada-002
      dimensions:
      api_base_url:
      organization:
      prefix: ''
      suffix: ''
      timeout:
      max_retries:
      http_client_kwargs:

connections:
  - sender: OpenAITextEmbedder.embedding
    receiver: embedding_retriever.query_embedding

inputs:
  query:
    - OpenAITextEmbedder.text
  filters:
    - embedding_retriever.filters

outputs:
  documents: embedding_retriever.documents

max_runs_per_component: 100

metadata: {}
```
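If you prefer to build the same pipeline in Python rather than YAML, a rough equivalent is sketched below. It assumes the haystack-ai and opensearch-haystack packages are installed and that an OpenSearch instance is reachable at the hypothetical host shown; the query string is only an example.

```python
from haystack import Pipeline
from haystack.components.embedders import OpenAITextEmbedder
from haystack_integrations.components.retrievers.opensearch import OpenSearchEmbeddingRetriever
from haystack_integrations.document_stores.opensearch import OpenSearchDocumentStore

# Hypothetical connection details; replace with your own OpenSearch instance
document_store = OpenSearchDocumentStore(hosts=["http://localhost:9200"], embedding_dim=768)

pipeline = Pipeline()
pipeline.add_component("OpenAITextEmbedder", OpenAITextEmbedder(model="text-embedding-ada-002"))
pipeline.add_component("embedding_retriever", OpenSearchEmbeddingRetriever(document_store=document_store, top_k=20))

# Same connection as in the YAML above: the query embedding flows into the retriever
pipeline.connect("OpenAITextEmbedder.embedding", "embedding_retriever.query_embedding")

result = pipeline.run({"OpenAITextEmbedder": {"text": "What is semantic search?"}})
print(result["embedding_retriever"]["documents"])
```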
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | Secret | Secret.from_env_var('OPENAI_API_KEY') | The OpenAI API key. You can set it with an environment variable OPENAI_API_KEY, or pass with this parameter during initialization. |
| model | str | text-embedding-ada-002 | The name of the model to use for calculating embeddings. The default model is text-embedding-ada-002. |
| dimensions | Optional[int] | None | The number of dimensions of the resulting embeddings. Only text-embedding-3 and later models support this parameter. |
| api_base_url | Optional[str] | None | Overrides default base URL for all HTTP requests. |
| organization | Optional[str] | None | Your organization ID. See OpenAI's production best practices for more information. |
| prefix | str | "" | A string to add at the beginning of each text to embed. |
| suffix | str | "" | A string to add at the end of each text to embed. |
| timeout | Optional[float] | None | Timeout for OpenAI client calls. If not set, it defaults to either the OPENAI_TIMEOUT environment variable or 30 seconds. |
| max_retries | Optional[int] | None | Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to either the OPENAI_MAX_RETRIES environment variable or 5. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
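As an illustration of a few of these parameters, the following sketch initializes the embedder with an explicit API key secret, a model that supports the dimensions parameter, and custom timeout and retry settings; the model name and values are examples only.

```python
from haystack.components.embedders import OpenAITextEmbedder
from haystack.utils import Secret

embedder = OpenAITextEmbedder(
    api_key=Secret.from_env_var("OPENAI_API_KEY"),  # resolved at runtime, not stored in the pipeline
    model="text-embedding-3-small",                 # example model; dimensions requires text-embedding-3 or later
    dimensions=256,                                 # shorten the returned embedding vector
    timeout=30.0,                                   # seconds before an OpenAI call times out
    max_retries=5,                                  # retries after an internal OpenAI error
)
```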
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |