# AzureOpenAITextEmbedder

Embed strings, like user queries, using OpenAI models deployed on Azure.

## Basic Information

- Type: `haystack.components.embedders.azure_text_embedder.AzureOpenAITextEmbedder`
- Components it can connect with:
  - Query: `AzureOpenAITextEmbedder` can receive a string to embed from the `Query` input.
  - Retrievers: `AzureOpenAITextEmbedder` can send the embedded text to Retrievers, which use the embedding to retrieve documents from a document store.
## Inputs

| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |
## Outputs

| Parameter | Type | Description |
|---|---|---|
| embedding | List[float] | A list of floats representing the embedding of the input text. |
| meta | Dict[str, Any] | Information about the usage of the model, including model name and token usage. |
## Overview

You can use AzureOpenAITextEmbedder in your query pipelines to turn user queries into vector representations (embeddings). You need this for semantic retrieval, where you search for documents that are similar to the user query. The Retriever compares the document embeddings with the query embedding to find the most relevant documents.
For a list of supported models, see Azure documentation.
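To make the idea of comparing query and document embeddings concrete, here is a minimal, self-contained sketch of cosine-similarity ranking. The vectors are short placeholders standing in for real model output (text-embedding-ada-002, for example, returns 1536-dimensional vectors), and the document names are made up for illustration.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for real embedder output.
query_embedding = [0.1, 0.3, 0.5]
document_embeddings = {
    "doc_about_billing": [0.1, 0.29, 0.52],  # points in almost the same direction as the query
    "doc_about_weather": [0.9, -0.2, 0.05],  # unrelated direction
}

# Rank documents by similarity to the query, most similar first.
ranked = sorted(
    document_embeddings,
    key=lambda name: cosine_similarity(query_embedding, document_embeddings[name]),
    reverse=True,
)
print(ranked[0])  # the most relevant document
```

A real Retriever performs this comparison inside the document store (often with an approximate nearest-neighbor index), but the ranking principle is the same.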
### Embedding Models in Query Pipelines and Indexes
The embedding model you use to embed documents in your indexing pipeline must be the same as the embedding model you use to embed the query in your query pipeline.
This means the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, you should use CohereTextEmbedder with the same model to embed your queries.
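One way a model mismatch surfaces in practice is as an embedding-dimension error at retrieval time, since different models emit vectors of different sizes. The helper below is a hypothetical illustration (not part of the component's API), using made-up dimension counts:

```python
# Hypothetical guard: if the indexing and query pipelines used different
# embedding models, the vectors usually have different lengths and cannot
# be compared by the retriever.
def check_embedding_compatibility(doc_embedding: list[float],
                                  query_embedding: list[float]) -> None:
    if len(doc_embedding) != len(query_embedding):
        raise ValueError(
            f"Embedding dimensions differ ({len(doc_embedding)} vs "
            f"{len(query_embedding)}): the indexing and query pipelines "
            "are likely using different embedding models."
        )

check_embedding_compatibility([0.0] * 1536, [0.0] * 1536)  # same model: no error
try:
    check_embedding_compatibility([0.0] * 1536, [0.0] * 1024)  # mismatched models
except ValueError as err:
    print(err)
```

Note that two different models can also share a dimension count, so matching dimensions alone does not guarantee the embeddings are comparable; always configure the same model in both pipelines.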
## Authentication
You need an Azure OpenAI API key to use this component. Connect deepset AI Platform to your Azure OpenAI account. For more information, see Using Azure OpenAI Models.
## Usage Example
This is an example of a query pipeline that uses AzureOpenAITextEmbedder to embed the user query and send it to the retriever.
```yaml
components:
  ...
  query_embedder:
    type: haystack.components.embedders.azure_text_embedder.AzureOpenAITextEmbedder
    init_parameters:
      azure_endpoint: "https://your-company.azure.openai.com/"
      azure_deployment: "text-embedding-ada-002" # The name of the model you want to use
  retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        init_parameters:
          use_ssl: True
          verify_certs: False
          http_auth:
            - "${OPENSEARCH_USER}"
            - "${OPENSEARCH_PASSWORD}"
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
      top_k: 20
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a technical expert.
        You answer questions truthfully based on provided documents.
        For each document check whether it is related to the question.
        Only use documents that are related to the question to answer it.
        Ignore documents that are not related to the question.
        If the answer exists in several documents, summarize them.
        Only answer based on the documents provided. Don't make things up.
        If the documents can't answer the question or you are unsure, say: 'The answer can't be found in the text'.
        These are the documents:
        {% for document in documents %}
        Document[{{ loop.index }}]:
        {{ document.content }}
        {% endfor %}
        Question: {{question}}
        Answer:
  generator:
    type: haystack.components.generators.azure.AzureOpenAIGenerator
    init_parameters:
      generation_kwargs:
        temperature: 0.0
      azure_deployment: gpt-35-turbo # The model you want to use
  answer_builder:
    init_parameters: {}
    type: haystack.components.builders.answer_builder.AnswerBuilder
  ...
connections: # Defines how the components are connected
  ...
  - sender: query_embedder.embedding # AzureOpenAITextEmbedder sends the embedded query to the retriever
    receiver: retriever.query_embedding
  - sender: retriever.documents
    receiver: prompt_builder.documents
  - sender: prompt_builder.prompt
    receiver: generator.prompt
  - sender: generator.replies
    receiver: answer_builder.replies
  ...
inputs:
  query:
    ...
    - "query_embedder.text"    # TextEmbedder needs the query as input and it's not getting it
    - "retriever.query"        # from any component it's connected to, so it must receive it from the pipeline.
    - "prompt_builder.question"
    - "answer_builder.query"
    ...
...
```
## Parameters

### Init Parameters

These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| azure_endpoint | Optional[str] | None | The endpoint of the model deployed on Azure. |
| api_version | Optional[str] | 2023-05-15 | The version of the API to use. |
| azure_deployment | str | text-embedding-ada-002 | The name of the model deployed on Azure. The default model is text-embedding-ada-002. |
| dimensions | Optional[int] | None | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
| api_key | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_API_KEY', strict=False) | The Azure OpenAI API key. You can set it with an environment variable AZURE_OPENAI_API_KEY, or pass with this parameter during initialization. |
| azure_ad_token | Optional[Secret] | Secret.from_env_var('AZURE_OPENAI_AD_TOKEN', strict=False) | Microsoft Entra ID token, see Microsoft's Entra ID documentation for more information. You can set it with an environment variable AZURE_OPENAI_AD_TOKEN, or pass with this parameter during initialization. Previously called Azure Active Directory. |
| organization | Optional[str] | None | Your organization ID. See OpenAI's Setting Up Your Organization for more information. |
| timeout | Optional[float] | None | The timeout for AzureOpenAI client calls, in seconds. If not set, defaults to either the OPENAI_TIMEOUT environment variable, or 30 seconds. |
| max_retries | Optional[int] | None | Maximum number of retries to contact AzureOpenAI after an internal error. If not set, defaults to either the OPENAI_MAX_RETRIES environment variable, or to 5 retries. |
| prefix | str | "" | A string to add at the beginning of each text. |
| suffix | str | "" | A string to add at the end of each text. |
| default_headers | Optional[Dict[str, str]] | None | Default headers to send to the AzureOpenAI client. |
| azure_ad_token_provider | Optional[AzureADTokenProvider] | None | A function that returns an Azure Active Directory token, will be invoked on every request. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |
### Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| text | str | | Text to embed. |
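For orientation, here is a sketch of consuming the component's output. The dict shape mirrors the Outputs table above; the values are placeholders for illustration, not a real API response (a real ada-002 embedding has 1536 dimensions).

```python
# Placeholder result with the same structure as AzureOpenAITextEmbedder's output.
result = {
    "embedding": [0.017, -0.023, 0.042],  # List[float], truncated for brevity
    "meta": {
        "model": "text-embedding-ada-002",
        "usage": {"prompt_tokens": 4, "total_tokens": 4},
    },
}

embedding = result["embedding"]                      # vector to hand to a retriever
tokens_used = result["meta"]["usage"]["total_tokens"]  # token usage for cost tracking
print(f"{len(embedding)}-dim embedding, {tokens_used} tokens used")
```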