
MistralTextEmbedder

Embed strings, like user queries, using Mistral embedding models.

Basic Information

  • Type: haystack_integrations.components.embedders.mistral.text_embedder.MistralTextEmbedder
  • Components it can connect with:
    • Retrievers: Sends the computed embedding to an embedding Retriever for semantic search.
    • Input: Receives a query string as input in a query pipeline.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `text` | `str` | | The string to embed. |

Outputs

| Parameter | Type | Description |
| --- | --- | --- |
| `embedding` | `List[float]` | The embedding of the input text. |
| `meta` | `Dict[str, Any]` | Information about the usage of the model, including model name and token usage. |

Overview

MistralTextEmbedder embeds strings using Mistral AI embedding models. Use this component in query pipelines to convert user queries into embeddings for semantic search.

For embedding documents in indexes, use MistralDocumentEmbedder instead.

Embedding Models in Query Pipelines and Indexes

The embedding model you use to embed documents in your indexing pipeline must be the same model you use to embed the query in your query pipeline. In other words, the embedders for your indexing and query pipelines must match. For example, if you use CohereDocumentEmbedder to embed your documents, use CohereTextEmbedder with the same model to embed your queries.

Authorization

Create a secret with your Mistral API key. Type MISTRAL_API_KEY as the secret key. For detailed instructions on creating secrets, see Create Secrets.

Get your API key from Mistral AI.
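If you run the component outside the platform, you can supply the same secret as an environment variable. The key name must be `MISTRAL_API_KEY`, because that is what the component's default `Secret.from_env_var("MISTRAL_API_KEY")` resolves; the value below is a placeholder, not a real key:

```shell
# Export the key under the name the component's default Secret reads.
export MISTRAL_API_KEY="your-mistral-api-key"
```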

Usage Example

This example shows a query pipeline that embeds a user query using Mistral and retrieves relevant documents.

```yaml
components:
  MistralTextEmbedder:
    type: haystack_integrations.components.embedders.mistral.text_embedder.MistralTextEmbedder
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - MISTRAL_API_KEY
        strict: false
      model: mistral-embed
  EmbeddingRetriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'mistral-embeddings'
          max_chunk_bytes: 104857600
          embedding_dim: 1024
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 10

connections:
  - sender: MistralTextEmbedder.embedding
    receiver: EmbeddingRetriever.query_embedding

max_runs_per_component: 100

metadata: {}

inputs:
  query:
    - MistralTextEmbedder.text
    - EmbeddingRetriever.query

outputs:
  documents: EmbeddingRetriever.documents
```
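At run time, the embedder posts the query text to Mistral's embeddings endpoint. A hedged sketch of the underlying request for the configuration above (illustrative only, based on Mistral's public embeddings API):

```
POST https://api.mistral.ai/v1/embeddings
Authorization: Bearer $MISTRAL_API_KEY
Content-Type: application/json

{"model": "mistral-embed", "input": ["What is semantic search?"]}
```

The `mistral-embed` model returns 1024-dimensional vectors, which is why the document store above sets `embedding_dim: 1024`.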

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `Secret` | `Secret.from_env_var("MISTRAL_API_KEY")` | The Mistral API key. |
| `model` | `str` | `mistral-embed` | The name of the Mistral embedding model to use. |
| `api_base_url` | `Optional[str]` | `https://api.mistral.ai/v1` | The Mistral API base URL. For more details, see the Mistral docs. |
| `prefix` | `str` | `""` | A string to add to the beginning of each text. |
| `suffix` | `str` | `""` | A string to add to the end of each text. |
| `timeout` | `Optional[float]` | `None` | Timeout for Mistral client calls. If not set, defaults to the `OPENAI_TIMEOUT` environment variable or 30 seconds. |
| `max_retries` | `Optional[int]` | `None` | Maximum number of retries to contact Mistral after an internal error. If not set, defaults to the `OPENAI_MAX_RETRIES` environment variable or 5. |
| `http_client_kwargs` | `Optional[Dict[str, Any]]` | `None` | A dictionary of keyword arguments to configure a custom `httpx.Client` or `httpx.AsyncClient`. For more information, see the HTTPX documentation. |
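As an example of `http_client_kwargs`, you could route Mistral calls through an HTTP proxy. This is a hypothetical sketch: the proxy URL is a placeholder, and the keys shown are standard `httpx.Client` arguments:

```python
# Hypothetical configuration: this dictionary is passed through to
# httpx.Client / httpx.AsyncClient. The proxy URL is a placeholder.
http_client_kwargs = {
    "proxy": "http://proxy.example.com:8080",  # route requests through a proxy
    "verify": True,                            # keep TLS certificate verification on
}
```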

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `text` | `str` | | The string to embed. |