MistralChatGenerator

Complete chats using Mistral's text generation models through the Mistral API.

Basic Information

  • Type: haystack_integrations.components.generators.mistral.chat.chat_generator.MistralChatGenerator
  • Components it can connect with:
    • ChatPromptBuilder: MistralChatGenerator receives a rendered prompt from ChatPromptBuilder.
    • DeepsetAnswerBuilder: MistralChatGenerator sends the generated replies to DeepsetAnswerBuilder through OutputAdapter.

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] |  | A list of ChatMessage objects representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to customize text generation. For more information on the arguments you can use, see the Mistral API docs. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set in the pipeline configuration. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback accepts StreamingChunk as an argument. |
| tools_strict | Optional[bool] | None | Whether to strictly enforce the tools provided in the tools parameter. If set to True, the model only uses the provided tools; if set to False, it can also use tools that are not provided in the tools parameter. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| replies | List[ChatMessage] |  | A list of ChatMessage objects representing the generated responses. |

Overview

Enables text generation using Mistral AI generative models. For supported models, see Mistral AI docs.

Users can pass any text generation parameters valid for the Mistral Chat Completion API directly to this component in the generation_kwargs parameter. For a complete list of supported parameters, refer to the Mistral API documentation.
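
For example, to cap output length and lower the sampling temperature, you could configure the component like this (a minimal sketch; the values are illustrative, not recommendations):

components:
  MistralChatGenerator:
    type: haystack_integrations.components.generators.mistral.chat.chat_generator.MistralChatGenerator
    init_parameters:
      model: mistral-small-latest
      generation_kwargs:
        temperature: 0.2 # lower values make answers more deterministic
        max_tokens: 512 # upper bound on the number of generated tokens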

Authorization

You need a Mistral API key to use this component. Connect deepset to your Mistral account:

  1. In deepset AI Platform, click your profile icon and choose Secrets.
  2. Create a secret called MISTRAL_API_KEY.

For details on secrets, see Secrets.
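
Once the secret exists, reference it in the component's api_key parameter through an environment variable, as in this fragment from the pipeline example below:

api_key:
  type: env_var
  env_vars:
    - MISTRAL_API_KEY
  strict: false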

Usage Example

Initializing the Component

components:
  MistralChatGenerator:
    type: haystack_integrations.components.generators.mistral.chat.chat_generator.MistralChatGenerator
    init_parameters: {}

Using the Component in a Pipeline

This is an example RAG pipeline with MistralChatGenerator and DeepsetAnswerBuilder connected through OutputAdapter:

components:
bm25_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: 'Standard-Index-English'
max_chunk_bytes: 104857600
embedding_dim: 768
return_embedding: false
method:
mappings:
settings:
create_index: true
http_auth:
use_ssl:
verify_certs:
timeout:
top_k: 20 # The number of results to return
fuzziness: 0

query_embedder:
type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
init_parameters:
normalize_embeddings: true
model: intfloat/e5-base-v2

embedding_retriever: # Selects the most similar documents from the document store
type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
init_parameters:
document_store:
type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
init_parameters:
hosts:
index: 'Standard-Index-English'
max_chunk_bytes: 104857600
embedding_dim: 768
return_embedding: false
method:
mappings:
settings:
create_index: true
http_auth:
use_ssl:
verify_certs:
timeout:
top_k: 20 # The number of results to return

document_joiner:
type: haystack.components.joiners.document_joiner.DocumentJoiner
init_parameters:
join_mode: concatenate

ranker:
type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
init_parameters:
model: intfloat/simlm-msmarco-reranker
top_k: 8

meta_field_grouping_ranker:
type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
init_parameters:
group_by: file_id
subgroup_by:
sort_docs_by: split_id

answer_builder:
type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
init_parameters:
reference_pattern: acm

ChatPromptBuilder:
type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
init_parameters:
template:
- _content:
- text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
_role: system
- _content:
- text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
_role: user
required_variables:
variables:
OutputAdapter:
type: haystack.components.converters.output_adapter.OutputAdapter
init_parameters:
template: '{{ replies[0] }}'
output_type: List[str]
custom_filters:
unsafe: false

MistralChatGenerator:
type: haystack_integrations.components.generators.mistral.chat.chat_generator.MistralChatGenerator
init_parameters:
api_key:
type: env_var
env_vars:
- MISTRAL_API_KEY
strict: false
model: mistral-small-latest
streaming_callback:
api_base_url: https://api.mistral.ai/v1
generation_kwargs:
tools:
timeout:
max_retries:
http_client_kwargs:

connections: # Defines how the components are connected
- sender: bm25_retriever.documents
receiver: document_joiner.documents
- sender: query_embedder.embedding
receiver: embedding_retriever.query_embedding
- sender: embedding_retriever.documents
receiver: document_joiner.documents
- sender: document_joiner.documents
receiver: ranker.documents
- sender: ranker.documents
receiver: meta_field_grouping_ranker.documents
- sender: meta_field_grouping_ranker.documents
receiver: answer_builder.documents
- sender: meta_field_grouping_ranker.documents
receiver: ChatPromptBuilder.documents
- sender: OutputAdapter.output
receiver: answer_builder.replies
- sender: ChatPromptBuilder.prompt
receiver: MistralChatGenerator.messages
- sender: MistralChatGenerator.replies
receiver: OutputAdapter.replies

inputs: # Define the inputs for your pipeline
query: # These components will receive the query as input
- "bm25_retriever.query"
- "query_embedder.text"
- "ranker.query"
- "answer_builder.query"
- "ChatPromptBuilder.query"
filters: # These components will receive a potential query filter as input
- "bm25_retriever.filters"
- "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
documents: "meta_field_grouping_ranker.documents" # The output of the pipeline is the retrieved documents
answers: "answer_builder.answers" # The output of the pipeline is the generated answers

max_runs_per_component: 100

metadata: {}
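
In this pipeline, MistralChatGenerator produces its replies as ChatMessage objects, while DeepsetAnswerBuilder expects string replies. OutputAdapter bridges the two: its template {{ replies[0] }} with output_type List[str] converts the first generated reply into the format answer_builder.replies accepts.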

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | Secret | Secret.from_env_var('MISTRAL_API_KEY') | The Mistral API key. |
| model | str | mistral-small-latest | The name of the Mistral chat completion model to use. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback accepts StreamingChunk as an argument. |
| api_base_url | Optional[str] | https://api.mistral.ai/v1 | The Mistral API base URL. For more details, see the Mistral docs. |
| generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model, sent directly to the Mistral endpoint. See the Mistral API docs for details and the list of supported parameters below this table. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset for which the model can prepare calls. |
| timeout | Optional[float] | None | The timeout for the Mistral API call. If not set, it defaults to the OPENAI_TIMEOUT environment variable, or 30 seconds. |
| max_retries | Optional[int] | None | Maximum number of retries to contact the Mistral API after an internal error. If not set, it defaults to the OPENAI_MAX_RETRIES environment variable, or 5. |
| http_client_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient. For more information, see the HTTPX documentation. |

Some of the parameters generation_kwargs supports:

  • max_tokens: The maximum number of tokens the output text can have.
  • temperature: The sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer.
  • top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
  • stream: Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
  • safe_prompt: Whether to inject a safety prompt before all conversations.
  • random_seed: The seed to use for random sampling.
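
As a sketch, an init_parameters block combining several of the options above might look like this (all values are illustrative, not recommendations):

init_parameters:
  model: mistral-small-latest
  timeout: 30 # abort the API call after 30 seconds
  max_retries: 3 # retry up to 3 times after internal errors
  generation_kwargs:
    top_p: 0.9 # nucleus sampling over the top 90% probability mass
    safe_prompt: true # inject Mistral's safety prompt before the conversation
    random_seed: 42 # fixed seed for reproducible sampling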

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| messages | List[ChatMessage] |  | A list of ChatMessage objects representing the input messages. |
| generation_kwargs | Optional[Dict[str, Any]] | None | A dictionary of keyword arguments to customize text generation. For more information on the arguments you can use, see the Mistral API docs. |
| tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set in the pipeline configuration. |
| streaming_callback | Optional[StreamingCallbackT] | None | A callback function called when a new token is received from the stream. The callback accepts StreamingChunk as an argument. |
| tools_strict | Optional[bool] | None | Whether to strictly enforce the tools provided in the tools parameter. If set to True, the model only uses the provided tools; if set to False, it can also use tools that are not provided in the tools parameter. |
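
As an example, assuming the params format described in Modify Pipeline Parameters at Query Time, a query-time override for this component might look like this (shown in YAML for readability; the actual API request body is JSON, and the key must match the component's name in your pipeline):

params:
  MistralChatGenerator:
    generation_kwargs:
      temperature: 0.9 # one-off override for this request only
    tools_strict: true # restrict the model to the provided tools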