
CohereChatGenerator

Completes chats using Cohere's models via the cohere.ClientV2 chat endpoint.

Key Features

  • Supports Cohere's command model family for chat completion.
  • Customizable generation behavior via generation_kwargs.
  • Supports streaming responses with a configurable callback.
  • Accepts Tool objects or a Toolset for function calling.
  • Connects to ChatPromptBuilder for dynamic prompt construction.
  • Works with DeepsetAnswerBuilder to build answers with references.

Configuration

Authentication

To use this component, first connect Haystack Platform to Cohere. For detailed instructions, see Use Cohere Models.

  1. Drag the CohereChatGenerator component onto the canvas from the Component Library.
  2. Click the component to open the configuration panel.
  3. On the General tab:
    1. Enter the model name, for example command-r-08-2024.
  4. Go to the Advanced tab to configure the API key, API base URL, generation kwargs, streaming callback, and tools.

Connections

CohereChatGenerator accepts a list of ChatMessage objects as messages input. It outputs a list of ChatMessage objects as replies.

Connect ChatPromptBuilder to the messages input to build dynamic prompts. To pass the generated replies to DeepsetAnswerBuilder, connect an OutputAdapter between them to extract the first reply.
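The adapter's job can be sketched in plain Python: with the template {{ replies[0] }} and output_type List[str], it takes the generator's list of replies and emits a one-element list of strings. The Reply class below is a hypothetical stand-in for Haystack's ChatMessage; this is an illustration of the data flow, not the component's actual code.

```python
class Reply:
    """Hypothetical stand-in for a Haystack ChatMessage reply."""
    def __init__(self, text):
        self.text = text

    def __str__(self):
        return self.text


def extract_first_reply(replies):
    """Mimics an OutputAdapter with template '{{ replies[0] }}' and
    output_type List[str]: keep only the first reply, as a string."""
    return [str(replies[0])]


print(extract_first_reply([Reply("Paris is the capital of France."),
                           Reply("An alternative reply.")]))
# -> ['Paris is the capital of France.']
```

DeepsetAnswerBuilder then receives this single-string list on its replies input, alongside the ranked documents, to produce the final referenced answer.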

Usage Example

Using the Component in a Pipeline

This is a RAG chat pipeline with CohereChatGenerator sending replies to DeepsetAnswerBuilder through OutputAdapter:

components:
  bm25_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return
      fuzziness: 0

  query_embedder:
    type: deepset_cloud_custom_nodes.embedders.nvidia.text_embedder.DeepsetNvidiaTextEmbedder
    init_parameters:
      normalize_embeddings: true
      model: intfloat/e5-base-v2

  embedding_retriever: # Selects the most similar documents from the document store
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          hosts:
          index: 'Standard-Index-English'
          max_chunk_bytes: 104857600
          embedding_dim: 768
          return_embedding: false
          method:
          mappings:
          settings:
          create_index: true
          http_auth:
          use_ssl:
          verify_certs:
          timeout:
      top_k: 20 # The number of results to return

  document_joiner:
    type: haystack.components.joiners.document_joiner.DocumentJoiner
    init_parameters:
      join_mode: concatenate

  ranker:
    type: deepset_cloud_custom_nodes.rankers.nvidia.ranker.DeepsetNvidiaRanker
    init_parameters:
      model: intfloat/simlm-msmarco-reranker
      top_k: 8

  meta_field_grouping_ranker:
    type: haystack.components.rankers.meta_field_grouping_ranker.MetaFieldGroupingRanker
    init_parameters:
      group_by: file_id
      subgroup_by:
      sort_docs_by: split_id

  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm

  ChatPromptBuilder:
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: "You are a helpful assistant answering the user's questions based on the provided documents.\nIf the answer is not in the documents, rely on the web_search tool to find information.\nDo not use your own knowledge.\n"
          _role: system
        - _content:
            - text: "Provided documents:\n{% for document in documents %}\nDocument [{{ loop.index }}] :\n{{ document.content }}\n{% endfor %}\n\nQuestion: {{ query }}\n"
          _role: user
      required_variables:
      variables:

  OutputAdapter:
    type: haystack.components.converters.output_adapter.OutputAdapter
    init_parameters:
      template: '{{ replies[0] }}'
      output_type: List[str]
      custom_filters:
      unsafe: false

  CohereChatGenerator:
    type: haystack_integrations.components.generators.cohere.chat.chat_generator.CohereChatGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - COHERE_API_KEY
          - CO_API_KEY
        strict: false
      model: command-r-08-2024
      streaming_callback:
      api_base_url:
      generation_kwargs:
      tools:

connections: # Defines how the components are connected
  - sender: bm25_retriever.documents
    receiver: document_joiner.documents
  - sender: query_embedder.embedding
    receiver: embedding_retriever.query_embedding
  - sender: embedding_retriever.documents
    receiver: document_joiner.documents
  - sender: document_joiner.documents
    receiver: ranker.documents
  - sender: ranker.documents
    receiver: meta_field_grouping_ranker.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: answer_builder.documents
  - sender: meta_field_grouping_ranker.documents
    receiver: ChatPromptBuilder.documents
  - sender: OutputAdapter.output
    receiver: answer_builder.replies
  - sender: ChatPromptBuilder.prompt
    receiver: CohereChatGenerator.messages
  - sender: CohereChatGenerator.replies
    receiver: OutputAdapter.replies

inputs: # Define the inputs for your pipeline
  query: # These components will receive the query as input
    - "bm25_retriever.query"
    - "query_embedder.text"
    - "ranker.query"
    - "answer_builder.query"
    - "ChatPromptBuilder.query"
  filters: # These components will receive a potential query filter as input
    - "bm25_retriever.filters"
    - "embedding_retriever.filters"

outputs: # Defines the output of your pipeline
  documents: "meta_field_grouping_ranker.documents" # The retrieved documents
  answers: "answer_builder.answers" # The generated answers

max_runs_per_component: 100

metadata: {}


Parameters

Inputs

Parameter | Type | Default | Description
messages | List[ChatMessage] | Required | A list of ChatMessage instances representing the input messages.
generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for chat generation. For details on the parameters supported by the Cohere API, refer to the Cohere documentation.
tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls.

Outputs

Parameter | Type | Description
replies | List[ChatMessage] | A list of ChatMessage instances representing the generated responses.

Init Parameters

These are the parameters you can configure in Pipeline Builder:

Parameter | Type | Default | Description
api_key | Secret | Secret.from_env_var(['COHERE_API_KEY', 'CO_API_KEY']) | The API key for the Cohere API.
model | str | command-r-08-2024 | The name of the model to use. You can use models from the command family.
streaming_callback | Optional[StreamingCallbackT] | None | A callback function that is called when a new token is received from the stream. The callback function accepts a StreamingChunk as an argument.
api_base_url | Optional[str] | None | The base URL of the Cohere API.
generation_kwargs | Optional[Dict[str, Any]] | None | Other parameters to use for the model during generation. For a full list, see the Cohere Chat endpoint documentation. Some of the parameters are listed below.
tools | Optional[Union[List[Tool], Toolset]] | None | A list of Tool objects or a Toolset that the model can use. Each tool must have a unique name.

Commonly used generation_kwargs:

  • messages: A list of messages between the user and the model, meant to give the model conversational context for responding to the user's message.
  • system_message: When specified, adds a system message at the beginning of the conversation.
  • citation_quality: Defaults to accurate. Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want accurate results or fast results.
  • temperature: A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations.
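As a rough illustration, here is what a generation_kwargs dict and a streaming callback could look like. This is a sketch: the parameter names follow the Cohere Chat API (temperature, max_tokens), and FakeChunk is a hypothetical stand-in for Haystack's StreamingChunk, so verify both against the official docs before relying on them.

```python
# Plausible generation settings; verify names against the Cohere Chat API.
generation_kwargs = {
    "temperature": 0.3,  # lower values give less random output
    "max_tokens": 512,   # upper bound on generated tokens
}


class FakeChunk:
    """Hypothetical stand-in for Haystack's StreamingChunk."""
    def __init__(self, content):
        self.content = content


collected = []

def streaming_callback(chunk):
    """Called once per streamed chunk; here we just collect the text."""
    collected.append(chunk.content)


# Simulate a stream arriving in pieces.
for piece in ["Hel", "lo", "!"]:
    streaming_callback(FakeChunk(piece))

print("".join(collected))  # -> Hello!
```

In a real pipeline, you would pass the callback as the component's streaming_callback parameter and, for example, forward each chunk's content to the client instead of collecting it in a list.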

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

Parameter | Type | Default | Description
messages | List[ChatMessage] | Required | A list of ChatMessage instances representing the input messages.
generation_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for chat generation. These parameters override the parameters passed in the init method. For more details on the parameters supported by the Cohere API, refer to the Cohere documentation.
tools | Optional[Union[List[Tool], Toolset]] | None | A list of tools or a Toolset for which the model can prepare calls. If set, it overrides the tools parameter set during component initialization.
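When the same parameter appears both at initialization and at query time, the run-time value wins. Conceptually, the merge behaves like a Python dict update (a sketch of the override semantics, not the component's actual code):

```python
# generation_kwargs set when the component was configured
init_kwargs = {"temperature": 0.3, "max_tokens": 512}

# generation_kwargs passed to run() at query time
runtime_kwargs = {"temperature": 0.9}

# Run-time values override init-time values; unmentioned keys are kept.
merged = {**init_kwargs, **runtime_kwargs}
print(merged)  # -> {'temperature': 0.9, 'max_tokens': 512}
```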