DeepsetTogetherAIGenerator
Generate text using large language models hosted on Together AI.
Basic Information
- Pipeline type: Query
- Type: deepset_cloud_custom_nodes.generators.togetherai.DeepsetTogetherAIGenerator
- Components it can connect with:
- PromptBuilder: DeepsetTogetherAIGenerator receives the prompt from PromptBuilder.
- DeepsetAnswerBuilder: DeepsetTogetherAIGenerator sends the generated replies to DeepsetAnswerBuilder, which uses them to build GeneratedAnswer objects (see the sketch below).
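In a pipeline YAML, this wiring looks like the following minimal sketch (the component names prompt_builder, llm, and answer_builder are placeholders; the full example further down uses the same names):

```yaml
connections:
  # PromptBuilder fills its template and hands the finished prompt to the generator
  - sender: prompt_builder.prompt
    receiver: llm.prompt
  # The generator's replies feed DeepsetAnswerBuilder, which wraps them in GeneratedAnswer objects
  - sender: llm.replies
    receiver: answer_builder.replies
```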
Inputs
Required Inputs
Name | Type | Description |
---|---|---|
prompt | String | The prompt with instructions for the model. |
Optional Inputs
Name | Type | Default | Description |
---|---|---|---|
generation_kwargs | Dictionary of string and any | None | Additional keyword arguments to pass to the model. For a list of parameters you can use, see the together.ai API documentation. |
system_prompt | String | None | A set of instructions that shapes how the model behaves and responds throughout the interaction. It can include guidelines about the AI's personality, tone, capabilities, constraints, and specific rules it should follow when generating responses. Optional. |
streaming_callback | Callable[[StreamingChunk], None] | None | A callback function invoked whenever the model receives a new token from the stream. |
Outputs
Name | Type | Description |
---|---|---|
replies | List of strings | Generated responses. |
meta | List of dictionaries | Metadata for each response. |
Overview
DeepsetTogetherAIGenerator generates answers to queries using models hosted on Together AI. For a complete list of models you can use, check the Together AI documentation.
Authentication
You need an API key from Together AI to use their models. For details on obtaining it, see Together AI Quickstart.
Once you have the API key, connect deepset Cloud to Together AI on the Connections page. For detailed instructions, see Use Together AI Models.
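If you manage the key yourself instead, you can point the component's api_key init parameter at an environment variable (a minimal sketch; the variable name TOGETHERAI_API_KEY matches the component's default, as shown in the usage example below):

```yaml
llm:
  type: deepset_cloud_custom_nodes.generators.togetherai.DeepsetTogetherAIGenerator
  init_parameters:
    # Read the key from the TOGETHERAI_API_KEY environment variable;
    # strict: false means the pipeline still loads if the variable is unset
    api_key: {"type": "env_var", "env_vars": ["TOGETHERAI_API_KEY"], "strict": false}
```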
Usage Example
This query pipeline uses the DeepSeek-R1 model hosted on Together AI:
```yaml
components:
  ...
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        You are a technical expert.
        You answer questions truthfully based on provided documents.
        For each document check whether it is related to the question.
        Only use documents that are related to the question to answer it.
        Ignore documents that are not related to the question.
        If the answer exists in several documents, summarize them.
        Only answer based on the documents provided. Don't make things up.
        If the documents can't answer the question or you are unsure say: 'The answer can't be found in the text'.
        These are the documents:
        {% for document in documents %}
        Document[{{ loop.index }}]:
        {{ document.content }}
        {% endfor %}
        Question: {{question}}
        Answer:
  llm:
    type: deepset_cloud_custom_nodes.generators.togetherai.DeepsetTogetherAIGenerator
    init_parameters:
      api_key: {"type": "env_var", "env_vars": ["TOGETHERAI_API_KEY"], "strict": false}
      model: deepseek-ai/DeepSeek-R1
      generation_kwargs:
        max_tokens: 650
        temperature: 0
        seed: 0
  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
  ...

connections:
  ...
  - sender: prompt_builder.prompt
    receiver: llm.prompt
  - sender: llm.replies
    receiver: answer_builder.replies
  ...
```
When building your pipeline in Pipeline Builder, simply drag DeepsetTogetherAIGenerator from the Connectors group onto the canvas. Then, connect its prompt input to PromptBuilder and its replies output to DeepsetAnswerBuilder.

Init Parameters
Parameter | Type | Possible Values | Description |
---|---|---|---|
api_key | Secret | Default: Secret.from_env_var("TOGETHERAI_API_KEY") | Together AI API key. Required. If you connected deepset Cloud to Together AI on the Connections page, you can leave this at the default. |
model | String | Default: deepseek-ai/DeepSeek-R1 | The path to the model to use. Required. |
api_base_url | String | Default: None | The base URL of the Together AI API. |
streaming_callback | Callable[[StreamingChunk], None] | Default: None | A callback function called when a new token is received from the stream. This parameter also controls whether the generator streams. To enable streaming, set streaming_callback to deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback (see the example after this table). Optional. |
system_prompt | String | Default: None | A set of instructions that shapes how the model behaves and responds throughout the interaction. It can include guidelines about the AI's personality, tone, capabilities, constraints, and specific rules it should follow when generating responses. Optional. |
generation_kwargs | Dictionary | Default: None | Other parameters to use for the model. These parameters are all sent directly to the together.ai endpoint. See the together.ai documentation for more details. Some of the supported parameters: - max_tokens: The maximum number of tokens the output text can have. - temperature: The sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. - n: How many completions to generate for each prompt. For example, if the LLM gets 3 prompts and n is 2, it generates two completions for each of the three prompts, ending up with 6 completions in total. - stop: One or more sequences after which the LLM should stop generating tokens. - presence_penalty: The penalty to apply if a token is already present at all. Bigger values mean the model is less likely to repeat the same token in the text. - frequency_penalty: The penalty to apply if a token has already been generated in the text. Bigger values mean the model is less likely to repeat the same token in the text. - logit_bias: Adds a logit bias to specific tokens. The keys of the dictionary are tokens, and the values are the bias to add to that token. See the example after this table. Optional. |
timeout | Float | Default: None | Timeout for together.ai client calls. If not set, it's inferred from the TOGETHERAI_TIMEOUT environment variable or defaults to 30. Optional. |
max_retries | Integer | Default: None | Maximum number of retries to contact together.ai if it returns an internal error. If not set, it's inferred from the TOGETHER_MAX_RETRIES environment variable or defaults to 5. Optional. |
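For example, the following init_parameters snippet enables streaming and tunes generation, timeouts, and retries (a sketch; the parameter values are illustrative, not recommendations):

```yaml
llm:
  type: deepset_cloud_custom_nodes.generators.togetherai.DeepsetTogetherAIGenerator
  init_parameters:
    model: deepseek-ai/DeepSeek-R1
    # Enable token-by-token streaming with the built-in callback
    streaming_callback: deepset_cloud_custom_nodes.callbacks.streaming.streaming_callback
    generation_kwargs:
      max_tokens: 650        # cap the length of the generated text
      temperature: 0         # argmax sampling for well-defined answers
      stop: ["Question:"]    # stop generating once this sequence appears
    timeout: 30              # seconds per together.ai client call
    max_retries: 5           # retries on together.ai internal errors
```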