Set Up Tool Calling For Your Model
Configure tools your chat model can use to answer the query.
About This Task
Tool calling, also known as function calling, lets the LLM in your app call a pipeline component. This is useful when the model needs information or capabilities beyond what it knows, such as retrieving real-time data. Here is how it works:
- You pass the tool configuration to the LLM.
- The LLM generates a JSON schema with all parameters and values needed to call the tool, but the model does not call the tool itself.
- The LLM sends the tool call to `ToolInvoker`, which runs the tool.
- You then send the messages from `ToolInvoker` back to the LLM so that it can use them to resolve the query.
Tool calling is supported by chat models used through `ChatGenerators`. For detailed information on chat models and tool calling, see Chat models. For information on using chat models in deepset AI Platform, see Generators Overview.
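
To make the flow concrete, here is a simplified sketch of one tool-calling round. The tool name `web_search`, the query, and the YAML shape are illustrative only and don't reflect the exact `ChatMessage` serialization:

```yaml
# Illustrative message sequence for one tool-calling round (not the exact ChatMessage format)
- role: user                 # the original query
  text: What is the weather in Berlin right now?
- role: assistant            # the LLM replies with a tool call instead of a final answer
  tool_calls:
    - tool_name: web_search
      arguments:
        query: current weather in Berlin
- role: tool                 # produced by ToolInvoker after it runs the tool
  result: <search results returned by the tool>
- role: assistant            # the final answer the LLM generates once it has received
  text: <answer based on the search results>   # both the tool call and the tool result
```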
Prerequisites
- Ensure the model you're using supports tool calling.
- To learn about the Haystack Tool data class, see Haystack documentation.
Set Up the Tool Call
If you're using a custom component, first check if it works correctly in deepset AI Platform.
- Add `ToolInvoker` to your pipeline.
- On the `ToolInvoker` card, click Configure and add the following information to the YAML:

  ```yaml
  - type: haystack.tools.component_tool.ComponentTool # this is a component wrapper that allows components to be used as tools
    data:
      component:
        type: # the import path to the component, to quickly check a component type, add it to your pipeline and switch to the YAML view
        init_parameters: # configure your component, pass all init parameters here
          parameter1: value
          parameter2: value
      name: # give your component a name, you can refer to it in the prompt
      description: # describe what the component does, it will help the LLM decide when to use it
  ```
This is an example of how to configure `SerperDevWebSearch` as a tool for the LLM:
```yaml
- type: haystack.tools.component_tool.ComponentTool
  data:
    component:
      type: haystack.components.websearch.serper_dev.SerperDevWebSearch
      init_parameters:
        api_key:
          type: env_var
          env_vars:
            - SERPERDEV_API_KEY
          strict: false
        top_k: 10
    name: web_search
    description: Search the web for current information on any topic
```
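
For reference, this is roughly how that configuration ends up in the pipeline YAML: the tool list sits under the `ToolInvoker`'s `tools` init parameter. The component name `tool_invoker` is an assumption; use whatever name the component has in your pipeline.

```yaml
components:
  tool_invoker:  # assumed component name
    type: haystack.components.tools.tool_invoker.ToolInvoker
    init_parameters:
      tools:
        - type: haystack.tools.component_tool.ComponentTool
          data:
            component:
              type: haystack.components.websearch.serper_dev.SerperDevWebSearch
              init_parameters:
                api_key:
                  type: env_var
                  env_vars:
                    - SERPERDEV_API_KEY
                  strict: false
                top_k: 10
            name: web_search
            description: Search the web for current information on any topic
```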
- Configure a `ToolInvoker` for each tool you want to add.
- Configure the `ChatGenerator`:
  - Expand optional parameters on the `ChatGenerator` card and click Configure under `tools`.
  - Add the same configuration as you added in `ToolInvoker`. List each tool you want to give the LLM:

    ```yaml
    # tool 1
    - type: haystack.tools.component_tool.ComponentTool # this is a component wrapper that allows components to be used as tools
      data:
        component:
          type: # the import path to the component, to quickly check a component type, add it to your pipeline and switch to the YAML view
          init_parameters: # configure your component, pass all init parameters here
            parameter1: value
            parameter2: value
        name: # give your component a name, you can refer to it in the prompt
        description: # describe what the component does, it will help the LLM decide when to use it
    # tool 2
    - type: haystack.tools.component_tool.ComponentTool
      data:
        component:
          type: # component type
          init_parameters: # component configuration
            parameter1: value
            parameter2: value
        name: # give your component a name, you can refer to it in the prompt
        description: # describe what the component does, it will help the LLM decide when to use it
    ```
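
    In the pipeline YAML, this list ends up under the `ChatGenerator`'s `tools` init parameter. The sketch below assumes an `OpenAIChatGenerator` named `chat_generator` in the `components` section; adjust the type, name, and model to your setup:

    ```yaml
    chat_generator:  # assumed component name
      type: haystack.components.generators.chat.openai.OpenAIChatGenerator
      init_parameters:
        model: gpt-4o  # example model
        tools:
          - type: haystack.tools.component_tool.ComponentTool
            data:
              component:
                type: haystack.components.websearch.serper_dev.SerperDevWebSearch
                init_parameters:
                  api_key:
                    type: env_var
                    env_vars:
                      - SERPERDEV_API_KEY
                    strict: false
                  top_k: 10
              name: web_search
              description: Search the web for current information on any topic
    ```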
- Ensure you mention the tools in `ChatPromptBuilder`'s prompt, for example:

  ```yaml
  - _content:
      - text: |
          You are a helpful assistant answering the user's questions based on the provided documents.
          If the answer is not in the documents, rely on the web_search tool to find information.
          Do not use your own knowledge.
    _role: system
  ```
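
  In the pipeline YAML, this prompt sits under `ChatPromptBuilder`'s `template` init parameter in the same `_content`/`_role` format. The sketch below is an assumption: the component name `chat_prompt_builder`, the user turn, and the `{{ query }}` variable may differ in your pipeline:

  ```yaml
  chat_prompt_builder:  # assumed component name
    type: haystack.components.builders.chat_prompt_builder.ChatPromptBuilder
    init_parameters:
      template:
        - _content:
            - text: |
                You are a helpful assistant answering the user's questions based on the provided documents.
                If the answer is not in the documents, rely on the web_search tool to find information.
                Do not use your own knowledge.
          _role: system
        - _content:
            - text: "Question: {{ query }}"  # assumed user turn and input variable
          _role: user
  ```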
  Depending on the model, you may find that some rely more on the instructions in the prompt when choosing a tool, while others rely more on the tool descriptions.
- Ensure each `ToolInvoker` receives the `ChatGenerator`'s `replies`. You can either connect the `ChatGenerator`'s `replies` output directly to `ToolInvoker`'s `messages` input, or, if you're using any component in between, ensure it passes the `replies` on to `ToolInvoker`.
- Pass each `ToolInvoker`'s messages back to the `ChatGenerator` so that the LLM can use them to resolve the query.
  Important: The LLM must also receive the tool call it sent to the tool. The tool call must precede the message from the tool. You can use `OutputAdapter` to convert all the messages into a list of `ChatMessage` objects.
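
  A sketch of what this can look like, assuming components named `chat_generator`, `tool_invoker`, and `adapter`. The socket names `replies`, `messages`, `tool_messages`, and `output` are the actual ones; the component names and the adapter template are assumptions:

  ```yaml
  components:
    adapter:
      type: haystack.components.converters.output_adapter.OutputAdapter
      init_parameters:
        # the template variables (replies, tool_messages) become the adapter's inputs
        template: "{{ replies + tool_messages }}"
        output_type: typing.List[haystack.dataclasses.chat_message.ChatMessage]
        unsafe: true  # typically needed so the adapter returns ChatMessage objects rather than a string
  connections:
    - sender: chat_generator.replies      # the reply containing the tool call
      receiver: tool_invoker.messages
    - sender: chat_generator.replies      # keep the tool call so it precedes the tool message
      receiver: adapter.replies
    - sender: tool_invoker.tool_messages  # the tool results
      receiver: adapter.tool_messages
  # Connect adapter.output to whatever feeds the messages back to the ChatGenerator;
  # how that loop is wired depends on the rest of your pipeline.
  ```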