
Set Up Tool Calling For Your Model

Configure tools your chat model can use to answer the query.

Tool Calling with Agent

You can also use tools with an Agent component. For an overview of different ways to set up tool calling and their advantages, see Agentic Pipelines and AI Agent.

About This Task

Tool calling, also known as function calling, lets the LLM in your app call a pipeline component. This is useful when the model needs information or capabilities beyond what it knows, such as retrieving real-time data. Here is how it works:

  1. You pass the tool configuration to the LLM.
  2. Based on the tool's schema, the LLM generates a JSON tool call with the parameter values needed to run the tool, but the model does not run the tool itself.
  3. The LLM sends the tool call to ToolInvoker, which runs the tool.
  4. You then send the messages from ToolInvoker back to the LLM so that it can use them to resolve the query.
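The flow above maps onto a pipeline roughly as follows. This is a minimal sketch: the component names (`chat_generator`, `tool_invoker`) are illustrative, the empty `tools` lists are placeholders for your tool configuration, and the return path from ToolInvoker back to the generator (step 4) is covered later in this guide.

```yaml
components:
  chat_generator:
    type: haystack.components.generators.chat.openai.OpenAIChatGenerator
    init_parameters:
      tools: []                      # step 1: pass the tool configuration to the LLM
  tool_invoker:
    type: haystack.components.tools.tool_invoker.ToolInvoker
    init_parameters:
      tools: []                      # the same tools, so ToolInvoker can run them

connections:
  - sender: chat_generator.replies   # step 3: the LLM's tool call goes to ToolInvoker
    receiver: tool_invoker.messages
```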

Tool calling is supported by chat models used through ChatGenerators. For detailed information on chat models and tool calling, see Chat models. For information on using chat models in deepset AI Platform, see Language Models.

Prerequisites

  • Ensure the model you're using supports tool calling.
  • To learn about the Haystack Tool data class, see Haystack documentation.
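For reference, a tool doesn't have to wrap a pipeline component: the Tool data class can also wrap a plain function. In YAML it serializes roughly like this (a sketch based on Haystack's `Tool.to_dict` format; the function path `my_package.my_module.get_weather` and its schema are hypothetical):

```yaml
- type: haystack.tools.tool.Tool
  data:
    name: get_weather
    description: Fetch the current weather for a city
    parameters:                  # JSON schema of the function's arguments
      type: object
      properties:
        city:
          type: string
      required:
        - city
    function: my_package.my_module.get_weather   # import path to the function
```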

Set Up the Tool Call

Custom Components

If you're using a custom component, first check if it works correctly in deepset AI Platform.

  1. Add ToolInvoker to your pipeline.

  2. On the ToolInvoker card, click Configure and add the following information to the YAML:

    - type: haystack.tools.component_tool.ComponentTool # a component wrapper that allows components to be used as tools
      data:
        component:
          type: # the import path to the component; to quickly check a component type, add it to your pipeline and switch to the YAML view
          init_parameters: # configure your component, pass all init parameters here
            parameter1: value
            parameter2: value
        name: # give your tool a name, you can refer to it in the prompt
        description: # describe what the component does, it helps the LLM decide when to use it

    This is an example of how to configure SerperDevWebSearch as a tool for the LLM:

    - type: haystack.tools.component_tool.ComponentTool
      data:
        component:
          type: haystack.components.websearch.serper_dev.SerperDevWebSearch
          init_parameters:
            api_key:
              type: env_var
              env_vars:
                - SERPERDEV_API_KEY
              strict: false
            top_k: 10
        name: web_search
        description: Search the web for current information on any topic
  3. Configure a ToolInvoker for each tool you want to add.

  4. Configure the ChatGenerator:

    1. Expand optional parameters on the ChatGenerator card and click Configure under tools:
      Tools configuration on the Generator component card
    2. Add the same configuration as you added in ToolInvoker. List each tool you want to give the LLM:
      # tool 1
      - type: haystack.tools.component_tool.ComponentTool # a component wrapper that allows components to be used as tools
        data:
          component:
            type: # the import path to the component; to quickly check a component type, add it to your pipeline and switch to the YAML view
            init_parameters: # configure your component, pass all init parameters here
              parameter1: value
              parameter2: value
          name: # give your tool a name, you can refer to it in the prompt
          description: # describe what the component does, it helps the LLM decide when to use it
      # tool 2
      - type: haystack.tools.component_tool.ComponentTool
        data:
          component:
            type: # component type
            init_parameters: # component configuration
              parameter1: value
              parameter2: value
          name: # give your tool a name, you can refer to it in the prompt
          description: # describe what the component does, it helps the LLM decide when to use it
  5. Ensure you mention the tools in ChatPromptBuilder's prompt, for example:

        - _content:
            - text: |
                You are a helpful assistant answering the user's questions based on the provided documents.
                If the answer is not in the documents, rely on the web_search tool to find information.
                Do not use your own knowledge.
          _role: system

    Depending on the model, some rely more on the instructions in the prompt when choosing a tool, while others rely more on the tool descriptions.

  6. Ensure each ToolInvoker receives the ChatGenerator's replies. You can either connect the ChatGenerator's replies output directly to ToolInvoker's messages input, or, if you're using any component in between, ensure it passes the replies on to ToolInvoker.

  7. Pass each ToolInvoker's messages back to the ChatGenerator so that the LLM can use them to resolve the query.
    Important: The LLM must also receive the tool call it sent to the tool. The tool call must precede the message from the tool. You can use OutputAdapter to convert all the messages into a list of ChatMessage objects.
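As a sketch of this step: assuming an OutputAdapter placed between ToolInvoker and the ChatGenerator, it could merge the initial messages, the tool call, and the tool results into one list. The component name, template, and input names below are illustrative, not fixed:

```yaml
adapter:
  type: haystack.components.converters.output_adapter.OutputAdapter
  init_parameters:
    template: "{{ initial_messages + tool_call_messages + tool_messages }}"
    output_type: typing.List[haystack.dataclasses.ChatMessage]
    unsafe: true   # needed for non-primitive output types such as ChatMessage
```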