Create an Experiment Run

Experiments let you evaluate your pipeline systematically. An experiment run is a single trial of an experiment that you can use to measure your pipeline's performance.


You must be an Admin user to perform this task.

About This Task

When creating an experiment run, you can run it immediately or save it as a draft and run it later.

To learn more about experiments, see About Pipeline Evaluation.


Create an Experiment Run from the UI

  1. Log in to deepset Cloud and go to Experiments > New Experiment.
  2. Choose the pipeline that you want to evaluate.
  3. Choose the evaluation dataset that you want to use for this experiment.
  4. Give your experiment a meaningful name. You can also add tags that will let you identify the experiment later.
    You create tags for the whole workspace; they're not tied to a single experiment. You can use the same tag for multiple experiments.
  5. Choose one of the following:
  • To start the experiment now, click Start Experiment. The experiment starts running.
  • To save your experiment as a draft, click Save as Draft.

Create an Experiment Run with the Python SDK

For this task, you can also use Jupyter Notebooks within deepset Cloud; just go to Notebooks in the left navigation. We created a Notebook template to help you get started with experiments. You can find it in the examples folder in 02_getting_started_experiments.ipynb.

There are a couple of prerequisites for this task:

Here's the code you can use:

# Imports and setup:
import os

from haystack.utils import DeepsetCloudExperiments

# Set the API key and API endpoint
# (the SDK reads these from the environment):
os.environ["DEEPSET_CLOUD_API_KEY"] = "<your_api_key>"
os.environ["DEEPSET_CLOUD_API_ENDPOINT"] = "https://api.cloud.deepset.ai/api/v1"

# Create the experiment
# (parameter names may vary slightly by SDK version):
DeepsetCloudExperiments.create_run(
    eval_run_name="<eval_run_name>",
    workspace="<your_workspace>",
    pipeline_config_name="<pipeline_name>",
    evaluation_set="<eval_set_name>",
    comment="An optional comment",
)

# Inspect the experiment:
DeepsetCloudExperiments.get_run(workspace="<your_workspace>", eval_run_name="<eval_run_name>")

# Start your experiment:
DeepsetCloudExperiments.start_run(workspace="<your_workspace>", eval_run_name="<eval_run_name>")
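If you wrap these SDK calls in a script, it helps to fail fast when credentials are missing. Here's a minimal sketch, assuming the SDK reads the API key and endpoint from the `DEEPSET_CLOUD_API_KEY` and `DEEPSET_CLOUD_API_ENDPOINT` environment variables; the helper name and the default endpoint value are assumptions, not part of the SDK:

```python
import os

# Hypothetical helper: read the credentials the SDK expects from the
# environment, raising a clear error if the API key is missing.
def load_deepset_env(default_endpoint="https://api.cloud.deepset.ai/api/v1"):
    api_key = os.getenv("DEEPSET_CLOUD_API_KEY")
    if not api_key:
        raise RuntimeError("DEEPSET_CLOUD_API_KEY is not set")
    endpoint = os.getenv("DEEPSET_CLOUD_API_ENDPOINT", default_endpoint)
    return api_key, endpoint

os.environ["DEEPSET_CLOUD_API_KEY"] = "example-key"  # for demonstration only
os.environ.pop("DEEPSET_CLOUD_API_ENDPOINT", None)   # fall back to the default
key, endpoint = load_deepset_env()
print(endpoint)
```

Checking credentials up front gives you one clear error instead of an authentication failure buried in the SDK's traceback.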

Create an Experiment Run with the REST API

Before you start, you must prepare a couple of things:

You're now ready to create an experiment run:

  1. Define your experiment run. Use the create eval run API endpoint. Here's the code:
curl --request POST \
     --url https://api.cloud.deepset.ai/api/v1/workspaces/<YOUR_WORKSPACE>/eval_runs \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer <YOUR_API_KEY>' \
     --header 'Content-Type: application/json' \
     --data '
     {
       "tags": ["<tag_name>"],
       "comment": "This is a comment",
       "debug": true,
       "evaluation_set_name": "<eval_set_name>",
       "name": "<experiment_run_name>",
       "pipeline_name": "<pipeline_name>"
     }'
  2. Start the experiment run using the start eval run endpoint:
curl --request POST \
     --url https://api.cloud.deepset.ai/api/v1/workspaces/<YOUR_WORKSPACE>/eval_runs/<eval_run_name>/start \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer <YOUR_API_KEY>'
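The request body in step 1 is easy to get wrong when assembled by hand. Here's a small sketch that builds the same JSON payload programmatically; the helper function is hypothetical, but the field names match the create eval run request above:

```python
import json

# Hypothetical helper: assemble the body for the create eval run request.
def build_eval_run_body(name, pipeline_name, evaluation_set_name,
                        tags=None, comment="", debug=False):
    return {
        "tags": tags or [],
        "comment": comment,
        "debug": debug,
        "evaluation_set_name": evaluation_set_name,
        "name": name,
        "pipeline_name": pipeline_name,
    }

body = build_eval_run_body(
    name="my-experiment-run",
    pipeline_name="my-pipeline",
    evaluation_set_name="my-eval-set",
    tags=["baseline"],
    comment="This is a comment",
    debug=True,
)
print(json.dumps(body, indent=2))
```

You can then pass the serialized body to any HTTP client instead of pasting JSON into the curl command by hand.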