Label Answers

Use labeling projects to create gold labels for your search system. Gold labels indicate which answers you expect your queries to return. You can later export the labels and use them as evaluation datasets in experiments.

About This Task

To evaluate your pipeline in experiments, you need an evaluation dataset containing gold labels: the answers you expect in response to your queries.

Currently, deepset Cloud supports labeling projects for document search only.
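For document search, you can think of a gold label as a query paired with the documents a labeler marked as relevant. The sketch below is a hypothetical illustration of that idea; the field names and structure are assumptions, not the actual deepset Cloud export format.

```python
# Hypothetical sketch: each gold label pairs a query with the document IDs
# labelers marked as relevant. Field names are illustrative only, not the
# real deepset Cloud export schema.
gold_labels = [
    {
        "query": "What is our refund policy?",
        "relevant_document_ids": ["doc_12", "doc_47"],
    },
    {
        "query": "How do I reset my password?",
        "relevant_document_ids": ["doc_03"],
    },
]

# An experiment can then check whether the pipeline retrieves
# these documents for each query.
for label in gold_labels:
    print(label["query"], "->", label["relevant_document_ids"])
```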

Prerequisites

  • A labeling project exists with all the mandatory steps configured.
  • You have access to the labeling project.

Labeling

  1. Log in to deepset Cloud.
  2. Click Labeling and open the labeling project you're participating in.
  3. On the project page, click Start Labeling. You're redirected to the Labeling Query page. If the project creator added guidelines for you and the query target, you can see them here.
  4. Ask a query. You'll see a set of documents as results. For each document, indicate whether it is:
    1. Relevant (answers the query well)
    2. Not relevant (doesn't match the query)
    3. Flagged for review (you're not sure if it's relevant or not, and you'd like another labeler to check)
  5. Continue until you've run the number of queries the project creator set as the target. To change the label for a previous query, navigate back to it using the Prev. Query option at the bottom of the page.

🚧

Coming soon - query history

We're working on adding a query history where you can check all the labels you've added so far. We'll let you know when it's ready.

When you're finished, let the project creator know. When the project reaches the target query number, the project creator can export the labels and use them in pipeline experiments.
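To make concrete how exported labels can drive a pipeline experiment, here is a minimal sketch of one common document search metric, recall@k: the fraction of gold-labeled documents that appear in the top k results. The function name, the ID format, and the example data are assumptions for illustration; they don't reflect a specific deepset Cloud API.

```python
def recall_at_k(gold_ids, retrieved_ids, k=10):
    """Fraction of gold-labeled documents found in the top-k retrieved results.

    gold_ids: document IDs labelers marked as relevant for a query.
    retrieved_ids: document IDs the pipeline returned, best match first.
    """
    top_k = set(retrieved_ids[:k])
    hits = sum(1 for doc_id in gold_ids if doc_id in top_k)
    return hits / len(gold_ids) if gold_ids else 0.0


# Hypothetical data: gold labels from the project vs. a pipeline's results.
gold = ["doc_12", "doc_47"]
retrieved = ["doc_47", "doc_08", "doc_12", "doc_99"]
print(recall_at_k(gold, retrieved, k=3))  # 1.0: both gold docs are in the top 3
```

Averaging this score over all labeled queries gives one simple summary of how well the pipeline retrieves the documents your labelers expected.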