About Experiments
Once you've created a pipeline, it's time to evaluate it using experiments. Test your pipeline against all the files in your workspace and then share it with other users to collect their feedback.
Supported Nodes
Currently, deepset Cloud supports the evaluation of PreProcessor, Retriever, and Reader nodes. You can combine all types of nodes in your pipelines, but only these three are evaluated.
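For example, a minimal query pipeline that combines a Retriever and a Reader could be sketched in Haystack-style Python like this. The specific node classes, the document store, and the model name are illustrative assumptions, not the only options deepset Cloud supports:

```python
# Illustrative sketch of a query pipeline with a Retriever and a Reader.
# The node classes and the model name are example choices, not requirements.
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader

document_store = InMemoryDocumentStore(use_bm25=True)  # stand-in for your document store
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=reader, name="Reader", inputs=["Retriever"])

# A PreProcessor node usually sits in the indexing pipeline, where it splits
# files into smaller documents before they are written to the document store.
```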
Experiments
Experiments are an essential step in designing your pipeline as they help you to:
- Find the best pipeline configuration for your use case.
- Evaluate if the pipeline performance is sufficient to move it to production.
- Generate new hypotheses for improving your pipeline.
- Track the settings you used for previous versions of your pipeline.
Before you start experimenting, you must prepare an evaluation dataset containing questions and gold answers, and a pipeline to evaluate. The experiment runs on all the files in your workspace at the time you create the experiment.
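As a rough sketch of what such a dataset can look like, the snippet below writes a small CSV with questions and gold answers. The column names (question, text, context, file_name) are assumptions chosen for illustration; check the documentation on evaluation sets for the exact format deepset Cloud expects.

```python
# Minimal sketch of an evaluation dataset with questions and gold answers.
# The column names are assumptions for illustration only; refer to the
# deepset Cloud documentation for the exact evaluation set format.
import csv

rows = [
    {
        "question": "When was the company founded?",
        "text": "2018",                      # the gold answer
        "context": "The company was founded in 2018 in Berlin.",
        "file_name": "company_history.txt",  # file the answer comes from
    },
    {
        "question": "Where is the headquarters located?",
        "text": "Berlin",
        "context": "The company was founded in 2018 in Berlin.",
        "file_name": "company_history.txt",
    },
]

with open("evaluation_set.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```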
Here's what happens during an experiment run:
- deepset Cloud runs all the questions from your evaluation dataset through the pipeline that you chose for evaluation.
- The pipeline searches for answers in all the files that were present in your workspace when you created the experiment.
- deepset Cloud then compares the results that the pipeline returned with the answers that you labeled in the evaluation dataset and calculates the metrics that you can use to tweak your pipeline.
Running experiments is a formal way to evaluate your pipeline and see how often it predicts the correct answer.
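To give a concrete sense of this comparison, the sketch below computes exact match and token-level F1 for a single predicted answer against a gold answer, in the spirit of standard extractive QA evaluation. It's a simplified illustration, not the exact code deepset Cloud runs, and the normalization step is deliberately minimal.

```python
# Simplified illustration of exact match (EM) and token-level F1 between a
# predicted answer and a gold answer. deepset Cloud's own implementation may
# differ in details such as text normalization.
from collections import Counter

def normalize(text: str):
    # Lowercase and split into tokens; real evaluation usually also strips
    # punctuation and articles.
    return text.lower().split()

def exact_match(prediction: str, gold: str) -> bool:
    return normalize(prediction) == normalize(gold)

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = normalize(prediction), normalize(gold)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("founded in 2018", "2018"))          # False
print(round(f1_score("founded in 2018", "2018"), 2))   # 0.5
```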
Overview of Pipeline Evaluation
Evaluating a pipeline involves a couple of steps. Here's an overview of the whole process:
- First, make sure you upload files to your workspace. The pipeline you're evaluating runs on these files. The more files there are, the more difficult it is to find the right answer.
Coming soon!
Currently, the evaluation runs on all the files in your workspace, but we're working on making it possible to choose a subset of files for an experiment.
- The evaluation dataset contains the annotated data used to estimate how well your pipeline performs. You can choose one dataset per evaluation run.
- An experiment is where you check how your pipeline is performing. When an experiment finishes, you can review its details on the Experiment Details page, where you can check:
- The experiment status. If the experiment failed, check the Debug section to see what went wrong.
- The details of the experiment: the pipeline and the evaluation set used.
- Metrics for pipeline components. You can see metrics for both integrated and isolated evaluation. For more information about metrics, see Experiment Metrics.
- The pipeline parameters and configuration used for this experiment. They may differ from the actual pipeline, as you can adjust the settings just for an experiment run without modifying the pipeline itself. You can't edit your pipeline in this view.
- Detailed predictions. Here you can see how your pipeline did and what answers it returned (predicted answers) compared to the expected answers. For each predicted answer, deepset Cloud displays the exact match, F1 score, and rank. The predictions are shown for each node separately.
You can export this data to a CSV file. Open the node whose predictions you want to export and click Download CSV. For one way to analyze the exported file, see the sketch after this list.
If you're unhappy with your pipeline's performance during the experiment, check how to tweak your pipeline for better results.
- Demo your pipeline to others. Invite people to your organization, let them test your search, and collect their feedback. Everyone listed on the Organization page can run a search with your pipelines. Have a look at Guidelines for Onboarding Your Users to ensure you set the right expectations for your pipeline's performance.
Users can give each answer a thumbs-up or thumbs-down rating. You can then export this feedback to a CSV file and analyze it.
- Reviewing search statistics is another way to check how your pipeline is doing. You can find all the data on the Dashboard in the LATEST REQUESTS section. Here you can check:
- The actual query.
- The answer with the highest score.
- The pipeline used for the search.
- The top file. For a QA pipeline, this is the file that contains the top answer; for a document retrieval pipeline, it's the file with the highest score.
- Who ran the search.
- How many seconds it took to find the answer.
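As one way to work with the exported predictions CSV mentioned above, the sketch below loads it with pandas and summarizes answer quality. The file name and column names (exact_match, f1) are assumptions based on the metrics shown in the UI; adjust them to match the file you actually download.

```python
# Sketch of analyzing an exported predictions CSV with pandas. The file name
# and column names are assumptions; adjust them to the file you downloaded.
import pandas as pd

predictions = pd.read_csv("reader_predictions.csv")

# Overall answer quality for this node.
print("Mean exact match:", predictions["exact_match"].mean())
print("Mean F1 score:", predictions["f1"].mean())

# Questions where the pipeline missed the gold answer entirely; these are
# good candidates for reviewing the files and the labels.
misses = predictions[predictions["f1"] == 0.0]
print(misses.head())
```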