You must be an Admin to perform this task.
You can review the following information about your experiment:
- The experiment status. If the experiment failed, check the Debug section to see what went wrong.
- The details of the experiment: the pipeline and evaluation set used.
- Metrics for pipeline components. You can see metrics for both integrated and isolated evaluation. For more information about metrics, see Experiment Metrics.
- The pipeline parameters and configuration used for this experiment. They may differ from your actual pipeline, as you can update the pipeline just for an experiment run without modifying the pipeline itself. You can't edit your pipeline in this view.
- Detailed predictions. Here you can see how your pipeline performed and what answers it returned (predicted answers) compared to the expected answers. For each predicted answer, deepset Cloud displays the exact match, F1 score, and rank. Predictions are shown for each node separately.
You can export this data to a CSV file: open the node whose predictions you want to export and click Download CSV.
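To build intuition for the exact match and F1 scores shown for each predicted answer, here is a minimal sketch of how such answer-level metrics are commonly computed (SQuAD-style token overlap). This is an illustration only, not deepset Cloud's exact implementation, and the normalization shown (lowercasing and whitespace tokenization) is an assumption.

```python
# Illustrative answer-level metrics, assuming simple lowercase +
# whitespace-split normalization (not deepset Cloud's exact code).
from collections import Counter


def exact_match(predicted: str, expected: str) -> int:
    """1 if the normalized answers are identical, else 0."""
    return int(predicted.strip().lower() == expected.strip().lower())


def f1_score(predicted: str, expected: str) -> float:
    """Harmonic mean of token precision and recall between the answers."""
    pred_tokens = predicted.lower().split()
    exp_tokens = expected.lower().split()
    common = Counter(pred_tokens) & Counter(exp_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(exp_tokens)
    return 2 * precision * recall / (precision + recall)


print(exact_match("Berlin", "Berlin"))            # 1
print(round(f1_score("in Berlin", "Berlin"), 2))  # 0.67
```

A predicted answer that contains the expected answer plus extra tokens scores a partial F1 but an exact match of 0, which is why both metrics are shown side by side.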
All your experiments are listed in a table on the Experiments page. You can configure the table columns to show the metrics you want to compare. Use the Customize the columns icon to choose the metrics.
- Log in to deepset Cloud and go to Experiments.
- Click the name of the experiment whose details you want to see. The Experiment Details page opens. You can see all the information about your experiment here.
```shell
curl --request GET \
  --url https://api.cloud.deepset.ai/api/v1/workspaces/<WORKSPACE_NAME>/eval_runs/<EVAL_RUN_NAME> \
  --header 'Accept: application/json' \
  --header 'Authorization: Bearer <YOUR_API_KEY>'
```
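The same request can be made from Python. Below is a minimal sketch using only the standard library; the workspace name, eval run name, and API key are placeholders you supply yourself, and the helper names (`eval_run_url`, `get_eval_run`) are illustrative, not part of any deepset SDK.

```python
# Sketch of the GET request above using Python's standard library.
# WORKSPACE_NAME, EVAL_RUN_NAME, and the API key are your own values.
import json
import urllib.request

API_BASE = "https://api.cloud.deepset.ai/api/v1"


def eval_run_url(workspace: str, eval_run: str) -> str:
    """Build the eval run endpoint URL used by the curl example."""
    return f"{API_BASE}/workspaces/{workspace}/eval_runs/{eval_run}"


def get_eval_run(workspace: str, eval_run: str, api_key: str) -> dict:
    """Fetch the experiment (eval run) details as parsed JSON."""
    request = urllib.request.Request(
        eval_run_url(workspace, eval_run),
        headers={
            "Accept": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

For example, `get_eval_run("<WORKSPACE_NAME>", "<EVAL_RUN_NAME>", "<YOUR_API_KEY>")` returns the experiment details as a dictionary.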