The EvaluationSetClient
class contains a variety of methods that you can use to manage your evaluation sets in deepset Cloud from your SDK.
Prerequisites
- You must be an Admin to perform this task.
- If you work from your local SDK, you must have Haystack installed. For more information, see Haystack Installation. If you use Jupyter Notebooks in deepset Cloud, Haystack is already installed.
- Add the API endpoint and API key to the environment variables. The API endpoint is <https://api.cloud.deepset.ai/api/v1>. See Generate an API Key.
Server restart
If the Notebooks server shuts down while you're working, all the files you saved are preserved. When you restart the server, you can continue working on them.
Display Labels
List all labels that exist in a specific evaluation set in deepset Cloud.
This table lists the parameters that you can use with the get_labels() method:
Method | Parameters | Description |
---|---|---|
get_labels() | evaluation_set - specifies the name of the evaluation set whose labels you want to display. String. Optional. workspace - specifies the workspace where the evaluation set exists. String. Optional. | Lists all labels that exist in the specified evaluation set. Returns a list of labels. |
Example of usage
from haystack.utils import DeepsetCloud
evaluation_set_client = DeepsetCloud.get_evaluation_set_client()
my_labels = evaluation_set_client.get_labels(evaluation_set="eval_set1")
print(my_labels)
Get the Label Count
Find out how many labels there are in an evaluation set.
This table lists the parameters that you can use with the get_labels_count() method:
Method | Parameters | Description |
---|---|---|
get_labels_count() | evaluation_set - the name of the evaluation set whose labels you want to count. String. Optional. workspace - the name of the workspace where the evaluation set is. String. Optional. | Counts the labels that exist in an evaluation set in deepset Cloud. Returns the number of labels. |
Example of usage
from haystack.utils import DeepsetCloud
evaluation_set_client = DeepsetCloud.get_evaluation_set_client()
label_count = evaluation_set_client.get_labels_count(evaluation_set="my_set")
print(label_count)
List Evaluation Sets
Get a list of all evaluation sets that exist in a deepset Cloud workspace. You can also retrieve information about a particular evaluation set.
This table describes the parameters that you can use with the methods for listing evaluation sets:
Method | Parameters | Description |
---|---|---|
get_evaluation_sets() | workspace - the workspace with the evaluation sets. String. Optional. | Lists all evaluation sets in a given workspace in deepset Cloud. Returns a list of dictionaries that contain the following fields: name, evaluation_set_id, created_at, matched_labels, total_labels. |
get_evaluation_set() | evaluation_set - specifies the name of the evaluation set that you want to retrieve. String. Optional. workspace - specifies the workspace where the evaluation set exists. String. Optional. | Lists information about a particular evaluation set. Returns a dictionary that contains the following fields: name, evaluation_set_id, created_at, matched_labels, total_labels. |
Example of usage
To list all evaluation sets in deepset Cloud:
from haystack.utils import DeepsetCloud
evaluation_set_client = DeepsetCloud.get_evaluation_set_client()
evaluation_sets = evaluation_set_client.get_evaluation_sets()
print(evaluation_sets)
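Because get_evaluation_sets() returns a list of dictionaries with the fields listed above, you can filter the result with ordinary Python. As a sketch, using hypothetical sample data in place of a live API response, this finds the sets where every label matched a file in the workspace:

```python
# Hypothetical sample mirroring the documented fields; in practice this
# list comes from evaluation_set_client.get_evaluation_sets().
evaluation_sets = [
    {"name": "eval_set1", "evaluation_set_id": "id-1",
     "created_at": "2022-01-10", "matched_labels": 80, "total_labels": 100},
    {"name": "my_set", "evaluation_set_id": "id-2",
     "created_at": "2022-02-03", "matched_labels": 50, "total_labels": 50},
]

# Keep only the sets in which all labels matched files in the workspace.
fully_matched = [s["name"] for s in evaluation_sets
                 if s["matched_labels"] == s["total_labels"]]
print(fully_matched)  # → ['my_set']
```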
To view a specific evaluation set:
from haystack.utils import DeepsetCloud
evaluation_set_client = DeepsetCloud.get_evaluation_set_client()
my_eval_set = evaluation_set_client.get_evaluation_set(evaluation_set="my_set")
print(my_eval_set)
Upload Evaluation Sets
Upload your evaluation sets to deepset Cloud.
The name of the file that you upload becomes the name of the evaluation set in deepset Cloud.
Evaluation sets must be in the CSV format with the following columns:
- question (or query) - contains the labeled question or query
- text - the answer or the relevant text passage to the question or query
- context - the words that surround the text; it should be more than 100 characters long
- file_name - the name of the file in the workspace that contains the text
- answer_start - the character position in the file that marks the start of the text
- answer_end - the character position in the file that marks the end of the text
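As a sketch of the expected file layout, you could generate a minimal evaluation set with Python's csv module. The row values below are hypothetical illustrations; in practice, file_name must refer to a file in your workspace, the offsets must point at the answer in that file, and the context should exceed 100 characters:

```python
import csv
from pathlib import Path

# One hypothetical labeled row; real values must match a file
# that exists in your deepset Cloud workspace.
rows = [
    {
        "question": "Who wrote Faust?",
        "text": "Johann Wolfgang von Goethe",
        # Real contexts should be longer than 100 characters.
        "context": "The tragic play Faust was written by Johann Wolfgang von Goethe and first published in the early 19th century.",
        "file_name": "goethe.txt",
        "answer_start": 37,
        "answer_end": 63,
    }
]

out = Path("eval_set.csv")
with out.open("w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["question", "text", "context",
                    "file_name", "answer_start", "answer_end"],
    )
    writer.writeheader()
    writer.writerows(rows)
```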
This table lists the parameters that you can use with the upload_evaluation_set() method:
Method | Parameters | Description |
---|---|---|
upload_evaluation_set() | file_path - the path to the evaluation set that you want to upload. Path. Required. workspace - the name of the deepset Cloud workspace where you want to upload your evaluation set. | Uploads an evaluation set to deepset Cloud. The name of the file that you upload becomes the name of the evaluation set in deepset Cloud. |
Example of usage
from haystack.utils import DeepsetCloud
from pathlib import Path
evaluation_set_client = DeepsetCloud.get_evaluation_set_client()
# Use a raw string so the backslashes in the Windows path aren't treated as escape sequences
my_set = evaluation_set_client.upload_evaluation_set(
    file_path=Path(r"C:\Users\OneDrive\Documents\eval_set.csv"))