Monitor user feedback to get an indication of how your search system is doing.
How It Works
You or your users can give a thumbs up or down for each search result. When giving a thumbs down, you can indicate the reason: either the answer was wrong or the context wasn't helpful. You can see an example implementation of this in the Search UI in deepset Cloud:
Collecting Feedback in Production
In production, you typically want to collect feedback from your users to monitor the performance of your model over time, identify potential issues (such as data drift), or use the collected feedback as labels for future model training.
To collect this type of feedback from your own application, you need to do two things:
- Add thumbs-up and thumbs-down buttons in your UI.
- Send requests with the feedback to deepset Cloud's Add Feedback API endpoint.
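The second step can be sketched in Python. This is a minimal illustration, not a definitive implementation: the endpoint path, payload field names (`query_id`, `score`, `comment`), and score values (`good`/`bad`) are assumptions here, so check the Add Feedback API reference for the exact schema.

```python
def build_feedback_request(
    workspace: str,
    pipeline: str,
    query_id: str,
    score: str,
    api_key: str,
    comment: str = "",
) -> dict:
    """Build the pieces of an Add Feedback request.

    NOTE: The URL path and the payload keys below are illustrative
    assumptions, not the confirmed API schema. `score` is assumed to be
    "good" for a thumbs up and "bad" for a thumbs down.
    """
    return {
        "url": (
            f"https://api.cloud.deepset.ai/api/v1/workspaces/{workspace}"
            f"/pipelines/{pipeline}/feedback"
        ),
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"query_id": query_id, "score": score, "comment": comment},
    }

# Usage sketch: send the request when the user clicks a thumbs button,
# e.g. requests.post(**build_feedback_request(...)).
```

Keeping the request-building logic separate from the HTTP call makes it easy to unit-test the wiring of your feedback buttons without hitting the live API.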
You can analyze the feedback in deepset Cloud on the Pipeline Details page. Click the name of the pipeline whose feedback you want to view. You can check what the users searched for and which queries performed well. You can also assign tags to the feedback items to make it easier to sort and group them. Just click a feedback item and add a tag to it.
Another option is to export your users' feedback into a CSV file. To do so, go to the Pipelines page, click the ellipsis next to the pipeline whose feedback you want to export, and choose Export feedback.
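Once exported, the CSV file can be analyzed programmatically. The sketch below counts ratings with the standard library; the column name `score` and its values are hypothetical, so adjust them to match the actual export format.

```python
import csv
from collections import Counter


def summarize_feedback(rows) -> Counter:
    """Tally feedback ratings from an iterable of row dicts.

    Assumes each row has a 'score' column (a hypothetical column name;
    the real export schema may differ).
    """
    return Counter(row["score"] for row in rows)


# Usage with an exported file (hypothetical filename and column name):
# with open("feedback_export.csv", newline="") as f:
#     print(summarize_feedback(csv.DictReader(f)))
```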
We recommend instructing your users not to be too strict when rating results. If a result helps answer their question, ask them to give it a thumbs up, even if it, for example, lacks a word or has the whole sentence highlighted when just a part of it would be enough.
If a result is garbage text, completely false, or not helpful at all, they should give it a thumbs down.