- Ensure that your users understand that the information they're looking for must exist in the text data stored in deepset Cloud. The application can't make up answers on its own.
- What the system returns depends on the pipeline you defined. For example, a Retriever-only pipeline returns text passages that match the query, while a question-answering pipeline also highlights the answers within the text passages it returns (see the sketch after this list).
- Explain what queries work well (for example, natural language questions as opposed to simple keywords or copy-pasted error messages).
- Make your users aware of what data is indexed in their deepset Cloud search application.
- Ensure that your users understand that if they ask for information about documents that don't exist in deepset Cloud, the system won't be able to find an answer.
- Ensure that your users understand the relevance scores displayed beneath each model prediction. You can point them to Relevance Scores.
- Set an appropriate threshold for relevance scores if you think some users might be confused by model predictions with very low relevance (see the filtering sketch after this list).
- If you want to collect your users' feedback, ask them to click thumbs up or thumbs down for each answer. User feedback helps you evaluate the underlying machine learning models and can be used to improve them. See User Feedback for more details.
We recommend instructing your users not to be too strict about the results. If a result helps answer their question, ask them to select the thumbs-up icon. This includes answers that, for example, lack a word or highlight the whole sentence even though just a part of it would be enough.
If a result is garbage text, completely false, or not helpful at all, they should use the thumbs-down icon.
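To make the pipeline distinction from the list above concrete, here is a minimal sketch of how a client might present the two kinds of results. The field names (`content`, `answer`, `context`) are assumptions modeled on typical extractive question-answering output, not a definitive description of the deepset Cloud response format.

```python
# A minimal sketch, not a definitive description of the deepset Cloud API.
# Field names such as "content", "answer", and "context" are assumptions
# modeled on typical extractive QA results.

def describe_result(result: dict) -> str:
    """Summarize a single search result for display."""
    if result.get("answer"):
        # Question-answering pipeline: an answer span found inside a passage.
        return f"Answer: {result['answer']} (in: {result.get('context', '')[:80]}...)"
    # Retriever-only pipeline: a whole text passage that matches the query.
    return f"Passage: {result.get('content', '')[:80]}..."


# Example usage with hand-written results:
retriever_result = {"content": "deepset Cloud stores your documents as text passages."}
qa_result = {
    "answer": "text passages",
    "context": "deepset Cloud stores your documents as text passages.",
}
print(describe_result(retriever_result))
print(describe_result(qa_result))
```

For the relevance threshold mentioned in the list above, one option is to filter results on the client side before showing them to users. This is a minimal sketch that assumes each result carries a numeric `score` field; the field name and the cutoff value are assumptions you would adapt to your own pipeline.

```python
# A minimal sketch of client-side filtering by relevance score.
# Assumes each result is a dict with a numeric "score" field; adjust the
# field name and the threshold to match your pipeline's output.

def filter_by_relevance(results: list[dict], threshold: float = 0.5) -> list[dict]:
    """Drop predictions whose relevance score falls below the threshold."""
    return [r for r in results if r.get("score", 0.0) >= threshold]


results = [
    {"answer": "text passages", "score": 0.91},
    {"answer": "key-value pairs", "score": 0.12},  # low relevance, likely confusing
]
print(filter_by_relevance(results))  # keeps only the high-scoring prediction
```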
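Starting with a generous threshold and adjusting it based on user feedback is usually safer than hiding too many results up front.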