Tutorial: Building a Robust RAG System

Build a retrieval-augmented generation (RAG) system that runs on your own data and generates answers in a friendly, conversational tone. You'll also learn how to test different prompts in Prompt Explorer and insert them into your pipeline.

  • Level: Basic
  • Time to complete: 10 minutes
  • Prerequisites:
    • You must be an Admin to complete this tutorial.
    • You must have an API key from an active OpenAI account as this pipeline uses the gpt-3.5-turbo model by OpenAI.
  • Goal: After completing this tutorial, you will have built a RAG system that can answer questions about treating various diseases based on the documents from Mayo Clinic. This system will run on the data you provide to it to minimize the possibility of hallucinations.
  • Keywords: PromptBuilder, Generator, large language models, retrieval augmented generation, RAG, gpt-3.5-turbo, Prompt Explorer
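Before diving in, it helps to see the RAG idea in miniature: retrieve the documents most relevant to a question, then prompt the model to answer only from them. The toy sketch below is illustrative only; in this tutorial, deepset Cloud handles retrieval and prompting for you:

```python
import string

def tokens(text: str) -> set[str]:
    """Toy tokenizer: lowercase, split on whitespace, strip punctuation."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    return sorted(documents, key=lambda d: len(tokens(question) & tokens(d)), reverse=True)[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Restrict the model to the retrieved documents, like the tutorial's prompt does."""
    context = "\n".join(f"Document[{i}]:\n{d}" for i, d in enumerate(documents, start=1))
    return f"Only answer based on the documents provided.\n{context}\nQuestion: {question}\nAnswer:"

docs = [
    "Meningitis is an inflammation of the membranes surrounding the brain and spinal cord.",
    "A wheat allergy is managed by avoiding foods that contain wheat proteins.",
    "Swollen wrists usually respond to rest, ice, and compression.",
]
prompt = build_prompt("What is meningitis?", retrieve("What is meningitis?", docs))
```

A real pipeline swaps the word-overlap scorer for semantic similarity, but the shape is the same: grounding the model in retrieved documents is what minimizes hallucinations.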

Create a Workspace

We need a deepset Cloud workspace to store our files and the generative pipeline.

  1. Log in to deepset Cloud.

  2. In the upper left corner, click the name of the workspace, type RAG as the workspace name, and click Create.

    The workspace creation window expanded

Result: You have created a workspace called RAG, where you'll upload the Mayo Clinic files.

Upload Files to Your Workspace

  1. First, download the mayoclinic.zip file and unpack it on your computer. (You can also use your own files.)
  2. In deepset Cloud, make sure you're in the RAG workspace, and go to Files > Upload Files.
  3. Open the folder you downloaded and unzipped in step 1, select all the files in it, and drag them to the Upload Files window. Click Upload.
  4. Wait until the upload finishes. You should have 1096 files in your workspace.

Result: Your files are in the RAG workspace and you can see them on the Files page.

The Files page with the uploaded files showing in a list
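If you'd rather script the upload than drag files into the browser, deepset Cloud also exposes a REST API. The sketch below only builds the request as a dry run; the endpoint path and header are assumptions based on the deepset Cloud API reference, so verify them before sending real traffic:

```python
from pathlib import Path

API_BASE = "https://api.cloud.deepset.ai/api/v1"

def upload_request(workspace: str, file_path: Path) -> dict:
    """Build (but don't send) the upload request for one file."""
    return {
        "method": "POST",
        # Assumed endpoint -- confirm against the current deepset Cloud API reference.
        "url": f"{API_BASE}/workspaces/{workspace}/files",
        "headers": {"Authorization": "Bearer <YOUR_DEEPSET_CLOUD_API_KEY>"},
        "files": {"file": file_path.name},
    }

req = upload_request("RAG", Path("mayoclinic/acne.txt"))
```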

Connect Your OpenAI Account

Once you connect deepset Cloud to your OpenAI account, you can use OpenAI models without passing the API keys in the pipeline.

  1. Click your initials in the top right corner and choose Connections.
The personal menu expanded with the Connections option underlined.
  2. Next to OpenAI, click Connect, paste your OpenAI API key, and click Submit.

Result: You're connected to your OpenAI account and can use OpenAI models in your pipelines.

The integrations section with the OpenAI option showing as connected.

Create a Draft Pipeline

Let's create a pipeline that will be a starting point for the generative question answering app:

  1. In the navigation, go to Pipeline Templates.

  2. Choose Basic QA, find Generative Question Answering GPT-3.5, and click Use Template.

    Alt-text: "Screenshot of a 'Basic QA' section on a webpage showcasing pipeline templates for question-answering systems. The section is one of several categories listed in a sidebar on the left, with 'Basic QA' showing a count of '6'. The main pane shows three templates: 'Extractive Question Answering', 'Extractive Question Answering (German)', and 'Generative Question Answering GPT-3.5'. Each template offers a brief description of its functionality, emphasizing the use of semantic similarity in searching for answers. Icons indicate the creator of the templates, 'deepset'. At the bottom of each template, there are options to 'View Details' and 'Use Template', with the latter having a notification bubble with a curled arrow symbol, suggesting an update or new feature. The interface is clean with a color scheme consisting primarily of blue, white, and gray."
  3. Type RAG as the pipeline name and click Create Pipeline. You're redirected to Pipeline Builder, where you can view and edit your pipeline.

  4. Click Deploy and wait until the pipeline is deployed and indexed.
    Tip: You can check the indexing status of your pipeline by hovering over the status tag.

    The mouse over the indexing tag with files indexing status showing

Result: You now have an indexed RAG pipeline that generates answers based on your data. Your pipeline status is Indexed, and it's ready for use. Your pipeline is at the development service level. We recommend you test it before setting it to the production service level.

The pipelines page with the generative QA pipeline showing as indexed

Work on Your Prompt

The default prompt makes the model act as a matter-of-fact technical expert, while we want our system to be friendly and empathetic. Let's experiment with different prompts to achieve this effect.

  1. In the navigation, click Prompt Explorer.

  2. Choose the RAG pipeline (1). Your current prompt is shown in the Prompt Editor pane (2).

    Prompt Explorer with the RAG pipeline selected and the prompt text showing in the Prompt Editor.
  3. In the Type your query here field, ask some questions about treating medical conditions, such as: "I had my wisdom tooth removed, but my gum hurts and is swollen. What should I do?"

The Prompt Explorer window with the RAG pipeline selected and marked with a red number 1. Below the pipeline selection, there's a welcome page. At the bottom of the page, there's the Prompt Editor with the prompt text displayed, and below the Prompt Editor, the question about the wisdom tooth, marked with a red number 2.

The model generates an answer and provides its sources: the documents the answer is based on.

  4. Now, let's try a different prompt. In Prompt Editor, change the prompt to adjust the tone of the answer: replace "You are a technical expert." with "You are a friendly, empathetic nurse." and add "Your answers are friendly, clear, and conversational.", as in the prompt below:
You are a friendly, empathetic nurse.
You answer questions truthfully based on provided documents.
Your answers are friendly, clear, and conversational. 
For each document check whether it is related to the question.
Only use documents that are related to the question to answer it.
Ignore documents that are not related to the question.
If the answer exists in several documents, summarize them.
Only answer based on the documents provided. Don't make things up.
If the documents can't answer the question or you are unsure, say: 'The answer can't be found in the text'.
These are the documents:
{% for document in documents %}
Document[{{ loop.index }}]:
{{ document.content }}
{% endfor %}
Question: {{ question }}
Answer:
  5. Try the same query or experiment with other queries related to treating medical conditions. The answers should now be in a more empathetic and friendly tone. Here are some example questions you can ask:
    "I have been diagnosed with a wheat allergy, what do I do now?"
    "How do you treat swollen wrists?"
    "What is meningitis?"
  6. Insert the updated prompt into your RAG pipeline: click Update in Prompt Editor and confirm your action. This replaces the current prompt.

Result: You have tweaked your prompt to generate more friendly and conversational answers. You updated your pipeline with this prompt.
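The prompt you just edited is a Jinja template: the {% for %} loop expands into one Document[n] block per retrieved document. A quick way to sanity-check prompt edits locally is to render the template with the jinja2 package against stand-in documents. The Document class below is a hypothetical stand-in for the pipeline's document objects; only its content attribute mirrors what the pipeline provides:

```python
from dataclasses import dataclass
from jinja2 import Template

@dataclass
class Document:
    """Hypothetical stand-in for the pipeline's document objects."""
    content: str

PROMPT = """You are a friendly, empathetic nurse.
You answer questions truthfully based on provided documents.
These are the documents:
{% for document in documents %}
Document[{{ loop.index }}]:
{{ document.content }}
{% endfor %}
Question: {{ question }}
Answer:"""

# Render the template the same way the pipeline would fill it in.
rendered = Template(PROMPT).render(
    documents=[
        Document("Rinse gently with warm salt water."),
        Document("See a dentist if swelling persists for more than a few days."),
    ],
    question="I had my wisdom tooth removed, but my gum hurts and is swollen. What should I do?",
)
```

Printing `rendered` shows exactly the text the model receives, which makes it easy to spot a missing variable or a malformed loop before you update the pipeline.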


Test the Pipeline

Time to see your pipeline in action!

  1. In the navigation, click Playground and make sure the RAG pipeline is selected.
  2. Try asking something like "my eyes hurt, what should I do?".
  3. Once the answer is generated, check the sources to see if the answers are actually in the documents.
The answer to the query "my eyes hurt, what should I do?" with each sentence underlined in either green or red.

You can also check the prompt by clicking the More Actions button next to the search result.
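Outside the Playground, you can also query the deployed pipeline programmatically. The sketch below builds the search request as a dry run and extracts source file names from a result; the /search path and the response shape are assumptions modeled on the deepset Cloud API, so check the current API reference before relying on either:

```python
import json

API_BASE = "https://api.cloud.deepset.ai/api/v1"

def search_request(workspace: str, pipeline: str, query: str) -> dict:
    """Build (but don't send) a search request for the deployed pipeline."""
    return {
        "method": "POST",
        # Assumed endpoint -- confirm against the current deepset Cloud API reference.
        "url": f"{API_BASE}/workspaces/{workspace}/pipelines/{pipeline}/search",
        "headers": {
            "Authorization": "Bearer <YOUR_DEEPSET_CLOUD_API_KEY>",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"queries": [query]}),
    }

def answer_sources(result: dict) -> list[str]:
    """Extract source file names from one search result (assumed response shape)."""
    return [doc["name"] for doc in result.get("documents", [])]

req = search_request("RAG", "RAG", "my eyes hurt, what should I do?")
```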

Congratulations! You have built a generative question answering system that can answer questions about treating various diseases in a friendly and conversational tone. Your system also shows references to documents it based its answers on.

What To Do Next

Once you have a RAG pipeline, you can monitor its groundedness score to see how reliable it is and whether it sticks to the documents.

Your pipeline is now a development pipeline. Once it's ready for production, change its service level to Production. You can do this on the Pipeline Details page shown after clicking a pipeline name. To learn more, see Pipeline Service Levels.