Tutorial: Building a Robust RAG System
Build a retrieval-augmented generation (RAG) system that runs on your own data and generates answers in a friendly, conversational tone. Learn how to test different prompts and save them for future use.
- Level: Intermediate
- Time to complete: 15 minutes
- Prerequisites:
- You must be an Admin to complete this tutorial.
- You must have an API key from an active OpenAI account, because this pipeline uses OpenAI's gpt-3.5-turbo model.
- Goal: After completing this tutorial, you will have built a RAG system that can answer questions about treating various diseases based on the documents from Mayo Clinic. This system will run on the data you provide to it to minimize the possibility of hallucinations.
- Keywords: PromptNode, large language models, retrieval augmented generation, RAG, gpt-3.5-turbo, Prompt Studio
Create a Workspace
We need a deepset Cloud workspace to store our files and the generative pipeline.
- Log in to deepset Cloud.
- In the upper left corner, click the name of the workspace, type RAG as the workspace name, and click Create.
Result: You have created a workspace called RAG, where you'll upload the Mayo Clinic files.
Upload Files to Your Workspace
- First, download the mayoclinic.zip file and unpack it on your computer. (You can also use your own files.)
- In deepset Cloud, make sure you're in the RAG workspace, and go to Files in the navigation.
- Click Upload Files.
- Drop the files you unpacked in step 1 into the Upload Files window and click Upload.
- Wait until the upload finishes. It may take a while until the files are processed and visible in deepset Cloud.
You should have 1096 files in your workspace.
Result: Your files are in the RAG workspace, and you can see them on the Files page.
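If you prefer to script the upload instead of dragging files into the UI, deepset Cloud also exposes a REST API. The sketch below builds a multipart upload request with Python's standard library; the endpoint path (`/workspaces/{workspace}/files`), the `file` form field, and the Bearer-token header are assumptions based on the deepset Cloud API docs, so verify them against the current API reference before relying on this:

```python
# Sketch: upload one file to a deepset Cloud workspace via the REST API.
# Endpoint path, form field name, and auth header are ASSUMPTIONS --
# check the deepset Cloud API reference for the current request shape.
import os
import urllib.request
import uuid

API_BASE = "https://api.cloud.deepset.ai/api/v1"
WORKSPACE = "RAG"

def build_upload_request(file_name: str, file_bytes: bytes, api_key: str) -> urllib.request.Request:
    """Build a multipart/form-data POST request for a single file."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{file_name}"\r\n'
        "Content-Type: text/plain\r\n\r\n"
    ).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{API_BASE}/workspaces/{WORKSPACE}/files",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )

req = build_upload_request(
    "example.txt",
    b"Sample Mayo Clinic text.",
    api_key=os.environ.get("DEEPSET_API_KEY", "dummy"),
)
print(req.full_url)

# Only send the request when a real API key is configured:
if os.environ.get("DEEPSET_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```

You would loop this over the unpacked mayoclinic files; for a thousand-file upload, the UI or the deepset SDK is usually the more practical route.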
Connect Your OpenAI Account
Connect your OpenAI account so you can use OpenAI models without passing the API key in the pipeline itself.
- Click your initials in the top right corner and choose Connections.
- Next to OpenAI, click Connect, paste your OpenAI API key, and click Submit.
Result: You're connected to your OpenAI account and can use OpenAI models in your pipelines.
Create a Draft Pipeline
Let's create a pipeline that will be a starting point for the generative question answering app:
- In the navigation, go to Pipeline Templates.
- Choose Basic QA, find Retrieval Augmented Generation Question Answering GPT-3.5, and click Use Template.
- Type RAG as the pipeline name and click Create Pipeline. You're redirected to the Pipelines page. You can find your pipeline in the All tab.
  Info: Newly created, undeployed pipelines are automatically classified as drafts, so you can also find your pipeline in the Drafts tab. Once you start deploying it, it becomes a Development pipeline and moves from the Drafts tab to the Development tab.
- Click Deploy next to your pipeline and wait until the pipeline is deployed and indexed.
Result: You now have an indexed RAG pipeline that generates answers based on your data. Your pipeline status is Indexed, and it's ready for use. Your pipeline is at the development service level. We recommend you test it before setting it to the production service level.
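Instead of watching the UI, you can also poll the pipeline's status programmatically. The sketch below builds the request with the standard library; the endpoint path (`/workspaces/{workspace}/pipelines/{pipeline}`) and the `status` response field are assumptions, so confirm them in the deepset Cloud API reference:

```python
# Sketch: check a pipeline's deployment/indexing status via the REST API.
# The endpoint path and the "status" field are ASSUMPTIONS -- verify them
# against the deepset Cloud API reference.
import json
import os
import urllib.request

API_BASE = "https://api.cloud.deepset.ai/api/v1"

def pipeline_status_request(workspace: str, pipeline: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the pipeline's details."""
    url = f"{API_BASE}/workspaces/{workspace}/pipelines/{pipeline}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = pipeline_status_request("RAG", "RAG", os.environ.get("DEEPSET_API_KEY", "dummy"))
print(req.full_url)

# Only call the API when a real key is configured:
if os.environ.get("DEEPSET_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        info = json.load(resp)
        print(info.get("status"))  # expected to report something like "DEPLOYED" once indexing completes
```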
Test Your Prompt
The default prompt makes the model act as a matter-of-fact technical expert, while we want our system to be friendly and empathetic. Let's experiment with different prompts to achieve this effect.
- In the navigation, click Prompt Studio.
- Choose the RAG pipeline. Your current prompt is showing in the Prompt Editor pane.
- In the "Type your query here" placeholder, try asking some questions related to treating medical conditions, for example: "I had my wisdom tooth removed but my gum hurts and is swollen. What should I do?"
  The model generates an answer and provides its sources, which are the documents it's based on.
- Now, let's try a different prompt. In Prompt Editor, click the Menu button and choose deepset. You can see all prompts curated by deepset.
- Scroll down through the templates, choose deepset/question-answering, and click Use Prompt. The prompt is now showing in Prompt Editor.
- Submit the query from step 3. You can now compare the two answers to check which prompt performs better.
- To return to the original prompt, reload the page and choose the RAG pipeline again.
- In Prompt Editor, change the prompt to adjust the tone of the answer. Replace "You are a technical expert." with "You are a friendly nurse." and add "Your answers are friendly, clear, and conversational.", like in the prompt below:
```
You are a friendly nurse.\
You answer questions truthfully based on provided documents. \
Your answers are friendly, clear, and conversational. \
For each document check whether it is related to the question. \
Only use documents that are related to the question to answer it. \
Ignore documents that are not related to the question. \
If the answer exists in several documents, summarize them. \
Only answer based on the documents provided. Don't make things up. \
Always use references in the form [NUMBER OF DOCUMENT] when using information from a document. e.g. [3], for Document[3]. \
The reference must only refer to the number that comes in square brackets after passage. \
Otherwise, do not use brackets in your answer and reference ONLY the number of the passage without mentioning the word passage. \
If the documents can't answer the question or you are unsure say: 'I'm sorry I don't know that'. \
{new_line}\
These are the documents:\
{join(documents, delimiter=new_line, pattern=new_line+'Document[$idx]:'+new_line+'$content')}\
{new_line}\
Question: {query}\
{new_line}\
Answer:{new_line}
```
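To see what the prompt's `join(documents, ...)` expression produces at query time, here's a small local sketch. It's plain Python, not deepset code, and `render_documents` is a hypothetical helper: it just mimics the pattern by rendering each retrieved document as `Document[idx]:` followed by its content, joined with newlines:

```python
# Local sketch (hypothetical helper, not deepset code): mimic how the
# prompt's join() pattern renders retrieved documents into prompt text.

def render_documents(documents, pattern="\nDocument[{idx}]:\n{content}", delimiter="\n"):
    """Render each document with its 1-based index, then join the results."""
    return delimiter.join(
        pattern.format(idx=i, content=doc) for i, doc in enumerate(documents, start=1)
    )

docs = [
    "Swelling after a tooth extraction usually peaks within 48 hours.",
    "Rinse gently with warm salt water starting 24 hours after surgery.",
]
prompt_documents = render_documents(docs)
print(prompt_documents)
```

The numbered `Document[1]`, `Document[2]` labels are what lets the model cite sources as `[1]`, `[2]` in its answer, per the referencing instructions in the prompt.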
- Try the same query or experiment with other queries related to treating medical conditions. The answers should now be in a more empathetic and friendly tone. Here are some example questions you can ask:
  - "I have been diagnosed with a wheat allergy, what do I do now?"
  - "How do you treat swollen wrists?"
  - "What is meningitis?"
- Update the RAG pipeline with the new prompt. Click Update in Prompt Editor. This replaces the current prompt.
- Save the prompt as a template:
  - In Prompt Editor, click Prompt Templates. You land on the Custom tab of the Prompt Templates window.
  - Click Create Custom Prompt.
  - Paste the copied prompt in the text field, type friendly_tone as the prompt name, and save your prompt. You'll be able to reuse it in the future.
Result: You have tweaked your prompt to generate more friendly and conversational answers. You updated your pipeline with this prompt. You then saved this prompt as a template and can reuse it in other pipelines.
Test the Pipeline
Time to see your pipeline in action!
- In the navigation, click Playground and make sure the RAG pipeline is selected.
- Try asking something like "my eyes hurt, what should I do?".
- Once the answer is generated, check the sources to see if the answers are actually in the documents.
You can also check the prompt by clicking the More Actions button next to the search result.
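The Playground steps above can also be done programmatically. The sketch below builds a query request with the standard library; the `/search` endpoint and the `{"queries": [...]}` payload are assumptions based on the deepset Cloud API docs, so check the current API reference before using this in earnest:

```python
# Sketch: query the deployed RAG pipeline via the REST API. The /search
# endpoint and the {"queries": [...]} payload shape are ASSUMPTIONS --
# verify them against the deepset Cloud API reference.
import json
import os
import urllib.request

API_BASE = "https://api.cloud.deepset.ai/api/v1"

def build_search_request(workspace: str, pipeline: str, query: str, api_key: str) -> urllib.request.Request:
    """Build a POST request that runs one query through the pipeline."""
    payload = json.dumps({"queries": [query]}).encode()
    return urllib.request.Request(
        f"{API_BASE}/workspaces/{workspace}/pipelines/{pipeline}/search",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_search_request(
    "RAG", "RAG", "My eyes hurt, what should I do?",
    os.environ.get("DEEPSET_API_KEY", "dummy"),
)
print(req.full_url)

# Only call the API when a real key is configured:
if os.environ.get("DEEPSET_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
        # The response should include the generated answers along with the
        # documents they were based on, so you can check the sources.
        print(result)
```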
Congratulations! You have built a generative question answering system that can answer questions about treating various diseases in a friendly and conversational tone. Your system also shows the source documents for each answer, so you can check which parts, if any, are hallucinations.
What To Do Next
Once you have a RAG pipeline, you can monitor its groundedness score to see how reliable it is and if it sticks to the documents.
Your pipeline is now a development pipeline. Once it's ready for production, change its service level to Production. You can do this on the Pipeline Details page shown after clicking a pipeline name. To learn more, see Pipeline Service Levels.