Use Case: A Live QA System

This example shows how to create an indexing pipeline and a query pipeline for a live question-answering system. It describes the data you need, the users involved, and the pipelines themselves.

Description

A live question-answering (QA) system returns answers highlighted within passages of text. This makes it easy to spot the answer without reading through whole documents.

A live QA system is best for:

  • Users looking for Google-style answers for their natural language questions
  • Users who want to quickly verify their answers
  • Finding answers in large amounts of text data

For this type of search to work best, queries should be constrained to a specific topic, such as IT product documentation. Queries should use natural language rather than, for example, copied error messages.

Data

You can use any text data. For a fast prototype, your data should be restricted to one domain.

You can divide your data into underlying text data and an annotated question-answer set for evaluating your pipelines.
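For extractive QA, annotated evaluation sets are commonly stored in SQuAD-style JSON, where each question points to an answer span inside a context passage. The sketch below shows a minimal, entirely made-up example of this format; the document title, question, and answer are hypothetical, and you should check the deepset Cloud documentation for the exact format it expects.

```python
import json

# A minimal, hypothetical annotated question-answer set in SQuAD-style JSON.
# The title, context, question, and answer are invented for illustration.
eval_set = {
    "data": [
        {
            "title": "router-manual",
            "paragraphs": [
                {
                    "context": "To reset the router, hold the reset button for ten seconds.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "How do I reset the router?",
                            "answers": [
                                {
                                    "text": "hold the reset button for ten seconds",
                                    # character offset of the answer span in the context
                                    "answer_start": 21,
                                }
                            ],
                        }
                    ],
                }
            ],
        }
    ]
}

# Sanity check: the annotated span must match the context at the given offset.
paragraph = eval_set["data"][0]["paragraphs"][0]
answer = paragraph["qas"][0]["answers"][0]
assert paragraph["context"][answer["answer_start"]:].startswith(answer["text"])

print(json.dumps(eval_set, indent=2)[:60])
```

Keeping `answer_start` consistent with the context string matters: extractive QA evaluation compares predicted spans against these exact character offsets.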

Users

  • Data scientists: Design the QA system, create the pipelines, and supervise domain experts
  • Domain experts: Prepare annotated data
  • End users: Use the system and evaluate how useful it is for business, provide their feedback in the deepset Cloud UI

Pipelines

Here is an example of a pipeline definition file for this use case. It contains both the indexing and the query pipeline.

# If you need help with the YAML format, have a look at https://docs.cloud.deepset.ai/docs/create-a-pipeline-using-a-yaml-file.
# This is a friendly editor that helps you create your pipelines with autosuggestions. To use them, press control + space on your keyboard.
# Whenever you need to specify a model, this editor helps you out as well. Just type your Hugging Face organization and a forward slash (/) to see available models.


# This is the default question answering pipeline for English, with a good embedding-based Retriever and a small, fast Reader
version: '1.10.0'
name: "QA_en"

# This section defines the nodes that you want to use in your pipelines. Each node must have a name and a type. You can also set the node's parameters here.
# The name is up to you; you can give your component any friendly name. You then use these names when specifying the order of the components in the pipeline.
# Type is the class name of the component.
components:
  - name: DocumentStore
    type: DeepsetCloudDocumentStore #the only supported document store in deepset Cloud
  - name: Retriever #selects the most relevant documents from the document store and passes them on to the Reader
    type: EmbeddingRetriever #uses a Transformer model to encode the document and the query
    params:
      document_store: DocumentStore
      embedding_model: sentence-transformers/multi-qa-mpnet-base-dot-v1 #model optimized for semantic search
      model_format: sentence_transformers
      pooling_strategy: cls_token #specifies how the embeddings from the model should be combined
      top_k: 20 #the number of documents to return
  - name: Reader #the component that extracts answers from the 20 documents returned by the Retriever
    type: FARMReader #Transformer-based reader, specializes in extractive QA
    params:
      model_name_or_path: deepset/roberta-base-squad2 #an optimized variant of BERT, a strong all-round model
      context_window_size: 700 #the size (in characters) of the context window around the answer span
  - name: TextFileConverter #converts files to documents
    type: TextConverter
  - name: Preprocessor #splits documents into smaller ones, and cleans them up
    type: PreProcessor
    params:
      split_by: word #the unit by which you want to split your documents
      split_length: 250 #the maximum number of words in a document
      split_overlap: 30 #enables the sliding window approach
      split_respect_sentence_boundary: True #retains complete sentences in split documents
      language: en #used by NLTK to best detect the sentence boundaries for that language

pipelines: #Here you define the pipelines. For each component, specify its input.
  - name: query 
    nodes:
      - name: Retriever
        inputs: [Query]
      - name: Reader
        inputs: [Retriever]
  - name: indexing
    nodes:
      - name: TextFileConverter
        inputs: [File]
      - name: Preprocessor
        inputs: [TextFileConverter]
      - name: Retriever #We use the Retriever here to create embeddings
        inputs: [Preprocessor]
      - name: DocumentStore
        inputs: [Retriever]

If you want to learn more about the sentence-transformers/multi-qa-mpnet-base-dot-v1 model, see the Hugging Face documentation. If it doesn't work for your domain, you can use the BM25Retriever instead of the EmbeddingRetriever. BM25 works on word overlap between the query and the documents and may be a better choice for domains with specialized vocabulary.
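As a sketch of that swap, the Retriever node in the `components` section could look like the fragment below. This is a hypothetical variant, not part of the pipeline above; check the deepset Cloud YAML documentation for the supported parameters. Note that because BM25 needs no embeddings, the indexing pipeline would then connect the DocumentStore directly to the Preprocessor instead of routing documents through the Retriever.

```yaml
  - name: Retriever
    type: BM25Retriever #sparse, keyword-based retrieval; no embedding model needed
    params:
      document_store: DocumentStore
      top_k: 20 #the number of documents to return
```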

What To Do Next?

You can now demo your search system to your users. Invite users to your organization and have them test your pipelines. Have a look at the Guidelines for Onboarding Your Users to make sure that your demo is successful.
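Once a pipeline is deployed, users can also query it programmatically. The sketch below outlines a request to a deepset Cloud search endpoint; the endpoint path, payload shape, and the workspace name are assumptions for illustration, so verify them against the deepset Cloud API reference before relying on them.

```python
import json
import os
import urllib.request

# Assumed values for illustration: workspace name and API key handling
# may differ in your setup. PIPELINE matches the name in the YAML above.
API_KEY = os.environ.get("DEEPSET_CLOUD_API_KEY", "")
WORKSPACE = "default"  # hypothetical workspace name
PIPELINE = "QA_en"

url = (
    "https://api.cloud.deepset.ai/api/v1/"
    f"workspaces/{WORKSPACE}/pipelines/{PIPELINE}/search"
)
payload = {"queries": ["How do I reset the router?"]}

if API_KEY:  # only send the request when an API key is configured
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        print(json.load(response))  # answers with highlighted spans and scores
```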


Related Links