Create a Pipeline in Pipeline Builder

Use an intuitive drag-and-drop interface to build your pipelines. Easily switch between visual and code representations.

About Pipeline Builder

Pipeline Builder is an easy way to build and visualize your pipelines. You drag components from the component library and drop them onto a canvas, where you customize their parameters and define connections between them. Pipeline Builder offers guidance on component compatibility, and you can switch to the YAML view at any time; everything you do in Pipeline Builder is synchronized with the pipeline's YAML configuration.

Using Pipeline Builder

This image shows how to access the basic functionalities in Pipeline Builder. The numbers in the list below correspond to the numbers in the image.

  1. Component library. Expand a component group and drag a selected component onto the canvas to add it to your pipeline.

  2. A component card. Click the component name to change it.

  3. Component connections. Click an input connection on one component and an output connection on another to link them. Hover over a connection point next to an input or output to see a list of popular and compatible components you can connect.

    Connection suggestions modal showing popular and compatible connections
  4. Click a component card to open a menu where you can delete or duplicate the component, or open its documentation.

  5. Export your pipeline as a Python or YAML file that you can save on your computer.

  6. Switch to the YAML view.

Considerations for Building Pipelines

There are a few things you should know when building pipelines in Pipeline Builder:

  • Pipeline start: Your pipeline must start with an input component. Query pipelines always take Query and, optionally, Filters as the first components.
  • Pipeline end: Query pipelines end with the Output component, connected to a component that passes answers, and often also documents, to it.
  • Complex parameters: Some components take parameters that are not Python primitives. These parameters are configured as YAML.
    For example, PromptBuilder's template and ConditionalRouter's routes use Jinja2 templates. These parameter configurations can affect the component's inputs and outputs, depending on the variables you add to the template. For instance, if you add Query and Documents as variables in PromptBuilder's template, they're listed as required inputs. Otherwise, they aren't.
    For configuration examples, check the component's documentation in the Pipeline Components section, or see the sketch after this list.
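
For illustration, this is a minimal sketch of what a PromptBuilder configuration could look like in the YAML view. The component name (prompt_builder) and the template text are placeholders, not required values:

    components:
      prompt_builder:
        type: haystack.components.builders.prompt_builder.PromptBuilder
        init_parameters:
          template: |
            Answer the question using only the documents below.
            Documents:
            {% for document in documents %}
            {{ document.content }}
            {% endfor %}
            Question: {{ query }}

Because the template references the variables query and documents, Pipeline Builder lists them as the component's required inputs; if you remove a variable from the template, the corresponding input goes away.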

Prerequisites

  • To learn how pipelines and components work in deepset, see Pipeline Components and Pipelines.
  • To use a hosted model, Connect to Model Providers first so that you don't have to pass the API key within the pipeline. For Hugging Face, this is only required for private models. Once deepset AI Platform is connected to a model provider, just pass the model name in the model parameter of the component that uses it in the pipeline. deepset AI Platform will download and load the model. For more information, see Language Models in deepset. For a configuration sketch, see the example after this list.
  • If your pipeline will query files from a deepset workspace, create and enable an index. For instructions, see Create an Index. An index prepares your files for search, and you can reuse indexes across your query pipelines.
    To learn about indexes, see Indexes.
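
As an example of the model parameter mentioned above, and assuming a connection to OpenAI is already set up, a generator component might only need the model name. This is a hedged sketch; the component name, generator type, and model shown here are examples you'd replace with your own:

    components:
      generator:
        type: haystack.components.generators.openai.OpenAIGenerator
        init_parameters:
          model: gpt-4o
          # No api_key here: the key comes from the provider connection
          # configured in Connect to Model Providers.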

Create a Pipeline From an Empty File

  1. Log in to deepset AI Platform and go to Pipeline Templates.
  2. In the top right corner, click Create empty pipeline.
  3. Give your pipeline a name and click Create Pipeline.
    You're redirected to Pipeline Builder.
  4. Add the inputs for your pipeline. Query pipelines must start with the Query component. Optionally, you can also add Filters.
  5. Add components from the component library and define their connections.
  6. To give your pipeline access to files from a deepset workspace, add Retrievers with a matching Document Store. For example, to use OpenSearchDocumentStore, add OpenSearchEmbeddingRetriever or OpenSearchBM25Retriever to your pipeline and connect them to the OpenSearchDocumentStore. Choose the index for the document store. For a YAML sketch of this setup, see the example after this list.
  7. Add the Output component as the last component in your pipeline and connect it to the component that generates answers (in LLM-based pipelines, this is DeepsetAnswerBuilder). Optionally, also connect a documents output to it if you want documents included in the pipeline's output.
  8. Save your pipeline.
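
To illustrate steps 6 and 7, here is a rough sketch of how a retriever, its document store, and the pipeline output could appear in the YAML view. The component names, the index name, and the exact type paths are illustrative assumptions; the YAML that Pipeline Builder generates for you may differ:

    components:
      bm25_retriever:
        type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
        init_parameters:
          document_store:
            type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
            init_parameters:
              index: my-index        # hypothetical index name; pick it on the document store card
          top_k: 10

    connections:
      - sender: bm25_retriever.documents
        receiver: prompt_builder.documents   # the prompt builder feeding your answer generator

    inputs:
      query:
        - bm25_retriever.query
        - prompt_builder.query

    outputs:
      answers: answer_builder.answers        # for example, a DeepsetAnswerBuilder named answer_builder
      documents: bm25_retriever.documents    # optional: include documents in the pipeline's output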

Create a Pipeline From a Template

  1. Log in to deepset AI Platform and go to Pipeline Templates.
    There are templates available for various tasks. They work out of the box, or you can use them as a starting point for your own pipeline.

  2. Find a template that best matches your use case, hover over it, and click Use Template.

    The template selection process: first, click a template group, then click Use Template on the template card.
  3. Give your pipeline a name and click Create Pipeline. You're redirected to Pipeline Builder, where you can view and edit your pipeline. Make sure you choose an index for your pipeline on the OpenSearchDocumentStore card.

  4. Depending on what you want to do:

    1. To test your pipeline, deploy it first: click Deploy in the upper right corner, wait until it's indexed, and then test it in Playground.
    2. To edit your pipeline, see Step 4 in Create a Pipeline From an Empty File.

What To Do Next

  • To use your pipeline, deploy it. Click Deploy in the top right corner of Pipeline Builder.
  • To test your pipeline, wait until it's indexed and then go to Playground. Make sure your pipeline is selected, and type your query.
  • To view pipeline details, such as statistics, feedback, or logs, click the pipeline name. This opens the Pipeline Details page.
  • To let others test your pipeline, share your pipeline prototype.