Create a Pipeline in Pipeline Builder
Use an intuitive drag-and-drop interface to build your pipelines. Easily switch between visual and code representations.
About Pipeline Builder
Pipeline Builder is an easy way to build and visualize your pipelines. In Pipeline Builder, you simply drag components from the components library and drop them onto a canvas, where you can customize their parameters and define connections. It helps you visualize your pipeline and offers guidance on component compatibility. You can also switch to the YAML view anytime; everything you do in Pipeline Builder is synchronized with the pipeline YAML configuration.
Pipeline Builder is available for 2.0 pipelines only. For 1.0 pipelines, you can use visualizer.
Using Pipeline Builder
This image shows how to access the basic functionalities in Pipeline Builder. The numbers in the list below correspond to the numbers in the image.
1. Component library. Expand a component group and drag a selected component onto the canvas to add it to your pipeline.
2. A component card. Click the component name to change it.
3. Click a component card to access the menu for deleting, duplicating, and accessing the component's documentation.
4. Draw lines joining components' outputs and inputs. Hover over the output connection point to view compatible components.
5. Export your pipeline as a Python or YAML file you can save on your computer.
6. Switch to the YAML view.
Considerations for Building Pipelines
There are a couple of things you should know when building in Pipeline Builder:
- Pipeline start: Your pipeline must start with an input component. There are three input components: `Query`, `Filters`, and `FilesInput`.
  - Indexing pipelines always take `FilesInput` as the first component. This is a mapping to your files in deepset Cloud.
  - Query pipelines always take `Query` and, optionally, `Filters` as the first components.
- Pipeline end:
  - Indexing pipelines usually end with the `DocumentWriter` component, which writes the processed documents into the document store where the query pipeline can access them.
  - Query pipelines finish with the `Output` component connected to a component that passes answers, and often also documents, to it.
- Complex parameters: Some components take parameters that are not Python primitives. These parameters are configured as YAML. For example, PromptBuilder's `template` or ConditionalRouter's `routes` use Jinja2 templates. These parameters' configurations can affect the component's inputs and outputs, depending on the variables you add to the template. For instance, if you add `Query` and `Documents` as variables in the PromptBuilder's `template`, they'll be listed as required inputs. Otherwise, they won't be. For configuration examples, check the component's documentation in the Pipeline Components section.
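As an illustration, a PromptBuilder whose `template` uses the `query` and `documents` Jinja2 variables might be configured like this in the YAML view (the prompt text and component name are just examples; in the YAML, the variables are referenced in lowercase):

```yaml
components:
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      # The Jinja2 variables used here ({{ query }}, {{ documents }})
      # become required inputs of the component.
      template: |
        Answer the question based only on the documents below.
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ query }}
        Answer:
```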
- Indexing and query pipelines: When creating a pipeline, you can see two tabs:
  - Indexing: Here, you're building your indexing pipeline, which defines how your files are preprocessed. Whenever you add a file, it's preprocessed by all deployed pipelines. Indexing pipelines are optional. You'll need them in most cases to prepare your files and write them into the document store, but if you're building a summarization pipeline where you pass the text as the query, or a pipeline that uses an external database, like Snowflake, you can skip the indexing pipeline.
  - Query: Here, you're building your query pipeline, which describes how the query is resolved.
Prerequisites
- To learn how pipelines and components work in deepset Cloud, see Pipeline Components and Pipelines.
- To use a hosted model, Connect to Model Providers first so that you don't have to pass the API key within the pipeline. (For Hugging Face, this is only required for private models.) Once deepset Cloud is connected to a model provider, just pass the model name in the `model` parameter of the component that uses it in the pipeline. deepset Cloud will download and load the model. For more information, see Language Models in deepset Cloud.
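For example, with an OpenAI connection in place, a generator component could be configured with just the model name; no `api_key` appears in the pipeline YAML (the component name and model are illustrative):

```yaml
components:
  generator:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      model: gpt-4o  # the provider connection supplies the API key
```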
Create a Pipeline From an Empty File
1. Log in to deepset Cloud and go to Pipeline Templates.
2. In the top right corner, click Create empty pipeline.
3. Give your pipeline a name and click Create Pipeline. You're redirected to Pipeline Builder with the canvas for the indexing pipeline open.
4. Build your indexing pipeline if you need one:
   - Open the `Inputs` component group and drag the `FilesInput` component onto the canvas. It represents the files your pipeline will process.
   - Choose `Preprocessors`, `Converters`, and any other components as needed.
   - Connect the components by dragging a line from one component's output to another component's input. The connections are immediately validated.
     Tip: Hover your mouse over the output connection icon to see compatible components.
   - For the query pipeline to be able to access your documents, add the `DocumentWriter` component as the last component in your pipeline. It writes the processed documents into the document store where the query pipeline can access them.
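Put together, the YAML view of a small indexing pipeline built this way might look roughly like the following sketch (component names, types, and parameters are illustrative, not required values):

```yaml
components:
  converter:
    type: haystack.components.converters.txt.TextFileToDocument
    init_parameters: {}
  splitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 250
  writer:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters: {}

connections:
  - sender: converter.documents
    receiver: splitter.documents
  - sender: splitter.documents
    receiver: writer.documents

# FilesInput maps your deepset Cloud files to the converter:
inputs:
  files:
    - "converter.sources"
```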
5. Switch to the Query tab:
   - Add the inputs for your pipeline. Query pipelines must start with the `Query` component. You can also optionally add `Filters`.
   - Add components from the components library and define their connections.
   - Add the `Output` component as the last component in your pipeline and connect it to the component generating answers (in LLM-based pipelines, this is `AnswerBuilder`). Optionally, connect the documents output to it if you want documents included in the pipeline's output.
6. Save your pipeline.
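For reference, a simple LLM-based query pipeline assembled this way might appear in the YAML view roughly as follows (all names, types, and parameters are illustrative):

```yaml
components:
  retriever:
    type: haystack_integrations.components.retrievers.opensearch.bm25_retriever.OpenSearchBM25Retriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters: {}
      top_k: 10
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |
        Answer based on these documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ query }}
  generator:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      model: gpt-4o
  answer_builder:
    type: haystack.components.builders.answer_builder.AnswerBuilder
    init_parameters: {}

connections:
  - sender: retriever.documents
    receiver: prompt_builder.documents
  - sender: prompt_builder.prompt
    receiver: generator.prompt
  - sender: generator.replies
    receiver: answer_builder.replies
  - sender: retriever.documents
    receiver: answer_builder.documents

# Query feeds the components that need it; Output exposes the answers:
inputs:
  query:
    - "retriever.query"
    - "prompt_builder.query"
    - "answer_builder.query"
  filters:
    - "retriever.filters"
outputs:
  answers: "answer_builder.answers"
```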
Create a Pipeline From a Template
1. Log in to deepset Cloud and go to Pipeline Templates. There are templates available for various tasks. They work out of the box, or you can use them as a starting point for your own pipeline.
2. Find the template that best matches your use case, hover over it, and click Use Template.
3. Give your pipeline a name and click Create Pipeline. You're redirected to Pipeline Builder, where you can view and edit your pipeline.
4. Depending on what you want to do:
   - To test your pipeline, deploy it first. Click Deploy in the upper right corner, wait until it's indexed, and then test your pipeline in Playground.
   - To edit your pipeline, see Step 5 in Create a Pipeline From an Empty File.
What To Do Next
- To use your pipeline, deploy it. Click Deploy in the top right corner of Pipeline Builder.
- To test your pipeline, wait until it's indexed and then go to Playground. Make sure your pipeline is selected, and type your query.
- To view pipeline details, such as statistics, feedback, or logs, click the pipeline name. This opens the Pipeline Details page.
- To let others test your pipeline, share your pipeline prototype.