deepset AI Platform 2.0
Welcome to deepset AI Platform v2.0! We've introduced many new features designed to enhance your experience with greater flexibility and new possibilities.
What Changed
The new release comes with significant changes to how pipelines and components work, bringing you more freedom and flexibility when building your apps. With this update, you'll find:
- New components that replace pipeline nodes to expand your capabilities.
- Enhanced connections within pipelines for streamlined workflows, including loops and multiple branches.
- Clear validation and meaningful error messages.
Pipelines
Pipelines in version 2.0 are much more flexible. They can branch out into multiple paths, loop back to a component, and retry steps, making it possible to model complex AI tasks. Components remain the building blocks of pipelines. Read more in Pipelines.
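Under the hood, connections in 2.0 are declared explicitly between named component sockets. Below is a minimal sketch of that model using the open-source Haystack 2.x API that these components come from; the in-memory document store and retriever, the OpenAI generator, and the `OPENAI_API_KEY` environment variable are illustrative stand-ins, not the platform setup.

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# A tiny document store to retrieve from (illustrative content).
document_store = InMemoryDocumentStore()
document_store.write_documents([Document(content="Pipelines in 2.0 support branches and loops.")])

template = """Answer the question using the documents below.
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Question: {{ query }}
"""

pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipeline.add_component("prompt_builder", PromptBuilder(template=template))
pipeline.add_component("generator", OpenAIGenerator(model="gpt-4o-mini"))

# Connections are explicit: name the output socket and the input socket it feeds.
pipeline.connect("retriever.documents", "prompt_builder.documents")
pipeline.connect("prompt_builder.prompt", "generator.prompt")

question = "What do 2.0 pipelines support?"
result = pipeline.run({
    "retriever": {"query": question},
    "prompt_builder": {"query": question},
})
print(result["generator"]["replies"][0])
```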
Version 2.0 also introduces changes to the YAML configuration file, including new formatting, component connections, and explicit definitions of pipeline inputs and outputs.
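For a rough picture of the new serialization, the sketch below (again using the open-source Haystack 2.x API, with a placeholder one-component pipeline) writes a pipeline definition to YAML and loads it back; the platform's YAML files additionally declare the explicit inputs and outputs mentioned above.

```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder

# Placeholder pipeline, just enough to produce a YAML definition.
pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(template="Summarize: {{ text }}"))

yaml_definition = pipeline.dumps()          # YAML with components and connections
restored = Pipeline.loads(yaml_definition)  # rebuild the pipeline from the YAML
print(yaml_definition)
```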
Nodes
Pipeline nodes in deepset AI Platform 2.0 are called components. Components are the core elements of pipelines. In new pipelines, you explicitly define how components connect and which outputs they send or receive. For details on how specific components work, see Pipeline Components.
This table summarizes the components that replace nodes from the previous version:
| Version 1.0 | Description | Version 2.0 |
|---|---|---|
AnswerDeduplication | Used in extractive QA pipelines to ensure there are no overlapping answers from the same document. | Not available; this functionality is built into ExtractiveReader, so no additional component is needed.
Converters | Convert files to documents. There are various converters available for different file types. | Converters |
DeepsetCloudDocumentStore | A database for storing documents the pipeline can access at search time. | OpenSearchDocumentStore |
EntityExtractor | Extracts predefined entities out of the text. | NamedEntityExtractor |
FileDownloader | Downloads source files and stores them locally. | DeepsetFileDownloader
FileTypeClassifier | Routes files to appropriate pipeline branches based on their type. | FileTypeRouter
JoinAnswers | Joins answers from different components into a single list of answers. | AnswerJoiner |
JoinDocuments | Joins documents from different components into a single list. | DocumentJoiner |
PreProcessor | Cleans and splits documents before writing them into the document store. | PreProcessors: v2.0 adds a number of preprocessors, each performing a separate task, like cleaning or splitting documents. |
PromptNode | Uses an LLM of your choice in the pipeline. | PromptBuilder, ChatPromptBuilder, and Generators. v2.0 introduces PromptBuilder and ChatPromptBuilder, which render the prompt you then send to a generator.
QueryClassifier | Categorizes queries into keyword-based and natural language queries. | TransformersZeroShotTextRouter and TransformersTextRouter |
RetrievalScoreAdjuster | Adjusts document scores assigned by a retriever or a ranker. | This is now handled by TransformersSimilarityRanker through the calibration_factor and score_threshold parameters (see the sketch after this table).
Ranker | Ranks documents based on specific criteria, such as their similarity to the query. | Rankers |
Reader | Locates and highlights answers in documents. | ExtractiveReader |
ReferencePredictor | Predicts references for the generated answer. | ReferencePredictor and LLM-generated references. |
Retriever | Retrieves documents from the document store based on their relevance to the query. | Retrievers |
Shaper | Modifies the input and output types. | OutputAdapter |
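For example, the score adjustment previously done by RetrievalScoreAdjuster now lives on the ranker itself. A minimal sketch, assuming the open-source TransformersSimilarityRanker API; the model name, threshold value, and documents are illustrative:

```python
from haystack import Document
from haystack.components.rankers import TransformersSimilarityRanker

# With scale_score enabled, the raw similarity score is scaled by
# calibration_factor and passed through a sigmoid; score_threshold then
# drops documents that score below it.
ranker = TransformersSimilarityRanker(
    model="cross-encoder/ms-marco-MiniLM-L-6-v2",
    scale_score=True,
    calibration_factor=1.0,
    score_threshold=0.5,
)
ranker.warm_up()  # loads the cross-encoder model

result = ranker.run(
    query="How do components connect in 2.0?",
    documents=[
        Document(content="Components connect through explicitly declared inputs and outputs."),
        Document(content="Unrelated text about something else."),
    ],
)
print([(doc.score, doc.content) for doc in result["documents"]])
```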
Filters
Version 2.0 changes the filter syntax to a more Python-oriented approach, with logical operators defined explicitly. The filter syntax used in v1.0 continues to be supported.
For details on how to construct filters in v2.0, see Filtering Logic.
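As a rough sketch of that style, a v2.0 filter is a nested dictionary of comparison conditions joined by explicit logical operators (the field names and values below are made up):

```python
# Hypothetical filter: documents in the "news" category that are either
# from 2023 onward or come from selected sources.
filters = {
    "operator": "AND",
    "conditions": [
        {"field": "meta.category", "operator": "==", "value": "news"},
        {
            "operator": "OR",
            "conditions": [
                {"field": "meta.year", "operator": ">=", "value": 2023},
                {"field": "meta.source", "operator": "in", "value": ["docs", "blog"]},
            ],
        },
    ],
}

# The dictionary can then be passed as the `filters` argument of a retriever
# or a document store's filtering method.
```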