A document store is a kind of database that stores text and metadata and then provides them to the Retriever at query time. Learn how it works.

Working with pipelines in different environments requires a DocumentStore that can be shared among them and is compatible with all retrievers. This is why we created the DeepsetCloudDocumentStore. It makes it possible to interact with Documents stored in deepset Cloud without having to index your data again.

When a pipeline is deployed, it indexes the files. This means it turns them into Documents, and then stores these Documents together with their metadata in the DocumentStore. These Documents are then used at query time. The Retriever fetches them from the DocumentStore.
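The flow above can be sketched in plain Python. This is an illustrative sketch only, not deepset Cloud's actual indexing code; the Document shape (text content plus a metadata dict) follows Haystack's Document, and the function and field names here are made up for the example.

```python
# Illustrative sketch only -- not deepset Cloud's actual indexing code.
# It mimics the flow described above: a file is turned into a Document
# (text content plus metadata) that a DocumentStore holds and a
# Retriever later fetches at query time.

def index_file(filename: str, text: str) -> dict:
    """Turn one file into a Document-like dict with content and metadata."""
    return {
        "content": text,
        "meta": {"name": filename, "characters": len(text)},
    }

document_store = []  # stand-in for a real DocumentStore
document_store.append(index_file("faq.txt", "deepset Cloud stores Documents."))

# At query time, a Retriever would fetch matching Documents from the store.
print(document_store[0]["meta"]["name"])  # prints "faq.txt"
```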


DeepsetCloudDocumentStore is designed to access data that's already stored in deepset Cloud. It's read-only and isn't intended for production-like scenarios. For those scenarios, use the API endpoints instead.

Basic Information

  • Pipeline type: Used in indexing pipelines
  • Position in the pipeline: After the Retriever

Usage Example

In most cases, you use DeepsetCloudDocumentStore within a pipeline:

  components:
    - name: DocumentStore
      type: DeepsetCloudDocumentStore
    # Other components (FileTypeClassifier, TextConverter, and so on)
    # are defined here as well.
  pipelines:
    - name: indexing
      nodes:
        - name: FileTypeClassifier
          inputs: [File]
        - name: TextConverter
          inputs: [FileTypeClassifier.output_1] # Ensures this converter gets TXT files
        - name: PDFConverter
          inputs: [FileTypeClassifier.output_2] # Ensures this converter gets PDF files
        - name: Preprocessor
          inputs: [TextConverter, PDFConverter]
        - name: Retriever
          inputs: [Preprocessor]
        - name: DocumentStore
          inputs: [Retriever]
In the Python SDK, initialize it like this:

import os
os.environ["DEEPSET_CLOUD_API_KEY"] = "<your_api_key>"

from haystack.document_stores import DeepsetCloudDocumentStore

document_store = DeepsetCloudDocumentStore(index="<name_of_your_pipeline>")



When you create DeepsetCloudDocumentStore using a pipeline YAML in the deepset Cloud pipeline editor, these parameters are ignored:

  • api_key
  • workspace
  • index
  • api_endpoint
  • label_index

In the Python SDK, all parameters are used.
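As a sketch of what that means for the YAML side: in the pipeline editor, you only set behavior parameters, because deepset Cloud fills in the connection settings for you. The parameter values below are illustrative examples, not required settings.

```yaml
components:
  - name: DocumentStore
    type: DeepsetCloudDocumentStore
    params:
      # Connection settings (api_key, workspace, index, api_endpoint,
      # label_index) would be ignored here -- deepset Cloud supplies them.
      similarity: cosine
      duplicate_documents: skip
```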

These are the arguments DeepsetCloudDocumentStore takes:

  • api_key (String): The secret value of the API key. This is the value that you copy in step 4 of Generate an API Key. If you don't specify it, it's read from the DEEPSET_CLOUD_API_KEY environment variable.
  • workspace (String, default: "default"): Specifies the deepset Cloud workspace you want to use.
  • index (String, default: None): The name of the pipeline to access within the deepset Cloud workspace. In deepset Cloud, indexes share their names with their respective pipelines.
  • duplicate_documents (String, default: "overwrite"): Specifies how to handle duplicate documents. Possible values:
      skip - Ignores duplicate documents.
      overwrite - Updates any existing documents with the same ID when adding documents.
      fail - Raises an error if the ID of a document being added already exists.
  • api_endpoint (String, default: None): Specifies the URL of the deepset Cloud API. The API endpoint is: <>. If you don't specify it, it's read from the DEEPSET_CLOUD_API_ENDPOINT environment variable.
  • similarity (String, default: "dot_product"): Specifies the similarity function used to compare document vectors. Possible values:
      dot_product - Use it if the embedding model was optimized for dot product similarity.
      cosine - Recommended if the embedding model was optimized for cosine similarity.
  • label_index (String, default: "default"): Specifies the name of the evaluation set uploaded to deepset Cloud. In deepset Cloud, label indexes share their names with their corresponding evaluation sets.
  • return_embedding (Boolean, default: False): Returns document embeddings.
  • embedding_dim (int, default: 768): Specifies the dimensionality of the embedding vector. You only need this parameter if you're using a vector-based Retriever, such as DensePassageRetriever or EmbeddingRetriever.
  • use_prefiltering (Boolean, default: False): Specifies when filters are applied during search. This is only relevant if you use EmbeddingRetriever. With EmbeddingRetriever, DeepsetCloudDocumentStore defaults to post-filtering when querying with filters, meaning the filters are applied after the documents are retrieved. You can switch to pre-filtering, where the filters are applied before retrieving the documents, at the cost of higher latency. For BM25Retriever, filters are always applied before the search.
  • search_fields (Union[str, list], default: "content"): The names of the fields BM25Retriever uses to find matches to the incoming query in the documents. For example: ["content", "title"].
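To make the two similarity options concrete, here's a minimal, self-contained sketch of how dot product and cosine similarity differ. This is plain Python for illustration only, not deepset Cloud's implementation; the takeaway is that dot product is sensitive to vector length, while cosine compares direction only.

```python
import math

def dot_product(a, b):
    """Unnormalized similarity: grows with vector magnitude."""
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Length-normalized similarity: depends only on direction."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot_product(a, b) / (norm_a * norm_b)

query = [1.0, 2.0, 2.0]
doc = [2.0, 4.0, 4.0]  # same direction as the query, twice the length

print(dot_product(query, doc))  # 18.0 -- doubles if the doc vector doubles
print(cosine(query, doc))       # 1.0  -- identical direction, length ignored
```

This is why the choice should match the embedding model: a model trained with one similarity function produces vectors whose magnitudes are only meaningful under that function.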

Related Links