
DocumentSplitter

Splits long documents into smaller chunks.

Basic Information

  • Type: haystack.components.preprocessors.document_splitter.DocumentSplitter

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `documents` | `List[Document]` | | The documents to split. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `documents` | `List[Document]` | | A list of documents with the split texts. Each document includes: a `source_id` metadata field to track the original document, a `page_number` metadata field to track the original page number, and all other metadata copied from the original document. |

Overview

Work in Progress

Bear with us while we work on adding pipeline examples and the most common component connections.

DocumentSplitter splits long documents into smaller chunks. This is a common preprocessing step during indexing: it helps Embedders create meaningful semantic representations and prevents exceeding language model context limits.
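A minimal sketch of using the component on its own in Python, assuming Haystack 2.x (the `split_length` and `split_overlap` values are illustrative, not recommendations):

```python
from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

# Split by words: at most 200 words per chunk, 20 words of overlap between chunks
splitter = DocumentSplitter(split_by="word", split_length=200, split_overlap=20)

doc = Document(content="A very long text that exceeds the embedding model's context limit ...")
result = splitter.run(documents=[doc])

for chunk in result["documents"]:
    # Each chunk carries source_id and page_number metadata pointing back to the original
    print(chunk.meta["source_id"], chunk.meta.get("page_number"), len(chunk.content))
```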


Usage Example

```yaml
components:
  DocumentSplitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 200
      split_overlap: 0
```
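In an indexing pipeline, the splitter typically sits between a converter and a writer. A hedged sketch of the corresponding connections (the `TextFileToDocument` and `DocumentWriter` component names are illustrative and not part of this page):

```yaml
connections:
  - sender: TextFileToDocument.documents
    receiver: DocumentSplitter.documents
  - sender: DocumentSplitter.documents
    receiver: DocumentWriter.documents
```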

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `split_by` | `Literal['function', 'page', 'passage', 'period', 'word', 'line', 'sentence']` | `word` | The unit for splitting your documents. Choose from: `word` to split by spaces (" "), `period` to split by periods ("."), `page` to split by form feed ("\f"), `passage` to split by double line breaks ("\n\n"), `line` to split by newlines ("\n"), `sentence` to split with the NLTK sentence tokenizer, or `function` to split with a custom `splitting_function`. |
| `split_length` | `int` | `200` | The maximum number of units in each split. |
| `split_overlap` | `int` | `0` | The number of overlapping units for each split. |
| `split_threshold` | `int` | `0` | The minimum number of units per split. If a split has fewer units than the threshold, it's attached to the previous split. |
| `splitting_function` | `Optional[Callable[[str], List[str]]]` | `None` | Required when `split_by` is set to `"function"`. A function that accepts a single `str` as input and returns a list of `str` as output, representing the chunks after splitting (see the sketch after this table). |
| `respect_sentence_boundary` | `bool` | `False` | Whether to respect sentence boundaries when splitting by `"word"`. If `True`, uses NLTK to detect sentence boundaries, ensuring splits occur only between sentences. |
| `language` | `Language` | `en` | The language for the NLTK tokenizer. The default is English (`"en"`). |
| `use_split_rules` | `bool` | `True` | Whether to apply additional split rules when splitting by sentence. |
| `extend_abbreviations` | `bool` | `True` | Whether to extend NLTK's `PunktTokenizer` abbreviations with a curated list, if available. This is currently supported for English (`"en"`) and German (`"de"`). |
| `skip_empty_documents` | `bool` | `True` | Whether to skip documents with empty content. Set to `False` when downstream components in the pipeline (like `LLMDocumentContentExtractor`) can extract text from non-textual documents. |
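When `split_by` is set to `"function"`, the `splitting_function` receives the full document text and returns the chunks. A minimal sketch, assuming Markdown-style input where each heading should start a new chunk (the `split_on_headings` helper is hypothetical):

```python
import re
from typing import List

from haystack.components.preprocessors import DocumentSplitter

def split_on_headings(text: str) -> List[str]:
    # Hypothetical splitter: start a new chunk at each Markdown heading,
    # keeping the heading together with the section that follows it.
    parts = re.split(r"(?=^#{1,6} )", text, flags=re.MULTILINE)
    return [part for part in parts if part.strip()]

splitter = DocumentSplitter(split_by="function", splitting_function=split_on_headings)
```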

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `documents` | `List[Document]` | | The documents to split. |