
LLMDocumentContentExtractor

Extract textual content from image-based documents using a vision-enabled LLM (Large Language Model).

Basic Information

  • Type: haystack.components.extractors.image.LLMDocumentContentExtractor
  • Components it can connect with:
    • Converters: LLMDocumentContentExtractor can receive documents from Converters in an index.
  • DocumentSplitter: LLMDocumentContentExtractor can send extracted documents to DocumentSplitter for further processing, as shown in the sketch below.
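
A minimal sketch of that connection, assuming an OpenAI API key is available and that OpenAIChatGenerator's default model supports vision input; the surrounding pipeline is illustrative, not a prescribed setup:

```python
from haystack import Pipeline
from haystack.components.extractors.image import LLMDocumentContentExtractor
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.preprocessors import DocumentSplitter

# Illustrative indexing fragment: image-based documents arrive from a converter,
# the extractor fills in their text, and the splitter chunks the results.
pipeline = Pipeline()
pipeline.add_component("extractor", LLMDocumentContentExtractor(chat_generator=OpenAIChatGenerator()))
pipeline.add_component("splitter", DocumentSplitter(split_by="word", split_length=200))
pipeline.connect("extractor.documents", "splitter.documents")
```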

Inputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | List of image-based documents to process. Each document must have a valid file path in its metadata. |

Outputs

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | Successfully processed documents, updated with extracted content. |
| failed_documents | List[Document] | | Documents that failed processing, annotated with failure metadata. |

Overview

Work in Progress

Bear with us while we're working on adding pipeline examples and the most common component connections.

LLMDocumentContentExtractor converts each input document into an image using the DocumentToImageContent component. It uses a prompt to instruct the LLM on how to extract content and then processes each image through a vision-capable ChatGenerator to extract structured textual content.

The prompt must not contain Jinja variables. It should only include instructions for the LLM. Image data and the prompt are passed together to the LLM as a chat message.

Documents for which the LLM fails to extract content are returned in a separate failed_documents list. These failed documents have a content_extraction_error entry in their metadata. You can use this metadata for debugging or for reprocessing the documents later.
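
For instance, a sketch of how you might inspect failures after a run, assuming an already initialized extractor and document list (the variable names and file paths are hypothetical):

```python
result = extractor.run(documents=docs)

# Documents the LLM could not process carry the error in their metadata.
for doc in result["failed_documents"]:
    print(doc.meta["file_path"], "->", doc.meta["content_extraction_error"])
```

From here you could, for example, rerun just the failed documents with raise_on_failure=True to surface the underlying exception.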

Usage Example

Initializing the Component

```yaml
components:
  LLMDocumentContentExtractor:
    type: haystack.components.extractors.image.llm_document_content_extractor.LLMDocumentContentExtractor
    init_parameters:
      chat_generator:
        type: haystack.components.generators.chat.openai.OpenAIChatGenerator
        init_parameters:
          model: gpt-4-vision-preview
      prompt: |
        You are part of an information extraction pipeline that extracts the content of image-based documents.

        Extract the content from the provided image.
        You need to extract the content exactly.
        Format everything as markdown.
        Make sure to retain the reading order of the document.

        **Visual Elements**
        Do not extract figures, drawings, maps, graphs or any other visual elements.
        Instead, add a caption that describes briefly what you see in the visual element.
        You must describe each visual element.
        If you only see a visual element without other content, you must describe this visual element.
        Enclose each image caption with [img-caption][/img-caption]

        **Tables**
        Make sure to format the table in markdown.
        Add a short caption below the table that describes the table's content.
        Enclose each table caption with [table-caption][/table-caption].
        The caption must be placed below the extracted table.

        **Forms**
        Reproduce checkbox selections with markdown.

        Go ahead and extract!

        Document:
      file_path_meta_field: file_path
      detail: high
      size: [1024, 768]
      raise_on_failure: false
      max_workers: 3
```
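
The same configuration expressed in Python, as a sketch; the model name mirrors the YAML above, and any vision-capable ChatGenerator should work in its place:

```python
from haystack.components.extractors.image import LLMDocumentContentExtractor
from haystack.components.generators.chat import OpenAIChatGenerator

extractor = LLMDocumentContentExtractor(
    chat_generator=OpenAIChatGenerator(model="gpt-4-vision-preview"),
    file_path_meta_field="file_path",  # metadata field holding the image/PDF path
    detail="high",                     # OpenAI-only image detail level
    size=(1024, 768),                  # resize to fit, preserving aspect ratio
    raise_on_failure=False,            # log failures instead of raising
    max_workers=3,                     # parallel LLM calls across documents
)
```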

Parameters

Init Parameters

These are the parameters you can configure in Pipeline Builder:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| chat_generator | ChatGenerator | | A ChatGenerator instance representing the LLM used to extract text. This generator must support vision-based input and return a plain text response. |
| prompt | str | DEFAULT_PROMPT_TEMPLATE | Instructional text provided to the LLM. It must not contain Jinja variables. The prompt should only contain instructions on how to extract the content of the image-based document. |
| file_path_meta_field | str | file_path | The metadata field in the Document that contains the file path to the image or PDF. |
| root_path | Optional[str] | None | The root directory path where document files are located. If provided, file paths in document metadata are resolved relative to this path. If None, file paths are treated as absolute paths. |
| detail | Optional[Literal] | None | Optional detail level of the image (only supported by OpenAI). Can be "auto", "high", or "low". This is passed to chat_generator when processing the images. |
| size | Optional[Tuple[int, int]] | None | If provided, resizes the image to fit within the specified dimensions (width, height) while maintaining the aspect ratio. This reduces file size, memory usage, and processing time. |
| raise_on_failure | bool | False | If True, exceptions from the LLM are raised. If False, failed documents are logged and returned. |
| max_workers | int | 3 | Maximum number of threads used to parallelize LLM calls across documents using a ThreadPoolExecutor. |

Run Method Parameters

These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| documents | List[Document] | | List of image-based documents to process. Each must have a valid file path in its metadata. |
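
As an illustration, run() takes only the document list and returns both output lists described above; the file path here is hypothetical:

```python
from haystack import Document

# Content can be empty; the extractor reads the file referenced in metadata.
docs = [Document(content="", meta={"file_path": "scans/invoice_p1.png"})]
result = extractor.run(documents=docs)

print(len(result["documents"]))         # successfully extracted documents
print(len(result["failed_documents"]))  # failures, with content_extraction_error in meta
```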