
Process large documents with OCR using SubworkflowAI and Gemini

By Jimleuk

Working with Large Documents In Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai offers one approach to keep you going.

> Subworkflow.ai is a third-party API service that helps AI developers work with documents too large for context windows and runtime memory.

Prerequisites

  1. You'll need a Subworkflow.ai API key to use the Subworkflow.ai service.
  2. Add the API key as a header auth credential. More details are in the official docs: https://docs.subworkflow.ai/category/api-reference
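In practice, the header-auth credential amounts to a single HTTP header sent with every request. A minimal sketch, assuming an x-api-key header name (the actual scheme is defined in Subworkflow.ai's docs):

```typescript
// Hypothetical header-auth header; "x-api-key" is an assumption --
// confirm the real header name in the Subworkflow.ai API reference.
const headers = { "x-api-key": process.env.SUBWORKFLOW_API_KEY ?? "" };
```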

How it Works

  1. Import your document into your n8n workflow
  2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP Request node. This endpoint accepts files up to 100 MB.
  3. The upload triggers an Extract job on the service's side, and the response is a "job" record you can use to track progress.
  4. Poll Subworkflow.ai's Jobs endpoint until the job is finished. In n8n, you can achieve this with an "IF" node that loops back on itself (see the sketch after this list).
  5. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems APIs to retrieve whatever you need to complete your AI task.
  6. In this example, all pages are retrieved and run through a multimodal LLM to parse them into Markdown, a common approach when data tables or graphics need to be parsed.
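Outside of n8n, the same upload, poll, and retrieve loop looks roughly like the sketch below. The base URL, the endpoint paths (/extract, /jobs/:id, /datasetitems), the x-api-key header, and the response field names are all assumptions made for illustration; the actual contract is in the Subworkflow.ai API reference.

```typescript
// Sketch of the Extract flow (Node 18+, global fetch/FormData).
// All endpoint paths, the auth header name, and response shapes are
// assumptions -- see https://docs.subworkflow.ai/category/api-reference.
const BASE = "https://api.subworkflow.ai/v1"; // assumed base URL
const headers = { "x-api-key": process.env.SUBWORKFLOW_API_KEY ?? "" };

async function extractDocument(file: Blob): Promise<unknown[]> {
  // 1. Upload the document (up to 100 MB); this starts an Extract job.
  const form = new FormData();
  form.append("file", file);
  const jobRes = await fetch(`${BASE}/extract`, { method: "POST", headers, body: form });
  const job = await jobRes.json(); // assumed shape: { id, status }

  // 2. Poll the Jobs endpoint until the job finishes (the n8n version
  //    does this with an "IF" node looping back on itself).
  let status: string = job.status;
  while (status !== "finished") {
    await new Promise((r) => setTimeout(r, 5000)); // pause between polls
    const poll = await fetch(`${BASE}/jobs/${job.id}`, { headers });
    status = (await poll.json()).status;
  }

  // 3. Retrieve the extracted pages via the DatasetItems endpoint.
  const itemsRes = await fetch(`${BASE}/datasetitems?jobId=${job.id}`, { headers });
  return (await itemsRes.json()).items; // assumed shape: { items: [...] }
}
```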

How to use

  • Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents, from 100 MB+ files to documents up to 5,000 pages.

Customising the workflow

  • Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages and reduce the load, as shown in the sketch below.
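A minimal sketch of such a request, reusing BASE and headers from the Extract sketch above. The from/to parameter names are assumptions, so check the DatasetItems reference for the real query parameters:

```typescript
// Hypothetical page-range retrieval: "from" and "to" are assumed
// parameter names -- see the DatasetItems API reference.
const datasetId = "<YOUR_DATASET_ID>";
const rangeRes = await fetch(`${BASE}/datasetitems?datasetId=${datasetId}&from=10&to=25`, { headers });
const pages = (await rangeRes.json()).items;
```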


Process Large Documents with OCR using SubworkflowAI and Gemini

This n8n workflow demonstrates a robust solution for processing large documents, extracting text via OCR, and then summarizing or analyzing the content using Google Gemini. It's designed to handle documents stored in Google Drive, making it ideal for automating data extraction and content analysis from scanned PDFs, images, or other non-text-searchable files.

What it does

This workflow automates the following steps:

  1. Manual Trigger: Initiates the workflow upon a manual click, allowing for on-demand processing.
  2. Google Drive File Listing: Retrieves a list of files from a specified Google Drive folder.
  3. Filter for Documents: Uses an "If" node to filter the retrieved files, likely based on file type (e.g., PDF, image) or a specific naming convention, to ensure only relevant documents are processed.
  4. Split Out Files: Processes each filtered document individually, ensuring that subsequent steps operate on one document at a time.
  5. SubworkflowAI OCR: Sends each document to a SubworkflowAI API endpoint for Optical Character Recognition (OCR). This step extracts all text content from the document, including from scanned images.
  6. Wait for OCR Completion: Pauses the workflow execution, waiting for the OCR process from SubworkflowAI to complete. This is crucial for handling asynchronous OCR operations.
  7. Google Gemini Analysis: Once the OCR text is available, it's fed into the Google Gemini node. This node can then perform various AI tasks, such as summarizing the document, extracting key information, answering questions about the content, or generating further insights.
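To make step 7 concrete, here is a rough sketch of a call equivalent to what the Google Gemini node performs, using the public generateContent REST endpoint. The model name and prompt are illustrative choices; in the workflow itself you configure these on the node.

```typescript
// Rough equivalent of the Gemini analysis step: send the OCR'd text to
// a Gemini model with a summarization prompt. Model name and prompt are
// illustrative, not fixed by the workflow.
async function summarizeOcrText(ocrText: string): Promise<string> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent" +
    `?key=${process.env.GEMINI_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{ parts: [{ text: `Summarize this document:\n\n${ocrText}` }] }],
    }),
  });
  const data = await res.json();
  // First candidate's text, or empty string if the response shape differs.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```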

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running n8n instance (cloud or self-hosted).
  • Google Drive Account: Configured with n8n credentials to access your documents.
  • SubworkflowAI Account/API Key: An account with SubworkflowAI (or a similar OCR service) and the necessary API key/credentials to perform OCR operations. The workflow uses an HTTP Request node, implying a direct API call to SubworkflowAI.
  • Google Gemini API Key: Access to the Google Gemini API, configured as a credential in n8n.

Setup/Usage

  1. Import the Workflow:
    • Download the provided JSON file for this workflow.
    • In your n8n instance, go to "Workflows" and click "New".
    • Click the "Import from JSON" button and paste the workflow JSON or upload the file.
  2. Configure Credentials:
    • Locate the "Google Drive" node and configure your Google Drive OAuth2 credentials.
    • Locate the "HTTP Request" node (for SubworkflowAI) and set up the necessary authentication (e.g., API Key in headers or query parameters) according to SubworkflowAI's documentation.
    • Locate the "Google Gemini" node and configure your Google Gemini API Key credential.
  3. Customize Nodes:
    • Google Drive: Adjust the "Folder ID" to point to the specific Google Drive folder containing the documents you want to process. You might also want to refine the file search parameters.
    • If: Review and adjust the conditions in the "If" node if you need to filter documents based on different criteria (e.g., file name patterns, specific MIME types); a sample filter expression follows these steps.
    • HTTP Request (SubworkflowAI):
      • Update the URL to your specific SubworkflowAI OCR endpoint.
      • Ensure the Body Parameters correctly send the document data (e.g., file URL or binary data) required by SubworkflowAI.
      • Verify the Headers for any API keys or authentication tokens.
    • Wait: The wait time might need adjustment based on the typical processing time of your SubworkflowAI OCR tasks.
    • Google Gemini: Customize the prompt and model parameters in the "Google Gemini" node to define how you want to analyze or summarize the OCR'd text.
  4. Activate the Workflow: Once all configurations are complete, save and activate the workflow. You can then execute it manually using the "When clicking 'Execute workflow'" trigger node.
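For the document filter mentioned under "If" above, a MIME-type check can be written as a single n8n expression. A minimal sketch, assuming the Google Drive node exposes a mimeType field on each item (the field name can differ depending on the node's operation):

```
{{ $json.mimeType === "application/pdf" || $json.mimeType.startsWith("image/") }}
```

Alternatively, configure two string conditions combined with OR in the node's UI for the same effect.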
