Convert web page to PDF using ConvertAPI
Who is this for?
For developers and organizations that need to convert web pages to PDF.
What problem is this workflow solving?
Converting live web pages into PDF documents automatically, without manual export steps or desktop tools.
What this workflow does
- Converts a web page to a PDF document.
- Stores the PDF file in the local file system.
How to customize this workflow to your needs
- Open the "HTTP Request" node and adjust the URL parameter (all available endpoints are listed in the ConvertAPI documentation).
- Use your API Token for authentication. Pass the token in the Authorization header as a Bearer token. You can manage your API Tokens in the ConvertAPI User panel → Authentication.
- Change the url parameter to the web page you want to convert to PDF.
- Optionally, add further Body Parameters supported by the converter. A sketch of the resulting request follows this list.
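As a rough illustration of what the configured "HTTP Request" node ends up sending, here is a minimal Python sketch of the token-authenticated call described above. The endpoint path and the flat { "Url": ... } body come from the example later in this document; treating the response as JSON file metadata is an assumption, so check the ConvertAPI documentation for the exact response shape.

```python
# Minimal sketch (not the workflow itself): the call the "HTTP Request"
# node is configured to make, using Bearer-token authentication.
# Assumption: ConvertAPI returns JSON metadata describing the result file.
import requests

API_TOKEN = "<YOUR_CONVERTAPI_TOKEN>"  # from the User panel -> Authentication

response = requests.post(
    "https://v2.convertapi.com/convert/web/to/pdf",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"Url": "https://n8n.io"},  # the web page to convert
    timeout=60,
)
response.raise_for_status()
print(response.json())  # inspect the returned file metadata
```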
Convert Web Page to PDF using ConvertAPI
This n8n workflow demonstrates how to convert a web page URL into a PDF document using the ConvertAPI service. It's a simple yet powerful automation for archiving web content or generating printable versions of online resources.
What it does
This workflow performs the following steps:
- Triggers Manually: The workflow starts when you manually click "Execute workflow" in the n8n editor.
- Provides Instructions: A "Sticky Note" node offers guidance on how to use the workflow, specifically mentioning the need to configure the "HTTP Request" node.
- Converts URL to PDF: An "HTTP Request" node is configured to interact with ConvertAPI. It takes a web page URL as input and sends a request to ConvertAPI to convert it into a PDF.
- Saves PDF to Disk: The resulting PDF binary data from ConvertAPI is then saved to a file on the local disk using the "Read/Write Files from Disk" node.
Prerequisites/Requirements
- ConvertAPI Account: You will need an account with ConvertAPI (https://www.convertapi.com/) and an API Secret.
- n8n Instance: A running instance of n8n.
Setup/Usage
- Import the workflow: Import the provided JSON into your n8n instance.
- Configure ConvertAPI Credentials:
- Locate the "HTTP Request" node.
- Configure the API call to ConvertAPI. This typically involves setting the URL, Method (usually POST), and Headers or Query Parameters to include your ConvertAPI Secret.
- The body of the request contains the URL of the web page you want to convert. An example ConvertAPI request for URL-to-PDF conversion looks like this (adapt it to the HTTP Request node's interface; a script equivalent is sketched after this setup list):
- URL: https://v2.convertapi.com/convert/web/to/pdf?Secret=<YOUR_CONVERTAPI_SECRET>
- Method: POST
- Body (JSON): { "Url": "https://n8n.io" }
- Note: Replace <YOUR_CONVERTAPI_SECRET> with your actual ConvertAPI secret. You can also pass the URL dynamically from a previous node if you extend the workflow.
- Configure File Output:
- Locate the "Read/Write Files from Disk" node.
- Set the File Name (e.g., output.pdf) and the Folder Path where you want to save the generated PDF.
- Execute the workflow: Click the "Execute workflow" button in the "When clicking ‘Execute workflow’" node to run the workflow. The PDF will be saved to the specified location.
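For reference, the two core nodes of this workflow, the "HTTP Request" node and the "Read/Write Files from Disk" node, correspond roughly to the Python sketch below. It reuses the Secret-in-the-query-string request shown in the setup steps; the assumption that the JSON response carries a base64-encoded FileData entry in a Files array matches ConvertAPI's default behaviour, but verify it for your account settings.

```python
# Sketch of what the workflow does end to end, assuming the request shape
# shown in the setup steps above and a default JSON response from ConvertAPI
# (a "Files" array whose entries carry base64 "FileData").
import base64
import requests

SECRET = "<YOUR_CONVERTAPI_SECRET>"

# Step equivalent to the "HTTP Request" node
resp = requests.post(
    f"https://v2.convertapi.com/convert/web/to/pdf?Secret={SECRET}",
    json={"Url": "https://n8n.io"},
    timeout=120,
)
resp.raise_for_status()
result = resp.json()

# Step equivalent to the "Read/Write Files from Disk" node
pdf_bytes = base64.b64decode(result["Files"][0]["FileData"])
with open("output.pdf", "wb") as f:  # matches the File Name / Folder Path settings
    f.write(pdf_bytes)
```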
This workflow provides a basic framework. You can extend it further by:
- Adding a "Webhook" trigger to convert URLs submitted via an API call (see the sketch after this list).
- Using an "Email" or "Slack" node to send the generated PDF.
- Integrating with cloud storage services like Google Drive or S3 to store the PDFs.
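As a hedged illustration of the first extension idea, the sketch below exposes a tiny HTTP endpoint that accepts a JSON body with a url field and returns the generated PDF. In n8n this role is played by a Webhook node; the convert_url_to_pdf helper, the port number, and the request shape here are hypothetical choices for the example only.

```python
# Hypothetical illustration of the "Webhook trigger" extension idea:
# a minimal HTTP endpoint that accepts {"url": ...} and responds with the PDF.
# In n8n, a Webhook node feeding the existing HTTP Request node plays this role.
from http.server import BaseHTTPRequestHandler, HTTPServer
import base64
import json
import requests

SECRET = "<YOUR_CONVERTAPI_SECRET>"

def convert_url_to_pdf(url: str) -> bytes:
    # Hypothetical helper using the same request shape as the sketch above.
    resp = requests.post(
        f"https://v2.convertapi.com/convert/web/to/pdf?Secret={SECRET}",
        json={"Url": url},
        timeout=120,
    )
    resp.raise_for_status()
    return base64.b64decode(resp.json()["Files"][0]["FileData"])

class ConvertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        pdf_bytes = convert_url_to_pdf(payload.get("url", "https://n8n.io"))
        self.send_response(200)
        self.send_header("Content-Type", "application/pdf")
        self.end_headers()
        self.wfile.write(pdf_bytes)

if __name__ == "__main__":
    HTTPServer(("", 8080), ConvertHandler).serve_forever()
```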