7 templates found

Send email via Gmail on workflow error

Send an email via Gmail when a workflow error occurs. The email subject line will contain the workflow name; the message body will contain the following information:

- Workflow name
- Error message
- Last node executed
- Execution URL
- Stacktrace

Error workflows do not need to be activated in order to be used, but they do need to be selected in the Settings menu of whichever workflows you want to use them with.

To use this workflow, you'll need to:

1. Create and select credentials in the Gmail node
2. Choose the email recipient(s) in the Gmail node
3. Save and select the created workflow as the "Error Workflow" in the Settings menu of whichever workflows you want to email on error
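As a rough sketch, the message the Gmail node assembles can be expressed as a small function. The payload shape below mirrors the fields n8n's Error Trigger exposes (workflow name, error message, last node, execution URL, stack trace), but treat the exact property names as an assumption, not the node's guaranteed schema:

```javascript
// Sketch: build the subject and body described above from an
// Error-Trigger-style payload. Field names are assumptions.
function buildErrorEmail(payload) {
  const { workflow, execution } = payload;
  return {
    subject: `Workflow error: ${workflow.name}`,
    body: [
      `Workflow name: ${workflow.name}`,
      `Error message: ${execution.error.message}`,
      `Last node executed: ${execution.lastNodeExecuted}`,
      `Execution URL: ${execution.url}`,
      `Stacktrace:\n${execution.error.stack}`
    ].join('\n')
  };
}
```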

By Trey
8749

Beginner Outlook calendar summary with OpenAI

A step-by-step demo that shows how to pull your Outlook calendar events for the week and ask GPT-4o to write a short summary. Along the way you’ll practice basic data-transform nodes (Code, Filter, Aggregate) and see where to attach the required API credentials.

---

1️⃣ Manual Trigger — Run Workflow

Why: lets you click “Execute” in the n8n editor so you can test each change.

---

2️⃣ Get Outlook Events — Get many events

- Node type: Microsoft Outlook → Event → Get All
- Fields selected: subject, start

API setup (inside this node):

- Click Credentials ▸ Microsoft Outlook OAuth2 API
- If you haven’t connected before: choose “Microsoft Outlook OAuth2 API” → “Create New”, sign in, and grant the Calendars.Read permission.
- Save the credential (e.g., “Microsoft Outlook account”).

Output: a list of events with the raw ISO start time.

> Teaching moment: Outlook returns a full dateTime string. We’ll normalize it next so it’s easy to filter.

---

3️⃣ Normalize Dates — Convert to Date Format

```js
// Code node contents
return $input.all().map(item => {
  const startDateTime = new Date(item.json.start.dateTime);
  const formattedDate = startDateTime.toISOString().split('T')[0]; // YYYY-MM-DD
  return {
    json: {
      ...item.json,
      startDateFormatted: formattedDate
    }
  };
});
```

---

4️⃣ Filter the Events Down to This Week

After we’ve normalised the start date-time into a simple YYYY-MM-DD string, we drop in a Filter node. Add one rule for every day you want to keep—for example 2025-08-07 or 2025-08-08. Rows that match any of those dates continue through the workflow; everything else is quietly discarded.

Why we’re doing this: we only want to summarise tomorrow’s and the following day’s meetings, not the entire calendar.

---

5️⃣ Roll All Subjects Into a Single Item

Next comes an Aggregate node. Tell it to aggregate the subject field and choose the option “Only aggregated fields.” The result is one clean item whose subject property is now a tidy list of every meeting title. It’s far easier (and cheaper) to pass one prompt to GPT than dozens of small ones.

---

6️⃣ Turn That List Into Plain Text

Insert a small Code node right after the aggregation:

```js
return [{
  json: {
    text: items
      .map(item => JSON.stringify(item.json))
      .join('\n')
  }
}];
```

---

Need a Hand?

I’m always happy to chat automation, n8n, or Outlook API quirks.

Robert Breen – Automation Consultant & n8n Instructor
📧 robert@ynteractive.com | LinkedIn
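The Filter node rules from step 4 can also be sketched in plain JavaScript, which is useful for checking your date logic before wiring it into the node. The `keepDates` list below is hypothetical, standing in for the rules you add in the Filter node:

```javascript
// Sketch: keep only items whose normalized start date matches one of
// the configured dates (example dates, not part of the template).
const keepDates = new Set(['2025-08-07', '2025-08-08']);

function filterEvents(items) {
  return items.filter(item => keepDates.has(item.json.startDateFormatted));
}
```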

By Robert Breen
4244

Merge multiple PDF files with CustomJS API

This n8n template demonstrates how to download multiple PDF files from public URLs and merge them into a single PDF using the PDF Toolkit from www.customjs.space (@custom-js/n8n-nodes-pdf-toolkit).

Notice: community nodes can only be installed on self-hosted instances of n8n.

What this workflow does

- Downloads each PDF using an HTTP Request.
- Populates the files into an array with n8n’s Merge node.
- Merges all downloaded PDFs using the Merge PDF node from @custom-js/n8n-nodes-pdf-toolkit.
- Writes the final merged PDF to disk.

Requirements

- Self-hosted n8n instance
- CustomJS API key for merging multiple PDF files
- PDF files to be merged

Workflow Steps

1. Manual Trigger: runs on user interaction.
2. HTTP Request nodes for PDF download: pass the URLs of the PDF files to merge.
3. Merge node for array population: populates the two files into an array.
4. Merge PDF files: uses the CustomJS node to merge the incoming PDF files into a single PDF file.

If the combined size of the PDF files exceeds 6 MB, you can simply pass an array of URLs instead of the downloaded files.

---

Usage

1. Get an API key from CustomJS: sign up to the CustomJS platform, navigate to your profile page, and press the “Show” button to reveal the API key.
2. Set credentials for the CustomJS API in n8n: copy and paste the API key generated from CustomJS.
3. Design the workflow: a Manual Trigger for starting the workflow, two HTTP Request nodes for downloading the PDF files, a Merge node for populating the files as an array, a Merge PDFs node for merging the files, and a Write to Disk node for saving the merged PDF file.

You can replace the trigger and output logic. For example, you can trigger this workflow by calling a webhook and return the result as the webhook response—simply replace the Manual Trigger and Write to Disk nodes.

Perfect for

- Bundling reports or invoices.
- Generating document sets from external sources.
- Automating PDF handling without writing custom code.
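The 6 MB rule above can be illustrated with a small helper that decides whether to send the downloaded binaries or fall back to an array of URLs. The field names (`size`, `url`, `buffer`) and the payload shape are illustrative assumptions, not the node’s actual schema:

```javascript
// Sketch of the size check described above: below 6 MB, send the bytes;
// above it, pass URLs and let the API fetch the files itself.
const SIX_MB = 6 * 1024 * 1024;

function choosePayload(files) {
  const total = files.reduce((sum, f) => sum + f.size, 0);
  return total > SIX_MB
    ? { urls: files.map(f => f.url) }          // array of URLs
    : { binaries: files.map(f => f.buffer) };  // downloaded file contents
}
```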

By CustomJS
1048

AWS Azure GCP multi-cloud cost monitoring & alerts for budget control

This automated n8n workflow tracks hourly cloud spending across AWS, Azure, and GCP. It detects cost spikes or budget overruns in real time, tags affected resources, and sends alerts via email, WhatsApp, or Slack. This ensures proactive cost management and prevents budget breaches.

---

Good to Know

- AWS, Azure, and GCP APIs must have read access to billing data.
- Use secure credentials for API keys or service accounts.
- The workflow runs every hour for near real-time cost tracking.
- Alerts can be sent to multiple channels (Email, WhatsApp, Slack).
- Tags are applied automatically to affected resources for easy tracking.

---

How It Works

1. Hourly Cron Trigger – starts the workflow every hour to fetch updated billing data.
2. AWS Billing Fetch – retrieves the latest cost and usage data via the AWS Cost Explorer API.
3. Azure Billing Fetch – retrieves subscription cost data from the Azure Cost Management API.
4. GCP Billing Fetch – retrieves project-level spend data using the GCP Cloud Billing API.
5. Data Parser – combines and cleans data from all three clouds into a unified format.
6. Cost Spike Detector – identifies unusual spending patterns or budget overruns.
7. Owner Identifier – matches resources to their respective owners or teams.
8. Auto-Tag Resource – tags the affected resource for quick identification and follow-up.
9. Alert Sender – sends notifications through Email, WhatsApp, and Slack with detailed cost reports.

---

How to Use

1. Import the workflow into n8n.
2. Configure credentials for the AWS, Azure, and GCP billing APIs.
3. Set your budget threshold in the Cost Spike Detector node.
4. Test the workflow to ensure all APIs fetch data correctly.
5. Adjust the Cron Trigger for your preferred monitoring frequency.
6. Monitor alert logs to track and manage cost spikes.

---

Requirements

- AWS Access Key & Secret Key with Cost Explorer read permissions.
- Azure Client ID, Tenant ID, and Client Secret with the Cost Management Reader role.
- GCP Service Account JSON key with the Billing Account Viewer role.

---

Customizing This Workflow

- Change the trigger frequency in the Cron node (e.g., every 15 minutes for faster alerts).
- Modify alert channels to include additional messaging platforms.
- Adjust cost spike detection thresholds to suit your organization’s budget rules.
- Extend the Data Parser to generate more detailed cost breakdowns.

---

Want a tailored workflow for your business? Our experts can craft it quickly. Contact our team.
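A minimal sketch of the Data Parser and Cost Spike Detector steps, assuming simplified input shapes: the real Cost Explorer, Cost Management, and Cloud Billing responses are far more nested, and the field names here (`resourceId`, `unblendedCost`, `pretaxCost`, and so on) are illustrative stand-ins:

```javascript
// Sketch: normalize per-cloud billing rows into one unified format,
// then flag anything over the configured budget threshold.
function parseCosts({ aws, azure, gcp }) {
  return [
    ...aws.map(r => ({ provider: 'AWS', resource: r.resourceId, cost: r.unblendedCost })),
    ...azure.map(r => ({ provider: 'Azure', resource: r.resourceId, cost: r.pretaxCost })),
    ...gcp.map(r => ({ provider: 'GCP', resource: r.resourceName, cost: r.cost }))
  ];
}

function detectSpikes(records, budgetThreshold) {
  return records.filter(r => r.cost > budgetThreshold);
}
```

Anything returned by `detectSpikes` would then flow on to the owner-identification, tagging, and alerting nodes.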

By Oneclick AI Squad
812

Automatic Jest test generation for GitHub PRs with dual AI review

Workflow: Automatic Unit Test Creator from GitHub

🏗️ Architecture Overview

This workflow listens for GitHub pull-request events, analyzes changed React/TypeScript files, auto-generates Jest tests via AI, has them reviewed by a second AI pass, and posts suggestions back as PR comments:

1. GitHub Webhook → PR opened or updated
2. Fetch & Diff → retrieve the raw diff of changed files
3. Filter & Split → isolate .tsx files & their diffs
4. Fetch File Contents → provide full context for tests
5. Test Maker Agent → generate Jest tests for diff hunks
6. Code Reviewer Agent → refine tests for style & edge-cases
7. Post PR Comment → send suggested tests back to GitHub

📦 Node-by-Node Breakdown

```mermaid
flowchart LR
  A["Webhook: /github/pr-events"] --> B["GitHub: Get PR"]
  B --> C["Function: Parse diff_url + owner/repo"]
  C --> D["HTTP Request: GET diff_url"]
  D --> E["Function: Split on 'diff --git'"]
  E --> F["Filter: *.tsx"]
  F --> G["GitHub: Get File Contents"]
  G --> H["Test Maker Agent"]
  H --> I["Code Reviewer Agent"]
  I --> J["Function: Build Comment Payload"]
  J --> K["HTTP Request: POST to PR Comments"]
```

1. Webhook: GitHub PR Events
   - Type: HTTP Webhook (/webhook/github/pr-events)
   - Subscribed events: pull_request.opened, pull_request.synchronize
2. GitHub: Get PR
   - Node: GitHub, action "Get Pull Request"
   - Inputs: owner, repo, pull_number
3. Function: Parse diff_url + owner/repo
   - Extracts: diff_url (e.g. …/pulls/123.diff), owner, repo, merge_commit_sha
4. HTTP Request: GET diff_url
   - Fetches the unified-diff text for the PR.
5. Function: Split on "diff --git"
   - Splits the diff into file-specific segments.
6. Filter: /\.tsx$/
   - Keeps only segments where the file path ends with .tsx.
7. GitHub: Get File Contents
   - For each .tsx file, fetches the latest blob via the GitHub API.
8. Test Maker Agent
   - Prompt: "Generate Jest unit tests covering only the behaviors changed in these diff hunks."
   - Output: raw Jest test code.
9. Code Reviewer Agent
   - Reads the file + generated tests.
   - Prompt: "Review and improve these tests for readability, edge-cases, and naming conventions."
   - Output: polished test suite.
10. Function: Build Comment Payload
    - Wraps the tests in TypeScript fences:

      ```ts
      // generated tests…
      ```

    - Constructs the JSON body:

      ```json
      { "body": "<tests>" }
      ```

11. HTTP Request: POST to PR Comments
    - URL: https://api.github.com/repos/{owner}/{repo}/issues/{pull_number}/comments
    - Body: contains the suggested tests.

🔍 Design Rationale & Best Practices

- Focused diff analysis: targets only .tsx files to cover UI logic.
- Two-stage AI: separate "generate" + "review" steps mimic TDD + code review.
- Stateless functions: pure JS for parsing & transformation, easy to test.
- Non-blocking PR comments: asynchronous suggestions—developers aren't blocked.
- Scoped permissions: GitHub token is limited to reading PRs & posting comments.
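The two parsing Function nodes (split on "diff --git", then filter for .tsx) can be sketched as a single pure function. This is a plausible reconstruction of the logic the breakdown describes, not the template's actual node code:

```javascript
// Sketch: cut a unified diff into per-file segments and keep only
// segments whose file path ends in .tsx.
function tsxSegments(diffText) {
  return diffText
    .split('diff --git')
    .filter(seg => seg.trim().length > 0)
    .map(seg => 'diff --git' + seg)
    .filter(seg => {
      // First line looks like: "diff --git a/src/App.tsx b/src/App.tsx"
      const lastPath = seg.split('\n')[0].trim().split(' ').pop();
      return /\.tsx$/.test(lastPath);
    });
}
```

Being a pure function of its input, this is trivially unit-testable — which is exactly the "stateless functions" rationale the template cites.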

By Varritech
599

Generate personalized sales outreach with GPT across LinkedIn, Email & WhatsApp

Overview

This workflow automates your entire sales outreach process across LinkedIn, Email, and WhatsApp using AI to create hyper-personalized messages for each prospect. Instead of spending hours crafting individual messages, the workflow analyzes your lead data and generates customized connection requests, emails, and WhatsApp messages that feel genuinely personal and researched.

The workflow includes a built-in approval mechanism, so you can review all AI-generated messages before they're sent, ensuring quality control while still saving massive amounts of time.

How It Works

The workflow follows a seven-step process:

Step 1: Data Collection
The workflow starts by reading your lead data from a Google Sheet. Your sheet should contain information about each prospect including their name, title, company, industry, technologies they use, and any other relevant details that can be used for personalization.

Step 2: Batch Processing
To prevent overwhelming APIs and ensure smooth operation, the workflow processes leads in batches. Each lead's complete data is prepared and formatted for the AI agent to analyze.

Step 3: AI Personalization
This is where the magic happens. The AI agent receives all the prospect data and generates three distinct messages:

- A LinkedIn connection request (under 300 characters) that references their specific role, company, or industry
- A professional HTML email that demonstrates you've researched their business and explains how you can help
- A casual WhatsApp message that's friendly and approachable

The AI is instructed to make these messages sound completely human, never generic or templated.

Step 4: Data Cleanup and Storage
The AI's output is parsed and cleaned up, then written back to your Google Sheet in separate columns. This creates a permanent record of all generated messages for your review.

Step 5: Manual Approval
Before anything gets sent, you receive an email asking for your approval. You can review all the generated messages in your Google Sheet, make any edits if needed, and then approve or reject the batch. This ensures you maintain full control over what goes out.

Step 6: LinkedIn Automation
Once approved, the workflow triggers your Phantombuster agent to send LinkedIn connection requests using the AI-generated messages. Phantombuster handles the actual LinkedIn interaction safely within their platform's limits.

Step 7: Email and Notification Delivery
Finally, the workflow sends out the personalized emails via Gmail and optionally notifies you via Telegram for each message sent. This happens sequentially to respect rate limits and maintain deliverability.

Setup Requirements

Before you can use this workflow, you'll need to set up several accounts and gather credentials:

Essential Services:
- An n8n instance (cloud or self-hosted)
- A Google account with Google Sheets access
- A Gmail account for sending emails
- An OpenAI account with API access (for the AI agent)
- Phantombuster account (for LinkedIn automation)

Optional Services:
- Telegram account and bot (for notifications)

Credentials You'll Need:
- Google Sheets OAuth2 credentials
- Gmail OAuth2 credentials
- OpenAI API key
- Phantombuster API key and agent ID
- Telegram bot token and chat ID (if using notifications)

How to Use This Workflow

Initial Setup:
1. Import this workflow into your n8n instance
2. Add all required credentials in n8n's credential manager
3. Create your Google Sheet with the following columns at minimum: First Name, Last Name, Title, Company Name, Personal Email, Industry, Website. Add three additional columns for output: Connection, AI Email, AI Whatsapp Message
4. Copy your Google Sheet ID from the URL and update it in all Google Sheets nodes
5. Open the AI Agent node and update the prompt with your personal information: your name, title, email, and LinkedIn URL
6. Update the email addresses in the Gmail nodes to your actual email addresses
7. Configure your Phantombuster agent for LinkedIn and add the API key and agent ID

Running the Workflow:
1. Add your lead data to the Google Sheet (you can start with just 2-3 leads for testing)
2. Click "Execute Workflow" in n8n to start the process
3. Wait for the AI to generate messages (this takes a few seconds per lead)
4. Check your email for the approval request
5. Review the AI-generated messages in your Google Sheet
6. Reply to the approval email with your decision
7. If approved, the workflow will automatically send LinkedIn requests, emails, and WhatsApp messages

Best Practices:
- Start small. Process 5-10 leads at a time initially to test the quality of AI-generated messages and ensure everything works correctly. Once you're confident in the output, you can scale up to larger batches.
- Monitor your results. Keep track of response rates in your Google Sheet and adjust the AI prompt if certain types of messages aren't performing well.
- Respect rate limits. Gmail allows 100-500 emails per day depending on your account type, and LinkedIn has strict limits on connection requests (typically 100 per week through automation tools). Stay well within these limits to avoid account restrictions.

Customizing This Workflow

The workflow is designed to be highly customizable to fit your specific use case:

Personalizing the AI Prompt: The most important customization is in the AI Agent node's prompt. You can modify it to:
- Emphasize different aspects of your value proposition
- Change the tone from formal to casual or vice versa
- Include specific pain points relevant to your target industry
- Add your company's unique selling points
- Adjust message length and structure

Modifying the Output: You can change what the AI generates by editing the prompt. For example, you might want:
- Different message types (Twitter DMs instead of WhatsApp)
- Multiple email variations for A/B testing
- Follow-up message sequences
- Industry-specific templates

Adding Features: The workflow can be extended with additional nodes:
- Add time delays between sends to appear more natural
- Include condition checks to segment leads by industry or company size
- Connect to your CRM to automatically log activities
- Add sentiment analysis to filter out negative-sounding messages
- Implement response tracking by monitoring your inbox

Changing Tools: If you prefer different services, you can swap out nodes:
- Replace Phantombuster with other LinkedIn automation tools
- Use SendGrid or Mailgun instead of Gmail for higher volume
- Add Slack notifications instead of Telegram
- Connect to WhatsApp Business API for official messaging

Data Source Alternatives: Instead of Google Sheets, you could:
- Connect directly to your CRM (HubSpot, Salesforce, Pipedrive)
- Use Airtable as your database
- Pull data from CSV files uploaded to cloud storage
- Integrate with lead generation tools like Apollo or Hunter

Tips for Success

The quality of your AI-generated messages depends heavily on the data you provide. The more information you have about each prospect (their role, company size, technologies used, recent news, pain points), the more personalized and effective the messages will be.

Regularly review and refine your AI prompt based on the responses you're getting. If prospects aren't responding, your messages might be too sales-focused or not personal enough. Adjust the prompt to make messages feel more consultative and helpful.

Don't send to your entire list at once. Even with approval gates, it's wise to test with small batches, measure results, iterate on your approach, and then scale up gradually.

Always comply with email and LinkedIn best practices. Never spam, always provide value in your outreach, respect people's time and privacy, and make it easy for them to opt out if they're not interested.

This workflow is a powerful tool that can save you hours of work while actually improving the quality of your outreach through AI-powered personalization. Use it responsibly and watch your response rates improve.
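Step 4's cleanup stage could look something like the following sketch. It assumes, hypothetically, that the agent is prompted to return a JSON object with `linkedin`, `email`, and `whatsapp` keys (possibly wrapped in a markdown code fence), which is then mapped to the three output columns named in the setup instructions:

```javascript
// Sketch: parse the AI agent's reply into the three sheet columns.
// The reply format and key names are assumptions for illustration.
function parseAgentReply(raw) {
  const cleaned = raw.replace(/```(?:json)?/g, '').trim(); // strip markdown fences
  const msg = JSON.parse(cleaned);
  return {
    'Connection': msg.linkedin,
    'AI Email': msg.email,
    'AI Whatsapp Message': msg.whatsapp
  };
}
```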

By Aditya Malur
469

Multi-source tax & cash flow forecasting with GPT-4, Gmail and Google Sheets

How It Works

Automates monthly revenue aggregation from multiple sources with intelligent tax forecasting using GPT-4 structured analysis. Fetches revenue data from up to three distinct sources, consolidates the datasets into unified records, and applies the OpenAI GPT-4 model for predictive tax-obligation forecasting with context awareness. The system generates formatted reports with structured forecast outputs and automatically sends comprehensive tax projections to agents via Gmail, storing results in Google Sheets for audit trails. Designed for tax professionals, accounting firms, and finance teams requiring accurate predictive tax planning, cash flow forecasting, and proactive compliance strategy without manual calculations.

Setup Steps

1. Configure OpenAI API key for GPT-4 model access
2. Connect three revenue data sources with appropriate credentials
3. Map data aggregation logic for multi-source consolidation
4. Define structured output schema for forecast results
5. Set up Gmail for automated agent notification
6. Configure Google Sheets destination

Prerequisites

OpenAI API key with GPT-4 access, Gmail account, Google Sheets, three revenue data source credentials

Use Cases

Monthly tax liability projections, quarterly estimated tax planning

Customization

Adjust forecast model parameters, add additional revenue sources, modify email templates

Benefits

Eliminates manual tax calculations, enables proactive tax planning, improves cash flow forecasting accuracy
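The multi-source consolidation step can be sketched as merging monthly revenue records from the three sources into one record per month before handing the result to GPT-4. The `{ month, revenue }` record shape is an illustrative assumption about how the sources might be mapped:

```javascript
// Sketch: sum revenue per month across any number of sources,
// producing one unified record per month.
function consolidate(...sources) {
  const byMonth = new Map();
  for (const source of sources) {
    for (const { month, revenue } of source) {
      byMonth.set(month, (byMonth.get(month) || 0) + revenue);
    }
  }
  return [...byMonth.entries()].map(([month, revenue]) => ({ month, revenue }));
}
```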

By Cheng Siong Chin
43