
Handling appointment leads and follow-up with Twilio, Cal.com and AI

This n8n workflow builds an appointment-scheduling AI agent which can take enquiries from prospective customers and help them book an appointment by checking appointment availability. Where no appointment is booked, the agent is able to send follow-up messages to re-engage leads. After an appointment is booked, the agent is able to reschedule or even cancel the booking for the user without human intervention. For small outfits, this workflow could contribute the necessary "man-power" required to increase business sales.

The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox

2024-10-22: Updated to Cal.com API v2.

How it works
- The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook.
- The prospective or existing customer's number is logged in an Airtable base which we'll be using to track all our enquiries.
- Next, the message is sent to our AI agent, which can reply to the user and decide whether an appointment booking can be made. The reply is sent via SMS using Twilio.
- A scheduled trigger which runs every day checks our chat logs for prospective customers who have yet to book an appointment but still show interest. This list is sent to our AI agent, which formulates a personalised follow-up message to each lead asking whether they want to continue with the booking.
- The follow-up interaction is logged so as not to send too many messages to the customer.

Requirements
- A Twilio account to receive customer messages.
- An Airtable account and base to use as our datastore for enquiries.
- A Cal.com account to use as our scheduling service.
- An OpenAI account for our AI model.

Customising this workflow
- Not using Airtable? Swap it out for your CRM of choice, such as HubSpot, or your own service.
- Not using Cal.com? Swap it out for an API-enabled service such as Acuity Scheduling, or your own service.
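The availability check at the heart of this agent reduces to a single authenticated GET against the scheduling API. A minimal sketch in JavaScript, assuming a Cal.com v2 slots endpoint, query parameters, and response shape (all of which should be verified against the current Cal.com API documentation):

    // Minimal sketch of the agent's availability-check tool against Cal.com API v2.
    // Endpoint path, parameters, and response shape are assumptions -- verify
    // against the Cal.com v2 docs before relying on them.
    const CALCOM_API_KEY = process.env.CALCOM_API_KEY; // hypothetical env var

    async function getAvailableSlots(eventTypeId, startISO, endISO) {
      const url = new URL("https://api.cal.com/v2/slots/available");
      url.searchParams.set("eventTypeId", eventTypeId);
      url.searchParams.set("startTime", startISO);
      url.searchParams.set("endTime", endISO);

      const res = await fetch(url, {
        headers: { Authorization: `Bearer ${CALCOM_API_KEY}` },
      });
      if (!res.ok) throw new Error(`Cal.com request failed: ${res.status}`);
      const body = await res.json();
      return body.data?.slots ?? {}; // assumed shape: map of date -> slot list
    }

The agent would call a tool like this before proposing times, and similar requests would create, reschedule, or cancel the booking.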

By Jimleuk
27828

Automatic reminders for follow-ups with AI and human in the loop Gmail

This n8n template extends the idea of follow-up reminders by having an AI agent suggest and book the next call or message to re-engage prospects who have been ignored. What makes this template particularly interesting, and actually usable, is that it takes a human-in-the-loop approach: it waits for the user's approval before actually making the booking, and takes no action if the user declines. A twist on a traditional idea, where we reduce the number of actionable tasks a human has to make by delegating them to AI.

How it works
- A scheduled trigger checks your Google Calendar for sales meetings which happened a few days ago.
- For each event, Gmail search is used to figure out whether a follow-up message has been sent to or received from the other party since the meeting. If none has, it might mean the user needs a reminder to follow up.
- For leads applicable for follow-up, we first get an AI agent to find available meeting slots in the calendar.
- These slots and the reminder are sent to the user via the send-and-approval mode of the Gmail node.
- The user replies in natural language, either picking a slot, suggesting an entirely new slot, or declining the request.
- When accepted, another AI agent books the meeting in the calendar with the proposed dates and lead. When declined, no action is taken.

How to use
- Update all calendar nodes (and subnodes) to point to the right calendar. If this is a shared-purpose calendar, you may need to either filter events or create a new calendar.
- Update the Gmail nodes to point to the right accounts.

Requirements
- Google OAuth for Email and Calendar
- OpenAI for the LLM

Customising the template
- Not using Google? Swap in Microsoft Outlook/Calendar or something else.
- Try swapping out, or adding, additional send-for-approval methods such as Telegram or WhatsApp.
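The "has anyone followed up?" check boils down to building a Gmail search query scoped to the attendee and the meeting time. A minimal sketch, where the input names (attendeeEmail, meetingEnd) are hypothetical outputs of the calendar step; the query syntax itself is standard Gmail search:

    // Build a Gmail search query matching any mail exchanged with the
    // attendee after the meeting ended. Zero results => follow-up reminder.
    function buildFollowUpQuery(attendeeEmail, meetingEnd) {
      // Gmail's after: operator accepts epoch seconds as well as dates.
      const afterEpoch = Math.floor(new Date(meetingEnd).getTime() / 1000);
      return `(from:${attendeeEmail} OR to:${attendeeEmail}) after:${afterEpoch}`;
    }

    // Example:
    console.log(buildFollowUpQuery("lead@example.com", "2024-06-01T15:00:00Z"));
    // => (from:lead@example.com OR to:lead@example.com) after:1717254000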

By Jimleuk
14388

Telegram bot with Supabase memory and OpenAI assistant integration

Video Guide
I prepared a detailed video guide showing the whole process of building the AI bot, from the simplest version to the most complex one in the template.

Who is this for?
This workflow is ideal for developers, chatbot enthusiasts, and businesses looking to build a dynamic Telegram bot with memory capabilities. The bot leverages OpenAI's assistant to interact with users and stores user data in Supabase for personalized conversations.

What problem does this workflow solve?
Many simple chatbots lack context awareness and user memory. This workflow solves that by integrating Supabase to keep track of user sessions (via telegram_id and openai_thread_id), allowing the bot to maintain continuity and context in conversations, leading to a more human-like and engaging experience.

What this workflow does
This Telegram bot template connects with OpenAI to answer user queries while storing and retrieving user information from a Supabase database. The memory component ensures that the bot can reference past interactions, making it suitable for use cases such as customer support, virtual assistants, or any application where context retention is crucial.
1. Receive New Message: The bot listens for incoming messages from users in Telegram.
2. Check User in Database: The workflow checks if the user is already in the Supabase database using the telegram_id.
3. Create New User (if necessary): If the user does not exist, a new record is created in Supabase with the telegram_id and a unique openai_thread_id.
4. Start or Continue Conversation with OpenAI: Based on the user's context, the bot either creates a new thread or continues an existing one using the stored openai_thread_id.
5. Merge Data: User-specific data and conversation context are merged.
6. Send and Receive Messages: The message is sent to OpenAI, and the response is received and processed.
7. Reply to User: The bot sends OpenAI's response back to the user in Telegram.

Setup
1. Create a Telegram bot using the BotFather and obtain the bot token.
2. Set up Supabase:
   - Create a new project and obtain your SUPABASE_URL and SUPABASE_KEY.
   - Create a new table named telegram_users with the following SQL query:

    create table public.telegram_users (
      id uuid not null default gen_random_uuid(),
      date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
      telegram_id bigint null,
      openai_thread_id text null,
      constraint telegram_users_pkey primary key (id)
    ) tablespace pg_default;

3. OpenAI setup:
   - Create an OpenAI assistant and obtain the OPENAI_API_KEY.
   - Customize your assistant's personality or use cases according to your requirements.
4. Environment configuration in n8n:
   - Configure the Telegram, Supabase, and OpenAI nodes with the appropriate credentials.
   - Set up triggers for receiving messages and handling conversation logic.
   - Set the OpenAI assistant ID in the "OPENAI - Run assistant" node.
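The check-user and create-user steps map directly onto Supabase's auto-generated REST API (PostgREST). A minimal sketch using the table defined above; SUPABASE_URL and SUPABASE_KEY are your project credentials:

    // Look up a Telegram user by telegram_id, creating the row if missing.
    const SUPABASE_URL = process.env.SUPABASE_URL;
    const SUPABASE_KEY = process.env.SUPABASE_KEY;
    const headers = {
      apikey: SUPABASE_KEY,
      Authorization: `Bearer ${SUPABASE_KEY}`,
      "Content-Type": "application/json",
    };

    async function findOrCreateUser(telegramId, openaiThreadId) {
      // PostgREST filter syntax: ?column=eq.value
      const lookup = await fetch(
        `${SUPABASE_URL}/rest/v1/telegram_users?telegram_id=eq.${telegramId}`,
        { headers }
      );
      const rows = await lookup.json();
      if (rows.length > 0) return rows[0];

      // Not found: insert a new row and ask PostgREST to return it.
      const insert = await fetch(`${SUPABASE_URL}/rest/v1/telegram_users`, {
        method: "POST",
        headers: { ...headers, Prefer: "return=representation" },
        body: JSON.stringify({ telegram_id: telegramId, openai_thread_id: openaiThreadId }),
      });
      return (await insert.json())[0];
    }

In the template the same logic is handled by Supabase nodes; this sketch just makes the request/response contract explicit.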

By Mark Shcherbakov
10201

News research and sentiment analysis AI agent with Gemini and SearXNG

This n8n workflow operates as a two-agent system where each agent has a specialized task. The process flows from initial user input to a final analysis, with a seamless handoff between the agents.

How it works
1. The chat trigger: The entire process begins when you send a message using n8n's chat interface. This message serves as the initial prompt or query for the system.
2. The Research Agent takes over: The user's message is first sent to the Research Agent. This agent's job is to understand the query and gather relevant information. To do this, it has access to:
   - LLM: Google Gemini, which acts as the agent's "brain" to process language and make decisions.
   - Tools:
     - web_search: performs live searches on the internet, powered by your self-hosted SearXNG instance.
     - get_current_date: provides the current date, which is useful for context-aware or time-sensitive research.
   The Research Agent uses these tools to find the most relevant information related to your query and then compiles it into a concise summary.
3. Handoff to the Sentiment Analysis Agent: Once the Research Agent has completed its task, it passes its findings directly to the Sentiment Analysis Agent.
4. The final analysis: The Sentiment Analysis Agent receives the text from the Research Agent. Its sole purpose, as defined by its system prompt, is to analyze the sentiment of the provided information. It determines whether the content is positive, negative, or neutral and formulates a final response. This final analysis is then sent back to you in the chat, completing the workflow.

Set up steps
1. Select the language model (LLM): This workflow is pre-configured with Google Gemini. You can select a different model for the agents as needed.
2. Configure LLM credentials: Ensure that valid credentials for your chosen LLM are correctly set up within your n8n instance.
3. Set up the SearXNG connection: Configure the node to connect to your self-hosted SearXNG instance. This enables the agent's web search capabilities.
4. Define the Research Agent's task: Customize the system prompt for the "Research Agent" to define its role, instructions, and how it should conduct its research.
5. Define the Sentiment Analysis Agent's task: Adjust the system prompt for the "Sentiment Analysis Agent" to specify how it should analyze the information provided by the Research Agent.
6. Test the workflow: Use the built-in chat interface in the n8n canvas to send a message and verify that the agents are functioning correctly.
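Under the hood, the web_search tool is a query to SearXNG's JSON API. A minimal sketch, assuming JSON output is enabled in your SearXNG settings (search formats must include json) and using a hypothetical base URL:

    // Query a self-hosted SearXNG instance and keep only the fields the
    // Research Agent needs to summarise.
    const SEARXNG_URL = "https://searxng.example.com"; // hypothetical

    async function webSearch(query) {
      const searchUrl = new URL("/search", SEARXNG_URL);
      searchUrl.searchParams.set("q", query);
      searchUrl.searchParams.set("format", "json");

      const res = await fetch(searchUrl);
      if (!res.ok) throw new Error(`SearXNG request failed: ${res.status}`);
      const { results } = await res.json();
      return results.slice(0, 5).map(({ title, url, content }) => ({ title, url, content }));
    }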

By Mihai Farcas
9481

Create a pizza ordering chatbot with GPT-3.5 - Menu, orders & status tracking

Pizza Ordering Chatbot with OpenAI - Menu, Orders & Status Tracking

Introduction
This workflow template is designed to automate order processing for a pizza store using OpenAI and n8n. The chatbot acts as a virtual assistant to handle customer inquiries related to menu details, order placement, and order status tracking.

Features
The chatbot provides an interactive experience for customers by performing the following functions:
- Menu inquiry: When a customer asks about the menu, the chatbot responds with a list of available pizzas, prices, and additional options.
- Order placement: If a customer places an order, the chatbot confirms order details, provides a summary, informs the customer that the order is being processed, and expresses gratitude.
- Order status tracking: If a customer asks about their order status, the chatbot retrieves details such as order date, pizza type, and quantity, providing real-time updates.

Prerequisites
Before setting up the workflow, ensure you have the following:
- An OpenAI account (sign up on the OpenAI website)
- An OpenAI API key to interact with GPT-3.5
- An n8n instance running locally or on a server (see the n8n installation guide)

Configuration Steps

Step 1: Set up OpenAI API credentials
1. Log in to OpenAI's website.
2. Navigate to API Keys under your account settings.
3. Click Create API Key and copy the key for later use.

Step 2: Configure the OpenAI node in n8n
1. Open n8n and create a new workflow.
2. Click Add Node and search for OpenAI.
3. Select OpenAI from the list.
4. In the OpenAI node settings, click "Create New" under the Credentials section.
5. Enter a name for the credentials (e.g., "PizzaBot OpenAI Key").
6. Paste your API key into the field.
7. Click Save.

Step 3: Set up the chatbot logic
1. Connect the AI Agent Builder node to the OpenAI node and the HTTP Request nodes.
2. Configure the OpenAI node with the following settings:
   - Model: gpt-3.5-turbo
   - Prompt: dynamic text based on customer inquiries (e.g., "List available pizzas," "Place an order for Margherita pizza," "Check my order status")
   - Temperature: adjust based on desired creativity (recommended: 0.7)
   - Max Tokens: limit response length (recommended: 150)
3. Add multiple HTTP Request nodes:
   - Get Products: fetch stored menu data and return details.
   - Order Product: capture order details, generate an order ID, and confirm with the customer.
   - Get Order: retrieve order details based on the order ID and display progress.

Step 4: Testing and deployment
1. Click Execute Workflow to test the chatbot.
2. Open the Chat Message node, then copy the chat URL to access the chatbot in your browser.
3. Interact with the chatbot by asking different queries (e.g., "What pizzas do you have?" or "I want to order a Pepperoni pizza").
4. Verify responses and adjust prompts or configurations as needed.
5. Deploy the workflow and integrate it with a messaging platform (e.g., Telegram, WhatsApp, or a website chatbot).

Conclusion
This n8n workflow enables a fully functional pizza ordering chatbot using OpenAI's GPT-3.5. Customers can view menus, place orders, and track their order status efficiently. You can further customize the chatbot by refining prompts, adding new features, or integrating with external databases for order management. 🚀 Happy automating!
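Each HTTP Request node implies a small request/response contract with your order backend. A minimal sketch of the "Get Order" step, with a hypothetical endpoint and response fields, just to show the shape of the data the chatbot reads back to the customer:

    // Hypothetical order-status lookup; substitute your own order API.
    async function getOrderStatus(orderId) {
      const res = await fetch(`https://pizza-api.example.com/orders/${orderId}`);
      if (!res.ok) throw new Error(`Order lookup failed: ${res.status}`);
      // Assumed response shape: { orderDate, pizzaType, quantity, status }
      const order = await res.json();

      // Flatten into a sentence the chatbot can hand back to the customer.
      return `Order ${orderId} (${order.quantity}x ${order.pizzaType}, placed ${order.orderDate}) is currently: ${order.status}.`;
    }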

By Irfan Handoko
7544

Build an All-Source Knowledge Assistant with Claude, RAG, Perplexity, and Drive

📜 Detailed n8n Workflow Description

Main Flow
The workflow operates through a three-step process that handles incoming chat messages with intelligent tool orchestration:
1. Message trigger: The "When chat message received" node triggers whenever a user message arrives and passes it directly to the Knowledge Agent for processing.
2. Agent orchestration: The Knowledge Agent serves as the central orchestrator, registering a comprehensive toolkit of capabilities:
   - LLM processing: Uses the Anthropic Chat Model with the claude-sonnet-4-20250514 model to craft final responses.
   - Memory management: Implements Postgres Chat Memory to save and recall conversation context across sessions.
   - Reasoning engine: Incorporates a Think tool to force internal chain-of-thought processing before taking any action.
   - Semantic search: Leverages the "General knowledge" vector store with OpenAI embeddings (1536-dimensional) and Cohere reranking for intelligent content retrieval.
   - Structured queries: Provides a "structured data" Postgres tool for executing queries on relational database tables.
   - Drive integration: Includes "search about any doc in google drive" functionality to locate specific file IDs.
   - File processing: Connects to the "Read File From GDrive" sub-workflow for fetching and processing various file formats.
   - External intelligence: Offers "Message a model in Perplexity" for accessing up-to-the-minute web information when internal knowledge proves insufficient.
3. Response generation: After invoking the Think process, the agent intelligently selects appropriate tools based on the query, integrates results from multiple sources, and returns a comprehensive Markdown-formatted answer to the user.

Persistent Context Management
The workflow maintains conversation continuity through Postgres Chat Memory, which automatically logs every user-agent exchange. This ensures long-term context retention without requiring manual intervention, allowing for sophisticated multi-turn conversations that build upon previous interactions.

Semantic Retrieval Pipeline
The semantic search system operates through a two-stage retrieval process:
- Embedding generation: Embeddings OpenAI converts textual content into high-dimensional vector representations.
- Relevance reranking: Reranker Cohere reorders search hits to prioritize the most contextually relevant results.
- Knowledge integration: Processed results feed into the "General knowledge" vector store, providing the agent with relevant internal knowledge snippets for enhanced response accuracy.

Google Drive File Processing
The file reading capability handles multiple formats through a structured sub-workflow:
1. Workflow initiation: The agent calls "Read File From GDrive" with the selected fileId parameter.
2. Sub-workflow activation: The "When Executed by Another Workflow" node activates the dedicated file processing sub-workflow.
3. Operation validation: The Operation node confirms the request type is readFile.
4. File retrieval: The "Download File1" node retrieves the binary file data from Google Drive.
5. Format-specific processing: The FileType node branches processing based on MIME type (see the sketch after this section):
   - PDF files: route through Extract from PDF → Get PDF Response to extract plain text content.
   - CSV files: process via Extract from CSV → Get CSV Response to obtain comma-delimited text data.
   - Image files: analyze using Analyse Image with GPT-4o-mini to generate visual descriptions.
   - Audio/video files: transcribe using Transcribe Audio with Whisper for text transcript generation.
6. Content integration: The extracted text content returns to the Knowledge Agent, which seamlessly weaves it into the final response.

External Search Capability
When internal knowledge sources prove insufficient, the workflow can access current public information through "Message a model in Perplexity", ensuring responses remain accurate and up to date with the latest available information.

Design Highlights
The workflow architecture incorporates several key design principles that enhance reliability and reusability:
- Forced reasoning: The mandatory Think step significantly reduces hallucinations and prevents tool misuse by requiring deliberate consideration before action.
- Template flexibility: The design is intentionally generic; organizations can replace the [your company] placeholders with their specific company name and integrate their own credentials for immediate deployment.
- Documentation integration: Sticky notes throughout the canvas serve as inline documentation for workflow creators and maintainers, providing context without affecting runtime performance.

System Benefits
With this comprehensive architecture, the assistant delivers powerful capabilities including long-term memory retention, semantic knowledge retrieval, multi-format file processing, and contextually rich responses tailored specifically for users at [your company]. The system balances sophisticated AI capabilities with practical business requirements, creating a robust foundation for enterprise-grade conversational AI deployment.
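The MIME-type branch in the file-processing steps is the crux of the sub-workflow. A minimal sketch of the routing decision; the handler names mirror the nodes described above, but the mapping is an illustration, not the exact node configuration:

    // Route a downloaded Google Drive file to the right extractor by MIME type.
    function routeByMimeType(mimeType) {
      if (mimeType === "application/pdf") return "extractPdfText";
      if (mimeType === "text/csv") return "extractCsvText";
      if (mimeType.startsWith("image/")) return "describeImageWithVisionModel";
      if (mimeType.startsWith("audio/") || mimeType.startsWith("video/")) {
        return "transcribeWithWhisper";
      }
      return "unsupportedFormat"; // fall through for anything unhandled
    }

    console.log(routeByMimeType("application/pdf")); // => extractPdfText
    console.log(routeByMimeType("audio/mpeg"));      // => transcribeWithWhisper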

By Paul
7076

Translate & repost Twitter threads in multiple languages with OpenAI

Twitter Thread (Flood) Translator & Poster

What it does
- Thread extraction: Automatically detects and extracts all tweets from a provided Twitter thread (flood) link.
- Translation: Translates each extracted tweet into your target language using OpenAI.
- Rewriting: Rewrites each translated tweet to maintain the original meaning while improving clarity or style.
- Automated posting: Posts the rewritten tweets as a new thread on Twitter using twitterapi.io, preserving the original thread structure.

How it works
1. Accepts a Twitter thread (flood) link as input.
2. Extracts all tweets from the thread in their original order.
3. Each tweet is sent to OpenAI for translation into your desired language.
4. The translated tweets are then rewritten for clarity and natural flow, while keeping the original meaning intact.
5. The processed tweets are automatically posted as a new thread on your Twitter account via twitterapi.io.

Setup Steps
1. Create a Notion database: Set up a database page in Notion to store and manage your Twitter links and workflow data.
2. Configure the Notion integration: Add the created database page ID to the Notion nodes in your workflow.
3. Set Twitter API credentials: Add your twitterapi.io API key to the relevant nodes.
4. Add Twitter account details: Enter your Twitter account username/email and password for authentication.
5. Set up OpenAI credentials: Provide your OpenAI API credentials to enable translation and rewriting.
6. Subworkflow integration: Create a separate workflow for the subworkflow logic and call it using the Execute Workflow node for modular automation.
7. Set the desired language and thread link: Change the target language and Twitter thread (flood) link directly in the Manual Trigger node to customize each run.

Benefits
- Ultra low cost: Total cost for a 15-tweet thread (flood) is just $0.016 USD ($0.015 for twitterapi.io + $0.001 for the OpenAI API). Actual cost may vary depending on the density of tweets in the thread.
- End-to-end automation: Go from thread extraction to translation, rewriting, and reposting, all in one workflow.
- Multilingual support: Effortlessly translate and republish Twitter threads in any supported language.

Note: Detailed configuration instructions and node explanations are included as sticky notes within the workflow canvas.

Ideal for:
- Content creators looking to reach new audiences by translating and republishing Twitter threads
- Social media managers automating multilingual content workflows
- Anyone wanting to streamline the process of thread extraction, translation, and posting

Notes
This workflow is not able to post images or videos to Twitter; it handles text-only threads.
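The translate-then-rewrite step is a single OpenAI chat completion per tweet. A minimal sketch against the standard chat completions endpoint; the model choice and prompt wording are illustrative, since the template's exact prompts live in its own nodes:

    // Translate a tweet and rewrite it for natural flow in one call.
    const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

    async function translateTweet(text, targetLanguage) {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // illustrative; use the model configured in the workflow
          messages: [
            {
              role: "system",
              content: `Translate the tweet into ${targetLanguage}, then rewrite it for clarity and natural flow. Keep the meaning intact and stay under 280 characters.`,
            },
            { role: "user", content: text },
          ],
        }),
      });
      const body = await res.json();
      return body.choices[0].message.content.trim();
    }

Running this over the extracted tweets in order, then posting each result as a reply to the previous one, preserves the thread structure.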

By enes cingoz
5275

Automatic Gmail categorization and labeling with AI

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Who is this for?
If your inbox is full of unread emails, this workflow is for you. Instead of reading through them one by one, let AI do the sorting. It reads your emails and flags only what needs action.

What does it solve?
This workflow reads your unread Gmail emails and uses AI to decide what's important and what's not. It labels emails that need your attention, identifies receipts and informational mail, and trashes spam. No more manual reading. Just an inbox that uses AI to take care of itself.

How it works
- Every hour, the workflow runs automatically.
- It searches for unread emails in your Gmail inbox.
- For each email:
  - It extracts the content and sends it to OpenAI.
  - The AI returns one of four labels: Action, Receipt, Informational, or Spam.
  - Based on the label, the email is marked with the appropriate label, or moved to trash if it is spam.
  - It marks the email as read once processed.

How to set up?
- Connect these services in your n8n credentials:
  - Gmail (OAuth2)
  - OpenAI (API key)
- Create the Gmail labels: In your Gmail account, create these labels exactly as written: Action, Receipt, and Informational. The workflow will apply these labels based on the AI classification.

How to customize this workflow to your needs
- Change the AI prompt to detect more types of emails, like Meeting or Newsletter (see the sketch below for the parsing side).
- Add more branches to the Switch node to apply custom logic.
- Change the schedule to fit your workflow. By default, it runs every hour, but you can update this in the Schedule Trigger node.
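Classification only works if the model's free-form reply is normalised into one of the labels the Switch node expects. A minimal sketch of that parsing step, assuming the prompt asks the model to answer with exactly one label:

    // Normalise the AI's output to one of the four labels the Switch node uses.
    const LABELS = ["Action", "Receipt", "Informational", "Spam"];

    function parseLabel(modelOutput) {
      const cleaned = modelOutput.trim().toLowerCase();
      const match = LABELS.find((label) => cleaned.includes(label.toLowerCase()));
      // Default to needing human attention rather than silently trashing mail.
      return match ?? "Action";
    }

    console.log(parseLabel("Label: Receipt"));        // => Receipt
    console.log(parseLabel("this looks like spam!")); // => Spam

Adding a new category like Meeting means adding it to both the prompt and this label list, plus a new Switch branch.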

By Matt Chong | n8n Creator
5178

Attach a default error handler to all active workflows

How it works:
Did you ever miss any errors in your workflow executions? I did! And I usually only realised a few days or weeks later. 🙈 This template attaches a default error workflow to all your active workflows. From now on, you'll receive a notification whenever a workflow errors, and you'll have peace of mind again. It runs every night at midnight so you never have to think about this again. Of course, you can also run it manually.

Steps to set up:
- Update the Gmail node with your own email address, or replace it with any other notification mechanism. You can also use Slack, Discord, Telegram or text messages.
- Activate the workflow.
- Relax.

Caveats:
I did not add any rate limiting, so if you have a workflow that runs very frequently and it errors... well, let's say your mailbox will not be a nice place anymore.

Ideas for improvement?
If you have any suggestions for improvement, feel free to reach out to me at bart@n8n.io. Enjoy!
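Attaching the handler comes down to setting each active workflow's errorWorkflow setting via the n8n API. A minimal sketch of that idea; the endpoint shapes follow the v1 public API but should be verified against your n8n version, and ERROR_WORKFLOW_ID is a hypothetical placeholder for your notification workflow's ID:

    // Point every active workflow at a shared error workflow.
    const N8N_URL = "https://n8n.example.com"; // hypothetical instance URL
    const ERROR_WORKFLOW_ID = "123";           // hypothetical workflow ID
    const headers = {
      "X-N8N-API-KEY": process.env.N8N_API_KEY,
      "Content-Type": "application/json",
    };

    async function attachErrorHandler() {
      const res = await fetch(`${N8N_URL}/api/v1/workflows?active=true`, { headers });
      const { data: workflows } = await res.json();

      for (const wf of workflows) {
        if (wf.settings?.errorWorkflow === ERROR_WORKFLOW_ID) continue; // already set
        const settings = { ...wf.settings, errorWorkflow: ERROR_WORKFLOW_ID };
        await fetch(`${N8N_URL}/api/v1/workflows/${wf.id}`, {
          method: "PUT",
          headers,
          // The v1 update endpoint expects the core workflow fields.
          body: JSON.stringify({ name: wf.name, nodes: wf.nodes, connections: wf.connections, settings }),
        });
      }
    }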

By bartv
4625

AI-powered web research in Google Sheets with GPT and Bright Data

🔍 AI-Powered Web Research in Google Sheets with Bright Data

📋 Overview
Transform any Google Sheets cell into an intelligent web scraper! Type =BRIGHTDATA("cell", "search prompt") and get an AI-filtered result from virtually any website in ~20 seconds.

What happens automatically:
- AI optimizes your search query
- Bright Data scrapes the web (bypasses bot detection)
- AI analyzes and filters the results
- Returns clean data directly to your cell
- Completes in <25 seconds

Cost: ~$0.02-0.05 per search | Time saved: 3-5 minutes per search

---

👥 Who's it for
- Market researchers needing competitive intelligence
- E-commerce teams tracking prices
- Sales teams doing lead prospecting
- SEO specialists gathering content research
- Real estate agents monitoring listings
- Anyone tired of manual copy-paste

---

⚙️ How it works
1. Webhook call - the Google Sheets function sends a POST request
2. Data preparation - organizes the input structure
3. AI query optimization - GPT-4.1 Mini refines the search query
4. Web scraping - Bright Data fetches data while bypassing blocks
5. AI analysis - GPT-4o Mini filters and summarizes the results
6. Response - returns plain text to your cell
7. Logging - updates logs for monitoring

---

🛠️ Setup Instructions
Time to deploy: 20 minutes

Requirements
- n8n instance with a public URL
- Bright Data account + API key
- OpenAI API key
- Google account for Apps Script

Part 1: n8n Workflow Setup
1. Import this template into your n8n instance
2. Configure the Webhook node:
   - Copy your webhook URL: https://n8n.yourdomain.com/webhook/brightdata-search
   - Set authentication: Header Auth
   - Set API key: 12312346 (or create your own)
3. Add OpenAI credentials to the AI nodes
4. Configure Bright Data: add API credentials
5. Configure the output language: manually edit the "Set Variables" node
6. Test the workflow with a manual execution
7. Activate the workflow

Part 2: Google Sheets Function
1. Open your Google Sheet → Extensions → Apps Script
2. Paste this code:

    function BRIGHTDATA(prompt, source) {
      if (!prompt || prompt === "") {
        return "❌ Must enter prompt";
      }
      source = source || "google";

      // Update with YOUR webhook URL
      const N8N_WEBHOOK_URL = "https://your-n8n-domain.com/webhook/brightdata-search";
      // Update with YOUR password
      const API_KEY = "12312346";

      let spreadsheetId, sheetName, cellAddress;
      try {
        const sheet = SpreadsheetApp.getActiveSheet();
        const activeCell = sheet.getActiveCell();
        spreadsheetId = SpreadsheetApp.getActiveSpreadsheet().getId();
        sheetName = sheet.getName();
        cellAddress = activeCell.getA1Notation();
      } catch (e) {
        return "❌ Cannot identify cell";
      }

      const payload = {
        prompt: prompt,
        source: source.toLowerCase(),
        context: {
          spreadsheetId: spreadsheetId,
          sheetName: sheetName,
          cellAddress: cellAddress,
          timestamp: new Date().toISOString()
        }
      };

      const options = {
        method: "post",
        contentType: "application/json",
        payload: JSON.stringify(payload),
        muteHttpExceptions: true,
        headers: {
          "Accept": "text/plain",
          "key": API_KEY
        }
      };

      try {
        const response = UrlFetchApp.fetch(N8N_WEBHOOK_URL, options);
        const responseCode = response.getResponseCode();
        if (responseCode !== 200) {
          Logger.log("Error response: " + response.getContentText());
          return "❌ Error " + responseCode;
        }
        return response.getContentText();
      } catch (error) {
        Logger.log("Exception: " + error.toString());
        return "❌ Connection error: " + error.toString();
      }
    }

    function doGet(e) {
      return ContentService.createTextOutput(JSON.stringify({
        status: "alive",
        message: "Apps Script is running",
        timestamp: new Date().toISOString()
      })).setMimeType(ContentService.MimeType.JSON);
    }

3. Update N8N_WEBHOOK_URL with your webhook
4. Update API_KEY with your password
5. Save (Ctrl+S / Cmd+S) - important!
6. Close the Apps Script editor

---

💡 Usage Examples
=BRIGHTDATA("C3", "What is the current price of the product?")
=BRIGHTDATA("D30", "What is the size of this company?")
=BRIGHTDATA("A4", "Is this company hiring Developers?")

---

🎨 Customization

Easy tweaks
- AI models - switch to GPT-4o for better optimization
- Response format - modify the prompt for specific outputs
- Speed - optimize AI prompts to reduce time
- Language - change prompts for any language

Advanced options
- Implement rate limiting
- Add data validation
- Create an async mode for long queries
- Add Slack notifications

---

🚀 Pro Tips
- Be specific - "What is iPhone 15 Pro 256GB US price?" beats "What is iPhone price?"
- Speed matters - keep prompts concise (30s timeout limit)
- Monitor costs - track Bright Data usage
- Debug - check workflow logs for errors

---

⚠️ Important Notes
- Timeout: 30-second Google Sheets limit (aim for <20s)
- Plain text only: no JSON responses
- Costs: monitor Bright Data usage at console.brightdata.com
- Security: keep API keys secret
- No browser storage: don't use localStorage/sessionStorage

---

🔧 Troubleshooting
| Error | Solution |
|-------|----------|
| "Exceeded maximum execution time" | Optimize AI prompts or use async mode |
| "Could not fetch data" | Verify Bright Data credentials |
| Empty cell | Check n8n logs for AI parsing issues |
| Broken characters | Verify UTF-8 encoding in the webhook node |

---

📚 Resources
- Bright Data API docs
- n8n webhook documentation
- Google Apps Script reference

---

Built with ❤️ by Elay Guez
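One contract worth making explicit: the Apps Script above displays the raw response body in the cell, so the n8n side must answer with plain text, not JSON. A minimal sketch of shaping the final output in an n8n Code node before the Respond to Webhook node returns it; the aiSummary field name is hypothetical:

    // n8n Code node: flatten the AI step's output into a plain-text answer.
    const answer = $json.aiSummary ?? "No result";
    return [{ json: { text: String(answer).trim() } }];

The Respond to Webhook node would then be configured to respond with text, referencing this field.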

By Elay Guez
3841

Send a message to Telegram on a new item saved to Reader

What is it
This workflow builds a simple bot that sends a message to a Telegram channel every time a new item is saved to the Reader. Thanks to existing n8n nodes, the workflow can easily be modified to send the notification another way.

Warning: This is only for folks who already have access to the Reader; it won't work if you don't.

Also, this workflow uses a file to store the last update time, in order to not sync everything every time.

Setup
The Config node:
- It contains the Telegram channel ID.
- It also contains the file used as storage.

To get the header auth value, you have to:
1. Go to the Reader.
2. Open the devtools: Option + ⌘ + J (on macOS), or Shift + CTRL + J (on Windows/Linux).
3. Go to Network and find a profile_details/ request, then click on it.
4. Go to Request Headers.
5. Copy the value for "Cookie".
6. In n8n, set the name of the Header Auth account to Cookie and the value to the one you copied before.
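The last-update-time file mentioned above is a small piece of state persisted between runs. A minimal sketch of the idea, with a hypothetical file path and field name:

    // Remember when we last synced so each run only handles new saved items.
    const fs = require("fs");
    const STATE_FILE = "/tmp/reader-last-sync.json"; // hypothetical path

    function getLastSync() {
      try {
        return JSON.parse(fs.readFileSync(STATE_FILE, "utf8")).lastSync;
      } catch {
        return new Date(0).toISOString(); // first run: consider everything new
      }
    }

    function setLastSync(isoTimestamp) {
      fs.writeFileSync(STATE_FILE, JSON.stringify({ lastSync: isoTimestamp }));
    }

    // Usage: fetch items saved after getLastSync(), send one Telegram message
    // per item, then call setLastSync(new Date().toISOString()).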

By Nicolas
3186

Create screenshots with uProc, save to Dropbox and send by email

Do you want to create a website screenshot without browser extensions? This workflow creates screenshots of any website using the uProc "Get Screenshot by URL" tool and sends an email with the screenshots.

You need to add your credentials (your real email and API key, found in the Integration section) to n8n. The "Create Web + Email Item" node can be replaced by any other supported service returning Website and Email values, like Google Sheets, Mailchimp, MySQL, or Typeform.

Every uProc node returns an image URL of the captured website. This generated URL remains on our server for only 24 hours.

You can set up the uProc node with several parameters:
- width: choose one of the predefined values to generate the screenshot, or set any custom width you want.
- full-page: the tool returns a screenshot of the website from top to bottom at the defined width.

In our workflow, we generate two screenshots:
1. One screenshot of 640 pixels width.
2. One full-page screenshot of 640 pixels width.

Screenshots are downloaded by the "Get File" nodes and saved to the screenshots folder in Dropbox. Finally, we use the Amazon SES node to send an HTML email with both screenshots to the specified email address.
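The two parameter sets are easiest to see side by side. An illustrative sketch of the node configuration as plain objects; the exact parameter names (fullPage here) are assumptions and should be checked against the uProc tool's settings:

    // The two screenshot jobs described above, as illustrative config objects.
    const screenshotJobs = [
      { url: "https://example.com", width: 640, fullPage: false }, // viewport-only shot
      { url: "https://example.com", width: 640, fullPage: true },  // full-page, top-to-bottom
    ];

    // Each job yields an image URL that stays on uProc's server for 24 hours,
    // so download it promptly (here, into Dropbox) before it expires.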

By Miquel Colomer
2453