
IT ops AI SlackBot workflow - chat with your knowledge base

Video Demo: Click here to see a video of this workflow in action.

Summary Description:

The "IT Department Q&A Workflow" is designed to streamline and automate the handling of IT-related inquiries from employees through Slack. When an employee sends a direct message (DM) to the IT department's Slack channel, the workflow is triggered. The initial step involves the "Receive DMs" node, which listens for new messages. Upon receiving a message, the workflow verifies the webhook by responding to Slack's challenge request, ensuring that the communication channel is active and secure.

Once the webhook is verified, the workflow checks whether the message sender is a bot using the "Check if Bot" node. If the sender is identified as a bot, the workflow stops to avoid unnecessary actions. If the sender is a human, the workflow sends an acknowledgment message back to the user, confirming that their query is being processed. This is handled by the "Send Initial Message" node, which posts a simple message like "On it!" to the user's Slack channel.

The core functionality of the workflow is powered by the "AI Agent" node, which uses the OpenAI GPT-4 model to interpret and respond to the user's query. This AI-driven node processes the text of the received message and generates an appropriate response based on the context and information available. To maintain conversation context, the "Window Buffer Memory" node stores the last five messages from each user, ensuring that the AI agent can provide coherent and contextually relevant answers. Additionally, the workflow includes a custom Knowledge Base (KB) tool (see that tool template here) that integrates with the AI agent, allowing it to search the company's internal KB for relevant information.

After generating the response, the workflow cleans up the initial acknowledgment message using the "Delete Initial Message" node to keep the conversation thread tidy. Finally, the generated response is sent back to the user via the "Send Message" node, providing them with the information or assistance they requested. This workflow effectively automates the IT support process, reducing response times and improving efficiency.

To quickly deploy the Knowledge Ninja app in Slack, use the app manifest below, and don't forget to replace the two sample URLs:

```json
{
  "display_information": {
    "name": "Knowledge Ninja",
    "description": "IT Department Q&A Workflow",
    "background_color": "005e5e"
  },
  "features": {
    "bot_user": {
      "display_name": "IT Ops AI SlackBot Workflow",
      "always_online": true
    }
  },
  "oauth_config": {
    "redirect_urls": [
      "Replace everything inside the double quotes with your slack redirect oauth url, for example: https://n8n.domain.com/rest/oauth2-credential/callback"
    ],
    "scopes": {
      "user": ["search:read"],
      "bot": [
        "chat:write",
        "chat:write.customize",
        "groups:history",
        "groups:read",
        "groups:write",
        "groups:write.invites",
        "groups:write.topic",
        "im:history",
        "im:read",
        "im:write",
        "mpim:history",
        "mpim:read",
        "mpim:write",
        "mpim:write.topic",
        "usergroups:read",
        "usergroups:write",
        "users:write",
        "channels:history"
      ]
    }
  },
  "settings": {
    "event_subscriptions": {
      "request_url": "Replace everything inside the double quotes with your workflow webhook url, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "bot_events": ["message.im"]
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```
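The webhook verification mentioned above is Slack's one-time URL verification exchange: Slack POSTs a `url_verification` event and expects the `challenge` value echoed back. The template handles this with its webhook nodes; the following is only a minimal TypeScript sketch of that exchange, with the plain Node HTTP server and port being assumptions for illustration.

```typescript
// Minimal sketch: answering Slack's url_verification challenge with a plain Node HTTP server.
// In the n8n template this is done by the "Receive DMs" webhook path; this only shows the exchange.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = body ? JSON.parse(body) : {};
    if (event.type === "url_verification") {
      // Slack expects the challenge value echoed back to confirm the endpoint.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ challenge: event.challenge }));
      return;
    }
    // Other events (e.g. message.im) would be passed on to the rest of the workflow.
    res.writeHead(200);
    res.end();
  });
});

server.listen(3000); // hypothetical local port
```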

By Angel Menendez
39013

Lead workflow: Yelp & Trustpilot scraping + OpenAI analysis via BrightData

πŸ›’ Lead Workflow: Yelp & Trustpilot Scraping + OpenAI Analysis via BrightData

> Description: Automated lead generation workflow that scrapes business data from Yelp and Trustpilot based on location and category, analyzes credibility, and sends personalized outreach emails using AI.

> ⚠️ Important: This template requires a self-hosted n8n instance to run.

πŸ“‹ Overview
This workflow provides an automated lead generation solution that identifies high-quality prospects from Yelp and Trustpilot, analyzes their credibility through reviews, and sends personalized outreach emails. Perfect for digital marketing agencies, sales teams, and business development professionals.

✨ Key Features
- 🎯 Smart Location Analysis: AI breaks down cities into sub-locations for comprehensive coverage
- πŸ› Yelp Integration: Scrapes business details using BrightData's Yelp dataset
- ⭐ Trustpilot Verification: Validates business credibility through review analysis
- πŸ“Š Data Storage: Automatically saves results to Google Sheets
- πŸ€– AI-Powered Outreach: Generates personalized emails using Claude AI
- πŸ“§ Automated Sending: Sends emails directly through Gmail integration

πŸ”„ How It Works
1. User Input: Submit location, country, and business category through a form
2. AI Location Analysis: Gemini AI identifies sub-locations within the specified area
3. Yelp Scraping: BrightData extracts business information from multiple locations
4. Data Processing: Cleans and stores business details in Google Sheets
5. Trustpilot Verification: Scrapes reviews and company details for a credibility check
6. Email Generation: Claude AI creates personalized outreach messages
7. Automated Outreach: Sends emails to qualified prospects via Gmail

πŸ“Š Data Output

| Field | Description | Example |
|---------------|------------------------------------|----------------------------------|
| Company Name | Business name from Yelp/Trustpilot | Best Local Restaurant |
| Website | Company website URL | https://example-restaurant.com |
| Phone Number | Business contact number | (555) 123-4567 |
| Email | Business email address | demo@example.com |
| Address | Physical business location | 123 Main St, City, State |
| Rating | Overall business rating | 4.5/5 |
| Categories | Business categories/tags | Restaurant, Italian, Fine Dining |

πŸš€ Setup Instructions
⏱️ Estimated Setup Time: 10–15 minutes

Prerequisites
- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- BrightData account with Yelp and Trustpilot datasets
- Google Gemini API access
- Anthropic API key for Claude
- Gmail account for sending emails

Step 1: Import the Workflow
- Copy the JSON workflow code
- In n8n: Workflows β†’ + Add workflow β†’ Import from JSON
- Paste the JSON and click Import

Step 2: Configure Google Sheets Integration
- Create two Google Sheets:
  - Yelp data: Name, Categories, Website, Address, Phone, URL, Rating
  - Trustpilot data: Company Name, Email, Phone Number, Address, Rating, Company About
- Copy the Sheet IDs from their URLs
- In n8n: Credentials β†’ + Add credential β†’ Google Sheets OAuth2 API
- Complete the OAuth setup and test the connection
- Update all Google Sheets nodes with your Sheet IDs

Step 3: Configure BrightData
- Set up BrightData credentials in n8n
- Replace the API token with BRIGHTDATA_API_KEY
- Verify dataset access:
  - Yelp dataset: gd_lgugwl0519h1p14rwk
  - Trustpilot dataset: gd_lm5zmhwd2sni130p
- Test the connections

Step 4: Configure AI Models
- Google Gemini (Location Analysis): add Google Gemini API credentials and configure the model models/gemini-1.5-flash
- Claude AI (Email Generation): add Anthropic API credentials and configure the model claude-sonnet-4-20250514

Step 5: Configure Gmail Integration
- Set up Gmail OAuth2 credentials in n8n
- Update the "Send Outreach Email" node
- Test email sending

Step 6: Test & Activate
- Activate the workflow
- Test with sample data: Country: United States, Location: Dallas, Category: Restaurants
- Verify that data appears in Google Sheets
- Check that emails are generated and sent

πŸ“– Usage Guide

Starting a Lead Generation Campaign
1. Access the form trigger URL
2. Enter your target criteria: Country (target country), Location (city or region), Category (business type, e.g. restaurants)
3. Submit the form to start the process

Monitoring Results
- Yelp Data Sheet: view scraped business information
- Trustpilot Sheet: review credibility data
- Gmail Sent Items: track outreach emails sent

πŸ”§ Customization Options

Modifying Email Templates - edit the "AI Generate Email Content" node to customize:
- Email tone and style
- Services mentioned
- Call-to-action messages
- Branding elements

Adjusting Data Filters
- Modify rating thresholds
- Set minimum review counts
- Add geographic restrictions
- Filter by business size

Scaling the Workflow
- Increase batch sizes
- Add delays between requests
- Use parallel processing
- Add error handling

🚨 Troubleshooting

Common Issues & Solutions
- BrightData Connection Failed - Cause: invalid API credentials or dataset access. Solution: verify credentials and dataset permissions.
- No Data Extracted - Cause: invalid location or changed page structure. Solution: verify location names and test other categories.
- Gmail Authentication Issues - Cause: expired OAuth tokens. Solution: re-authenticate and check permissions.
- AI Model Errors - Cause: API quota exceeded or invalid keys. Solution: check usage limits and API keys.

Performance Optimization
- Rate Limiting: add delays
- Error Handling: retry failed requests
- Data Validation: check for malformed data
- Memory Management: process in smaller batches

πŸ“ˆ Use Cases & Examples
- Digital Marketing Agency Lead Generation - Goal: find businesses needing marketing. Target: restaurants, retail stores. Approach: focus on well-rated businesses with a weak online presence.
- B2B Sales Prospecting - Goal: find software solution clients. Target: growing businesses. Approach: focus on recent positive reviews.
- Partnership Development - Goal: find complementary businesses. Target: established businesses. Approach: focus on reputation and satisfaction scores.

⚑ Performance & Limits

Expected Performance
- Processing Time: 5–10 minutes/location
- Data Accuracy: 90%+
- Success Rate: 85%+
- Daily Capacity: 100–500 leads

Resource Usage
- API Calls: ~10–20 per business
- Storage: minimal (Google Sheets)
- Execution Time: 3–8 minutes per 10 businesses
- Network Usage: ~5–10 MB per business

🀝 Support & Community

Getting Help
- n8n Community Forum: community.n8n.io
- Docs: docs.n8n.io
- BrightData Support: via dashboard

Contributing
- Share improvements
- Report issues and suggestions
- Create industry-specific variations
- Document best practices

> πŸ”’ Privacy & Compliance: Ensure GDPR/CCPA compliance. Always respect robots.txt and the terms of service of scraped sites.

---

🎯 Ready to Generate Leads! This workflow provides a complete solution for automated lead generation and outreach. Customize it to fit your needs and start building your pipeline today!

For any questions or support, please contact: πŸ“§ info@incrementors.com or fill out this form: Contact Us
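As an illustration of the data this workflow produces and filters, the sketch below models one lead using the columns from the Data Output table and applies the kind of rating threshold described under "Adjusting Data Filters". It is not part of the template; the 4.0 cutoff and the field names as TypeScript identifiers are assumptions.

```typescript
// Illustrative sketch: one scraped lead shaped after the "Data Output" table, plus a
// rating filter of the sort described in "Adjusting Data Filters". Threshold is assumed.
interface Lead {
  companyName: string;   // e.g. "Best Local Restaurant"
  website: string;       // e.g. "https://example-restaurant.com"
  phoneNumber: string;   // e.g. "(555) 123-4567"
  email: string;         // e.g. "demo@example.com"
  address: string;       // e.g. "123 Main St, City, State"
  rating: number;        // e.g. 4.5 (out of 5)
  categories: string[];  // e.g. ["Restaurant", "Italian", "Fine Dining"]
}

// Keep only leads above a minimum rating (and with an email) before generating outreach.
function qualifyLeads(leads: Lead[], minRating = 4.0): Lead[] {
  return leads.filter((lead) => lead.rating >= minRating && lead.email.length > 0);
}
```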

By Incrementors
12785

Create a Google Analytics data report with AI and send it via email and Telegram

What this workflow does

This workflow retrieves Google Analytics data from the last 7 days and from the same period in the previous year. The data is then formatted by AI as a table, analyzed, and given a short summary. The summary is sent by email to a desired address and, shortened and summarized again, sent to a Telegram account.

The workflow runs in the following sequence:
1. Time trigger (e.g. every Monday at 7 a.m.)
2. Retrieval of Google Analytics data from the last 7 days
3. Mapping and summary of that data
4. Retrieval of Google Analytics data from the same 7-day period of the previous year
5. Mapping and summary of that data
6. Preparation in tabular form and brief analysis by AI
7. Sending the report as an email
8. Preparation of a short version by AI for Telegram (optional)
9. Sending it as a Telegram message

Requirements

The following access is required for the workflow:
- Google Analytics (via the Google Analytics API): Documentation
- AI API access (e.g. via OpenAI, Anthropic, Google, or Ollama)
- SMTP credentials (for sending the email)
- Telegram credentials (optional, for sending the Telegram message): Documentation

Feel free to contact me via LinkedIn if you have any questions!
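To make the year-over-year comparison concrete, here is a small TypeScript sketch of computing the two date ranges the description mentions (last 7 days, and the same window one year earlier) in the YYYY-MM-DD form the Google Analytics Data API expects. This is not taken from the template, where the Google Analytics nodes handle date ranges themselves; the "ending yesterday" convention is an assumption.

```typescript
// Sketch: the two comparison windows described above, formatted as YYYY-MM-DD.
function toIsoDate(d: Date): string {
  return d.toISOString().slice(0, 10);
}

function lastSevenDays(reference: Date = new Date()) {
  const end = new Date(reference);
  end.setDate(end.getDate() - 1); // up to and including yesterday (assumption)
  const start = new Date(end);
  start.setDate(start.getDate() - 6); // 7 days in total

  const shiftYear = (d: Date) => {
    const copy = new Date(d);
    copy.setFullYear(copy.getFullYear() - 1);
    return copy;
  };

  return {
    current: { startDate: toIsoDate(start), endDate: toIsoDate(end) },
    previousYear: { startDate: toIsoDate(shiftYear(start)), endDate: toIsoDate(shiftYear(end)) },
  };
}

console.log(lastSevenDays()); // { current: {...}, previousYear: {...} }
```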

By Friedemann Schuetz
12242

Document analysis & chatbot creation with Llama Parser, Gemini LLM & Pinecone DB

πŸ“„ Description

This automation workflow enables users to upload files via an n8n form, automatically analyzes the content using Google Gemini agents, and delivers the analyzed results via email along with a chatbot link. The system leverages the Llama Cloud API, Google Gemini LLM, the Pinecone vector database, and Gmail to provide a seamless, multilingual content analysis experience.

βœ… Prerequisites
Before setting up this workflow, ensure the following are in place:
- An active n8n instance
- Access to the Llama Cloud API
- Google Gemini LLM API keys (for the Translator & Analyzer agents)
- A Pinecone account with an active index
- A Gmail account with API access configured
- Basic knowledge of n8n workflow setup

βš™οΈ Setup Instructions
1. Deploy the n8n Form: create a public-facing form using n8n and configure it to accept file uploads and the user's email address.
2. File Preprocessing: store the uploaded files temporarily, then organize and preprocess them as needed.
3. Content Extraction using the Llama Cloud API: feed the files into the Llama Cloud API, then extract and parse the content for further processing.
4. Translation (if required): use a Translator Agent (Google Gemini) to check whether the content is in English and translate it if not.
5. Content Analysis: forward the (translated) content to the Analyzer Agent (Google Gemini) and perform a deep analysis to extract insights.
6. Vector Storage in Pinecone: store both the parsed/translated content and the analyzed content as embeddings in Pinecone for chatbot use.
7. User Notification via Gmail: send the analyzed content and the chatbot link to the user's provided email address using the Gmail API.

🧩 Customization Guidance
- To add more languages: update the translation logic to include additional language support.
- To modify analysis depth: adjust the prompts sent to the Gemini Analyzer Agent.
- To change the chatbot behavior: retrain or reconfigure the chatbot to use the new Pinecone index contextually.

πŸ” Workflow Summary
1. User uploads files and an email address via the n8n form.
2. Files are parsed using the Llama Cloud API.
3. Content is translated (if needed) by the Gemini Translator Agent.
4. Translated content is analyzed by the Gemini Analyzer Agent.
5. Parsed and analyzed data is stored in Pinecone.
6. The user receives an email with the analyzed results and a chatbot link.
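For a rough picture of the vector-storage step, here is a TypeScript sketch of upserting one embedded chunk with the official @pinecone-database/pinecone client. In the template itself this is handled by n8n's Pinecone nodes; the index name, record id, and the idea of storing the chunk text as metadata are assumptions for illustration.

```typescript
// Sketch of step 6 ("Vector Storage in Pinecone") done directly with the Pinecone SDK.
import { Pinecone } from "@pinecone-database/pinecone";

async function storeChunk(id: string, embedding: number[], text: string) {
  const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pc.index("document-analysis"); // hypothetical index name

  await index.upsert([
    {
      id,                 // e.g. "upload-42-chunk-3" (made up)
      values: embedding,  // embedding produced for the (translated) chunk
      metadata: { text }, // keep the source text so the chatbot can quote it
    },
  ]);
}
```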

By pavith
4101

Receive and analyze emails with rules in Sublime Security

This n8n workflow provides a comprehensive automation solution for processing email attachments, targeting enhanced security for organizations that use platforms like Outlook. It starts with the IMAP node, which ingests emails and identifies those with .eml attachments. Once an email with an attachment is ingested, the workflow moves to a conditional operation that checks for the presence of attachments. If an attachment is found, the binary data is moved and converted to JSON format, preparing it for further analysis. This careful approach to detecting attachments is crucial for maintaining a robust security posture, allowing potentially malicious content to be identified and handled proactively.

In the next stage, the workflow leverages Sublime Security to analyze the email attachment. The binary file is scanned for threats, and the results are split into matched and unmatched data. This not only speeds up threat detection but also keeps the output compatible with other systems, such as Slack, resulting in a smooth and efficient workflow. The automation emphasizes operational efficiency with minimal user involvement, strengthening the organization's defense against cyber threats.

The final phase prepares the output for a Slack report. Whether or not a threat is detected, n8n ensures that stakeholders are informed immediately by dispatching comprehensive reports or notifications to Slack channels, promoting transparency and prompt action within the team.
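The "matched vs. unmatched" split described above can be pictured with the short TypeScript sketch below. It is illustrative only: the RuleResult shape is hypothetical and Sublime Security's actual analysis response schema may differ.

```typescript
// Illustrative sketch: partitioning analysis results into matched and unmatched rules,
// which then drives whether the Slack report is a threat alert or an all-clear notice.
interface RuleResult {
  ruleName: string;
  matched: boolean;
  severity?: string;
}

function splitResults(results: RuleResult[]) {
  const matched = results.filter((r) => r.matched);
  const unmatched = results.filter((r) => !r.matched);
  return {
    matched,
    unmatched,
    threatDetected: matched.length > 0, // decides which Slack message gets sent
  };
}
```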

By n8n Team
2217

Summarize YouTube transcripts in any language with Google Gemini & Google Docs

YouTube Transcript Summarization in Any Language for Social Media

This n8n workflow automates the process of:
1. Retrieving YouTube Video Transcripts: it fetches the transcript for any YouTube video URL provided, using the YouTube Transcript API from RapidAPI.
2. Generating a Concise Summary in Any Language: the workflow uses Google Gemini (PaLM) to create a concise summary of the transcript in the language specified by the user (e.g. English, Spanish, etc.).
3. Storing the Summary in Google Docs: the generated summary is inserted into a predefined Google Document, making it easy to share or edit.

Features:
- Language Flexibility: summaries are created in the desired language.
- Fully Automated: from fetching the transcript to updating Google Docs, the process is fully automated.
- Social Media Ready: the summary is formatted and stored in a Google Doc, ready for use in social media posts.

This workflow integrates with the YouTube Transcript API via RapidAPI, allowing you to easily fetch video transcripts and summarize them with AI. The entire process is automated and seamless.

Powered by RapidAPI - API used: YouTube Transcript API via RapidAPI to get the transcript data.

---

Benefits:
- Saves Time: automates the transcript summarization process, eliminating the need for manual content extraction and summarization.
- Customizable Language Support: provides summaries in any language, enabling accessibility and engagement for a global audience.
- Streamlined Content Creation: automatically generates concise, engaging summaries that are ready for social media use.
- Google Docs Integration: saves summaries directly into a Google Doc for easy sharing, editing, and content management.

---

Challenges Addressed:
- Manual Transcript Extraction - Problem: manually transcribing and summarizing YouTube videos for social media can be time-consuming and error-prone. Solution: this workflow fully automates the process using the YouTube Transcript API, saving hours of manual work.
- Lack of Language Support in Summaries - Problem: many automated tools only summarize content in a single language, limiting their accessibility. Solution: with language flexibility, the workflow creates summaries in the language of your choice, helping you cater to diverse audiences.
- Inconsistent Video Quality & Transcript Accuracy - Problem: not all YouTube videos have well-structured or accurate transcripts, leading to incomplete or inaccurate summaries. Solution: the workflow can process and format even imperfect transcripts, ensuring that the generated summaries are still accurate and useful.
- Managing Content Across Platforms - Problem: transcripts and summaries often need to be stored in multiple locations for social media posts, which can be cumbersome. Solution: the workflow integrates with Google Docs to automatically store and manage summaries in one place, making it easier to share and reuse content.
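To show the shape of the transcript-fetching step, here is a hedged TypeScript sketch of calling a RapidAPI-hosted transcript endpoint. The host name, path, query parameter, and response shape below are placeholders, not the template's exact configuration; check the endpoint details on your RapidAPI subscription page and adjust accordingly. The X-RapidAPI-Key/X-RapidAPI-Host headers are RapidAPI's standard authentication headers.

```typescript
// Sketch of fetching a transcript through RapidAPI (endpoint details are hypothetical).
async function fetchTranscript(videoId: string): Promise<string> {
  const host = "youtube-transcript-api.example.p.rapidapi.com"; // hypothetical host
  const response = await fetch(`https://${host}/transcript?video_id=${videoId}`, {
    headers: {
      "X-RapidAPI-Key": process.env.RAPIDAPI_KEY ?? "",
      "X-RapidAPI-Host": host,
    },
  });
  if (!response.ok) throw new Error(`Transcript request failed: ${response.status}`);

  // Many transcript APIs return an array of { text, start, duration } segments.
  const segments: Array<{ text: string }> = await response.json();
  return segments.map((s) => s.text).join(" ");
}
```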

By Sk developer
2050

Convert RSS feeds into a podcast with Google Gemini, Kokoro TTS, and FFmpeg

🎧 Daily RSS Digest & Podcast Generation

This workflow automates the creation of a daily sports podcast from your favorite news sources. It fetches articles, uses AI to write a digest and a two-person dialogue, and produces a single, merged audio file with Kokoro TTS, ready for listening.

✨ How it works:
- πŸ“° Fetch & Filter Daily News: the workflow triggers daily, fetches articles from your chosen RSS feeds, and filters them to keep only the most recent content.
- ✍️ Generate AI Digest & Script: using Google Gemini, it first creates a written summary of the day's news. A second AI agent then transforms this news into an engaging, conversational podcast script between two distinct AI speakers.
- πŸ—£οΈ Generate Voices in Chunks: the script is split into individual lines of dialogue. The workflow then loops through each line, calling a Text-to-Speech (TTS) API to generate a separate audio file (an MP3 chunk) for each part of the conversation.
- πŸŽ›οΈ Merge Audio with FFmpeg: after all the audio chunks are created and saved locally, a command-line script generates a list of all the files and uses FFmpeg to losslessly merge them into a single, seamless MP3 file. All temporary files are then deleted.
- πŸ“€ Send the Final Podcast: the final, merged MP3 is read from the server and delivered directly to your Telegram chat with a dynamic, dated filename.

You can modify:
- πŸ“° The RSS feeds, to use any news source you want.
- πŸ€– The AI prompts, to change the tone, language, or style of the digest and podcast.
- πŸŽ™οΈ The TTS voices used for the two speakers.
- πŸ“« The final delivery method (e.g. send to Discord, save to Google Drive, etc.).

Perfect for creating a personalized, hands-free news briefing to listen to on your commute.

Inspired by: https://n8n.io/workflows/6523-convert-newsletters-into-ai-podcasts-with-gpt-4o-mini-and-elevenlabs/
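The lossless merge step relies on FFmpeg's concat demuxer with stream copy. The TypeScript sketch below reproduces that idea outside n8n (in the template it runs as a command-line script); the file paths and list-file location are example values.

```typescript
// Sketch of the "Merge Audio with FFmpeg" step: join MP3 chunks without re-encoding.
import { writeFileSync } from "node:fs";
import { execFileSync } from "node:child_process";

function mergeChunks(chunkPaths: string[], outputPath: string): void {
  // The concat demuxer reads a text file with one `file '<path>'` line per input.
  const listFile = "/tmp/podcast-chunks.txt";
  writeFileSync(listFile, chunkPaths.map((p) => `file '${p}'`).join("\n"));

  execFileSync("ffmpeg", [
    "-y",             // overwrite the output if it already exists
    "-f", "concat",
    "-safe", "0",     // allow absolute paths in the list file
    "-i", listFile,
    "-c", "copy",     // lossless: copy the audio stream instead of re-encoding
    outputPath,
  ]);
}

mergeChunks(["/tmp/line-001.mp3", "/tmp/line-002.mp3"], "/tmp/daily-digest.mp3");
```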

By Jonas
1940

Create a Telegram bot with Mistral Nemotron AI and conversation memory

πŸ€– Create a Telegram Bot with Mistral AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

πŸ”§ How it works
The workflow creates an intelligent Telegram bot that:
- πŸ’¬ Maintains conversation history for each user
- 🧠 Provides contextual AI responses using any AI API service
- πŸ“± Handles different message types and commands
- πŸ”„ Manages chat sessions with clear functionality
- πŸ”Œ Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

βš™οΈ Set up steps

πŸ“‹ Prerequisites
- πŸ€– Telegram Bot Token (from @BotFather)
- πŸ”‘ AI API Key (from any AI service provider)
- πŸš€ n8n instance with webhook capability

πŸ› οΈ Configuration Steps
1. πŸ€– Create Telegram Bot: message @BotFather on Telegram, create a new bot with the /newbot command, and save the bot token for the credentials setup.
2. 🧠 Choose Your AI Provider: OpenAI (get an API key from the OpenAI platform), Anthropic (sign up for Claude API access), Google AI (get a Gemini API key), NVIDIA (access LLaMA models), Hugging Face (use the Inference API), or any other AI API service.
3. πŸ” Set up Credentials in n8n: add Telegram API credentials with your bot token, add Bearer Auth/API Key credentials for your chosen AI service, and test both connections.
4. πŸš€ Deploy Workflow: import the workflow JSON, customize the AI API call (see the customization section), activate the workflow, and set the webhook URL in the Telegram bot settings.

✨ Features

πŸš€ Core Functionality
- πŸ“¨ Smart Message Routing: automatically categorizes incoming messages (commands, text, non-text)
- 🧠 Conversation Memory: maintains chat history for each user (last 10 messages)
- πŸ€– AI-Powered Responses: integrates with any AI API service for intelligent replies
- ⚑ Command Support: built-in /start and /clear commands

πŸ“± Message Types Handled
- πŸ’¬ Text Messages: processed through the AI model with context
- πŸ”§ Commands: special handling for bot commands
- ❌ Non-text Messages: polite error message for unsupported content

πŸ’Ύ Memory Management
- πŸ‘€ User-specific chat history storage
- πŸ”„ Automatic history trimming (keeps the last 10 messages)
- 🌐 Global state management across workflow executions

πŸ€– Bot Commands
- /start 🎯 - welcome message with bot introduction
- /clear πŸ—‘οΈ - clears the conversation history for a fresh start
- Regular text πŸ’¬ - processed by the AI with conversation context

πŸ”§ Technical Details

πŸ—οΈ Workflow Structure
1. πŸ“‘ Telegram Trigger - receives all incoming messages
2. πŸ”€ Message Filtering - routes messages based on type/content
3. πŸ’Ύ History Management - maintains conversation context
4. 🧠 AI Processing - generates intelligent responses
5. πŸ“€ Response Delivery - sends formatted replies back to the user

πŸ€– AI API Integration (Customizable)
Current example (NVIDIA):
- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: under 200 words

πŸ”„ Easy to replace with any AI service:

OpenAI example:

```json
{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Anthropic Claude example:

```json
{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}
```

Google Gemini example:

```json
{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
```

πŸ›‘οΈ Error Handling
- ❌ Non-text message detection and appropriate responses
- πŸ”§ API failure handling
- ⚠️ Invalid command processing

🎨 Customization Options

πŸ€– AI Provider Switching - to use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
- πŸ“ Change the URL in the HTTP Request node
- πŸ”§ Update the request body format in the "Prepare API Request" node
- πŸ” Update the authentication method if needed
- πŸ“Š Adjust the response parsing in the "Save AI Response to History" node

🧠 AI Behavior
- πŸ“ Modify the system prompt in the "Prepare API Request" node
- 🌑️ Adjust temperature and response parameters
- πŸ“ Change response length limits
- 🎯 Customize model-specific parameters

πŸ’Ύ Memory Settings
- πŸ“Š Adjust the history length (currently 10 messages)
- πŸ‘€ Modify the user identification logic
- πŸ—„οΈ Customize the data persistence approach

🎭 Bot Personality
- πŸŽ‰ Update the welcome message content
- ⚠️ Customize error messages and responses
- βž• Add new command handlers

πŸ’‘ Use Cases
- 🎧 Customer Support: automated first-line support with context awareness
- πŸ“š Educational Assistant: homework help and learning support
- πŸ‘₯ Personal AI Companion: general conversation and assistance
- πŸ’Ό Business Assistant: FAQ handling and information retrieval
- πŸ”¬ AI API Testing: perfect template for testing different AI services
- πŸš€ Prototype Development: quick AI chatbot prototyping

πŸ“ Notes
- 🌐 Requires an active n8n instance for webhook handling
- πŸ’° AI API usage may have rate limits and costs (varies by provider)
- πŸ’Ύ Bot memory persists across workflow restarts
- πŸ‘₯ Supports multiple concurrent users with separate histories
- πŸ”„ The template is provider-agnostic - easily switch between AI services
- πŸ› οΈ Perfect starting point for any AI-powered Telegram bot project

πŸ”§ Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
|----------|----------------|--------------------|
| 🟒 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| πŸ”΅ Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| πŸ”΄ Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟑 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!
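The per-user memory with trimming to the last 10 messages can be pictured with the short TypeScript sketch below. It is illustrative only: in the template this state lives in n8n's global/static data, and the Message shape and in-memory Map are assumptions.

```typescript
// Illustrative sketch of the memory management: per-chat history, trimmed to 10 messages.
type Message = { role: "user" | "assistant"; content: string };

const HISTORY_LIMIT = 10;
const histories = new Map<number, Message[]>(); // keyed by Telegram chat id

function appendMessage(chatId: number, message: Message): Message[] {
  const history = histories.get(chatId) ?? [];
  history.push(message);
  // Keep only the most recent messages so the prompt stays small.
  const trimmed = history.slice(-HISTORY_LIMIT);
  histories.set(chatId, trimmed);
  return trimmed;
}

// /clear command: wipe the stored context for this chat.
function clearHistory(chatId: number): void {
  histories.delete(chatId);
}
```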

By Ajith joseph
818

Create and update a channel, and send a message on Twist

No description available.

By Harshil Agrawal
595

Manage tasks and deadlines via WhatsApp using GPT-4 and Google Sheets

Who's it for

This template is for anyone who wants to manage tasks, deadlines, and updates directly from WhatsApp. It's especially useful for teams, freelancers, and small businesses that track their work in Google Sheets and want quick AI-powered assistance without opening spreadsheets.

How it works / What it does

This workflow turns WhatsApp into your personal task manager. When a user sends a message, the AI agent (powered by OpenAI) interprets the request, retrieves or updates task information from Google Sheets, and sends a concise response back via WhatsApp. The workflow can highlight overdue tasks and upcoming deadlines and provide actionable suggestions.

How to set up
1. Connect your WhatsApp API account in n8n.
2. Add your OpenAI credentials.
3. Link the Google Sheets document where your tasks are stored.
4. Deploy the workflow and test it by sending a message to your WhatsApp number.

Requirements
- WhatsApp Business API account connected to n8n
- OpenAI account for AI responses
- Google Sheets with task data

How to customize the workflow
- Adjust the AI prompt to change the tone or instructions.
- Modify the Google Sheets fields (Task, Status, Due Date, Notes) to match your structure.
- Add conditions or filters to customize which tasks get highlighted.
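As a rough illustration of the "overdue and upcoming deadlines" logic, the TypeScript sketch below flags tasks from sheet rows using the columns named in the customization notes (Task, Status, Due Date, Notes). It is not part of the template; the ISO date format, the "Done" status value, and the 3-day "upcoming" window are assumptions.

```typescript
// Sketch: flag overdue and soon-due tasks from Google Sheets rows.
interface TaskRow {
  Task: string;
  Status: string;      // e.g. "Open", "Done" (assumed values)
  "Due Date": string;  // e.g. "2024-07-01" (assumed format)
  Notes?: string;
}

function highlightTasks(rows: TaskRow[], today: Date = new Date()) {
  const open = rows.filter((r) => r.Status.toLowerCase() !== "done");
  const overdue = open.filter((r) => new Date(r["Due Date"]) < today);
  const dueSoon = open.filter((r) => {
    const days = (new Date(r["Due Date"]).getTime() - today.getTime()) / 86_400_000;
    return days >= 0 && days <= 3; // 3-day "upcoming" window is an assumption
  });
  return { overdue, dueSoon };
}
```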

By Yar Malik (Asfandyar)
560

Generate blog posts and social media content with GPT-4.1 and Pexels images

AI Content Generator with Auto Pexels Image Matching

This n8n template demonstrates how to use AI to generate engaging content and automatically find matching royalty-free images based on the content's context. Use cases are many: try creating blog posts with hero images, generating social media content with visuals, or drafting email newsletters with relevant photos.

Good to know
- At the time of writing, Pexels offers free API access with up to 200 requests per hour. See the Pexels API for updated info.
- OpenAI API costs vary by model. GPT-4.1 mini is cheaper, while regular GPT-4.1 and above offer deeper content generation but cost more per request.
- Using the floating JavaScript node can reduce token usage by processing content and keyword extraction in a single prompt.

How it works
1. We collect a content topic or idea via a manual form trigger.
2. OpenAI generates initial content based on your input topic.
3. The AI extracts suitable keywords from the generated content to find matching images.
4. The keywords are sent to the Pexels API, which searches for relevant royalty-free stock images.
5. OpenAI creates the final polished content that complements the selected image.
6. The result is displayed as a formatted HTML template combining the content and image together.

How to use
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook or even a form.
- You can batch-generate multiple pieces of content by looping through a list, but of course the processing will take longer and cost more.

Requirements
- OpenAI API key (get one at https://platform.openai.com)
- Pexels API key (get free access at https://www.pexels.com/api)
- Valid content topics or ideas to generate from

Customizing this workflow
- Optimize token usage: connect the floating "Extract Content and Image Keyword" JavaScript node to process everything in one prompt and minimize API costs. If you use this option, update the expressions in the "Pexels Image Search" node and the "Create Suitable Content Including Image" node to reference the extracted keywords from the JS node.
- Upgrade to GPT-4.1, GPT-5.1, or GPT-5.2 for more advanced and creative content generation.
- Change the HTML template output to other formats like Markdown, plain text, or JSON for different publishing platforms.
- For the long term, store the results in a database like Supabase or Google Sheets if you plan to reuse the content.
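For context, here is a short TypeScript sketch of the image-search step against the Pexels API (search endpoint with the API key in the Authorization header), using the keyword the AI extracted from the generated content. The choice of the "large" rendition and per_page=1 are assumptions made for illustration, not the template's exact settings.

```typescript
// Sketch: fetch one matching royalty-free image URL from Pexels for a keyword.
async function findHeroImage(keyword: string): Promise<string | null> {
  const url = `https://api.pexels.com/v1/search?query=${encodeURIComponent(keyword)}&per_page=1`;
  const response = await fetch(url, {
    headers: { Authorization: process.env.PEXELS_API_KEY ?? "" },
  });
  if (!response.ok) throw new Error(`Pexels request failed: ${response.status}`);

  const data = await response.json();
  const photo = data.photos?.[0];
  return photo ? photo.src.large : null; // URL to embed in the HTML template
}

findHeroImage("mountain sunrise").then((url) => console.log(url));
```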

By Wan Dinie
310

Validate email addresses with APILayer API

πŸ“§ Email Validation Workflow Using APILayer API

This n8n workflow enables users to validate email addresses in real time using the APILayer Email Verification API. It's particularly useful for preventing invalid email submissions during lead generation, user registration, or newsletter sign-ups, ultimately improving data quality and reducing bounce rates.

---

βš™οΈ Step-by-Step Setup Instructions
1. Trigger the Workflow Manually: the workflow starts with the Manual Trigger node, allowing you to test it on demand from the n8n editor.
2. Set Required Fields: the Set Email & Access Key node allows you to enter:
   - email: the target email address to validate.
   - access_key: your personal API key from apilayer.net.
3. Make the API Call: the HTTP Request node dynamically constructs the URL
   https://apilayer.net/api/check?access_key={{ $json.access_key }}&email={{ $json.email }}
   and sends a GET request to the APILayer endpoint, which returns a detailed response about the email's validity.
4. (Optional): you can add additional nodes to filter, store, or react to the results depending on your needs.

---

πŸ”§ How to Customize
- Replace the manual trigger with a webhook or schedule trigger to automate validations.
- Dynamically map the email and access_key values from previous nodes or external data sources.
- Add conditional logic to filter out invalid emails, log them into a database, or send alerts via Slack or email.

---

πŸ’‘ Use Case & Benefits

Email validation is crucial for maintaining a clean and functional mailing list. This workflow is especially valuable in:
- Sign-up forms, where real-time email checks prevent fake or disposable emails.
- CRM systems, to ensure user-entered emails are valid before saving them.
- Marketing pipelines, to minimize email bounce rates and increase campaign deliverability.

Using APILayer's trusted validation service, you can verify whether an email exists, check if it's a role-based address (like info@ or support@), and identify disposable email services, all with a simple workflow.

---

Keywords: email validation, n8n workflow, APILayer API, verify email, real-time email check, clean email list, reduce bounce rate, data accuracy, API integration, no-code automation
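As a rough sketch of what the workflow does with the API response, the TypeScript below calls the same check endpoint and inspects a few result fields (format_valid, smtp_check, disposable, score). Treat the field names and the 0.65 score threshold as indicative assumptions and confirm them against APILayer's current documentation; in the template the request is made by the HTTP Request node instead.

```typescript
// Sketch: validate one email via the APILayer check endpoint and apply a simple policy.
async function validateEmail(email: string, accessKey: string) {
  const url =
    `https://apilayer.net/api/check?access_key=${encodeURIComponent(accessKey)}` +
    `&email=${encodeURIComponent(email)}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Validation request failed: ${response.status}`);

  const result = await response.json();
  const acceptable =
    result.format_valid === true &&
    result.smtp_check === true &&
    result.disposable === false &&
    (result.score ?? 0) >= 0.65; // threshold is an assumption; tune to your needs

  return { acceptable, result };
}

validateEmail("demo@example.com", process.env.APILAYER_ACCESS_KEY ?? "").then(console.log);
```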

By Sarfaraz Muhammad Sajib
288