
Build a WhatsApp assistant with memory, Google Suite & multi-AI research and imaging

The "WhatsApp Productivity Assistant with Memory and AI Imaging" is a comprehensive n8n workflow that turns your WhatsApp into a powerful, multi-talented AI assistant. It is designed to handle a wide range of tasks by understanding user messages, analyzing images, and connecting to various external tools and services. The assistant can hold natural conversations, remember past interactions using a MongoDB vector store (RAG), and decide which tool is best suited for a user's request. Whether you need to check your schedule, research a topic, get the latest news, create an image, or analyze a picture you send, the workflow orchestrates it all seamlessly through a single WhatsApp chat interface.

The workflow is structured into several interconnected components:

WhatsApp Trigger & Incoming Message Processing: The entry point, starting when a message (text or image) is received via WhatsApp. A Route Message by Type (Image/Text) node routes the message based on its content type, and a Typing.... node sends a typing indicator for a better user experience. If an image is received, it is downloaded and processed via an HTTP Request, then analyzed by the Analyze image node. The Code1 node then standardizes both text and image-analysis output into a single, unified input for the main AI agent (see the sketch after this component list).

Core AI Agent: The brain of the operation. The AI Agent1 node receives the user's input, maintains short-term conversational memory using Simple Memory, and uses a powerful language model (gpt-oss-120b2 or gpt-oss-120b1) to decide which tool or sub-agent to use. It orchestrates all the other agents and tools.

Productivity Tools Agent: Connects the assistant to your personal productivity suite. It includes sub-agents and tools for managing Google Calendar, Google Tasks, and Gmail, letting you schedule events, manage to-dos, and read emails. It leverages a language model (gpt-4.1-mini or gemini-2.5-flash) for understanding and executing commands within these tools.

Research Tool Agent: Handles all research-related queries. It has access to multiple search tools (Brave Web Search, Brave News Search, Wikipedia, Tavily, and a custom perprlexcia search) to find the most accurate and up-to-date information from the web, and uses a language model (gpt-oss-120b or gpt-4.1-nanoChat Model1) for reasoning.

Long-Term Memory Webhook: A dedicated sub-workflow (Webhook2) that processes conversation history, extracts key information using Extract Memory Info, and stores it in a MongoDB Atlas Vector Store for long-term memory. This allows the AI agent to remember past preferences and facts.

Image Generation Webhook: A specialized sub-workflow (Webhook3) triggered when a user asks to create an image. It uses a dedicated AI Agent with MongoDB Atlas Vector Store1 for contextual image-prompt generation, Clean Prompt Text1 to refine the prompt, and an HTTP Request to an external image generation API (e.g., Together.xyz), then converts and sends the generated image back to the user via WhatsApp.
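The Code1 normalization step is only described in prose above, so here is a minimal, hypothetical sketch of what such an n8n Code node could look like. The field names (analysis, messages[0].text.body, chatInput, sender) are assumptions for illustration, not the template's actual property names:

```javascript
// Hypothetical n8n Code node: unify plain text messages and image-analysis
// output into one field the AI agent can consume. Field names are assumptions.
const out = [];
for (const item of $input.all()) {
  const j = item.json;
  // Image branch: the vision node is assumed to return its description in `j.analysis`.
  // Text branch: the WhatsApp trigger is assumed to expose the body at `j.messages[0].text.body`.
  const text = j.analysis
    ? `The user sent an image. Image analysis: ${j.analysis}`
    : j.messages?.[0]?.text?.body ?? '';
  out.push({
    json: {
      chatInput: text,                                   // unified input for AI Agent1
      sender: j.contacts?.[0]?.wa_id ?? j.sender ?? '',  // kept so the reply nodes know where to answer
    },
  });
}
return out;
```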
---

Use Cases

Personal Assistant: Schedule appointments, create tasks, read recent emails, and manage your daily agenda directly from WhatsApp.
Information Retrieval: Ask any factual, news, or research-based question and get real-time answers from various web sources.
Creative Content Generation: Ask the AI to generate images from your descriptions for logos, artwork, or social media content.
Smart Communication: Engage in natural, contextual conversations with an AI that remembers past interactions.
Image Analysis: Send an image and ask the AI to describe its contents or answer questions about it.

---

Pre-conditions

Before importing and running this template, you will need:

Self-hosted n8n Instance: This template requires a self-hosted n8n instance, as it uses webhooks that must be publicly accessible.
WhatsApp Business Account: A Meta Developer Account configured for WhatsApp Business Platform API access.
MongoDB Atlas Account: A MongoDB Atlas cluster with a database and collection set up for the vector store.
Google Cloud Project: Configured with API access for Google Calendar, Google Tasks, and Gmail.
API Keys/Accounts for:
OpenWeatherMap: For weather forecasts.
Groq, OpenRouter, or Vercel AI Gateway: For the various language models (e.g., gpt-oss-120b, gpt-5-nano, gpt-4o-mini).
Mistral Cloud: For embedding models (e.g., codestral-embed-2505).
Brave Search: For web and news searches.
Tavily API: For structured search results.
Together.xyz or a similar image generation API: For creating images.
Perplexity API (or self-hosted instance): For the perprlexcia tool (the current URL, http://self hoseted perplexcia/api/search, implies a self-hosted or custom endpoint).
Publicly Accessible URLs: Your n8n instance and any custom webhook endpoints (like perprlexcia) must be publicly accessible.

---

Requirements (n8n Credentials)

You will need to set up the following credentials within your n8n instance:

WhatsApp OAuth account: For the WhatsApp Trigger node.
WhatsApp account: For the Send message2, Send message3, Download media, and Typing.... nodes.
Google Palm Api account: For the Analyze image, Google Gemini Chat Model, gemini-2.5-flash, and Google Gemini Chat Model5 nodes.
OpenWeatherMap account: For the Get Weather Forecast node.
Groq account: For the gpt-oss-120b node.
Google Calendar OAuth2Api account: For the Google Calendar tools.
MongoDB account: For the MongoDB Atlas Vector Store nodes.
OpenRouter account: For the gpt-5-nano and gpt-4.1-nanoChat Model1 nodes.
Gmail account: For the Get many messages and Get a message nodes (ensure correct Gmail OAuth2 setup for each).
Google Tasks account: For the Google Tasks tools.
Bearer Auth account: For HTTP Request5 (used in media download).
Brave Search account: For the Brave Web Search and Brave News Search nodes.
Vercel Ai Gateway Api account: For the gpt-4.1-mini, gpt-oss-120b, gpt-oss-120b2, and gpt-4.1-nano nodes.
HTTP Header Auth account: For the Tavily web search (create a new one named "Tavily API Key" with Authorization: Bearer YOUR_TAVILY_API_KEY) and for the HTTP Request to Together.xyz (e.g., "Together.xyz API Key").
Mistral Cloud account: For the codestral-embed-2505, codestral-embed-, and codestral-embed-2506 nodes.
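For orientation, the Image Generation Webhook's external call boils down to a single HTTP request. The sketch below targets Together.xyz's image endpoint with a Bearer-style HTTP Header Auth credential like the one listed above; the model name, parameters, and response shape are assumptions, so treat it as a starting point rather than the template's literal configuration:

```javascript
// Hypothetical standalone sketch of the image-generation call made by the
// Webhook3 sub-workflow. Endpoint, model and parameters are assumptions.
const response = await fetch('https://api.together.xyz/v1/images/generations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.TOGETHER_API_KEY}`, // the HTTP Header Auth credential
  },
  body: JSON.stringify({
    model: 'black-forest-labs/FLUX.1-schnell', // assumed model
    prompt: 'A cozy reading nook at golden hour, cinematic lighting',
    width: 1024,
    height: 1024,
    n: 1,
  }),
});
const data = await response.json();
// The generated image typically comes back as a URL or base64 payload, which
// the workflow then converts to a file and sends back over WhatsApp.
console.log(data.data?.[0]?.url ?? data.data?.[0]?.b64_json);
```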

By Iniyavan JC

Create viral social media videos with FalAI Flux/Kling and GPT-4 automation

AI-Powered Viral Video Factory 🚀

This workflow automates the entire process of creating short, cinematic, fact-based videos ready for social media. It takes a single concept, generates a script and visuals, creates video clips, adds a voiceover, and assembles a final video, which is then uploaded directly to your Google Drive. It's perfect for content creators and marketing agencies looking to scale video production with minimal manual effort.

---

How It Works 🎬

Generate a Viral Idea 💡: The workflow begins with the Create New Idea1 (OpenAI) node, which generates a viral-ready video concept, including a punchy title, hashtags, and a brief description based on a core theme (e.g., space, black holes). The idea is then logged in a Google Sheet.

Create a Cinematic Script & Voiceover 📜: An OpenAI node (Generating scenes1) creates a detailed 12-scene script outlining the visuals for a 60-second video. The script text for all scenes is combined and prepared for voiceover generation by another OpenAI node (Generate Voiceover).

Generate Scene-by-Scene Visuals ✨: The workflow loops through each of the 12 scenes to create an animated clip (a hedged sketch of this step follows the Key Technologies list below):
Image Generation: An HTTP Request node sends the scene's prompt to the fal-ai/flux model to create a photorealistic still image.
Animation Prompting: The Video Prompts1 (OpenAI Vision) node analyzes the generated image and creates a new, specific prompt to animate it cinematically.
Image-to-Video: Another HTTP Request node uses the fal-ai/kling-video model to turn the still image into a 5-second animated video clip based on the new animation prompt.

Assemble the Final Video 🎞️:
Stitch Clips: Once all 12 clips are generated, the Merge Clips node uses the fal-ai/ffmpeg-api to concatenate them into a single, seamless 60-second video.
Add Audio: The Combine Voice and Video node then layers the AI-generated voiceover onto the stitched video.

Deliver to Google Drive 📂: Finally, the completed video is converted from a URL to a file and automatically uploaded to your specified Google Drive folder for easy access and publishing.

---

Key Technologies Used

n8n: Orchestrates the entire automated workflow.
OpenAI (GPT-4.1 & GPT-4o): Idea generation, scriptwriting, voiceover, and vision analysis.
Fal.ai: High-performance, API-based image generation (Flux), video animation (Kling), and video processing (FFMPEG API).
Google Drive & Sheets: Logs ideas and stores the final video output.
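The paired Create Images1 / Get Images1 (and Create Video1 / Get Video1) nodes suggest a submit-then-poll pattern against fal.ai's queue API. Below is a minimal, hypothetical sketch of that pattern for the Flux image step, assuming queue.fal.run endpoints and the "Key" authorization scheme mentioned in the setup instructions; the exact model path, polling interval, and response fields are assumptions:

```javascript
// Hypothetical submit-and-poll sketch for one scene's still image.
// Model path and response fields are assumptions; check fal.ai's docs for
// the exact routes your account uses.
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Key ${process.env.FAL_KEY}`,
};

// 1. Submit the generation request ("Create Images1" equivalent).
const submit = await fetch('https://queue.fal.run/fal-ai/flux/dev', {
  method: 'POST',
  headers,
  body: JSON.stringify({ prompt: 'Photorealistic shot of a black hole bending starlight' }),
});
const { status_url, response_url } = await submit.json();

// 2. Poll until the job is done ("Get Images1" equivalent).
let status = 'IN_QUEUE';
while (status !== 'COMPLETED') {
  await new Promise((r) => setTimeout(r, 3000));
  status = (await (await fetch(status_url, { headers })).json()).status;
}

// 3. Fetch the result and pass the image URL on to the animation step.
const result = await (await fetch(response_url, { headers })).json();
console.log(result.images?.[0]?.url);
```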
---

Setup Instructions

Add Credentials: In n8n, add your OpenAI API key and connect your Google account for Google Sheets and Google Drive access. You will also need a Fal.ai API key.

Configure the Fal.ai API Key: Crucially, you must replace the placeholder API key in every HTTP Request node that calls a fal.run URL. Find the Authorization header in each of these nodes and replace the existing key with your own: Key YOUR_FAL_AI_KEY_HERE. Nodes to update: Create Images1, Get Images1, Create Video1, Get Video1, Merge Clips, Get Final video, Combine Voice and Video.

Configure the OpenAI Nodes: Select each OpenAI node (e.g., Create New Idea1, Generating scenes1) and choose your OpenAI credential. You can customize the main prompt in the Create New Idea1 node to change the theme of the videos you want to generate.

Configure Google Sheets & Drive: In the Organise idea, caption etc1 node, select your Google Sheets credential and specify the Spreadsheet and Sheet ID to use for logging ideas. In the Upload file to drive node, select your Google Drive credential and choose the destination folder for your final videos.
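For the animation step, the Create Video1 node presumably sends the still image plus the animation prompt to the Kling model. Here is a hedged sketch of that request, written as a single synchronous fal.run call; the model path and accepted input fields depend on the Kling variant you use, so treat them as placeholders rather than the template's actual values:

```javascript
// Hypothetical image-to-video call. Model path, prompt and duration fields
// are assumptions; consult your fal.ai dashboard for the exact Kling route
// and input schema.
const response = await fetch('https://fal.run/fal-ai/kling-video/v1/standard/image-to-video', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Key ${process.env.FAL_KEY}`,
  },
  body: JSON.stringify({
    image_url: 'https://example.com/scene-01.png',             // still image from the Flux step
    prompt: 'Slow cinematic dolly-in, dust particles drifting', // animation prompt from Video Prompts1
    duration: '5',                                              // 5-second clip, as described above
  }),
});
const { video } = await response.json();
console.log(video?.url); // clip URL passed on to Merge Clips
```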

By Matthew

Cryptocurrency volume/mCap screener - automated trading alerts to Discord

Purpose & Audience

This n8n workflow template is designed for cryptocurrency traders, investors, and market analysts who want to automate the detection of unusual trading activity across 1,250+ cryptocurrencies. By continuously monitoring volume-to-market-cap ratios and price movements, the workflow delivers real-time alerts directly to your Discord server, helping you catch potential breakouts, pump schemes, or high-impact market events before they become mainstream news.

What It Does

Monitors 1,250+ cryptocurrencies from CoinGecko every 4 hours for unusual trading patterns
Calculates volume-to-market-cap ratios to identify coins with abnormally high trading activity (>30% ratio, or >$100M volume combined with a significant price movement; a hedged sketch of this check appears below)
Ranks and filters the top volume alerts by trading intensity
Sends cleanly formatted Discord embeds with coin metrics, price changes, market cap, and direct CoinGecko links
Provides context with educational information about what volume/MCap ratios mean, along with trading warnings

Who Is It For

Day traders and scalpers seeking early signals for high-volatility opportunities
Market analysts who want automated surveillance of unusual market activity across hundreds of coins
Discord community managers looking to provide valuable trading insights to their members
Anyone interested in catching potential pumps, breakouts, or news-driven price movements before the crowd

Set it up once and let it run 24/7. The workflow automatically scans the market every 4 hours and only sends alerts when something significant is detected. Customize the alert thresholds, add more coins, or adjust the schedule to fit your trading style. No coding required: just connect your CoinGecko API key and Discord webhook, and you're ready to catch the next big move!

How to Set Up

Obtain a free CoinGecko API key and create a Discord webhook for your server
Import the workflow and add your API credentials
Customize the alert thresholds and update frequency to match your trading preferences
That's it: receive real-time volume alerts for potential trading opportunities automatically
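As a rough illustration of the screening rule described above, here is a hypothetical Code-node-style sketch that filters CoinGecko /coins/markets results by volume-to-market-cap ratio. The 10% "significant price movement" cutoff and the top-10 limit are assumptions, not values taken from the template:

```javascript
// Hypothetical screening logic: flag coins whose 24h volume is unusually
// large relative to market cap. Field names follow CoinGecko's standard
// /coins/markets response; thresholds mirror the description above.
const alerts = [];
for (const { json: coin } of $input.all()) {
  const ratio = coin.market_cap > 0 ? coin.total_volume / coin.market_cap : 0;
  const bigMove = Math.abs(coin.price_change_percentage_24h ?? 0) >= 10; // assumed "significant" cutoff
  if (ratio > 0.30 || (coin.total_volume > 100_000_000 && bigMove)) {
    alerts.push({
      json: {
        name: coin.name,
        symbol: coin.symbol?.toUpperCase(),
        volumeToMcapPct: +(ratio * 100).toFixed(1),
        priceChange24h: coin.price_change_percentage_24h,
        link: `https://www.coingecko.com/en/coins/${coin.id}`, // direct CoinGecko link for the embed
      },
    });
  }
}
// Rank by trading intensity and keep only the top alerts for the Discord embed.
return alerts
  .sort((a, b) => b.json.volumeToMcapPct - a.json.volumeToMcapPct)
  .slice(0, 10);
```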

By Malik Hashir

Postgres data freshness monitoring with email alerts

Monitor Postgres Data Freshness and Email Alert If Stale

This template monitors a set of tables inside a Postgres database to ensure they are still being updated. If a table hasn't been updated in 3 days (configurable), an email alert is sent listing the tables that are stale.

Requirements

You must have a Postgres database containing one or more tables that you'd like to monitor. Each monitored table must have a date or timestamp column that tracks when data was pushed. For example, this might be:
A timestamp column if your table holds event/timeseries data
A last_updated column if your rows are expected to be modified

Usage

Use this template
Add your Postgres and email credentials
Adjust the Produce tables + date columns node to produce pairs of [table, date_column] that should be monitored for freshness 💁‍♂️ Note that a timestamp column also works
(Optional) Adjust the Remove fresh tables node for your desired staleness window (the default is 3 days, but you can adjust it as you please)
(Optional) Customize the Send alerts node to call whichever alerting workflow you please (I recommend my alerting workflow for the easiest plug-and-play)

How it works

This template works by:
Pulling the most recent row for each table
Calculating how out-of-date each table is, in days (a hedged sketch of this check follows below)
Dropping fresh tables that have been updated within the past 3 days
Sending an email alert with the stale tables that haven't been updated within the past 3 days
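To make the staleness check concrete, here is a hypothetical sketch of the calculation. It assumes each incoming item already carries the table name and the latest value of its date column (for example, from a query like SELECT MAX(last_updated) AS latest FROM your_table); the actual node layout in the template may differ:

```javascript
// Hypothetical Code-node sketch of the staleness filter ("Remove fresh tables"
// equivalent). Input field names (table, latest) are assumptions.
const STALE_AFTER_DAYS = 3; // matches the template's default window
const stale = [];
for (const { json } of $input.all()) {
  const latest = new Date(json.latest);
  const ageDays = (Date.now() - latest.getTime()) / (1000 * 60 * 60 * 24);
  if (ageDays > STALE_AFTER_DAYS) {
    stale.push({ json: { table: json.table, ageDays: Math.floor(ageDays) } });
  }
}
// Only stale tables flow on to the email alert node.
return stale;
```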

By Kevin