Daniel Nkencho


AI Automation Consultant | Helping Businesses Implement AI Systems | Book a call here ⇾ http://cal.com/corefluxai

Total Views: 7,943
Templates: 8

Templates by Daniel Nkencho

Daily AI news digest from Hacker News with GPT-5 summaries and email delivery

Daily AI News Digest from Hacker News with GPT Summaries and Email Delivery

Automate your daily AI news briefing: fetch AI-tagged stories from Hacker News, filter for the last 24 hours, scrape and summarize with GPT, then deliver a clean HTML email digest, with no manual curation needed.

What It Does
Runs on a schedule to fetch up to 1,000 Hacker News stories tagged "AI", filters for today's posts, loops through each to scrape content from the source URLs, generates concise AI summaries via OpenAI GPT, aggregates them into a styled HTML newsletter, and sends it via email.

Setup Requirements

Credentials Needed:
- OpenAI API Key: get from platform.openai.com/api-keys, add as "OpenAI" credential in n8n
- SMTP Server: configure email credentials (Gmail, Zoho, etc.) in n8n's SMTP settings

Configuration Steps:
1. Import the workflow JSON into n8n
2. Add the OpenAI credential to the "GPT 5 pro" node
3. Add the SMTP credential to the "Send email" node
4. Update the fromEmail and toEmail fields in the "Send email" node
5. Set the schedule in the "start" trigger node (default: daily)
6. Activate the workflow

Key Features
- Smart Filtering: fetches 1,000 stories, filters to the last 24 hours using date expressions
- AI Summarization: GPT generates a heading plus a 2-sentence summary with links
- Reliable Scraping: HTTP requests with markdown conversion for clean LLM input
- Batch Processing: loops through items and processes them sequentially
- Responsive Design: mobile-friendly HTML email template with inline CSS
- Aggregation: combines all summaries into a single digest

Customization Options
- Change Keywords: modify the "AI" filter in the "Get many items" node
- Adjust Timeframe: edit the date filter in the "Today" node
- Tweak Summaries: customize the GPT prompt in the "News Summary Agent" node
- Email Styling: update the HTML/CSS in the "Send email" node
- Schedule: change the frequency in the "start" trigger

Use Cases
- Personal daily AI briefings for researchers and developers
- Team knowledge sharing via automated newsletters
- Content curation for blogs or social media
- Trend monitoring for marketers

Troubleshooting
- No stories returned: check HN API limits and verify the keyword filter
- Scraping failures: some sites block bots; a proxy is noted in the workflow but may need updates
- Email not sending: verify SMTP credentials and test the connection
- Poor summaries: adjust the GPT prompt or switch the model

Execution Time: 2-10 minutes depending on story count
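The 24-hour filter described above can be sketched as a small Code-node function. This is a hedged illustration, not the workflow's actual expression: the `created_at` field name follows the HN Algolia API convention, and `filterLast24Hours` is a hypothetical helper.

```javascript
// Sketch of the "Today" node's logic (assumed): keep only stories whose
// created_at timestamp falls within the last 24 hours.
function filterLast24Hours(stories, now = new Date()) {
  const cutoff = now.getTime() - 24 * 60 * 60 * 1000;
  return stories.filter((s) => new Date(s.created_at).getTime() >= cutoff);
}

const stories = [
  { title: "New LLM released", created_at: new Date(Date.now() - 2 * 3600 * 1000).toISOString() },
  { title: "Old post", created_at: "2020-01-01T00:00:00Z" },
];
const fresh = filterLast24Hours(stories);
```

In the real workflow the same comparison would typically live in an n8n date expression on the filter node rather than a standalone function.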

By Daniel Nkencho
2486

Web crawler: Convert websites to AI-ready markdown in Google Sheets

Transform any website into a structured knowledge repository with this intelligent crawler, which extracts hyperlinks from the homepage, filters images and content pages, and aggregates full Markdown-formatted content: perfect for fueling AI agents or building comprehensive company dossiers without manual effort.

📋 What This Template Does
This workflow acts as a lightweight web crawler: it scrapes the homepage to discover all internal links (mimicking a sitemap extraction), deduplicates and validates them, separates image assets from textual pages, then fetches and converts non-image page content to clean Markdown. Results are appended to Google Sheets for easy analysis, export, or integration into vector databases.

- Automatically discovers and processes subpage links from the homepage
- Filters out duplicates and non-HTTP links for efficient crawling
- Converts scraped content to Markdown for AI-ready formatting
- Categorizes and stores images, links, and full content in a single sheet row per site

🔧 Prerequisites
- Google account with Sheets access for data storage
- n8n instance (cloud or self-hosted)
- Basic understanding of URLs and web links

🔑 Required Credentials

Google Sheets OAuth2 API Setup
1. Go to console.cloud.google.com → APIs & Services → Credentials
2. Click "Create Credentials" → select "OAuth client ID" → choose "Web application"
3. Add the authorized redirect URI https://your-n8n-instance.com/rest/oauth2-credential/callback (replace with your n8n URL)
4. Download the client ID and secret, then add them to n8n as the "Google Sheets OAuth2 API" credential type
5. During setup, grant access to Google Sheets scopes (e.g., spreadsheets) and test the connection by listing a sheet

⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. In the "Set Website" node, update the website_url value to your target site (e.g., https://example.com)
3. Assign your Google Sheets credential to the three "Add ... to Sheet" nodes
4. Update the documentId and sheetName in those nodes to your target spreadsheet ID and sheet name/ID
5. Ensure your sheet has the columns "Website", "Links", "Scraped Content", and "Images"
6. Activate the workflow and trigger it manually to test scraping

🎯 Use Cases
- Knowledge base creation: crawl a company's site to aggregate all content into Sheets, then export to Notion or a vector DB for internal wikis
- AI agent training: extract structured Markdown from industry sites to fine-tune LLMs on domain-specific data like legal docs or tech blogs
- Competitor intelligence: build dossiers by crawling rival websites, separating assets and text for SEO audits or market analysis
- Content archiving: preserve dynamic sites (e.g., news portals) as static knowledge dumps for compliance or historical research

⚠️ Troubleshooting
- No links extracted: verify the homepage has <a> tags; test with a simple site like example.com and check the HTTP response in executions
- Sheet update fails: confirm the column names match exactly (case-sensitive) and the credential has edit permissions; try a new blank sheet
- Content truncated: Google Sheets limits cells to ~50k characters; adjust the .slice(0, 50000) in "Add Scraped Content to Sheet" or split into multiple rows
- Rate limiting errors: add a "Wait" node after "Scrape Links" with a 1-2 s delay if the site blocks rapid requests
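The dedup/filter step and the ~50k-character cell cap mentioned above can be sketched as Code-node helpers. Function names, the image-extension list, and the link field shapes are illustrative assumptions, not the workflow's own:

```javascript
// Hypothetical sketch of the crawler's link cleanup: deduplicate, keep only
// http(s) links, separate image assets from content pages, and truncate text
// to fit Google Sheets' roughly 50,000-character cell limit.
const IMAGE_RE = /\.(png|jpe?g|gif|svg|webp)(\?.*)?$/i;

function cleanLinks(links) {
  const unique = [...new Set(links)].filter((l) => /^https?:\/\//.test(l));
  return {
    images: unique.filter((l) => IMAGE_RE.test(l)),
    pages: unique.filter((l) => !IMAGE_RE.test(l)),
  };
}

function fitCell(text) {
  // Mirrors the .slice(0, 50000) noted in Troubleshooting
  return text.slice(0, 50000);
}
```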

By Daniel Nkencho
1090

Generate song lyrics and music from text prompts using OpenAI and Fal.ai Minimax

Spark your creativity instantly in any chat: turn a simple prompt like "heartbreak ballad" into original, full-length lyrics and a professional AI-generated music track, all without leaving your conversation.

📋 What This Template Does
This chat-triggered workflow harnesses AI to generate detailed, genre-matched song lyrics (at least 600 characters) from user messages, then queues them for music synthesis via Fal.ai's minimax-music model. It polls asynchronously until the track is ready, then delivers the lyrics and audio URL back in chat.

- Crafts original, structured lyrics with verses, choruses, and bridges using OpenAI
- Submits to Fal.ai for melody, instrumentation, and vocals aligned to the style
- Handles long-running generations with smart looping and status checks
- Returns the complete song package (lyrics + audio link) for seamless sharing

🔧 Prerequisites
- n8n account (self-hosted or cloud with chat integration enabled)
- OpenAI account with API access for GPT models
- Fal.ai account for AI music generation

🔑 Required Credentials

OpenAI API Setup
1. Go to platform.openai.com → API keys (sidebar)
2. Click "Create new secret key" → name it (e.g., "n8n Songwriter")
3. Copy the key and add it to n8n as the "OpenAI API" credential type
4. Test by sending a simple chat completion request

Fal.ai HTTP Header Auth Setup
1. Sign up at fal.ai → Dashboard → API Keys
2. Generate a new API key and copy it
3. In n8n, create an "HTTP Header Auth" credential: Name="Fal.ai", Header Name="Authorization", Header Value="Key [Your API Key]"
4. Test with a simple GET to their queue endpoint (e.g., /status)

⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Assign OpenAI API credentials to the "OpenAI Chat Model" node
3. Assign the Fal.ai HTTP Header Auth credential to the "Generate Music Track", "Check Generation Status", and "Fetch Final Result" nodes
4. Activate the workflow; the chat trigger will appear in your n8n chat interface
5. Test by messaging: "Create an upbeat pop song about road trips"

🎯 Use Cases
- Content creators: YouTubers generating custom jingles for videos on the fly, streamlining production from idea to audio export
- Educators: music teachers using chat prompts to create era-specific folk tunes for classroom discussions, fostering interactive learning
- Gift personalization: friends crafting anniversary R&B tracks from shared memories via quick chats, delivering emotional audio surprises
- Artist brainstorming: songwriters prototyping hip-hop beats in real time during sessions, accelerating collaboration and iteration

⚠️ Troubleshooting
- Invalid JSON from the AI agent: ensure the system prompt stresses valid JSON; test the agent standalone with a sample query
- Music generation fails (401/403): verify the Fal.ai API key has minimax-music access; check usage quotas in the dashboard
- Status polling loops indefinitely: bump the wait time to 45-60 s for complex tracks; inspect the fal.ai queue logs for bottlenecks
- Lyrics under 600 characters: tweak the agent prompt to enforce fuller structures like [V1][C][V2][B][C]; verify output length in executions
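The "Invalid JSON" and "Lyrics under 600 characters" issues above can both be caught before the Fal.ai submission with a small validation step. This is a hedged sketch: the field name `lyrics` and the helper `parseSongRequest` are assumptions, not the template's actual node code.

```javascript
// Validate the AI agent's output before queueing it for music synthesis:
// parse the JSON, then enforce the 600-character lyrics minimum.
function parseSongRequest(raw) {
  let song;
  try {
    song = JSON.parse(raw);
  } catch (e) {
    throw new Error("Agent returned invalid JSON: " + e.message);
  }
  if (!song.lyrics || song.lyrics.length < 600) {
    throw new Error("Lyrics under 600 characters; re-prompt the agent");
  }
  return song;
}
```

Failing fast here means a bad generation triggers a re-prompt instead of wasting a Fal.ai queue slot.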

By Daniel Nkencho
601

Automate Apollo lead scraping & email enrichment to Airtable CRM with Apify

Apollo Lead Scraper to Airtable CRM

Automate your lead generation by scraping targeted prospects from Apollo.io, enriching them with contact details, and syncing to Airtable for organized outreach, all without manual data entry.

What It Does
This workflow pulls search URLs from Airtable, uses Apify to scrape Apollo leads (up to 50k), enriches them with emails and LinkedIn profiles, removes duplicates, filters valid entries, and categorizes contacts into Airtable tables based on email availability for efficient CRM management.

Key Features
- Apify Apollo Scraper: extracts up to 50k leads with personal/work emails
- Smart Deduplication: removes duplicates based on key fields like email and name
- Email Categorization: separates contacts with/without emails into dedicated tables
- Field Mapping: customizable data transformation for Airtable compatibility
- Configurable Limits: adjust total records and memory for optimal performance
- Error Handling: built-in troubleshooting for common issues like invalid URLs

Perfect For
- Sales Teams: build targeted B2B pipelines for email campaigns
- Recruiters: source candidates by job title, location, and skills
- Marketers: create datasets for market research and analysis
- Agencies: automate client prospecting from custom filters
- Researchers: collect professional data for industry studies
- CRM Managers: maintain clean, enriched contact databases

Technical Highlights
Leveraging n8n's Airtable and Apify integrations, this workflow showcases:
- Dynamic data fetching from Airtable tables
- Actor-based web scraping with custom parameters
- Conditional branching for data routing
- Efficient data processing with Set, Filter, and If nodes
- Scalable design for large datasets with memory optimization

Ideal for automating lead workflows and scaling prospecting efforts. No advanced coding needed: just set up credentials and run!
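The deduplication and email-based routing described above can be sketched as one pass over the scraped leads. Field names (`email`, `name`) and the composite key are illustrative assumptions; the workflow's actual Set/If nodes may key on different fields.

```javascript
// Drop duplicate leads keyed on email + name, then split the survivors into
// "with email" and "without email" groups, mirroring the two Airtable tables.
function dedupeAndRoute(leads) {
  const seen = new Set();
  const withEmail = [];
  const withoutEmail = [];
  for (const lead of leads) {
    const key = `${(lead.email || "").toLowerCase()}|${lead.name}`;
    if (seen.has(key)) continue; // duplicate: skip
    seen.add(key);
    (lead.email ? withEmail : withoutEmail).push(lead);
  }
  return { withEmail, withoutEmail };
}
```

Lowercasing the email before keying avoids counting "A@x.com" and "a@x.com" as distinct contacts.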

By Daniel Nkencho
484

Generate custom logos from websites using OpenAI and Gemini

Transform any website into a custom logo in seconds with AI-powered analysis: no design skills required!

📋 What This Template Does
This workflow receives a website URL via webhook, captures a screenshot and fetches the page content, then leverages OpenAI to craft an optimized prompt based on the site's visuals and text. Finally, Google Gemini generates a professional logo image, which is returned as a binary response for immediate use.

- Automates screenshot capture and content scraping for comprehensive site analysis
- Generates tailored logo prompts using multimodal AI
- Produces high-quality, context-aware logos with Gemini's image generation
- Delivers the logo directly via the webhook response

🔧 Prerequisites
- n8n self-hosted or cloud instance with webhook support
- ScreenshotOne account for website screenshots
- OpenAI account with API access
- Google AI Studio account for the Gemini API

🔑 Required Credentials

ScreenshotOne API Setup
1. Sign up at screenshotone.com and navigate to Dashboard → API Keys
2. Generate a new access key with screenshot permissions
3. In the workflow, replace "[Your ScreenshotOne Access Key]" in the "Capture Website Screenshot" node with your key (no n8n credential needed; it's an HTTP query param)

OpenAI API Setup
1. Log in to platform.openai.com → API Keys
2. Create a new secret key with chat completions access
3. Add it to n8n as the "OpenAI API" credential type and assign it to the "OpenAI Prompt Generator" node

Google Gemini API Setup
1. Go to aistudio.google.com/app/apikey
2. Create a new API key (free tier available)
3. Add it to n8n as the "Google PaLM API" credential type and assign it to the "Generate Logo Image" node

⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Assign the required credentials to the OpenAI and Google Gemini nodes
3. Replace the placeholder API key in the "Capture Website Screenshot" node's query parameters
4. Activate the workflow to enable the webhook
5. Test by sending a POST request to the webhook URL with the JSON body {"websiteUrl": "https://example.com"}

🎯 Use Cases
- Marketing teams prototyping brand assets: quickly generate logo variations for client websites during pitches, saving hours of manual design
- Web developers building portfolios: auto-create matching logos for new sites to enhance visual consistency in demos
- Freelance designers iterating ideas: analyze competitor sites to inspire custom logos without starting from scratch
- Educational projects on AI design: teach students how multimodal AI combines text and images for creative outputs

⚠️ Troubleshooting
- Screenshot fails (timeout/error): increase the "timeout" param to 120 s or check URL accessibility; verify the API key and quotas at screenshotone.com
- Prompt generation empty: ensure the OpenAI credential has sufficient quota; test the node in isolation with a simple query
- Logo image blank or low quality: refine the prompt in "Generate Logo Prompt" with more specifics (e.g., add style keywords); check Gemini API limits
- Webhook not triggering: confirm the POST method and JSON body format; view execution logs for payload details
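Before the screenshot and scraping steps run, it is worth rejecting malformed webhook payloads early. This is a hypothetical guard (the template may not include one); `validatePayload` and its return shape are assumptions, but the `websiteUrl` field matches the test request above.

```javascript
// Check that the webhook body carries a usable websiteUrl before any
// downstream API calls are made.
function validatePayload(body) {
  if (!body || typeof body.websiteUrl !== "string") {
    return { ok: false, error: "Missing websiteUrl" };
  }
  try {
    const url = new URL(body.websiteUrl);
    if (url.protocol !== "http:" && url.protocol !== "https:") {
      return { ok: false, error: "Unsupported protocol" };
    }
  } catch {
    return { ok: false, error: "Invalid URL" };
  }
  return { ok: true };
}
```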

By Daniel Nkencho
483

Backup & restore n8n workflows with Telegram, Google Drive and form upload

Secure your n8n automations with this comprehensive template, which automates periodic backups to Telegram for instant access while enabling flexible restores from Google Drive links or direct file uploads, ensuring quick recovery without data loss.

📋 What This Template Does
This dual-branch workflow handles full n8n instance backups and restores seamlessly. The backup branch runs every 3 days, fetching all workflows via the n8n API, aggregating them into a JSON array, converting it to a text file, and sending it to Telegram for offsite storage and sharing. The restore branch supports two entry points: manual execution to pull a backup from Google Drive, or a form-based upload for local files. It then parses the JSON, cleans the workflows for compatibility, and loops to create missing workflows or update existing ones by name, handling batches efficiently to respect API limits.

- Scheduled backups with Telegram delivery for easy stakeholder access
- Dual restore paths: Drive download or direct file upload via form
- Intelligent create-or-update logic with data sanitization to avoid conflicts
- Looped processing with existence checks and error continuation

🔧 Prerequisites
- n8n instance with the API enabled (self-hosted or cloud)
- Telegram account for bot setup
- Google Drive account (optional, for Drive-based restores)

🔑 Required Credentials

n8n API Setup
1. In n8n, navigate to Settings → n8n API
2. Enable the API and generate a new key
3. Add it to n8n as the "n8n API" credential type, pasting the key in the API Key field

Telegram API Setup
1. Message @BotFather on Telegram to create a new bot and get your token
2. Find your chat ID by messaging @userinfobot
3. Add it to n8n as the "Telegram API" credential type, entering the bot token

Google Drive OAuth2 API Setup
1. In the Google Cloud Console, go to APIs & Services → Credentials
2. Create an OAuth 2.0 client ID for a web application and enable the Drive API
3. Add the redirect URI [your-n8n-instance-url]/rest/oauth2-credential/callback
4. Add it to n8n as the "Google Drive OAuth2 API" credential type and authorize

⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Assign the n8n API, Telegram API, and Google Drive credentials to their nodes
3. Update the Telegram chat ID in the "Send Backup to Telegram" node
4. Set the Google Drive file ID in the "Download Backup from Drive" node (from the file URL)
5. Activate the workflow and test the backup by executing the Schedule node manually
6. Test the restore: run the manual trigger for Drive, or use the form for upload

🎯 Use Cases
- Dev teams backing up staging workflows to Telegram for rapid production restores during deployments
- Solo automators uploading local backups via form to sync across devices after n8n migrations
- Agencies sharing client workflow archives via Drive links for secure, collaborative restores
- Educational setups scheduling exports to Telegram for student template distribution and recovery

⚠️ Troubleshooting
- Backup file empty: verify the n8n API permissions include read access to workflows
- Restore parse errors: check the JSON validity of the backup file; adjust the Code node property reference if needed
- API rate limits hit: increase the Wait node duration or reduce the batch size in the Loop
- Form upload fails: ensure the file is valid JSON text; test with a small backup first
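The "cleans workflows for compatibility" step can be sketched as stripping the read-only fields the n8n API rejects on create/update. The exact field whitelist varies by n8n version, so treat this as an assumption to verify against your instance:

```javascript
// Keep only the fields a workflow create/update call accepts; drop
// server-assigned metadata like id, createdAt, updatedAt, and active.
function cleanWorkflow(wf) {
  const { name, nodes, connections, settings } = wf;
  return { name, nodes, connections, settings: settings || {} };
}
```

Matching on `name` (as the restore branch does) rather than `id` is what lets the same backup restore cleanly into a fresh instance where the original IDs no longer exist.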

By Daniel Nkencho
280

Automated Gmail support agent with Gemini 2.5, RAG & Cohere reranking

AI Email Support Agent with RAG & Cohere Reranking

Transform your inbox into an intelligent support system: automatically detect new emails, retrieve relevant knowledge from Pinecone, rerank it with Cohere for precision, generate contextual replies using Gemini AI, and respond, all while maintaining conversation history.

What It Does
This workflow triggers on incoming Gmail messages, leverages a LangChain agent with PostgreSQL memory for context, queries a Pinecone vector store (RAG) enhanced by Cohere reranking and OpenAI embeddings, crafts personalized responses via Gemini 2.5, and auto-replies to keep support flowing.

Key Features
- Gmail Integration: real-time polling for new emails every minute
- RAG with Pinecone: retrieves the top 10 relevant docs from the "agency-info" index as an agent tool
- Cohere Reranking: boosts retrieval accuracy by reordering results semantically
- Persistent Memory: Postgres chat history keyed by email ID for ongoing threads
- Gemini-Powered Agent: handles queries with a custom system prompt for agency support
- Seamless Auto-Reply: sends formatted text responses directly in Gmail

Perfect For
- Agencies: automate client FAQs on services, pricing, and ownership
- Support teams: scale responses without losing conversation context
- Small businesses: handle inquiries 24/7 with AI-driven accuracy
- Developers: prototype RAG agents with vector stores and rerankers
- Marketers: personalize outreach replies based on a knowledge base
- Consultants: quick, informed answers from internal docs

Technical Highlights
Built on n8n's LangChain ecosystem, this workflow highlights:
- A trigger-to-response pipeline with polling and webhooks
- Hybrid retrieval: embeddings + vector search + semantic reranking
- Stateful agents with database-backed memory for multi-turn chats
- A multi-provider setup: OpenAI (embeddings), Cohere (rerank), Google (LLM)
- A scalable production design with configurable topK and session keys

Setup Instructions

Prerequisites
- n8n instance with LangChain nodes enabled
- Accounts for: Gmail (OAuth2), OpenAI (API key), Cohere (API key), Google Gemini (API key), Pinecone (API key and index), Postgres (database connection, e.g., Neon or Supabase)

Required Credentials

Gmail OAuth2
1. Enable the Gmail API in the Google Cloud Console
2. Create an OAuth2 credential in n8n with the scopes https://www.googleapis.com/auth/gmail.readonly and https://www.googleapis.com/auth/gmail.send

OpenAI API
1. Get an API key from platform.openai.com
2. Add it as an OpenAI credential in n8n

Cohere API
1. Sign up at cohere.com
2. Copy the API key into an n8n Cohere credential

Google Gemini API
1. Generate a key at https://aistudio.google.com/
2. Add it as a Google PaLM credential in n8n (compatible with Gemini)

Pinecone API
1. Create an index named "agency-info" with dimension 1024
2. Add the API key to an n8n Pinecone credential

Postgres
1. Set up a database (e.g., Neon/Supabase) with a table for chat history
2. Add the connection details (host, database, user, password) to an n8n Postgres credential

Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Assign all required credentials to the respective nodes
3. Populate the Pinecone "agency-info" index with your knowledge base documents (use a separate upsert workflow or the Pinecone dashboard)
4. Customize the tableName in the Postgres Memory node if needed (default: "emailsupportagent_")
5. Adjust the agent's system prompt or topK retrieval as required for your use case
6. Activate the workflow and test it by sending a sample email to trigger it

Troubleshooting
- No trigger firing: verify the Gmail scopes and polling interval
- Empty retrieval: check the Pinecone index population, dimensions (must be 1024), and document embeddings
- Rerank errors: ensure the Cohere API key is valid and has sufficient quota
- Memory issues: confirm the Postgres connection and that the sessionKey uses the email ID

Perfect for deploying hands-off email automation. Import, connect credentials, and activate!
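The hybrid retrieval pipeline above follows a retrieve-then-rerank pattern: take the top-10 vector hits, reorder them by a reranker's relevance scores, and keep only the best few for the agent's context. A minimal sketch, with stand-in scores in place of Cohere's actual API response:

```javascript
// Reorder retrieved documents by reranker relevance scores and keep topK.
// scores[i] is assumed to be the reranker's relevance for docs[i].
function rerank(docs, scores, topK = 3) {
  return docs
    .map((doc, i) => ({ doc, score: scores[i] }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((x) => x.doc);
}
```

The point of the second pass is that vector-similarity order and answer-relevance order often disagree; the reranker fixes the ordering before the LLM ever sees the context.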

By Daniel Nkencho
227

Extract & process Q&A from URLs with Airtop, OpenRouter AI & Safety Guardrails

Transform your Telegram bot into a secure content analyzer: send any URL and get safe, structured Q&A extractions with AI-powered safety checks and web search capabilities.

📋 What This Template Does
This workflow activates when a user sends a valid URL to your Telegram bot. It extracts questions and answers from the webpage using Airtop, applies NSFW and PII guardrails to ensure safe content, then uses an OpenRouter AI agent (with optional Tavily search) to generate and send a concise response. If the guardrails fail, it alerts the user instead.

- Filters for valid URLs only to prevent unnecessary processing
- Extracts structured Q&A from documents or forms
- Enforces safety checks for harmful or private content
- Supports web searches for enhanced responses when needed

🔧 Prerequisites
- A Telegram bot created via @BotFather
- Accounts with Airtop, OpenRouter, and Tavily

🔑 Required Credentials

Telegram API Setup
1. Open Telegram → search for @BotFather → use the /newbot command
2. Follow the prompts to create the bot and obtain the API token
3. Add it to n8n as the Telegram API credential type

Airtop API Setup
1. Visit https://airtop.ai → sign up or log in → navigate to Dashboard → API Keys
2. Generate a new API key with extraction permissions
3. Add it to n8n as the Airtop API credential type

OpenRouter API Setup
1. Go to https://openrouter.ai → sign up or log in → navigate to the API Keys section
2. Generate and copy your API key (the free tier is sufficient for basic use)
3. Add it to n8n as the OpenRouter API credential type

Tavily API Setup
1. Visit https://app.tavily.com → sign up or log in → go to API Keys
2. Generate and copy your API key
3. Add it to n8n as the Tavily API credential type

⚙️ Configuration Steps
1. Import the workflow JSON into n8n
2. Assign your Telegram, Airtop, OpenRouter, and Tavily credentials to the respective nodes
3. Activate the workflow to register the Telegram trigger
4. Test by sending a plain URL (no extra text) to your bot in Telegram
5. Monitor the first execution and adjust the guardrail thresholds if needed

🎯 Use Cases
- Researchers summarizing academic papers or reports while ensuring no sensitive data leaks
- Support teams extracting info from customer-submitted docs/forms with automatic safety filtering
- Content creators pulling Q&A from articles for bots, blocking inappropriate responses
- Educators analyzing educational resources safely for student-facing chat tools

⚠️ Troubleshooting
- No response from the bot: verify the message contains only a valid URL; adjust the regex in the "Filter Only URLs" node if needed
- Guardrail failures: lower the NSFW threshold (e.g., from 0.7 to 0.5) or disable PII checks in the "Apply Safety Guardrails" node
- Extraction errors: test with public, text-heavy URLs; some JS-heavy sites may require alternative extractors
- Rate limits hit: check the OpenRouter/Tavily dashboards for usage; upgrade to paid tiers for heavy traffic
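The "plain URL, no extra text" rule enforced by the "Filter Only URLs" node can be approximated with a small check. This is an assumed equivalent of that node's regex, not its actual expression:

```javascript
// Accept the Telegram message only when the whole text is a single
// http(s) URL with no surrounding words.
function isPlainUrl(text) {
  const trimmed = (text || "").trim();
  if (trimmed === "" || /\s/.test(trimmed)) return false; // extra text disqualifies it
  try {
    const url = new URL(trimmed);
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false; // not parseable as a URL at all
  }
}
```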

By Daniel Nkencho
73