Muhammad Asadullah

I am a Data Scientist and Generative AI Developer with expertise in building AI applications, chatbots, and automations. Skilled in Python, R, and no-code/low-code platforms like n8n, Zapier, and Make.

Total Views: 2,368
Templates: 5

Templates by Muhammad Asadullah

End-to-end AI blog research and writer with Gemini AI, Supabase, and Nano-Banana

Blog Research and Writer n8n Workflow - AI Blog Writer

Fully automated blog creation system using n8n + AI agents + image generation.

Example Blog

Overview
This workflow automates the entire blog creation pipeline, from topic research to final publication. Three specialized AI agents collaborate to produce publication-ready blog posts with custom images, all saved directly to your Supabase database.

How It Works

Research Agent (Topic Discovery)
Triggers: Runs on a schedule (default: daily at 4 AM)
Process:
- Fetches existing blog titles from Supabase to avoid duplicates
- Uses Google Search + RSS feeds to identify trending topics in your niche
- Scrapes competitor content to find content gaps
- Generates detailed topic briefs with SEO keywords, search intent, and differentiation angles
Output: Comprehensive research document with SERP analysis and content strategy

Writer Agent (Content Creation)
Triggers: Receives research from Agent 1
Process:
- Writes the full blog article based on the research brief
- Follows strict SEO and readability guidelines (no AI fluff, natural tone, actionable content)
- Structures content with proper HTML markup
- Includes key sections: hook, takeaways, frameworks, FAQs, CTAs
- Places image placeholders with mock URLs (https://db.com/image_1, etc.)
Output: Complete JSON object with title, slug, excerpt, tags, category, and full HTML content

Image Prompt Writer (Visual Generation)
Triggers: Receives blog content from Agent 2
Process:
- Analyzes the blog content to determine the number and type of images needed
- Generates a detailed 150-word prompt for each image (feature image + content images)
- Creates prompts optimized for the Nano-Banana image model
- Names each image descriptively for SEO
Output: Structured prompts for 3-6 images per blog post

Image Generation Pipeline
Process:
- Loops through each image prompt
- Generates images via the Nano-Banana API (Wavespeed.ai)
- Downloads and converts the images to PNG
- Uploads them to a Supabase storage bucket
- Generates permanent signed URLs
- Replaces the mock URLs in the HTML with the real image URLs (see the sketch after the feature list below)
Output: Blog HTML with all images embedded

Publication
The final blog post is saved to the Supabase blogs table as a draft, ready for immediate publishing or review.

Key Features
✅ Duplicate Prevention: Checks existing blogs before researching new topics
✅ SEO Optimized: Natural language, proper heading structure, keyword integration
✅ Human-Like Writing: No robotic phrases, varied sentence structure, actionable advice
✅ Custom Images: Generated specifically for each blog's content
✅ Fully Structured: JSON output with all metadata (tags, category, excerpt, etc.)
✅ Error Handling: Automatic retries with wait periods between agent calls
✅ Tool Integration: Google Search, URL scraping, and RSS feeds for research
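The URL-replacement step at the end of the image pipeline is essentially a string substitution. Here is a minimal sketch of how it might look inside an n8n Code node; the node name "Writer Agent" and the item shapes are illustrative assumptions, not the template's exact code:

```ts
// n8n Code node sketch (run once for all items): swap each mock image URL
// in the article HTML for the real signed URL produced by the upload loop.
// The referenced node name and item fields are assumptions for illustration.
const blog = $('Writer Agent').first().json;        // { title, content, ... }
const images = $input.all().map(item => item.json); // [{ mockUrl, signedUrl }, ...]

let html = blog.content;
for (const { mockUrl, signedUrl } of images) {
  // Replace every occurrence of the placeholder, e.g. https://db.com/image_1
  html = html.split(mockUrl).join(signedUrl);
}

return [{ json: { ...blog, content: html } }];
```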
Setup Requirements

API Keys Needed
- Google Gemini API: for Gemini 2.5 Pro/Flash models (content generation/writing)
- Groq API (optional): for the Kimi-K2-Instruct model (research/writing)
- Serper.dev API: for Google Search (2,500 free searches/month)
- Wavespeed.ai API: for Nano-Banana image generation
- Supabase account: for the database and image storage

Supabase Setup
- Create a blogs table with the fields: title, slug, excerpt, category, tags, featured_image, status, featured, content
- Create a storage bucket for blog images
- Configure the bucket as public, or use signed URLs

Workflow Configuration
Update these placeholders:
- RSS feed URLs: Replace [your website's rss.xml] with your site's RSS feed
- Storage URLs: Update the Supabase storage paths in the "Upload object" and "Generate presigned URL" nodes
- API keys: Add your credentials to all HTTP Request nodes
- Niche/brand: Customize the Research Agent system prompt with your industry keywords
- Writing style: Adjust the Writer Agent prompt to your brand voice

Customization Options

Change Image Provider
Replace the "nano banana" node with:
- Gemini Imagen 3/4
- DALL-E 3
- Midjourney API
- Any Wavespeed.ai model

Adjust Schedule
Modify the "Schedule Trigger" to run:
- Multiple times daily
- On specific days of the week
- On demand via webhook

Alternative Research Tools
Replace Serper.dev with:
- Perplexity API (included as an alternative node)
- Custom web scraping
- A different search provider

Output Format

```json
{
  "title": "Your SEO-Optimized Title",
  "slug": "your-seo-optimized-title",
  "excerpt": "Compelling 2-3 sentence summary with key benefits.",
  "category": "Your Category",
  "tags": ["tag1", "tag2", "tag3", "tag4"],
  "author_name": "Your Team Name",
  "featured": false,
  "status": "draft",
  "content": "<article>...complete HTML with embedded images...</article>"
}
```

Performance Notes
- Average runtime: 15-25 minutes per blog post
- Cost per post: ~$0.10-0.30 (depending on API usage)
- Image generation: 10-15 seconds per image with Nano-Banana
- Retry logic: Automatically handles API timeouts with 5-15 minute wait periods

Best Practices
- Review Before Publishing: The workflow saves posts with "draft" status for human review
- Monitor API Limits: Track Serper.dev searches and image-generation quotas
- Test Custom Prompts: Adjust the Research/Writer prompts to match your brand
- Image Quality: Review generated images; regenerate if needed
- SEO Validation: Check slugs and meta descriptions before going live

Workflow Architecture
Three main phases:
1. Research → Writer → Image Prompts (sequential AI agent chain)
2. Image Generation → Upload → URL Replacement (loop-based processing)
3. Final Assembly → Database Insert (single save operation)

Error handling:
- Wait nodes between agents prevent rate limiting
- Retry logic on agent failures (max 2 retries)
- Conditional checks ensure content quality before proceeding

---

Result: Hands-free blog publishing that maintains quality while saving 3-5 hours per post.
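For context, the final "Database Insert" phase amounts to one POST against Supabase's auto-generated REST API. A hedged sketch, with the project URL and service key as placeholder environment variables:

```ts
// Sketch: insert the finished post into the Supabase `blogs` table via its
// REST API. SUPABASE_URL and SUPABASE_SERVICE_KEY are placeholder env vars.
const post = {
  title: "Your SEO-Optimized Title",
  slug: "your-seo-optimized-title",
  excerpt: "Compelling 2-3 sentence summary with key benefits.",
  category: "Your Category",
  tags: ["tag1", "tag2"],
  featured: false,
  status: "draft",                    // saved as draft for human review
  content: "<article>...</article>",
};

const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/blogs`, {
  method: "POST",
  headers: {
    apikey: process.env.SUPABASE_SERVICE_KEY,
    Authorization: `Bearer ${process.env.SUPABASE_SERVICE_KEY}`,
    "Content-Type": "application/json",
    Prefer: "return=representation",  // echo the inserted row back
  },
  body: JSON.stringify(post),
});
if (!res.ok) throw new Error(`Insert failed: ${res.status}`);
```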

By Muhammad Asadullah · 1,187 views

Google Drive to Pinecone vector storage workflow

Document Chat Bot with Automated RAG System

This workflow creates a conversational assistant that answers questions based on your Google Drive documents. It automatically processes various file types and uses Retrieval-Augmented Generation (RAG) to provide accurate answers grounded in your document content.

How It Works
- Monitors Google Drive for new documents: Automatically detects when files are created or updated in designated folders
- Processes multiple file types: Handles PDFs, Excel spreadsheets, and Google Docs
- Builds a knowledge base: Converts documents into searchable vector embeddings stored in Supabase
- Provides a chat interface: Users can ask questions about their documents through a web interface
- Retrieves relevant information: Uses RAG techniques to find and present the most relevant information

Setup Steps (Estimated time: 25-30 minutes)
- API credentials: Connect your OpenAI API key for text processing and embeddings
- Google Drive integration: Set up Google Drive triggers to monitor specific folders
- Supabase configuration: Configure the Supabase vector database for document storage
- Chat interface setup: Deploy the web-based chat interface using the provided webhook

The workflow automatically chunks documents into manageable segments, generates embeddings, and stores them in a vector database for efficient retrieval. When users ask questions, the system finds the most relevant document sections and uses them to generate accurate, contextual responses.
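Under the hood, "chunk → embed → store" is only a few lines. A rough TypeScript sketch using the openai SDK; the chunk size, overlap, model name, and the shape of the returned rows are illustrative assumptions rather than the template's exact settings:

```ts
// Sketch of the chunk-embed step feeding a vector store.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Split text into fixed-size chunks with a small overlap so no sentence
// is stranded at a boundary.
function chunk(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size - overlap) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

async function embedDocument(text: string) {
  const pieces = chunk(text);
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: pieces,
  });
  // Each row pairs a chunk with its vector, ready to insert into a
  // Supabase table that has a pgvector `embedding` column.
  return pieces.map((content, i) => ({ content, embedding: data[i].embedding }));
}
```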

By Muhammad Asadullah · 285 views

LinkedIn post agent

LinkedIn Post Generator - Automated Marketing Content Workflow

This workflow creates and publishes professional LinkedIn posts automatically on a schedule, complete with AI-generated images. Here's how it works.

How It Works
- Generates professional marketing posts focused on Generative AI and enterprise solutions (update the prompt for your desired content)
- Creates matching images that visually represent the post's themes
- Publishes directly to LinkedIn on a schedule
- Incorporates RSS feeds for up-to-date content inspiration

Setup Steps (Estimated time: 15-20 minutes)
- API credentials: Connect your OpenAI API key for text and image generation
- LinkedIn authentication: Add your LinkedIn credentials to enable posting
- RSS configuration: Add relevant industry RSS feed URLs for content inspiration
- Schedule: Set your preferred posting frequency in the Schedule Trigger node

The workflow uses GPT-4o and GPT-4o mini to create professionally toned content that positions you as a thought leader in marketing and AI implementation. The generated content follows specific formatting guidelines to maximize engagement on LinkedIn: each post is carefully crafted to be 100-150 words with strategic paragraph breaks, ending with relevant hashtags. The matching images are clean, minimalistic, and aligned with the post's theme, without distracting text elements.
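n8n's LinkedIn node handles the publishing step for you, but for context, a text-only share against LinkedIn's v2 ugcPosts endpoint looks roughly like this; the access token and person URN are placeholders:

```ts
// Sketch: text-only share via LinkedIn's v2 ugcPosts endpoint.
const token = process.env.LINKEDIN_ACCESS_TOKEN; // placeholder env var
const personUrn = "urn:li:person:YOUR_ID";       // placeholder member URN

const post = {
  author: personUrn,
  lifecycleState: "PUBLISHED",
  specificContent: {
    "com.linkedin.ugc.ShareContent": {
      shareCommentary: { text: "Your 100-150 word post...\n\n#AI #Marketing" },
      shareMediaCategory: "NONE", // "IMAGE" when attaching a generated image
    },
  },
  visibility: { "com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC" },
};

const res = await fetch("https://api.linkedin.com/v2/ugcPosts", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
    "X-Restli-Protocol-Version": "2.0.0",
  },
  body: JSON.stringify(post),
});
if (!res.ok) throw new Error(`LinkedIn post failed: ${res.status}`);
```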

By Muhammad Asadullah · 238 views

Extract & filter Reddit posts & comments with keyword search & markdown formatting

Reddit Posts & Comments Scraper

A powerful workflow to scrape Reddit posts and comments by keywords and/or subreddit, with intelligent filtering and formatting.

How it works
1. Search Reddit - Accepts keywords and/or subreddit parameters via webhook and searches for relevant posts
2. Filter & Sort - Filters posts by date (last 60 days) and minimum upvotes (20+), removes duplicates, and sorts by popularity
3. Extract Comments - For each post, retrieves the top 20 most upvoted comments with their reply threads
4. Format Results - Structures all posts and comments into a clean, readable markdown report
5. Return Data - Sends the formatted report back as a webhook response, ready for use in AI tools or other applications

Set up steps
Time to set up: ~10 minutes

1. Create a Reddit app (5 minutes)
- Go to https://www.reddit.com/prefs/apps and create a new app
- Add your n8n URL and callback URL in the "redirect uri" field (you'll find this in the Reddit OAuth2 credentials setup in n8n)
- Copy your app's Client ID and Client Secret

2. Configure Reddit credentials in n8n (2 minutes)
- In n8n, create new Reddit OAuth2 API credentials
- Paste your Client ID and Client Secret from step 1
- Complete the OAuth2 authentication flow

3. Update webhook URLs (2 minutes)
- Update the webhook URLs in the example HTTP request nodes to match your n8n instance URL
- The workflow has two webhooks:
  - /webhook/reddit-search-keyword - for searching by keywords only
  - /webhook/reddit-search-subreddit - for searching within a specific subreddit

4. Customize filters (1 minute, optional)
- Date filter: line 6 in the "Posted in Last x days" node (default: 60 days)
- Upvote filter: "Upvotes Requirement Filtering" node (default: 20+ upvotes)
- Post limit: "Limit" node (default: 10 posts)

5. Test the workflow
Use the example HTTP request nodes to test both search methods (see the sketch below):
- Keywords only: ?keywords=your,keywords,here
- Subreddit search: ?keywords=your,keywords&subreddit=SubredditName

Usage Examples
- Search by keywords: GET https://your-n8n.com/webhook/reddit-search-keyword?keywords=AI,machine learning
- Search within a subreddit: GET https://your-n8n.com/webhook/reddit-search-subreddit?keywords=ChatGPT,GPT-4&subreddit=OpenAI

The workflow returns a formatted text report with all posts, their metadata, and top comments, ready for analysis or AI processing.
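Once the workflow is active, any HTTP client can consume it. A minimal TypeScript sketch of the keyword search, with your-n8n.com standing in for your instance's host:

```ts
// Sketch: call the keyword-search webhook and read back the markdown report.
const base = "https://your-n8n.com/webhook/reddit-search-keyword"; // placeholder host
const params = new URLSearchParams({ keywords: "AI,machine learning" });

const res = await fetch(`${base}?${params}`);
if (!res.ok) throw new Error(`Webhook returned ${res.status}`);

const report = await res.text(); // markdown report of posts + top comments
console.log(report.slice(0, 500));
```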

By Muhammad Asadullah · 196 views

YouTube channel monitor with Gemini AI transcription & summarization

This n8n workflow monitors YouTube channels 24/7, transcribes new videos with AI, and scores them by relevance, with everything saved automatically to Google Sheets.

About
This n8n workflow automatically monitors YouTube channels, transcribes new videos, and generates AI-powered summaries with relevance scoring. It pulls channel URLs from a Google Sheet, fetches recent videos (last 14 days), transcribes them using Google Gemini, and saves detailed summaries back to your spreadsheet with relevance scores based on custom criteria.

How It Works
1. Schedule Trigger: Runs weekly (every 7 days at 7:05 AM)
2. Fetch Channels: Reads channel URLs and filter criteria from Google Sheets
3. Process Channels: Loops through each channel and fetches recent videos via YouTube RSS feeds (see the sketch after these instructions)
4. Filter Videos: Only processes videos from the last 14 days that don't already exist in the database
5. Transcribe: Uses the Google Gemini API to transcribe video content
6. Summarize & Score: An AI agent analyzes transcripts against your criteria and generates:
   - Full 500-word summary in markdown
   - Short bullet-point summary
   - Topic classification
   - Relevance score (1-10)
   - Relevance reasoning
7. Save Results: Appends or updates video data in Google Sheets

Setup Instructions

Prerequisites
- n8n instance (cloud or self-hosted)
- Google account with Sheets API access
- Google Gemini API key

Step 1: Google Sheets Setup
Create a new Google Spreadsheet with two sheets:
- Google Sheet Template: Click here to access
- Make a copy to your Google Drive
- Configure the two sheets as described below

Sheet 1: "channels"
- Column A: category (e.g., "AI Tools", "Marketing")
- Column B: channels (comma-separated YouTube channel URLs)
- Column C: video filter criteria (describe what makes videos relevant)

Sheet 2: "videos"
- Columns: id, title, video url, date, channel, category, topic, summary, short summary, transcript, relevance score, relevance reason

Step 2: Configure n8n Workflow
- Import the workflow JSON into n8n
- Google Sheets authentication: Connect your Google account in both "Get row(s) in sheet" nodes, and update the document ID to match your spreadsheet
- Gemini API key: In the "set channel" node, replace the placeholder API key with your own Gemini API key (get your key from https://makersuite.google.com/app/apikey)

Step 3: Adjust Settings (Optional)
- Video lookback period: Edit line 6 in the "Videos Posted in Last X days" node (default: 14 days)
- Videos per channel: Modify the "Limit" node (default: 4 videos)
- Schedule: Change the "Schedule Trigger5" node timing as needed
- Wait times: Adjust the wait nodes to respect API rate limits

Step 4: Test & Activate
- Add a test channel URL to your "channels" sheet
- Run the workflow manually to verify it works
- Activate the workflow to run on schedule

Notes
- The workflow includes rate limiting (wait nodes) to avoid API throttling
- Videos are checked for duplicates before processing
- Failed transcriptions continue to the next video without stopping the workflow
- Results are automatically saved to Google Sheets after each video is processed
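The RSS fetch in step 3 relies on YouTube's public per-channel feed. A rough TypeScript sketch of fetching it and applying the 14-day filter; the channel ID is a placeholder and the regex-based parsing is an illustrative shortcut, not the workflow's actual node logic:

```ts
// Sketch: fetch a channel's RSS feed and keep only videos from the last 14 days.
const channelId = "UCxxxxxxxxxxxxxxxxxxxxxx"; // placeholder channel ID
const feedUrl = `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`;

const xml = await (await fetch(feedUrl)).text();
const cutoff = Date.now() - 14 * 24 * 60 * 60 * 1000; // 14-day lookback

// Pull out each <entry> and read its video ID, title, and publish date.
const entries = [...xml.matchAll(/<entry>([\s\S]*?)<\/entry>/g)]
  .map(([, entry]) => ({
    id: entry.match(/<yt:videoId>(.*?)<\/yt:videoId>/)?.[1] ?? "",
    title: entry.match(/<title>(.*?)<\/title>/)?.[1] ?? "",
    published: new Date(entry.match(/<published>(.*?)<\/published>/)?.[1] ?? 0),
  }))
  .filter(v => v.published.getTime() >= cutoff);

console.log(entries); // recent videos, ready for the transcription step
```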

By Muhammad Asadullah · 180 views