
Automate SEO title & description updates for WordPress with Yoast SEO API

This workflow automates the update of Yoast SEO metadata for a specific post or product on a WordPress or WooCommerce site. It sends a POST request to a custom API endpoint exposed by the Yoast SEO API Manager plugin, allowing for programmatic changes to the SEO title and meta description. Bulk version available here.

Prerequisites
- A WordPress site with administrator access.
- The Yoast SEO plugin installed and activated.
- The Yoast SEO API Manager companion plugin installed and activated to expose the required API endpoint.
- WordPress credentials configured within your n8n instance.

Setup Steps
1. Configure the Settings Node: In the Settings node, replace the value of the WordPress URL variable with the full URL of your WordPress site (e.g., https://your-domain.com/).
2. Set Credentials: In the HTTP Request - Update Yoast Meta node, select your pre-configured WordPress credentials from the Credential for WordPress API dropdown menu.
3. Define Target and Content: In the same HTTP Request node, navigate to the Body Parameters section and update the following values:
   - post_id: The ID of the WordPress post or WooCommerce product you wish to update.
   - yoast_title: The new SEO title.
   - yoast_description: The new meta description.

How It Works
- Manual Trigger: The workflow is initiated manually. This can be replaced by any trigger node for full automation.
- Settings Node: This node defines the base URL of the target WordPress instance. Centralizing the configuration here makes it easier to manage.
- HTTP Request Node: This is the core component. It constructs and sends a POST request to the /wp-json/yoast-api/v1/update-meta endpoint. The request body contains the post_id and the new metadata, and it authenticates using the selected n8n WordPress credentials.

Customization Guide
- Dynamic Inputs: To update posts dynamically, replace the static values in the HTTP Request node with n8n expressions. For example, you can use data from a Google Sheets node by setting the post_id value to an expression like {{ $json.column_name }}.
- Update Additional Fields: The underlying API may support updating other Yoast fields. Consult the Yoast SEO API Manager plugin's documentation to identify other available parameters (e.g., yoast_canonical_url) and add them to the Body Parameters section of the HTTP Request node.
- Change the Trigger: Replace the When clicking 'Test workflow' node with any other trigger node to fit your use case, such as:
  - Schedule: to run the update on a recurring basis.
  - Webhook: to trigger the update from an external service.
  - Google Sheets: to trigger the workflow whenever a row is added or updated in a specific sheet.
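For reference, the request the HTTP Request node sends can be reproduced outside n8n. This is only a minimal sketch, assuming authentication with a WordPress Application Password over Basic auth; the domain, credentials, and values are placeholders, and your n8n credential type may differ:

```javascript
// Minimal sketch of the call the workflow makes (placeholders throughout; assumes
// Basic auth via a WordPress Application Password - adjust to match your n8n credentials).
const response = await fetch('https://your-domain.com/wp-json/yoast-api/v1/update-meta', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Basic ' + Buffer.from('admin:your-application-password').toString('base64'),
  },
  body: JSON.stringify({
    post_id: 123,                              // ID of the post or product to update
    yoast_title: 'New SEO title',              // stored as _yoast_wpseo_title
    yoast_description: 'New meta description', // stored as _yoast_wpseo_metadesc
  }),
});
console.log(await response.json()); // e.g. { "yoast_title": "updated", "yoast_description": "updated" }
```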
Yoast SEO API Manager Plugin for WordPress

```php
<?php
/*
Plugin Name: Yoast SEO API Manager v1.2
Description: Manages the update of Yoast metadata (SEO Title, Meta Description) via a dedicated REST API endpoint.
Version: 1.2
Author: Phil - https://inforeole.fr (Adapted by Expert n8n)
*/

if ( ! defined( 'ABSPATH' ) ) {
	exit; // Exit if accessed directly.
}

class YoastAPIManager {

	public function __construct() {
		add_action( 'rest_api_init', [ $this, 'register_api_routes' ] );
	}

	/**
	 * Registers the REST API route to update Yoast meta fields.
	 */
	public function register_api_routes() {
		register_rest_route( 'yoast-api/v1', '/update-meta', [
			'methods'             => 'POST',
			'callback'            => [ $this, 'update_yoast_meta' ],
			'permission_callback' => [ $this, 'check_route_permission' ],
			'args'                => [
				'post_id' => [
					'required'          => true,
					'validate_callback' => function( $param ) {
						$post = get_post( (int) $param );
						if ( ! $post ) {
							return false;
						}
						$allowed_post_types = class_exists( 'WooCommerce' ) ? [ 'post', 'product' ] : [ 'post' ];
						return in_array( $post->post_type, $allowed_post_types, true );
					},
					'sanitize_callback' => 'absint',
				],
				'yoast_title' => [
					'type'              => 'string',
					'sanitize_callback' => 'sanitize_text_field',
				],
				'yoast_description' => [
					'type'              => 'string',
					'sanitize_callback' => 'sanitize_text_field',
				],
			],
		] );
	}

	/**
	 * Updates the Yoast meta fields for a specific post.
	 *
	 * @param WP_REST_Request $request The REST API request instance.
	 * @return WP_REST_Response|WP_Error Response object on success, or WP_Error on failure.
	 */
	public function update_yoast_meta( WP_REST_Request $request ) {
		$post_id = $request->get_param( 'post_id' );

		if ( ! current_user_can( 'edit_post', $post_id ) ) {
			return new WP_Error( 'rest_forbidden', 'You do not have permission to edit this post.', [ 'status' => 403 ] );
		}

		// Map API parameters to the Yoast database meta keys.
		$fields_map = [
			'yoast_title'       => '_yoast_wpseo_title',
			'yoast_description' => '_yoast_wpseo_metadesc',
		];

		$results = [];
		$updated = false;

		foreach ( $fields_map as $param_name => $meta_key ) {
			if ( $request->has_param( $param_name ) ) {
				$value = $request->get_param( $param_name );
				update_post_meta( $post_id, $meta_key, $value );
				$results[ $param_name ] = 'updated';
				$updated                = true;
			}
		}

		if ( ! $updated ) {
			return new WP_Error( 'no_fields_provided', 'No Yoast fields were provided for update.', [ 'status' => 400 ] );
		}

		return new WP_REST_Response( $results, 200 );
	}

	/**
	 * Checks if the current user has permission to access the REST API route.
	 *
	 * @return bool
	 */
	public function check_route_permission() {
		return current_user_can( 'edit_posts' );
	}
}

new YoastAPIManager();
```

Bulk version available here: this bulk version, provided with a dedicated WordPress plugin, allows you to generate and bulk-update meta titles and descriptions for multiple articles simultaneously using artificial intelligence. It automates the entire process, from article selection to the final update in Yoast, offering considerable time savings.

---
Phil | Inforeole | Linkedin 🇫🇷 Contact us to automate your processes

By phil

Automate hotel price comparison with multi-platform scraping and email reporting

This is a production-ready, end-to-end workflow that automatically compares hotel prices across multiple booking platforms and delivers beautiful email reports to users. Unlike basic building blocks, this workflow is a complete solution ready to deploy.

---

✨ What Makes This Production-Ready

✅ Complete End-to-End Automation
- Input: Natural language queries via webhook
- Processing: Multi-platform scraping & comparison
- Output: Professional email reports + analytics
- Feedback: Real-time webhook responses

✅ Advanced Features
- 🧠 Natural Language Processing for flexible queries
- 🔄 Parallel scraping from multiple platforms
- 📊 Analytics tracking with Google Sheets integration
- 💌 Beautiful HTML email reports
- 🛡️ Error handling and graceful degradation
- 📱 Webhook responses for real-time feedback

✅ Business Value
- For Travel Agencies: Instant price comparison service for clients
- For Hotels: Competitive pricing intelligence
- For Travelers: Save time and money with automated research

---

🚀 Setup Instructions

Step 1: Import Workflow
1. Copy the workflow JSON from the artifact
2. In n8n, go to Workflows → Import from File/URL
3. Paste the JSON and click Import

Step 2: Configure Credentials

A. SMTP Email (Required)
Settings → Credentials → Add Credential → SMTP
- Host: smtp.gmail.com (for Gmail)
- Port: 587
- User: your-email@gmail.com
- Password: your-app-password (not your regular password!)

Gmail Setup:
1. Enable 2FA on your Google Account
2. Generate an App Password: https://myaccount.google.com/apppasswords
3. Use the generated password in n8n

B. Google Sheets (Optional - for analytics)
Settings → Credentials → Add Credential → Google Sheets OAuth2, then follow the OAuth flow to connect your Google account.

Sheet Setup:
1. Create a new Google Sheet
2. Name the first sheet "Analytics"
3. Add headers: timestamp, query, hotel, city, checkIn, checkOut, bestPrice, platform, totalResults, userEmail
4. Copy the Sheet ID from the URL and paste it in the "Save to Google Sheets" node

Step 3: Set Up Scraping Service

You need to create a scraping API that the workflow calls. Here are your options:

Option A: Use Your Existing Python Script
Create a simple Flask API wrapper:

```python
# api_wrapper.py
from flask import Flask, request, jsonify
import subprocess
import json

app = Flask(__name__)

@app.route('/scrape/<platform>', methods=['POST'])
def scrape(platform):
    data = request.json
    query = f"{data['checkIn']} to {data['checkOut']}, {data['hotel']}, {data['city']}"
    try:
        result = subprocess.run(
            ['python3', 'pricescrap2.py', query, platform],
            capture_output=True, text=True, timeout=30
        )
        # Parse your script output
        output = result.stdout
        # Assuming your script returns price data; extract these values from `output`
        return jsonify({
            'price': extracted_price,
            'currency': 'USD',
            'roomType': 'Standard Room',
            'url': booking_url,
            'availability': True
        })
    except Exception as e:
        return jsonify({'error': str(e)}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

Deploy:

```bash
pip install flask
python api_wrapper.py
```

Update the n8n HTTP Request nodes:
- URL: http://your-server-ip:5000/scrape/booking
- URL: http://your-server-ip:5000/scrape/agoda
- URL: http://your-server-ip:5000/scrape/expedia

Option B: Use Third-Party Scraping Services

Recommended services:
- ScraperAPI (scraperapi.com) - $49/month for 100k requests
- Bright Data (brightdata.com) - Pay as you go
- Apify (apify.com) - Has pre-built hotel scrapers

Example with ScraperAPI:

```
// In the HTTP Request node
URL: http://api.scraperapi.com
Query Parameters:
  api_key: YOUR_API_KEY
  url: https://booking.com/search?hotel={{$json.hotelName}}...
```
Option C: Use the n8n SSH Node (Like Your Original)

Keep your SSH approach but improve it:
1. Replace the HTTP Request nodes with SSH nodes
2. Point them to your server with the Python script
3. Ensure error handling and timeouts

```
// SSH Node Configuration
Host: your-server-ip
Command: python3 /path/to/pricescrap2.py "{{$json.hotelName}}" "{{$json.city}}" "{{$json.checkInISO}}" "{{$json.checkOutISO}}" "booking"
```

Step 4: Activate Webhook
1. Click on the "Webhook - Receive Request" node
2. Click "Listen for Test Event"
3. Copy the webhook URL (e.g., https://your-n8n.com/webhook/hotel-price-check)
4. Test with this curl command:

```bash
curl -X POST https://your-n8n.com/webhook/hotel-price-check \
  -H "Content-Type: application/json" \
  -d '{
    "message": "I want to check Marriott Hotel in Singapore from 15th March to 18th March",
    "email": "user@example.com",
    "name": "John Doe"
  }'
```

Step 5: Activate Workflow
1. Toggle the workflow to Active
2. The webhook is now live and ready to receive requests

---

📝 Usage Examples

Example 1: Basic Query

```json
{
  "message": "Hilton Hotel in Dubai from 20th December to 23rd December",
  "email": "traveler@email.com",
  "name": "Sarah"
}
```

Example 2: Flexible Format

```json
{
  "message": "I need prices for Taj Hotel, Mumbai. Check-in: 5th January, Check-out: 8th January",
  "email": "customer@email.com"
}
```

Example 3: Short Format

```json
{
  "message": "Hyatt Singapore March 10 to March 13",
  "email": "user@email.com"
}
```

---

🎨 Customization Options

Add More Booking Platforms
1. Duplicate an existing "Scrape" node
2. Update the platform parameter
3. Connect it to "Aggregate & Compare"
4. Update the aggregation logic to include the new platform (see the aggregation sketch at the end of this section)

Change Email Template
Edit the "Format Email Report" node's JavaScript:
- Modify the HTML structure
- Change the colors (currently a purple gradient)
- Add your company logo
- Include terms and conditions

Add SMS Notifications
Using Twilio:
1. Add a new node: Twilio → Send SMS
2. Connect it after "Aggregate & Compare"
3. Format: "Best deal: ${hotel} at ${platform} for ${price}"

Add Slack Integration
1. Add a Slack node after "Aggregate & Compare"
2. Send to a travel-deals channel
3. Include quick booking links

Implement Caching
Add Redis or n8n's built-in cache:

```javascript
// Before scraping, check the cache
const cacheKey = `${hotelName}-${city}-${checkIn}-${checkOut}`;
const cached = await $cache.get(cacheKey);
if (cached && Date.now() - cached.timestamp < 3600000) {
  return cached.data; // Use 1-hour cache
}
```

---

📊 Analytics & Monitoring

Google Sheets Dashboard
The workflow automatically logs to Google Sheets.
Create a dashboard with these metrics to track:
- Total searches per day/week
- Most searched hotels
- Most searched cities
- Average price ranges
- Platform with the best prices (frequency)
- User engagement (repeat users)

Example Sheet formulas:

```
// Total searches today
=COUNTIF(A:A, TODAY())

// Most popular hotel
=INDEX(C:C, MODE(MATCH(C:C, C:C, 0)))

// Average best price
=AVERAGE(G:G)
```

Set Up Alerts
Add a node after "Aggregate & Compare":

```javascript
// Alert if prices are unusually high
if (bestDeal.price > avgPrice * 1.5) {
  // Send alert to admin
  return [{ json: { alert: true, message: `High prices detected for ${hotelName}` } }];
}
```

---

🛡️ Error Handling

The workflow includes comprehensive error handling:
- Missing Information: If the user doesn't provide hotel/city/dates → responds with a helpful prompt
- Scraping Failures: If all platforms fail → sends a "No results" email with suggestions
- Partial Results: If some platforms work → shows the available results and notes the errors
- Email Delivery Issues: Uses continueOnFail: true to prevent workflow crashes

---

🔒 Security Best Practices

Rate Limiting
Add rate limiting to prevent abuse:

```javascript
// In the Parse & Validate node
const userEmail = $json.email;
const recentSearches = await $cache.get(`searches:${userEmail}`);
if (recentSearches && recentSearches.length > 10) {
  return [{ json: { status: 'rate_limited', response: 'Too many requests. Please try again in 1 hour.' } }];
}
```

Input Validation
Already implemented - validates hotel names, cities, and dates.

Email Verification
Add email verification before first use:

```javascript
// Send a verification code
const code = Math.random().toString(36).substring(7);
await $sendEmail({
  to: userEmail,
  subject: 'Verify your email',
  body: `Your code: ${code}`
});
```

API Key Protection
Never expose scraping API keys in responses or logs.

---

🚀 Deployment Options

Option 1: n8n Cloud (Easiest)
1. Sign up at n8n.cloud
2. Import the workflow
3. Configure credentials
4. Activate
Pros: No maintenance, automatic updates. Cons: Monthly cost.

Option 2: Self-Hosted (Most Control)

```bash
# Using Docker
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n

# Using npm
npm install -g n8n
n8n start
```

Pros: Free, full control. Cons: You manage updates.

Option 3: Cloud Platforms
- Railway.app (recommended for beginners)
- DigitalOcean App Platform
- AWS ECS
- Google Cloud Run

---

📈 Scaling Recommendations

For < 100 searches/day
- The current setup is perfect
- Use n8n Cloud Starter or a small VPS

For 100-1000 searches/day
- Add Redis caching (1-hour cache)
- Use a queue system for scraping
- Upgrade to n8n Cloud Pro

For 1000+ searches/day
- Implement a job queue (Bull/Redis)
- Use a dedicated scraping service
- Load balance multiple n8n instances
- Consider a microservices architecture

---

🐛 Troubleshooting

Issue: Webhook not responding
- Check that the workflow is Active
- Verify the webhook URL is correct
- Check the n8n logs: Settings → Log Streaming

Issue: No prices returned
- Test the scraping endpoints individually
- Check if the hotel name matches exactly
- Verify the dates are in the future
- Try different date ranges

Issue: Emails not sending
- Verify the SMTP credentials
- Check the "less secure apps" setting (Gmail)
- Use an App Password instead of your regular password
- Check the spam folder

Issue: Slow response times
- Enable parallel scraping (already configured)
- Add timeout limits (30 seconds recommended)
- Implement caching
- Use a faster scraping service
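The customization steps and alert snippet above reference an "Aggregate & Compare" Code node that merges the parallel scraper outputs and picks the best deal. The workflow's own code is not reproduced here, so the following is only a sketch of that logic under assumed field names (`platform`, `price`, `url`):

```javascript
// Sketch of an "Aggregate & Compare" Code node (assumed field names, not the shipped workflow code).
// Each incoming item is one platform's scraper response: { platform, price, currency, url, error? }.
const results = $input.all()
  .map(item => item.json)
  .filter(r => !r.error && typeof r.price === 'number');

if (results.length === 0) {
  return [{ json: { status: 'no_results' } }];
}

// Sort ascending by price; the cheapest offer becomes the "best deal".
results.sort((a, b) => a.price - b.price);
const bestDeal = results[0];
const avgPrice = results.reduce((sum, r) => sum + r.price, 0) / results.length;

return [{
  json: {
    status: 'ok',
    bestDeal,             // e.g. { platform: 'booking', price: 189, url: '...' }
    avgPrice,
    totalResults: results.length,
    allOffers: results,   // passed on to the email report formatter
  },
}];
```

If you add a fourth platform, nothing in this logic needs to change as long as the new scraper returns the same fields.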

By Oneclick AI Squad

Automate Reddit brand monitoring & responses with GPT-4o-mini, Sheets & Slack

How it Works

This workflow automates intelligent Reddit marketing by monitoring brand mentions, analyzing sentiment with AI, and engaging authentically with communities. Every 24 hours, the system searches Reddit for posts containing your configured brand keywords across all subreddits, finding up to 50 of the newest mentions to analyze.

Each discovered post is sent to OpenAI's GPT-4o-mini model for comprehensive analysis. The AI evaluates sentiment (positive/neutral/negative), assigns an engagement score (0-100), determines relevance to your brand, and generates contextual, helpful responses that add genuine value to the conversation. It also classifies the response type (educational/supportive/promotional) and provides reasoning for whether engagement is appropriate.

The workflow intelligently filters posts using a multi-criteria system: only posts that are relevant to your brand, score above 60 in engagement quality, and warrant a response type other than "pass" proceed to engagement. This prevents spam and ensures every interaction is meaningful.

Selected posts are processed one at a time through a loop to respect Reddit's rate limits. For each worthy post, the AI-generated comment is posted, and complete interaction data is logged to Google Sheets, including timestamp, post details, sentiment, engagement scores, and success status. This creates a permanent audit trail and analytics database.

At the end of each run, the workflow aggregates all data into a comprehensive daily summary report with total posts analyzed, comments posted, engagement rate, sentiment breakdown, and the top 5 engagement opportunities ranked by score. This report is automatically sent to Slack with formatted metrics, giving your team instant visibility into your Reddit marketing performance.

---

Who is this for?
- Brand managers and marketing teams needing automated social listening and engagement on Reddit
- Community managers responsible for authentic brand presence across multiple subreddits
- Startup founders and growth marketers who want to scale Reddit marketing without hiring a team
- PR and reputation teams monitoring brand sentiment and responding to discussions in real-time
- Product marketers seeking organic engagement opportunities in product-related communities
- Any business that wants to build an authentic Reddit presence while avoiding spammy marketing tactics

---

Setup Steps

Setup time: approx. 30-40 minutes (credential configuration, keyword setup, Google Sheets creation, Slack integration)

Requirements:
- Reddit account with OAuth2 application credentials (create at reddit.com/prefs/apps)
- OpenAI API key with GPT-4o-mini access
- Google account with a new Google Sheet for tracking interactions
- Slack workspace with posting permissions to a marketing/monitoring channel
- Brand keywords and subreddit strategy prepared

1. Create a Reddit OAuth Application: Visit reddit.com/prefs/apps, create a "script" type app, and obtain your client ID and secret
2. Configure Reddit Credentials in n8n: Add Reddit OAuth2 credentials with your app credentials and authorize access
3. Set up the OpenAI API: Obtain an API key from platform.openai.com and configure it in n8n OpenAI credentials
4. Create a Google Sheet: Set up a new sheet with the columns: timestamp, postId, postTitle, subreddit, postUrl, sentiment, engagementScore, responseType, commentPosted, reasoning
5. Configure these nodes:
   - Brand Keywords Config: Edit the JavaScript code to include your brand name, product names, and relevant industry keywords (a sketch follows below)
   - Search Brand Mentions: Adjust the limit (default 50) and sort preference based on your needs
   - AI Post Analysis: Customize the prompt to match your brand voice and engagement guidelines
   - Filter Engagement-Worthy: Adjust the engagementScore threshold (default 60) based on your quality standards
   - Loop Through Posts: Configure max iterations and batch size for rate limit compliance
   - Log to Google Sheets: Replace YOUR_SHEET_ID with your actual Google Sheets document ID
   - Send Slack Report: Replace YOUR_CHANNEL_ID with your Slack channel ID
6. Test the workflow: Run it manually first to verify all connections work and adjust the AI prompts
7. Activate for daily runs: Once tested, activate the Schedule Trigger to run automatically every 24 hours

---

Node Descriptions
- Daily Marketing Check - Schedule trigger runs workflow every 24 hours automatically
- Brand Keywords Config - JavaScript code node defining brand keywords to monitor on Reddit
- Search Brand Mentions - Reddit node searches all subreddits for brand keyword mentions
- AI Post Analysis - OpenAI analyzes sentiment and relevance, generates contextual helpful comment responses
- Filter Engagement-Worthy - Conditional node filters only high-quality relevant posts worth engaging
- Loop Through Posts - Split in Batches processes each post individually, respecting limits
- Post Helpful Comment - Reddit node posts the AI-generated comment to worthy Reddit discussions
- Log to Google Sheets - Appends all interaction data to the spreadsheet for permanent tracking
- Generate Daily Summary - JavaScript aggregates metrics and sentiment breakdown, generates comprehensive daily report
- Send Slack Report - Posts formatted daily summary with metrics to the team Slack channel
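The Brand Keywords Config node referenced in the setup steps is a Code node that feeds the Reddit search. A minimal sketch of what it might contain is below; the keyword values are placeholders and the output field names are assumptions, so adapt them to whatever the downstream Search Brand Mentions node expects:

```javascript
// Brand Keywords Config - sketch only; replace the placeholder keywords with your own.
const keywords = [
  'YourBrand',          // brand name
  'YourProduct',        // product names
  'your industry term', // relevant industry keywords
];

// Emit the list plus a combined OR query, so the Reddit search node can either
// run once per keyword or run a single combined search.
return [{
  json: {
    keywords,
    searchQuery: keywords.map(k => `"${k}"`).join(' OR '),
  },
}];
```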

By Daniel Shashko

Deploy code to GitHub with natural language via Slack & Claude 3.5

Github Deployer Agent

Overview
The Github Deployer Agent is an intelligent automation tool that integrates with Slack to streamline code deployment workflows. Powered by Anthropic's Claude 3.5 and Tavily for web search, it enables seamless, context-aware file pushes to a GitHub repository with minimal user input.

Capabilities
- Accepts natural language via Slack
- Automatically pushes code to a default GitHub repository
- Uses Claude 3.5 for code generation and decision-making
- Leverages Tavily for real-time web search to enhance context
- Supports folder structure hints to ensure clean and organized repositories

Required Connections
To operate correctly, the following integrations must be in place:
- Slack API Token with permission to read messages and post responses
- GitHub Personal Access Token with repo write permissions
- Tavily API Key for external search functionality
- Claude 3.5 API access via Anthropic

Detailed configuration instructions are provided in the workflow.

Example Input
From Slack, you can send messages like:
"Generate a basic README.md for my Python project and store it in the root directory."

Customising This Workflow
You can tailor the workflow by:
- Modifying default folder paths or repository settings
- Integrating a Jira node to use issue keys as default folder naming
- Adding a Slack file upload option
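The workflow's GitHub node handles the actual push, but if you want to understand or replace that step, the underlying operation is a call to GitHub's repository contents API. The sketch below is only an illustration of that call, not the workflow's own configuration; the owner, repo, path, token, and file content are all placeholders:

```javascript
// Rough sketch of the kind of call a "push file to GitHub" step performs
// (placeholders throughout; the workflow itself uses the n8n GitHub node).
const owner = 'your-org';
const repo = 'your-repo';
const path = 'README.md';                   // folder structure hint taken from the Slack message
const content = '# My Python project\n...'; // file body generated by Claude 3.5

await fetch(`https://api.github.com/repos/${owner}/${repo}/contents/${path}`, {
  method: 'PUT',
  headers: {
    'Authorization': 'Bearer YOUR_GITHUB_TOKEN',
    'Accept': 'application/vnd.github+json',
  },
  body: JSON.stringify({
    message: 'Add README generated via Slack request',
    content: Buffer.from(content).toString('base64'), // the contents API expects base64
  }),
});
```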

By Humble Turtle

Monitor RSS feeds, extract full articles with Jina AI, and save to Supabase

Monitor RSS Feeds, Extract Full Articles, and Save to Supabase

Overview
This workflow solves a common problem with RSS feeds: they often only provide a short summary or snippet of the full article. This template automatically monitors a list of your favorite blog RSS feeds, filters for new content, visits the article page to extract the entire blog post, and then saves the structured data into a Supabase database. It's designed for content creators, marketers, researchers, and anyone who needs to build a personal knowledge base, conduct competitive analysis, or power a content aggregation system without manual copy-pasting.

-----

Use Cases
- Content Curation: Automatically gather full-text articles for a newsletter or social media content.
- Personal Knowledge Base: Create a searchable archive of articles from experts in your field.
- Competitive Analysis: Track what competitors are publishing without visiting their blogs every day.
- AI Model Training: Collect a clean, structured dataset of full-text articles to fine-tune an AI model.

-----

How It Works
1. Scheduled Trigger: The workflow runs automatically on a set schedule (the default is once per day).
2. Fetch RSS Feeds: It takes a list of RSS feed URLs you provide in the "blogs to track" node.
3. Filter for New Posts: It checks the publication date of each article and only continues if the article is newer than a specified age (e.g., published within the last 60 days).
4. Extract Full Content: For each new article, the workflow uses the Jina AI Reader URL (https://r.jina.ai/) to scrape the full, clean text from the blog post's webpage. This is a free and powerful way to get past the RSS snippet limit.
5. Save to Supabase: Finally, it organizes the extracted data and saves it to your chosen Supabase table. The following data is saved by default:
   - title
   - source_url (the link to the original article)
   - content_snippet (the full extracted article text)
   - published_date
   - creator (the author)
   - status (a static value you can set, e.g., "new")
   - content_type (a static value you can set, e.g., "blog")

-----

Setup Instructions
You can get this template running in about 10-15 minutes.

1. Set Up Your RSS Feed List: Navigate to the "blogs to track" Set node. In the source_identifier field, replace the example URLs with the RSS feed URLs for the blogs you want to monitor. You can add as many as you like. Tip: the best way to find a site's RSS feed is to use a tool like Perplexity or a web-browsing enabled LLM.

```javascript
// Example list of RSS feeds
['https://blog.n8n.io/rss', 'https://zapier.com/blog/feeds/latest/']
```

2. Configure the Content Age Filter: Go to the "max_content_age_days" Set node. Change the value from the default 60 to your desired timeframe (e.g., 7 to only get articles from the last week).

3. Connect Your Storage Destination: The template uses the "Save Blog Data to Database" Supabase node. First, ensure you have a table in your Supabase project with columns that match the data (e.g., title, source_url, content_snippet, published_date, creator, etc.). In the n8n node, create new credentials using your Supabase Project URL and Service Role Key. Select your table from the list and map the data fields from the workflow to your table columns. Want to use something else? You can easily replace the Supabase node with a Google Sheets, Airtable, or the built-in n8n Table node. Just drag the final connection to your new node and configure the field mapping.

4. Set Your Schedule: Click on the first node, "Schedule Trigger". Adjust the trigger interval to your needs. The default is every day at noon.

5. Activate Workflow: Click the "Save" button, then toggle the workflow to Active. You're all set!
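If you want to see what the extract-and-map steps boil down to, here is a rough sketch written as an n8n Code node. It is only an illustration: the actual workflow uses HTTP Request and Supabase nodes rather than hand-written code, and the incoming item shape (`title`, `link`, `pubDate`, `creator`) is an assumption about the RSS Feed Read output:

```javascript
// Sketch of the extract-and-map step ("Run Once for Each Item" mode assumed).
const article = $json; // assumed RSS item shape: { title, link, pubDate, creator, ... }

// Jina AI Reader: prefixing any URL with https://r.jina.ai/ returns the page as clean text.
const fullText = await this.helpers.httpRequest({ url: `https://r.jina.ai/${article.link}` });

// Row shape matching the default columns listed above.
return [{
  json: {
    title: article.title,
    source_url: article.link,
    content_snippet: fullText,
    published_date: article.pubDate,
    creator: article.creator,
    status: 'new',
    content_type: 'blog',
  },
}];
```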

By automedia

Clone nested folder structures in Google Drive with custom naming

Google Drive Nested Folder Structure Cloner

How it works
- Takes any Google Drive folder structure and creates an exact copy with custom naming
- Perfect for agencies, consultants, or anyone who needs identical folder structures for multiple projects
- Creates the folder hierarchy only (no files are copied)
- Handles nested folders automatically with level-by-level processing

Set up steps
1. Get your source and destination folder IDs from the Google Drive URLs
2. Edit the 3 variables in the CONFIG node (takes 30 seconds) - see the sketch below
3. Connect your Google Drive credentials in n8n
4. Run the workflow and watch it create your folder structure
5. For deeply nested folders, run it multiple times to process each level

What you'll need
- Google Drive account with API access configured in n8n
- Read permissions on the source folder, write permissions on the destination folder

Pro tips
- Create a "Templates" folder in Google Drive for your reusable structures
- The workflow intelligently skips folders that already exist
- Each run processes the next level down for nested structures
- Check the final report to see exactly what was created, skipped, or failed
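The description does not name the 3 CONFIG variables, so the sketch below uses assumed names purely to show the shape of that node; check the template itself for the real variable names:

```javascript
// CONFIG node - sketch with assumed variable names (the template's real names may differ).
return [{
  json: {
    sourceFolderId: '1AbCdEfGhIjKlMnOpQrStUvWxYz',      // folder ID taken from the source folder's URL
    destinationFolderId: '1ZyXwVuTsRqPoNmLkJiHgFeDcBa', // folder ID taken from the destination folder's URL
    newRootName: 'Client XYZ - 2024',                   // custom name applied to the cloned structure
  },
}];
```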

By Jagged Frontiersman