11 templates found

Store data received from webhook in JSON

Store the data received from the CocktailDB API in JSON
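The one-line description leaves the mechanics implicit: the incoming JSON has to be turned into a binary file item before an n8n write/storage node can persist it as a .json file. Below is a minimal Code-node sketch of that conversion; the file name and output shape are assumptions, and it relies on this.helpers.prepareBinaryData being available in your n8n version.

```javascript
// Hypothetical n8n Code node: serialize the received payload to a .json file
// that a Write Binary File (or cloud storage) node can persist.
// The fileName is an assumption; change it to suit your storage layout.
const items = [];
for (const item of $input.all()) {
  const buffer = Buffer.from(JSON.stringify(item.json, null, 2), 'utf8');
  items.push({
    json: { fileName: 'cocktaildb-response.json' },
    binary: {
      data: await this.helpers.prepareBinaryData(
        buffer,
        'cocktaildb-response.json',
        'application/json'
      ),
    },
  });
}
return items;
```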

By Harshil Agrawal
4279

Automate Actions After PDF Generation with PDFMonkey

Automate Actions After PDF Generation with PDFMonkey in n8n

Overview
This n8n workflow template allows you to automatically react to PDF generation events from PDFMonkey. When a new PDF is successfully created, the workflow retrieves the file and processes it based on your needs, whether that means sending it via email, saving it to cloud storage, or integrating it with other apps.

How It Works
- Trigger: The workflow listens for a PDFMonkey webhook event when a new PDF is generated.
- Retrieve PDF: It fetches the newly generated PDF file from PDFMonkey.
- Process & Action: Depending on the outcome:
  - ✅ On success: the workflow downloads the PDF and can distribute or store it.
  - ❌ On failure: it handles errors accordingly (e.g., sending alerts, retrying, or logging the issue).

Configuration
To set up this workflow, follow these steps:
1. Copy the Webhook URL generated by n8n.
2. Go to your PDFMonkey Webhooks dashboard and paste the URL into the appropriate field to define the callback URL.
3. Save your settings and trigger a test to ensure proper integration.
📖 For detailed setup instructions, see the PDFMonkey Webhooks documentation.

Use Cases
This workflow is ideal for:
- Automating invoice processing (e.g., sending PDFs to customers via email).
- Archiving reports in cloud storage (e.g., Google Drive, Dropbox, or AWS S3).
- Sending notifications via Slack, Microsoft Teams, or WhatsApp when a new PDF is available.
- Logging generated PDFs in Airtable, Notion, or a database for tracking.

Customization
You can customize this workflow to:
- Add conditional logic (e.g., different actions based on the document type).
- Enhance security (e.g., encrypting PDFs before sharing).
- Extend integrations by connecting with CRM tools, task managers, or analytics platforms.

Need Help?
If you need assistance setting up or customizing this workflow, feel free to reach out to us via chat on pdfmonkey.io. We'll be happy to help! 🚀
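The success/failure branch described under "How It Works" usually comes down to inspecting the webhook payload before routing. Below is a minimal Code-node sketch, assuming field names such as document.status and document.download_url (these are assumptions based on typical PDFMonkey payloads; inspect a real webhook delivery and adjust the paths).

```javascript
// Hypothetical n8n Code node: branch on the PDFMonkey webhook payload.
// Field names (document.status, document.download_url) are assumptions;
// check them against an actual webhook body before relying on this.
const results = [];
for (const item of $input.all()) {
  const doc = item.json.document ?? item.json; // payload shape is an assumption
  results.push({
    json: {
      documentId: doc.id,
      succeeded: doc.status === 'success',
      downloadUrl: doc.download_url ?? null,
      failureCause: doc.failure_cause ?? null,
    },
  });
}
return results;
```

A downstream IF node can then send items with succeeded === true to an HTTP Request node that fetches downloadUrl, and the rest to your alerting or logging path.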

By Vincent
900

AI-powered meeting research & daily agenda with Google Calendar, Attio CRM, and Slack

Research meeting attendees and prepare a daily agenda in Slack

This workflow automatically researches your meeting attendees every morning and sends you a comprehensive brief in Slack with context about who you're meeting, their company, and key talking points.

Who's it for
- Sales professionals who need quick context before meetings
- Executives with packed calendars who need meeting preparation
- Customer success teams managing multiple client relationships
- Account managers preparing for client calls
- Business development teams researching prospects
- Anyone who wants to be better prepared for their daily meetings

How it works
1. Daily Trigger: Runs every weekday morning at 6 AM (customizable) to analyze your Google Calendar.
2. Calendar Analysis: Fetches all meetings scheduled for today and filters for external meetings (those with attendees other than yourself); a filtering sketch is shown after this description.
3. AI-Powered Research: For each external meeting, an AI agent researches attendees using multiple sources:
   - Searches your CRM (Attio) for existing contact information
   - Queries Gmail history for past email interactions
   - Searches past calendar events for previous meetings with attendees
   - Performs web searches for recent news about attendees and their companies
   - Retrieves company data from Apollo.io, including industry, size, and technologies
4. CRM Updates: Automatically creates new contact records in Attio for unknown attendees and adds meeting preparation notes to existing contacts.
5. Brief Generation: Compiles all research into a scannable, actionable meeting brief with key talking points.
6. Slack Delivery: Sends the formatted brief to your designated Slack channel for easy mobile access.

Setup requirements
- Google Calendar OAuth2 connection (for fetching meetings)
- Slack workspace with bot permissions (for receiving briefs)
- Gmail OAuth2 connection (for email history search)
- OpenRouter API key (for AI processing)
- Attio CRM account and API token (optional, for contact management)
- Apollo.io API key (optional, for company research)
- Anthropic API key (optional, for advanced web search)

How to customize
- Adjust Schedule: Modify the Schedule Trigger node to run at your preferred time; change it from 6 AM to whenever works best for your schedule.
- Customize Research Sources: Remove the CRM integration if you don't use Attio, remove Apollo.io if you don't need company research, and add additional research tools as needed.
- Modify Output Format: Edit the prompt in the "Format Daily Meeting Brief" node to change how the information is structured and presented.
- Change Delivery Method: Replace Slack with Microsoft Teams, email, or Discord; add multiple delivery channels if needed; send to different channels based on meeting type.
- Filter Meetings: Adjust the filtering logic to include or exclude certain types of meetings based on keywords, attendees, or calendar properties.

Advanced customization
- Add VIP alerts: Create special handling for meetings with executives or key clients.
- Include preparation documents: Automatically attach relevant files from Google Drive.
- Time zone handling: Adjust for meetings across different time zones.
- Language support: Modify prompts to generate briefs in different languages.
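The "external meetings" filter in step 2 is a simple check over each event's attendee list. Below is a minimal Code-node sketch; the attendees[] and self fields match the Google Calendar API event resource, while myDomain is an assumption you would replace with your own domain.

```javascript
// Hypothetical n8n Code node: keep only today's events that include at least
// one attendee outside your own organization. "myDomain" is an assumption.
const myDomain = 'yourcompany.com';
const external = [];
for (const item of $input.all()) {
  const event = item.json;
  const attendees = event.attendees ?? [];
  const hasExternal = attendees.some(
    (a) => !a.self && typeof a.email === 'string' && !a.email.endsWith(`@${myDomain}`)
  );
  if (hasExternal) external.push(item);
}
return external;
```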

By Harry Siggins
586

Scrape Twitter profiles with Bright Data API and export to Google Sheets

🐦 Twitter Profile Scraper via Bright Data API with Google Sheets Output

A comprehensive n8n automation that scrapes Twitter profile data using Bright Data's Twitter dataset and stores tweet analytics, user metrics, and engagement data directly into Google Sheets.

📋 Overview
This workflow provides an automated Twitter data collection solution that extracts profile information and tweet data from specified Twitter accounts within custom date ranges. Perfect for social media analytics, competitor research, brand monitoring, and content strategy analysis.

✨ Key Features
- 🔗 Form-Based Input: Easy-to-use form for Twitter URL and date range selection
- 🐦 Twitter Integration: Uses Bright Data's Twitter dataset for accurate data extraction
- 📊 Comprehensive Data: Captures tweets, engagement metrics, and profile information
- 📈 Google Sheets Storage: Automatically stores all data in an organized spreadsheet format
- 🔄 Progress Monitoring: Real-time status tracking with automatic retry mechanisms
- ⚡ Fast & Reliable: Professional scraping with built-in error handling
- 📅 Date Range Control: Flexible time period selection for targeted data collection
- 🎯 Customizable Fields: Advanced data field selection and mapping

🎯 What This Workflow Does

Input
- Twitter Profile URL: Target Twitter account for data scraping
- Date Range: Start and end dates for the tweet collection period
- Custom Fields: Configurable data points to extract

Processing
1. Form Trigger: Collects the Twitter URL and date range from user input
2. API Request: Sends the scraping request to Bright Data with the specified parameters
3. Progress Monitoring: Continuously checks the scraping job status until completion
4. Data Retrieval: Downloads the complete dataset when scraping is finished
5. Data Processing: Formats and structures the extracted information (see the sketch after this description)
6. Sheet Integration: Automatically populates Google Sheets with the organized data

Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| user_posted | Username who posted the tweet | @elonmusk |
| name | Display name of the user | Elon Musk |
| description | Tweet content/text | "Exciting updates coming soon..." |
| date_posted | When the tweet was posted | 2025-01-15T10:30:00Z |
| likes | Number of likes on the tweet | 1,234 |
| reposts | Number of retweets | 567 |
| replies | Number of replies | 89 |
| views | Total view count | 12,345 |
| followers | User's follower count | 50M |
| following | Users they follow | 123 |
| is_verified | Verification status | true/false |
| hashtags | Hashtags used in the tweet | AI Technology |
| photos | Image URLs in the tweet | image1.jpg, image2.jpg |
| videos | Video content URLs | video1.mp4 |
| user_id | Unique user identifier | 12345678 |
| timestamp | Data extraction timestamp | 2025-01-15T11:00:00Z |

🚀 Setup Instructions

Prerequisites
- n8n instance (self-hosted or cloud)
- Bright Data account with Twitter dataset access
- Google account with Sheets access
- Valid Twitter profile URLs to scrape
- 10-15 minutes for setup

Step 1: Import the Workflow
1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

Step 2: Configure Bright Data
1. Set up Bright Data credentials: in n8n go to Credentials → + Add credential → HTTP Header Auth, enter your Bright Data API credentials, and test the connection.
2. Configure the dataset: ensure you have access to the Twitter dataset (gd_lwxkxvnf1cynvib9co) and verify dataset permissions in the Bright Data dashboard.

Step 3: Configure Google Sheets Integration
1. Create a Google Sheet: go to Google Sheets, create a new spreadsheet named "Twitter Data" or similar, and copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
2. Set up Google Sheets credentials: in n8n go to Credentials → + Add credential → Google Sheets OAuth2 API, then complete the OAuth setup and test the connection.
3. Prepare your data sheet with columns: use the column headers from the data points table above; the workflow will automatically populate these fields.

Step 4: Update Workflow Settings
1. Update the Bright Data nodes: open the "🚀 Trigger Twitter Scraping" node, replace BRIGHT_DATA_API_KEY with your actual API token, and verify the dataset ID is correct.
2. Update the Google Sheets node: open the "📊 Store Twitter Data in Google Sheet" node, replace YOUR_GOOGLE_SHEET_ID with your Sheet ID, select your Google Sheets credential, and choose the correct sheet/tab name.

Step 5: Test & Activate
1. Add test data: use the form trigger to input a Twitter profile URL and set a small date range for testing (e.g., the last 7 days).
2. Test the workflow: submit the form to trigger the workflow, monitor progress in the n8n execution logs, verify data appears in the Google Sheet, and check that all expected columns are populated.

📖 Usage Guide

Running the Workflow
1. Access the workflow form trigger URL (available when the workflow is active)
2. Enter the Twitter profile URL you want to scrape
3. Set the start and end dates for tweet collection
4. Submit the form to initiate scraping
5. Monitor progress; the workflow will automatically check status every minute
6. Once complete, data will appear in your Google Sheet

Understanding the Data
Your Google Sheet will show:
- Real-time tweet data for the specified date range
- User engagement metrics (likes, replies, retweets, views)
- Profile information (followers, following, verification status)
- Content details (hashtags, media URLs, quoted tweets)
- Timestamps for each tweet and data extraction

Customizing Date Ranges
- Recent data: use the last 7-30 days for current activity analysis
- Historical analysis: select specific months or quarters for trend analysis
- Event tracking: focus on specific date ranges around events or campaigns
- Comparative studies: use consistent time periods across different profiles

🔧 Customization Options

Modifying Data Fields
Edit the custom_output_fields array in the "🚀 Trigger Twitter Scraping" node to add or remove data points:

```json
"custom_output_fields": [
  "id", "user_posted", "name", "description", "date_posted",
  "likes", "reposts", "replies", "views", "hashtags",
  "followers", "is_verified"
]
```

Changing Google Sheet Structure
Modify the column mapping in the "📊 Store Twitter Data in Google Sheet" node to match your preferred sheet layout and add custom formulas or calculations.

Adding Multiple Profiles
To process multiple Twitter profiles:
- Modify the form to accept multiple URLs
- Add a loop node to process each URL separately
- Implement delays between requests to respect rate limits

🚨 Troubleshooting

Common Issues & Solutions
- "Bright Data connection failed" — Cause: invalid API credentials or dataset access. Solution: verify credentials in the Bright Data dashboard and check dataset permissions.
- "No data extracted" — Cause: invalid Twitter URLs or private/protected accounts. Solution: verify the URLs are valid public Twitter profiles and test with different accounts.
- "Google Sheets permission denied" — Cause: incorrect credentials or sheet permissions. Solution: re-authenticate Google Sheets and check sheet sharing settings.
- "Workflow timeout" — Cause: large date ranges or high-volume accounts. Solution: use smaller date ranges and implement pagination for high-volume accounts.
- "Progress monitoring stuck" — Cause: scraping job failed or API issues. Solution: check the Bright Data dashboard for job status and restart the workflow if needed.

Advanced Troubleshooting
- Check execution logs in n8n for detailed error messages
- Test individual nodes by running them separately
- Verify data formats and ensure consistent field mapping
- Monitor rate limits if scraping multiple profiles consecutively
- Add error handling and implement retry logic for robust operation

📊 Use Cases & Examples

Social Media Analytics — Goal: track engagement metrics and content performance
- Monitor tweet engagement rates over time
- Analyze hashtag effectiveness and reach
- Track follower growth and audience interaction
- Generate weekly/monthly performance reports

Competitor Research — Goal: monitor competitor social media activity
- Track competitor posting frequency and timing
- Analyze competitor content themes and strategies
- Monitor competitor engagement and audience response
- Identify trending topics and hashtags in your industry

Brand Monitoring — Goal: track brand mentions and sentiment
- Monitor specific Twitter accounts for brand mentions
- Track hashtag campaigns and user-generated content
- Analyze sentiment trends and audience feedback
- Identify influencers and brand advocates

Content Strategy Development — Goal: analyze successful content patterns
- Identify high-performing tweet formats and topics
- Track optimal posting times and frequencies
- Analyze hashtag performance and reach
- Study audience engagement patterns

Market Research — Goal: collect social media data for market analysis
- Gather consumer opinions and feedback
- Track industry trends and discussions
- Monitor product launches and market reactions
- Support product development with social insights

⚙ Advanced Configuration

Batch Processing Multiple Profiles
To monitor multiple Twitter accounts efficiently:
- Create a master sheet with profile URLs and date ranges
- Add a loop node to process each profile separately
- Implement delays between requests to respect rate limits
- Use separate sheets or tabs for different profiles

Adding Data Analysis
Enhance the workflow with analytical capabilities:
- Create additional sheets for processed data and insights
- Add formulas to calculate engagement rates and trends
- Implement data visualization with charts and graphs
- Generate automated reports and summaries

Integration with Business Tools
Connect the workflow to your existing systems:
- CRM Integration: update customer records with social media data
- Slack Notifications: send alerts when data collection is complete
- Database Storage: store data in PostgreSQL/MySQL for advanced analysis
- BI Tools: connect to Tableau/Power BI for comprehensive visualization

📈 Performance & Limits

Expected Performance
- Single profile: 30 seconds to 5 minutes (depending on date range)
- Data accuracy: 95%+ for public Twitter profiles
- Success rate: 90%+ for accessible accounts
- Daily capacity: 10-50 profiles (depends on rate limits and data volume)

Resource Usage
- Memory: ~200 MB per execution
- Storage: minimal (data stored in Google Sheets)
- API calls: 1 Bright Data call + multiple Google Sheets calls per profile
- Bandwidth: ~5-10 MB per profile scraped
- Execution time: 2-10 minutes for typical date ranges

Scaling Considerations
- Rate limiting: add delays for high-volume scraping
- Error handling: implement retry logic for failed requests
- Data validation: add checks for malformed or missing data
- Monitoring: track success/failure rates over time
- Cost optimization: monitor API usage to control costs

🤝 Support & Community

Getting Help
- n8n Community Forum: community.n8n.io
- Documentation: docs.n8n.io
- Bright Data Support: contact through your dashboard
- GitHub Issues: report bugs and feature requests

Contributing
- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

📋 Quick Setup Checklist

Before You Start
☐ n8n instance running (self-hosted or cloud)
☐ Bright Data account with Twitter dataset access
☐ Google account with Sheets access
☐ Valid Twitter profile URLs ready for scraping
☐ 10-15 minutes available for setup

Setup Steps
☐ Import Workflow: copy the JSON and import into n8n
☐ Configure Bright Data: set up API credentials and test
☐ Create Google Sheet: new sheet with the proper column structure
☐ Set up Google Sheets credentials: OAuth setup and test
☐ Update workflow settings: replace API keys and sheet IDs
☐ Test with sample data: add one Twitter URL and a small date range
☐ Verify data flow: check that data appears in the Google Sheet correctly
☐ Activate workflow: enable the form trigger for production use

Ready to Use! 🎉 Your workflow URL: access the form trigger when the workflow is active.

🎯 Happy Twitter Scraping! This workflow provides a solid foundation for automated Twitter data collection. Customize it to fit your specific social media analytics and research needs. For questions or support, contact info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
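The "Data Processing" step boils down to flattening each tweet record from the retrieved dataset into one row per tweet before the Google Sheets node appends it. Below is a minimal Code-node sketch; it assumes the snapshot items carry the field names from the data points table above, and the exact payload shape depends on your dataset configuration.

```javascript
// Hypothetical n8n Code node: flatten Bright Data tweet records into one row
// per tweet for the Google Sheets append step. Field names mirror the data
// points table above; adjust them to the actual snapshot payload.
const rows = [];
for (const item of $input.all()) {
  const t = item.json;
  rows.push({
    json: {
      user_posted: t.user_posted,
      name: t.name,
      description: t.description,
      date_posted: t.date_posted,
      likes: t.likes ?? 0,
      reposts: t.reposts ?? 0,
      replies: t.replies ?? 0,
      views: t.views ?? 0,
      hashtags: Array.isArray(t.hashtags) ? t.hashtags.join(', ') : t.hashtags,
      photos: Array.isArray(t.photos) ? t.photos.join(', ') : t.photos,
      is_verified: Boolean(t.is_verified),
      user_id: t.user_id,
      timestamp: new Date().toISOString(), // data extraction timestamp
    },
  });
}
return rows;
```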

By Incrementors
577

Extract named entities from web pages with Google Natural Language API

Who is this for?
- Content strategists analyzing the semantic content of web pages
- SEO professionals conducting entity-based analysis
- Data analysts extracting structured data from web pages
- Marketers researching competitor content strategies
- Researchers organizing and categorizing web content
- Anyone needing to automatically extract entities from web pages

What problem is this workflow solving?
Manually identifying and categorizing entities (people, organizations, locations, etc.) on web pages is time-consuming and error-prone. This workflow solves that challenge by:
- Automating the extraction of named entities from any web page
- Leveraging Google's powerful Natural Language API for accurate entity recognition
- Processing web pages through a simple webhook interface
- Providing structured entity data that can be used for analysis or further processing
- Eliminating hours of manual content analysis and categorization

What this workflow does
This workflow creates an automated pipeline between a webhook and Google's Natural Language API to:
1. Receive a URL through a webhook endpoint
2. Fetch the HTML content from the specified URL
3. Clean and prepare the HTML for processing
4. Submit the HTML to Google's Natural Language API for entity analysis
5. Return the structured entity data through the webhook response
6. Extract entities including people, organizations, and locations, along with their salience scores

Setup

Prerequisites:
- An n8n instance (cloud or self-hosted)
- A Google Cloud Platform account with the Natural Language API enabled
- A Google API key with access to the Natural Language API

Google Cloud Setup:
1. Create a project in Google Cloud Platform
2. Enable the Natural Language API for your project
3. Create an API key with access to the Natural Language API
4. Copy your API key for use in the workflow

n8n Setup:
1. Import the workflow JSON into your n8n instance
2. Replace "YOUR-GOOGLE-API-KEY" in the "Google Entities" node with your actual API key
3. Activate the workflow to enable the webhook endpoint
4. Copy the webhook URL from the "Webhook" node for later use

Testing:
1. Use a tool like Postman or cURL to send a POST request to your webhook URL
2. Include a JSON body with the URL you want to analyze: {"url": "https://example.com"}
3. Verify that you receive a response containing the entity analysis data

How to customize this workflow to your needs

Analyzing Specific Entity Types:
- Modify the "Google Entities" node parameters to include entity type filters
- Add a "Function" node after "Google Entities" to filter specific entity types
- Create conditions to extract only entities of interest (people, organizations, etc.)

Processing Multiple URLs in Batch:
- Replace the webhook with a different trigger (HTTP Request, Google Sheets, etc.)
- Add a "Split In Batches" node to process multiple URLs
- Use a "Merge" node to combine results before sending the response

Enhancing Entity Data:
- Add additional API calls to enrich extracted entities with more information
- Implement sentiment analysis alongside entity extraction
- Create a data transformation node to format entities by type or relevance

Additional Notes
- This workflow respects Google's API rate limits by processing one URL at a time
- The Natural Language API may not identify all entities on a page, particularly for highly technical content
- HTML content is trimmed to 100,000 characters if longer to avoid API limitations
- Consider legal and privacy implications when analyzing and storing entity data from web pages
- You may want to adjust the HTML cleaning process for specific website structures

❤️ Hueston SEO Team
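For reference, the "Google Entities" node is essentially an analyzeEntities call. Below is a minimal sketch of the same request from an n8n Code node, based on the documents:analyzeEntities endpoint and HTML document type described in Google's Natural Language API documentation; the trimming threshold mirrors the note above, and it assumes this.helpers.httpRequest is available in your n8n version (an HTTP Request node works equally well).

```javascript
// Minimal sketch of the entity-analysis request the "Google Entities" node makes.
// Requires a Google API key with the Natural Language API enabled.
const apiKey = 'YOUR-GOOGLE-API-KEY';
const html = String($json.html ?? '').slice(0, 100000); // trim to stay under API limits

const response = await this.helpers.httpRequest({
  method: 'POST',
  url: `https://language.googleapis.com/v1/documents:analyzeEntities?key=${apiKey}`,
  body: {
    document: { type: 'HTML', content: html },
    encodingType: 'UTF8',
  },
  json: true,
});

// Each entity carries a name, type (PERSON, ORGANIZATION, LOCATION, ...) and salience score.
return (response.entities ?? []).map((e) => ({
  json: { name: e.name, type: e.type, salience: e.salience },
}));
```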

By Hueston
443

Self-update Docker-based n8n with email approval and SSH

n8n Self-Updater Workflow

An automated n8n workflow originally built for DigitalOcean-based n8n deployments, but fully compatible with any VPS or cloud host (e.g., AWS, Google Cloud, Hetzner, Linode) where n8n runs via Docker. This workflow checks for the latest n8n Docker image, notifies you via email for approval, and securely updates your n8n instance via SSH once approved.

How It Works
1. Trigger: The workflow runs automatically every 3 days at 4 PM UTC (or manually if triggered).
2. Check Version: It retrieves your current n8n Docker version and image digest via SSH.
3. Compare: It fetches the remote digest from Docker Hub and compares it with the local one (see the sketch after this description).
4. Notify via Email: If a new update is available, an approval email is sent with the details: current version, local digest, remote digest, and what will happen after approval.
5. Approval Logic: Approve → the workflow connects via SSH and updates the n8n container automatically. Decline → the workflow ends; the next check occurs in the next cycle.
6. Auto Update Execution: Creates (if missing) an update_docker.sh script on the server and runs it in the background (nohup):

```bash
cd /opt/n8n-docker-caddy
docker compose pull
docker compose down
docker compose up -d
```

The delay ensures n8n restarts only after the workflow execution completes.

Requirements
- SSH access to your server (where n8n runs). Add your credentials in n8n under Credentials → SSH Password.
- SMTP connection for email notifications. Configure it in Credentials → SMTP and fill in: From Email (e.g., info@yourdomain.com) and To Email (your email for receiving approvals).
- Docker-based n8n deployment, e.g., the n8n-docker-caddy setup.
- Docker and docker-compose installed on the server.

How to Use
1. Import the Workflow: copy the provided JSON file, then in your n8n instance click Import Workflow and paste the JSON.
2. Set Up Credentials: create two credentials in n8n: SSH Password (your server's SSH credentials) and SMTP (your email provider's SMTP credentials).
3. Edit the Email Node: replace fromEmail: info@yourdomain.com with your email and toEmail: youremail@yourdomain.com with your desired recipient.
4. Enable Auto Trigger (optional): go to the Schedule Trigger node and set your desired interval/time.
5. Run the Workflow: test manually first; once verified, activate it for automatic checks.

Notes
- Originally designed for DigitalOcean VPS setups, but it can run on any Docker-based n8n server.
- The workflow avoids duplicate updates by comparing digests instead of version tags.
- If the update_docker.sh file already exists, it is reused safely.
- Approval emails include full details for transparency.
- Background execution ensures no interruptions during the restart.

Example Behavior
- Day 1: Workflow checks → detects an update → sends an email → user approves.
- 30 seconds later: Workflow runs the update script → n8n restarts with the latest Docker image.
- Day 4: Workflow checks again → digests match → silently completes (no email sent).

Author: Muhammad Anas Farooq
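The digest comparison in step 3 reduces to a plain string check once both digests are in hand. Below is a minimal Code-node sketch; the localDigest/remoteDigest input field names are assumptions, standing in for whatever your SSH node (e.g., docker inspect output) and Docker Hub lookup actually return.

```javascript
// Hypothetical n8n Code node: decide whether an update is needed by comparing
// the local image digest (from `docker inspect` over SSH) with the remote one
// (from the Docker Hub lookup). Input field names are assumptions; adjust
// them to whatever your SSH / HTTP nodes actually return.
const localDigest = String($json.localDigest ?? '').trim();   // e.g. "sha256:ab12..."
const remoteDigest = String($json.remoteDigest ?? '').trim();

return [{
  json: {
    localDigest,
    remoteDigest,
    updateAvailable:
      localDigest !== '' && remoteDigest !== '' && localDigest !== remoteDigest,
  },
}];
```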

By Muhammad Anas Farooq
349

Automatically collect bug bounty tips from Twitter to Google Sheets

How it works
Automatically monitors Twitter for bug bounty tips and educational content every 4 hours, then saves valuable insights to Google Sheets for easy reference and organization.

Set up steps
1. Get your API key from https://twitterapi.io/ (free tier available)
2. Configure Google Sheets credentials in n8n
3. Create a Google Sheet with the required columns
4. Update the Sheet ID in the final node

What you'll get
A continuously updated database of bug bounty tips, techniques, and insights from the security community, perfectly organized in Google Sheets with:
- Tweet content and URLs
- Engagement metrics (likes, retweets, replies)
- Formatted timestamps for easy sorting
- Automatic duplicate prevention (see the sketch after this description)

Perfect for security researchers, bug bounty hunters, and cybersecurity professionals who want to stay updated with the latest tips and techniques from Twitter's security community.
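The duplicate prevention mentioned above typically amounts to skipping tweets whose URL (or ID) is already present in the sheet. Below is a minimal Code-node sketch, assuming one Google Sheets node named "Read Existing Rows" has already fetched the current rows; that node name and the tweet_url field are hypothetical and should be matched to your own column headers.

```javascript
// Hypothetical n8n Code node: drop tweets already logged in the sheet.
// "Read Existing Rows" and the tweet_url field are assumed names; adjust
// them to your actual Google Sheets node and column headers.
const existing = new Set(
  $('Read Existing Rows').all().map((row) => row.json.tweet_url)
);

return $input.all().filter((item) => !existing.has(item.json.tweet_url));
```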

By Kunsh
292

Order processing with Google Sheets and Slack: inventory checks and alerts

Overview
A cornerstone of your Order Management System, this workflow ensures seamless inventory control through fully automated stock checks, leading to a direct reduction in operational costs. It provides real-time alerts to the responsible personnel, enabling proactive issue detection and resolution to eliminate the financial damages associated with unexpected stock-outs.

How it works
1. Order Webhook: Receives orders from external sources (e.g., a website, form, or app) via API.
2. Check Order Request: Checks the validity of the order (e.g., complete product list, valid customer details).
3. Check Inventory: Retrieves inventory information and compares it with the order request (see the sketch after this description).
4. Notifications: Generates a Slack notification for the manager indicating a successful order or an out-of-stock situation.
5. Logging: Logs the process details to a Google Sheet for tracking.

Set up steps

Webhook
Create a JSON request with the following format to call the webhook URL:

```json
{
  "id": "ORDER1001",
  "customer": {
    "name": "Customer",
    "email": "customer@example.com"
  },
  "items": [
    { "sku": "SKU001", "quantity": 2, "name": "Product A", "price": 5000 },
    { "sku": "SKU002", "quantity": 2, "name": "Product C", "price": 10000 }
  ],
  "total": 30000
}
```

Define the greater-than or less-than conditions on the inventory level to enter the corresponding branches.

Google Sheets
- Clone the file (WMS Data Demo) to your Google Drive.
- Replace your credentials and connect. Access permission must be granted to n8n.

Slack
- Replace your credentials and connect.
- A channel named "warehouse" needs to be prepared to receive notifications (if you use a different name, you must update the Slack node).
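The "Check Inventory" step is essentially a per-SKU comparison between the ordered quantity and the stock level read from the WMS sheet. Below is a minimal Code-node sketch; the "Read Inventory" node name and the sku/stock column names are assumptions to be matched to your cloned WMS Data Demo sheet.

```javascript
// Hypothetical n8n Code node: compare each ordered item with current stock.
// "Read Inventory" and the sku/stock column names are assumptions; adjust
// them to your Google Sheets node and sheet headers.
const stockBySku = {};
for (const row of $('Read Inventory').all()) {
  stockBySku[row.json.sku] = Number(row.json.stock);
}

const order = $json; // the webhook payload shown above
const shortages = order.items.filter(
  (item) => (stockBySku[item.sku] ?? 0) < item.quantity
);

return [{
  json: {
    orderId: order.id,
    inStock: shortages.length === 0,
    shortages, // the items that trigger the out-of-stock branch
  },
}];
```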

By Satoshi
267

Access Upbit crypto market data in Telegram with GPT-4o-mini

Instantly access Upbit Spot Market Data in Telegram with AI Automation

This workflow integrates the Upbit REST API with GPT-4o-mini and Telegram, giving you real-time price data, order books, trades, and candles directly in chat. Perfect for crypto traders, market analysts, and investors who want structured Upbit data at their fingertips, with no manual API calls required.

⚙️ How It Works
1. A Telegram bot listens for user queries like "upbit KRW-BTC 15m".
2. The Upbit AI Agent parses the request and fetches live data from the official Upbit REST API:
   - Price & 24h stats (/v1/ticker)
   - Order book depth & best bid/ask (/v1/orderbook)
   - Recent trades (/v1/trades/ticks)
   - Dynamic OHLCV candles across all timeframes (/v1/candles/{seconds|minutes|days|weeks|months|years})
3. A built-in Calculator tool computes spreads, % change, and midpoints (see the sketch after this description).
4. A Think module reshapes the raw JSON into simplified, clean fields.
5. The agent formats the results into concise, structured text and sends them back via Telegram.

📊 What You Can Do with This Agent
✅ Get real-time prices and 24h change for any Upbit trading pair.
✅ View order book depth and best bid/ask snapshots.
✅ Fetch multi-timeframe OHLCV candles (from 1s to 1y).
✅ Track recent trades with price, volume, side, and timestamp.
✅ Calculate midpoints, spreads, and percentage changes.
✅ Receive clean, human-readable reports in Telegram, with no JSON parsing needed.

🛠 Set Up Steps
1. Create a Telegram Bot: use @BotFather and save your bot token.
2. Configure the Telegram API and OpenAI in n8n: add your bot token under Telegram credentials and replace your Telegram ID in the authentication node to restrict access.
3. Import the Workflow: load Upbit AI Agent v1.02.json into n8n and ensure the connections to the tools (Ticker, Orderbook, Trades, Klines).
4. Deploy and Test:
   - Example query: upbit KRW-BTC 15m → returns price, order book, candles, and trades.
   - Example query: upbit USDT-ETH trades 50 → returns the 50 latest trades.

📺 Setup Video Tutorial
Watch the full setup guide on YouTube: https://www.youtube.com/watch?v=Yf6HJE_eu2k

⚡ Unlock clean, structured Upbit Spot Market data instantly, directly in Telegram!

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
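The Calculator step's spread and midpoint math is straightforward once the best bid/ask are pulled from the /v1/orderbook response. Below is a minimal Code-node sketch; the orderbook_units/bid_price/ask_price field names follow Upbit's documented orderbook shape, but verify them against a live response before relying on this.

```javascript
// Hypothetical n8n Code node: derive midpoint, spread, and spread % from an
// Upbit /v1/orderbook response. Field names follow Upbit's documented shape;
// verify against a live response.
const book = $json; // first element of the /v1/orderbook array
const best = book.orderbook_units[0];

const bid = Number(best.bid_price);
const ask = Number(best.ask_price);
const midpoint = (bid + ask) / 2;
const spread = ask - bid;
const spreadPct = (spread / midpoint) * 100;

return [{
  json: {
    market: book.market, // e.g. "KRW-BTC"
    bestBid: bid,
    bestAsk: ask,
    midpoint,
    spread,
    spreadPct: Number(spreadPct.toFixed(4)),
  },
}];
```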

By Don Jayamaha Jr
238

Sync companies from Google Sheets to Salesforce with smart duplicate prevention

How it works
Automatically imports company data from Google Sheets into Salesforce while intelligently preventing duplicate accounts. The workflow searches for existing companies, creates new accounts only when needed, and ensures all contact information is properly associated.

Key features
- Smart duplicate detection by company name matching (see the sketch after this description)
- Dual processing paths for new vs. existing companies
- Automatic contact creation and association
- Comprehensive error handling and data validation
- Professional sectional documentation with setup guides

Set up steps
1. Configure Google Sheets API credentials (OAuth 2.0)
2. Set up a Salesforce Connected App with Account/Contact permissions
3. Prepare the Google Sheet with the proper column headers (Company Name, Email, Phone, Industry)
4. Map the Salesforce field requirements in the workflow nodes
5. Test with a small dataset before full deployment

Estimated setup time: 15-30 minutes
Processing time: 15-45 seconds per company

All detailed configuration steps, troubleshooting guides, and security best practices are included in the comprehensive sticky note documentation within the workflow.
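Duplicate detection of this kind usually means running a per-row SOQL lookup on the Account name and branching on whether a match comes back. Below is a minimal Code-node sketch that prepares the lookup for each sheet row; the "Company Name" column name is an assumption, and the actual search would be executed by a Salesforce node (or an IF node on its result).

```javascript
// Hypothetical n8n Code node: prepare a per-row SOQL lookup for duplicate
// detection. Assumes each item carries the sheet row with a "Company Name"
// column; a downstream Salesforce node consumes the soql field.
return $input.all().map((item) => {
  const name = String(item.json['Company Name'] ?? '').trim();
  const escaped = name.replace(/'/g, "\\'"); // escape quotes for SOQL string literals
  return {
    json: {
      ...item.json,
      companyName: name,
      soql: `SELECT Id, Name FROM Account WHERE Name = '${escaped}' LIMIT 1`,
    },
  };
});
```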

By Xavier Tai
155

SEO FAQ generator for websites using GPT-5 Nano and Google Sheets

General Description
This workflow is designed for SEO professionals, digital agencies, content creators, and WordPress site owners who want to improve their search engine rankings automatically. It's also perfect for curious automation enthusiasts who want to explore how AI and n8n can work together to save both time and money.

Main Uses
- Automatically generate SEO-optimized FAQs directly from your website content.
- Store and organize all FAQs in Google Sheets, ready to manage or implement.
- Send an automatic email report confirming the update.

Key Features
- Includes an intelligent configuration to extract all URLs from the sitemap and process them one by one in a loop (see the sketch after this description).
- Alternatively, if you don't want to spend too much on API calls, you can enable a configurable limit to process only a set number of URLs per run.
- Comes with an OPTIONS node that centralizes the workflow's configuration, making it easy to customize key aspects such as language, notification email, company name, keywords, or the number of FAQs per page.
- In addition, users can choose to inject keywords directly into the generated FAQs, making the output even more aligned with their SEO strategy.

Save Time and Money
This workflow automates a process that would normally take hours of manual work. On average, the cost is only $0.20 for every 20 to 40 pages processed, depending on content length (using the GPT-5 Nano model).

Support and Documentation
The workflow includes detailed documentation, and almost every node comes with explanations and alternative solutions in case of errors. In addition, any failure is covered by clear error messages explaining what happened and why, making it easier to fix quickly.

In short, this template is a complete tool that combines automation, artificial intelligence, smart URL extraction, keyword injection, a centralized configuration system, and SEO best practices, so you can focus on what really matters: growing your business and boosting your online visibility.

Google Sheet example image: https://ibb.co/Yg4KMNk
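Sitemap extraction of this kind usually boils down to fetching sitemap.xml, collecting every <loc> entry, and optionally truncating the list to the configured per-run limit. Below is a minimal Code-node sketch; sitemapUrl and maxUrlsPerRun stand in for the workflow's OPTIONS settings and are assumptions, and it relies on this.helpers.httpRequest being available (an HTTP Request node can fetch the XML instead).

```javascript
// Hypothetical n8n Code node: extract page URLs from a sitemap and apply an
// optional per-run limit. sitemapUrl and maxUrlsPerRun mirror the OPTIONS
// node described above; adjust the names to your configuration.
const sitemapUrl = 'https://example.com/sitemap.xml';
const maxUrlsPerRun = 20; // set to 0 to process every URL

const xml = await this.helpers.httpRequest({ url: sitemapUrl });
const urls = [...String(xml).matchAll(/<loc>\s*([^<\s]+)\s*<\/loc>/g)].map((m) => m[1]);

const limited = maxUrlsPerRun > 0 ? urls.slice(0, maxUrlsPerRun) : urls;
return limited.map((url) => ({ json: { url } }));
```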

By Oriol Seguí
49