Anthropic AI agent: Claude Sonnet 4 and Opus 4 with Think and Web Search tools
This workflow dynamically chooses between two powerful Anthropic models — Claude Opus 4 and Claude Sonnet 4 — to handle user queries based on their complexity and nature, while maintaining scalability and context awareness through Anthropic's web search function and the Think tool.

---

Key Advantages

🔁 Dynamic Model Selection
Automatically routes each user query to either Claude Sonnet 4 (for routine tasks) or Claude Opus 4 (for complex reasoning), ensuring optimal performance and cost-efficiency.

🧠 AI Agent with Tool Use
The AI agent can use a web search tool to retrieve up-to-date information and a Think tool for complex reasoning, improving response quality.

📎 Memory Integration
Uses session-based memory to maintain conversational context, making interactions more coherent and human-like.

🧮 Built-in Calculation Tool
Handles numeric queries using an integrated calculator tool, reducing the need for external processing.

📤 Structured Output Parser
Ensures outputs are always well-structured and formatted as JSON, which improves consistency and downstream integrations.

🌐 Web Search Capability
Supports real-time information retrieval for current events, statistics, or details not available in the AI's base knowledge.

---

Components Overview

Trigger: Listens for new chat messages.
Routing Agent: Analyzes the message and returns the best model to use.
AI Agent: Handles the conversation and decides when to use tools.
Tools:
- web_search for internet queries
- Think for reasoning
- Calculator for math tasks
Models Used:
- claude-sonnet-4-20250514: Optimized for general and business-logic tasks.
- claude-opus-4-20250514: Best for deep, strategic, and analytical queries.

---

How It Works

Dynamic Model Selection
The workflow begins when a chat message is received. The Anthropic Routing Agent analyzes the user's query to determine the most suitable model (either Claude Sonnet 4 or Claude Opus 4) based on the query's complexity and requirements. The routing agent uses predefined criteria to decide:
- Claude Sonnet 4: Best for standard tasks like real-time workflow routing, data validation, and routine business logic.
- Claude Opus 4: Reserved for complex scenarios requiring deep reasoning, advanced analysis, or high-impact decisions.

Query Processing and Response Generation
The selected model processes the query, leveraging tools like web_search for real-time information retrieval, Think for internal reasoning, and Calculator for numerical tasks. The AI Agent coordinates these tools, ensuring the response is accurate and context-aware. A Simple Memory node retains session context for coherent multi-turn conversations. The final response is formatted and returned to the user without intermediate steps or metadata.

---

Set Up Steps

Node Configuration
- Trigger: Configure the "When chat message received" node to handle incoming user queries.
- Routing Agent: Set up the "Anthropic Routing Agent" with the system message defining the model selection logic. Ensure it outputs a JSON object with prompt and model fields (a sample appears just below).
- AI Model Nodes: Link the "Sonnet 4 or Opus 4" node to dynamically use the selected model. The "Sonnet 3.7" node powers the routing agent itself.

Tool Integration
- Attach the "web_search" HTTP tool to enable internet searches, ensuring the API endpoint and headers (e.g., anthropic-version) are correctly configured.
- Connect auxiliary tools (Think, Calculator) to the "AI Agent" for extended functionality.
- Add the "Simple Memory" node to maintain conversation history.
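For reference, the routing agent's structured output might look like the following (the values are illustrative; only the prompt and model fields are required by the downstream node):

```json
{
  "prompt": "Compare the long-term strategic risks of expanding into the EU vs. APAC markets.",
  "model": "claude-opus-4-20250514"
}
```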
Credentials
Provide an Anthropic API key to all nodes requiring authentication (e.g., model nodes, web search).

Testing
Activate the workflow and test with sample queries to verify:
- Correct model selection (e.g., Sonnet for simple queries, Opus for complex ones).
- Proper tool usage (e.g., web searches trigger when needed).
- Memory retention across chat turns.

Deployment
Once validated, set the workflow to active for live interactions.

---

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
Summarize new documents from Google Drive and save the summary in Google Sheets
This workflow is created by AI developers at WeblineIndia. It streamlines content management by automatically identifying and fetching the most recently added Google Doc file from your Google Drive. It extracts the document's content for processing and leverages an AI model to generate a concise and meaningful summary of the extracted text. The summarized content is then stored in a designated Google Sheet, alongside relevant details like the document name and the date it was added, providing an organized and easily accessible reference for future use. This automation simplifies document handling, enhances productivity, and ensures seamless data management.

Steps:

1. Fetch the Most Recent Document from Google Drive
   Action: Use the Google Drive Node.
   Details: List files, filter by date to fetch the most recently added .doc file, and retrieve its file ID and metadata (see the sketch at the end of this description).

2. Extract Content from the Document
   Action: Use the Google Docs Node.
   Details: Set the operation to "Get Content," pass the file ID, and extract the document's text content.

3. Summarize the Document Using an AI Model
   Action: Use an AI Model Node (e.g., OpenAI, ChatGPT).
   Details: Provide the extracted text to the AI model, use a prompt to generate a summary, and capture the result.

4. Store the Summarized Content in Google Sheets
   Action: Use the Google Sheets Node.
   Details: Append a new row to the target sheet with details such as the original document name, summary, and date added.

---

About WeblineIndia
WeblineIndia specializes in delivering innovative and custom AI solutions to simplify and automate business processes. If you need any help, please reach out to us.
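If the Google Drive node's built-in sorting doesn't cover your case, a Code node after the listing step can pick the newest file. A minimal sketch, assuming each listed item carries a createdTime timestamp (as the Drive API returns when that field is requested):

```javascript
// n8n Code node ("Run Once for All Items"): keep only the most
// recently created file from the Google Drive listing output.
const sorted = items.sort(
  (a, b) => new Date(b.json.createdTime) - new Date(a.json.createdTime)
);
return [sorted[0]]; // newest file only; its file ID feeds the Docs node
```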
Low-code API for FlutterFlow apps
Flow Start: The flow starts upon receiving an HTTP GET call.

- Webhook: Receives the HTTP GET call and triggers the flow.
- Database: Connects to the database (Customer Datastore) to retrieve all necessary information (getAllPeople).
- Data Processing:
  - Variable Insertion: The retrieved data is inserted into a variable.
  - Variable Aggregation: The variables are aggregated and prepared for use in FlutterFlow.
- Webhook Response: Sends the response back through the Webhook with the processed data ready for use in FlutterFlow.
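For reference, the aggregated webhook response that FlutterFlow consumes could be shaped like this (field names are illustrative; match them to whatever your API-call definition in FlutterFlow expects):

```json
{
  "people": [
    { "id": 1, "name": "Ada Lovelace", "email": "ada@example.com" },
    { "id": 2, "name": "Alan Turing", "email": "alan@example.com" }
  ],
  "count": 2
}
```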
Send follow-ups using Gmail to HubSpot contacts
Use Case
Following up at the right time is one of the most important parts of sales. This workflow uses Gmail to send outreach emails to HubSpot contacts who have been contacted exactly once, more than a month ago, and records the engagement in HubSpot (see the filter sketch at the end of this description).

Setup
- Set up HubSpot OAuth2 creds (Be careful with scopes. They have to be exact, not less or more. Yes, it's not simple, but it's well documented in the n8n docs. Be smarter than me, read the docs)
- Set up Gmail creds.
- Change the email variables in the Set keys node

How to adjust this template
There's plenty to do here because this approach is really just a starting point. Most important is to figure out what your rules are to follow up. After a month? More than once? Also, remember to update the follow-up email! Unless you want to sell n8n 😉
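As a starting point for your own rules, a Code node after the HubSpot fetch could filter contacts like this. A sketch: num_contacted_notes and notes_last_contacted are standard HubSpot contact properties, but verify the property names and value formats your fetch actually returns.

```javascript
// Keep contacts contacted exactly once, more than 30 days ago.
const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000;

return items.filter((item) => {
  const p = item.json.properties ?? {};
  const timesContacted = Number(p.num_contacted_notes);
  // HubSpot may return an epoch-ms string or an ISO date; handle both.
  const raw = p.notes_last_contacted;
  const lastContacted = Number(raw) || new Date(raw).getTime();
  return timesContacted === 1 && lastContacted < cutoff;
});
```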
Extract text from images & PDFs via Telegram with Mistral OCR to Markdown
This n8n template provides a complete solution for Optical Character Recognition (OCR) of image and PDF files directly within Telegram.

---

Users can simply send PNG, JPEG, or PDF documents to your Telegram bot, and the workflow will process them, extract text using Mistral OCR, and return the content as a downloadable Markdown (.md) text file.

Key Features & How it Works:

- Effortless OCR via Telegram: Users send a file to the bot, and the system automatically detects the file type (PNG, JPEG, or PDF).
- File Size Validation: The workflow enforces a 25 MB file size limit, in line with Telegram Bot API restrictions, ensuring smooth operation (see the sketch at the end of this description).
- Mistral-Powered Recognition: Leveraging Mistral OCR, the template accurately extracts text from various document types.
- Markdown Output: Recognized text is automatically converted into a clean Markdown (.md) text file, ready for easy editing, storage, or further processing.
- Secure File Delivery: The processed Markdown file is delivered back to the user via Telegram. For this, the workflow ingeniously uses a GET request to itself (acting as a file downloader proxy). This generated link allows Telegram to fetch the .md file directly. Please note: this download functionality requires the workflow to be in an Active status.
- Optional Whitelist Security: Enhance your bot's security with an optional whitelist feature. You can configure specific Telegram User IDs to restrict access, ensuring only authorized users can interact with your bot.
- Simplified Webhook Management: The template includes dedicated utility flows for convenient management of your Telegram bot's webhooks (for both development and production environments).

This template is ideal for digitizing documents on the go, extracting text from scanned files, or converting image-based content into versatile, searchable text.

Getting Started

To get this powerful OCR bot up and running, follow these two main steps:

1. Set Up Your Telegram Bot: First, you'll need to configure your Telegram bot and its webhooks. Follow the instructions detailed in the Telegram Bot Webhook Setup section to create your bot, obtain its API token, and set up the necessary webhook URLs.
2. Configure Bot Settings: Next, you'll need to define key operational parameters for your bot. Proceed to the Settings Configuration section and populate the variables according to your preferences, including options for whitelist access.
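For reference, the type/size gate at the start of the flow amounts to something like the following. A sketch, assuming the Telegram trigger item exposes the incoming file's mime_type and file_size the way the Bot API reports them:

```javascript
// n8n Code node sketch: validate an incoming Telegram file before OCR.
const ALLOWED = ['image/png', 'image/jpeg', 'application/pdf'];
const MAX_BYTES = 25 * 1024 * 1024; // 25 MB Telegram Bot API limit

const msg = items[0].json.message;
const doc = msg.document ?? msg.photo?.at(-1); // largest photo size, if any
if (!doc) {
  throw new Error('No file attached');
}
if (doc.mime_type && !ALLOWED.includes(doc.mime_type)) {
  throw new Error(`Unsupported type: ${doc.mime_type}`);
}
if (doc.file_size > MAX_BYTES) {
  throw new Error('File exceeds the 25 MB limit');
}
return [{ json: { file_id: doc.file_id, mime_type: doc.mime_type } }];
```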
Create Linear tickets from Notion content
This workflow allows you to define multiple tickets/issues in a Notion page, then easily import them into Linear.

Why is it useful?
We use this workflow internally at n8n for collaboration between Product and Engineering teams:
- Engineering needs all work to be in our ticketing system (Linear) in order to keep track of it
- Product prefers to review features in Notion. This is because it can be used to dump all your thoughts and organise them into themes afterwards, plus it better supports rich content like videos

Features
- Supports rich formatting (bullets, images, videos, links, etc.)
- Keeps links between the Notion and Linear versions, in case you need to refer back
- Allows you to assign each issue to a team member in the Notion definition
- Avoids importing the same issues twice if you run it again on the same page (meaning you can add issues incrementally)

You can see an example of the required format of the Notion page here.
Build a document RAG system with Kimi-K2, Gemini embeddings and Qdrant
Generating contextual summaries is a token-intensive approach to RAG embeddings that can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model: it charges a flat subscription fee (starting from $10) and allows unlimited token usage instead. If you typically spend over $10–$25 a month, you may find Featherless a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at high volume for the majority of RAG workloads.

LLM: moonshotai/Kimi-K2-Instruct
Embeddings: models/gemini-embedding-001

How it works
- A large document is imported into the workflow using the HTTP node and its text extracted via the Extract from File node. For this demonstration, the UK highway code is used as an example.
- Each page is processed individually and a contextual summary is generated for it. The contextual summary generation involves taking the current page together with the preceding and following pages and summarising the contents of the current page.
- This summary is then converted to embeddings using the gemini-embedding-001 model. Note: we're using an HTTP request to call the Gemini embedding API because, at the time of writing, n8n does not support the new API's schema (see the sketch at the end of this description).
- These embeddings are then stored in a Qdrant collection, which can then be retrieved via an agent/MCP server or another workflow.

How to use
- Replace the large document import with your own source of documents, such as Google Drive or an internal repo.
- Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
- Expand and/or tune the embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

Requirements
- Featherless.ai account and API key
- Gemini account and API key for embeddings
- Qdrant vector store

Customising this workflow
- Sparse vectors were not included in this template due to scope, but they should be the next step to getting the most out of contextual retrieval.
- Be sure to explore other models on the Featherless.ai platform, or host your own custom/finetuned models.
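A minimal sketch of the raw embedding call, based on the public Gemini REST API at the time of writing (verify the endpoint and payload shape against the current docs before relying on it):

```javascript
// Call the Gemini embedding endpoint directly over HTTP.
// Assumes GEMINI_API_KEY is set in the environment.
async function embed(text) {
  const url =
    'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent';
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-goog-api-key': process.env.GEMINI_API_KEY,
    },
    body: JSON.stringify({
      model: 'models/gemini-embedding-001',
      content: { parts: [{ text }] },
    }),
  });
  const data = await res.json();
  return data.embedding.values; // array of floats, ready for a Qdrant upsert
}
```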
Build a WhatsApp assistant for text, audio & images using GPT-4o & Evolution API
Build an intelligent WhatsApp assistant that automatically responds to customer messages using AI. This template uses the Evolution API community node for WhatsApp integration and OpenAI for natural language processing, with built-in conversation memory powered by Redis to maintain context across messages.

> ⚠️ Self-hosted requirement: This workflow uses the Evolution API community node, which is only available on self-hosted n8n instances. It will not work on n8n Cloud.

What this workflow does
- Receives incoming WhatsApp messages via the Evolution API webhook
- Filters and processes text, audio, and image messages (see the sketch at the end of this description)
- Transcribes audio messages using OpenAI Whisper
- Analyzes images using GPT-4 Vision
- Generates contextual responses with conversation memory
- Sends replies back through WhatsApp

Who is this for?
- Businesses wanting to automate customer support on WhatsApp
- Teams needing 24/7 automated responses with AI
- Developers building multimodal chat assistants
- Companies looking to reduce response time on WhatsApp

Setup instructions
1. Evolution API: Install and configure Evolution API on your server. Create an instance and obtain your API key and instance name.
2. Redis: Set up a Redis instance for conversation memory. You can use a local installation or a cloud service like Redis Cloud.
3. OpenAI: Get your API key from platform.openai.com with access to GPT and Whisper models.
4. Webhook: Configure your Evolution API instance to send webhooks to your n8n webhook URL.

Customization options
- Modify the system prompt in the AI node to change the assistant's personality and responses
- Adjust the Redis TTL to control how long conversation history is retained
- Add additional message type handlers for documents, locations, or contacts
- Integrate with your CRM or database to personalize responses

Credentials required
- Evolution API credentials (self-hosted)
- OpenAI API key
- Redis connection
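To illustrate the filtering step, here is a rough sketch of how a Code node might classify an incoming Evolution API webhook item. The field names below are typical for Evolution API payloads but vary by version, so treat them as assumptions and inspect a real webhook first:

```javascript
// Classify an Evolution API webhook item by message type.
// Field names are assumptions; verify against your own webhook output.
const msg = items[0].json.data?.message ?? {};

let kind = 'unsupported';
if (msg.conversation || msg.extendedTextMessage) kind = 'text';
else if (msg.audioMessage) kind = 'audio';   // route to Whisper transcription
else if (msg.imageMessage) kind = 'image';   // route to GPT-4 Vision

return [{ json: { ...items[0].json, messageKind: kind } }];
```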
Generate AI videos from text prompts with OpenAI Sora 2
Sora 2 Video Generation: Prompt-to-Video Automation with OpenAI API

Who's it for
This template is ideal for content creators, marketers, developers, or anyone needing automated AI video creation from text prompts. Perfect for bulk generation, marketing assets, or rapid prototyping using OpenAI's Sora 2 API.

Example use cases:
- E-commerce sellers creating product showcase videos for multiple items without hiring videographers or renting studios
- Social media managers generating daily content like travel vlogs, lifestyle videos, or brand stories from simple text descriptions
- Marketing teams producing promotional videos for campaigns, events, or product launches in minutes instead of days

How it works / What it does
1. Submit a text prompt using a form or input node.
2. The workflow sends your prompt to the Sora 2 API endpoint to start video generation.
3. It polls the API to check whether the video is still processing or completed.
4. When ready, it retrieves the finished video's download link and automatically saves the file.
All actions (prompt submission, status checks, and video retrieval) run without manual oversight. A sketch of this create/poll/download cycle appears at the end of this description.

How to set up
1. Use your existing OpenAI API key or create a new one at https://platform.openai.com/api-keys
2. Replace YourAPIKey in the following nodes with your OpenAI API key: Sora 2Video, Get Video, Download Video
3. Adjust the "Wait for Video" node intervals if needed — video generation typically takes several minutes
4. Enter your video prompt into the Text Prompt trigger form to start the workflow

Requirements
- OpenAI account & OpenAI API key
- n8n instance (cloud or self-hosted)
- A form, webhook, or manual trigger for prompt submission

How to customize the workflow
- Connect the prompt input to external forms, bots, or databases.
- Add post-processing steps like uploading videos to cloud storage or social platforms.
- Adjust polling intervals for efficient status checking.

Limitations and Usage Tips
- Prompt Clarity: For optimal video generation results, ensure that prompts are clear, concise, and well-structured. Avoid ambiguity and overly complex language to improve AI interpretation.
- Processing Duration: Video creation may take several minutes depending on prompt complexity and system load. Anticipate this delay and design workflows accordingly.
- Polling Interval Configuration: Adjust polling intervals thoughtfully to balance responsiveness with API rate limits, optimizing both performance and resource usage.
- API Dependency: This workflow relies on the availability and quota limits of OpenAI's Sora 2 API. Monitor your API usage to avoid interruptions and service constraints.
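Outside n8n, the same create/poll/download cycle looks roughly like this in plain JavaScript. The endpoint paths follow OpenAI's video API docs at the time of writing; treat them as assumptions and confirm against the current reference:

```javascript
// Create a Sora 2 video job, poll until it finishes, then fetch the file.
const BASE = 'https://api.openai.com/v1';
const headers = {
  Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  'Content-Type': 'application/json',
};

async function generateVideo(prompt) {
  // 1. Start the generation job
  const createRes = await fetch(`${BASE}/videos`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ model: 'sora-2', prompt }),
  });
  let video = await createRes.json();

  // 2. Poll while the job is queued or in progress
  while (video.status === 'queued' || video.status === 'in_progress') {
    await new Promise((r) => setTimeout(r, 30_000)); // 30 s between checks
    const pollRes = await fetch(`${BASE}/videos/${video.id}`, { headers });
    video = await pollRes.json();
  }
  if (video.status !== 'completed') {
    throw new Error(`Generation failed: ${video.status}`);
  }

  // 3. Download the rendered video bytes
  const content = await fetch(`${BASE}/videos/${video.id}/content`, { headers });
  return Buffer.from(await content.arrayBuffer()); // MP4 bytes
}
```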
Enrich CRM contact data with LinkedIn profiles using GPT & multi-CRM support
⚠️ DISCLAIMER: This workflow uses the AnySite LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

Overview
This n8n workflow automates the enrichment of CRM contact data with professional insights from LinkedIn profiles. The workflow integrates with both Pipedrive and HubSpot CRMs, finding LinkedIn profiles that match your contacts and updating your CRM with valuable information about their professional background and recent activities.

Key Features
- Multi-CRM Support: Works with both Pipedrive and HubSpot
- AI-Powered Data Enrichment: Uses an advanced AI agent to analyze and summarize professional information
- Automated Triggers: Activates when new contacts are added or when enrichment is requested
- Comprehensive Profile Analysis: Captures LinkedIn profile summaries and post activity

How It Works

Triggers
The workflow activates in two scenarios:
- When a new contact is created in the CRM
- When a contact is updated in the CRM with an enrichment flag

LinkedIn Data Collection Process
1. Email Lookup: First tries to find the LinkedIn profile using the contact's email
2. Advanced Search: If the email lookup fails, uses name and company details to find potential matches (see the sketch at the end of this description)
3. Profile Analysis: Collects comprehensive profile information
4. Post Analysis: Gathers and analyzes the contact's recent LinkedIn activity

CRM Updates
The workflow updates your CRM with:
- LinkedIn profile URL
- Professional summary (skills, experience, background)
- Analysis of recent LinkedIn posts and activity

Setup Instructions

Requirements
- Self-hosted n8n instance with the HDW LinkedIn community node installed
- API access to OpenAI (for GPT-4o)
- Pipedrive and/or HubSpot account
- AnySite API key: https://AnySite.io

Installation Steps
1. Install the HDW LinkedIn node: npm install n8n-nodes-hdw
   Follow the detailed instructions at https://www.npmjs.com/package/n8n-nodes-hdw
2. Configure credentials:
   - OpenAI: Add your OpenAI API key
   - Pipedrive: Connect your Pipedrive account (if using)
   - HubSpot: Connect your HubSpot account (if using)
   - AnySite LinkedIn: Add your API key from https://AnySite.io
3. Set up CRM custom fields.
   For Pipedrive, go to Settings → Data Fields → Contact Fields → + Add Field and create the following custom fields:
   - LinkedIn Profile: Field type - Large text
   - Profile Summary: Field type - Large text
   - LinkedIn Posts Summary: Field type - Large text
   - Need Enrichment: Field type - Single option (Yes/No)
   Detailed instructions for creating custom fields in Pipedrive: https://support.pipedrive.com/en/article/custom-fields
   For HubSpot, go to Settings → Properties → Create property and create the following properties for the Contact object:
   - linkedin_url: Field type - Single-line text
   - profile_summary: Field type - Multi-line text
   - linkedin_posts_summary: Field type - Multi-line text
   - need_enrichment: Field type - Checkbox (Boolean)
   Detailed instructions for creating properties in HubSpot: https://knowledge.hubspot.com/properties/create-and-edit-properties
4. Import the workflow: Import the "HDW_CRM_Enrichment.json" file into your n8n instance
5. Activate webhooks: Enable the webhook triggers for your CRM to ensure the workflow activates correctly

Customization Options

AI Agent Prompts
You can modify the system prompts in the "Data Enrichment AI Agent" nodes to:
- Change the focus of profile analysis
- Adjust the tone and detail level of summaries
- Customize what information is extracted from posts

CRM Field Mapping
The workflow is pre-configured to update specific custom fields in Pipedrive and HubSpot. Update the field/property mappings in:
- The "Update data in Pipedrive" nodes
- The "Update data in HubSpot" node

Troubleshooting

Common Issues
- LinkedIn Profile Not Found: Check if the contact's email is their work email; consider adjusting the search parameters
- Webhook Not Triggering: Verify the webhook configuration in your CRM
- Missing Custom Fields: Ensure all required custom fields are created in your CRM with the correct names

Rate Limits
- Be aware of LinkedIn API rate limits (managed by the AnySite LinkedIn node)
- Consider implementing delays if processing large batches of contacts

Best Practices
- Use enrichment flags to selectively update contacts rather than enriching all contacts
- Review and clean contact data in your CRM before enrichment
- Periodically review the AI-generated summaries to ensure quality and relevance
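To make the lookup order concrete, the email-first, search-second logic amounts to something like this. A hedged sketch: findByEmail and searchByNameAndCompany are hypothetical stand-ins for the AnySite LinkedIn node operations, not real API calls.

```javascript
// Hypothetical sketch of the two-step LinkedIn lookup.
// findByEmail / searchByNameAndCompany stand in for the AnySite
// node operations; they are not real function names.
async function findLinkedInProfile(contact) {
  // Step 1: exact match via the contact's email
  const byEmail = await findByEmail(contact.email);
  if (byEmail) return byEmail;

  // Step 2: fall back to a name + company search and take the top hit
  const candidates = await searchByNameAndCompany(contact.name, contact.company);
  return candidates[0] ?? null; // null → mark the contact as "not found"
}
```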
Update Shopify order tags when an Onfleet event happens
Summary
Onfleet is a last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting while you focus on your customers. This workflow template automatically updates the tags on a Shopify order when an Onfleet event occurs.

Configurations
- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, please visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Shopify node with your Shopify credentials and add your own tags to the Shopify order.
Upload & rename videos to Google Drive from a URL via Apps Script
📄 Google Script Workflow: Upload File from URL to Google Drive (via n8n)

🔧 Purpose:
This lightweight Google Apps Script acts as a server endpoint that receives a file URL (from n8n), downloads the file, uploads it to your specified Google Drive folder, and responds with the uploaded file's Drive URL. This is useful for large video/audio files that n8n cannot handle directly via HTTP download nodes.

---

🚀 Setup Steps:

Create a New Script Project
- Go to https://script.google.com
- Click "New Project"
- Rename the project to something like: DriveUploader

---

Paste the Script Code
Replace the default Code.gs content with the following (your custom script):

```javascript
function doPost(e) {
  const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here

  try {
    const data = JSON.parse(e.postData.contents);

    // 🔒 Check for correct secret key
    if (!data.secret || data.secret !== SECRET_KEY) {
      return ContentService.createTextOutput("Unauthorized")
        .setMimeType(ContentService.MimeType.TEXT);
    }

    const videoUrl = data.videoUrl;
    const folderId = 'YOUR_FOLDER_ID_HERE'; // Replace with your target folder ID
    const folder = DriveApp.getFolderById(folderId);

    const response = UrlFetchApp.fetch(videoUrl);
    const blob = response.getBlob();
    const file = folder.createFile(blob);
    file.setName('uploaded_video.mp4'); // You can customize the name

    return ContentService.createTextOutput(file.getUrl())
      .setMimeType(ContentService.MimeType.TEXT);
  } catch (err) {
    return ContentService.createTextOutput("Error: " + err.message)
      .setMimeType(ContentService.MimeType.TEXT);
  }
}
```

---

Generate & Set Up Secret Key
To allow only authorized POST requests to your script, generate a secret key from any reliable key generator. You can head over to acte, click generate, and copy the "Encryption key 256". Paste it into the 'your-strong-secret-here' placeholder in your script, then click save:

```js
const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
```

Replace Folder ID in Code
- Open the target Drive folder in your browser
- The folder ID is the part of the URL after /folders/
  Example: https://drive.google.com/drive/u/0/folders/1Xabc12345678defGHIJklmn
- Paste that ID in the script:

```js
const folderId = '1Xabc12345678defGHIJklmn';
```

---

Set Up Deployment as Web App
- Click "Deploy" > "Manage Deployments" > "New Deployment"
- Under Select type, choose Web app
- Description: Upload from URL to Drive
- Execute as: Me
- Who has access: Anyone
- Click Deploy
- Authorize the script when prompted
- Copy the Web App URL

---

📤 How to Use in n8n

HTTP Request Node
- Method: POST
- URL: (your web app URL)
- Body Content Type: JSON
- Body (note that the secret key travels inside the JSON body, matching the check in the script):

```json
{
  "videoUrl": "https://example.com/path/to/your.mp4",
  "secret": "your-strong-secret-here"
}
```

- videoUrl: The file download URL
- secret: The secret key you generated and set up in the script

Rename Node
A simple Drive update node to rename the file, using the Drive file URL returned by the script.