🛠️ Re-Access Binary Data from Any Previous Node
How it works
Ever had binary data (like images, PDFs, or files) disappear in your n8n workflow after an intermediate node processed it? This workflow provides a powerful solution by demonstrating how to re-access and re-attach binary data from any previous node, even if it was dropped along the way. Think of it like having a reliable backup copy of your file always available, no matter what happens to the original as it moves through your workflow.
Here's how this template works step-by-step:
- Initial Binary Fetch: The workflow starts by fetching a binary image (the n8n logo) from a URL using an `HTTP Request` node. This is our original binary data.
- Simulated Data Loss: A `Set` node then processes this data. Crucially, by default, `Set` nodes (and many others) do not pass binary data to subsequent nodes. This step intentionally simulates a common scenario where your binary data might seem to "disappear" from the workflow's output.
- Re-Access and Re-Attach: The core of the solution is a `Code` node. It uses the n8n expression `$(nodeName).item` to reach back to the node that originally produced the binary data (`Get n8n Logo (Binary)`). It then retrieves that binary data and uses `this.helpers.prepareBinaryData()` to correctly re-attach it to the current item, making it available to all subsequent nodes. A minimal sketch of this Code node follows this list.
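For reference, here is a minimal sketch of what such a Code node can contain (assuming "Run Once for Each Item" mode, the default binary property name `data`, and n8n's default in-memory binary storage, where the file content lives as a base64 string on `binary.data.data`):

```javascript
// Minimal sketch of the re-attach logic. Assumes the upstream node is
// named "Get n8n Logo (Binary)" and stored its file under the default
// binary property "data".
const previousNodeName = 'Get n8n Logo (Binary)'; // change for your workflow

// Reach back to the item produced by the earlier node, even though
// intermediate nodes dropped its binary payload from the current item.
const sourceBinary = $(previousNodeName).item.binary.data;

// In the default (in-memory) binary mode, the file content is a
// base64-encoded string; decode it back into a Buffer.
const buffer = Buffer.from(sourceBinary.data, 'base64');

// prepareBinaryData registers the buffer with n8n and returns the
// metadata object expected under item.binary.
const item = $input.item;
item.binary = {
  data: await this.helpers.prepareBinaryData(
    buffer,
    sourceBinary.fileName,
    sourceBinary.mimeType,
  ),
};
return item;
```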
Set up steps
Setup time: 0 minutes!
This is a self-contained tutorial workflow, so no external accounts or credentials are required.
- Simply click the "Execute Workflow" button to run it.
- Observe the output of the `Re-Access Binary Data from Previous Node` node to see the binary data successfully re-attached.
- Important for Customization: If you adapt this technique to your own workflows, remember to update the `previousNodeName` variable within the `Re-Access Binary Data from Previous Node` Code node (as in the sketch above) to match the exact name of the node that originally produced the binary data you wish to retrieve.
n8n Workflow: Re-access Binary Data from Any Previous Node
This n8n workflow demonstrates a fundamental concept in n8n: how to re-access and utilize binary data that was generated by an earlier node in the workflow, even after subsequent nodes have processed or transformed the data. This is particularly useful when dealing with files, images, or other non-textual data that needs to be passed through multiple steps.
What it does
This workflow illustrates the process of handling binary data within n8n, specifically focusing on how to make binary data from a previous node available to a later node.
- Manual Trigger: The workflow starts with a manual trigger, allowing you to execute it on demand.
- HTTP Request: It then makes an HTTP GET request to a URL. While the specific URL is not provided in the JSON, this node is configured to return the response as binary data. This simulates fetching a file or image from an external source.
- Edit Fields (Set): A "Set" node is included, which typically modifies or adds data to the workflow items. In this context, it serves to demonstrate that even after intermediate nodes, the original binary data can still be accessed.
- Code Node: Finally, a Code node contains the custom JavaScript logic. This is where you implement the code that retrieves and processes the binary data from the "HTTP Request" node (see the example under Setup/Usage below).
Prerequisites/Requirements
- n8n Instance: You need a running n8n instance to import and execute this workflow.
- HTTP Endpoint (Optional): To fully test the "HTTP Request" node, you would ideally point it to an actual URL that returns binary data (e.g., an image URL, a PDF file URL).
Setup/Usage
- Import the Workflow:
- Copy the provided JSON code.
- In your n8n instance, click "New" in the workflows section.
- Click the three dots (...) in the top right corner and select "Import from JSON".
- Paste the JSON code and click "Import".
- Configure HTTP Request (Optional):
- Click on the "HTTP Request" node.
- In the "URL" field, enter the URL of a resource that returns binary data (e.g.,
https://www.n8n.io/favicon.icofor a small image). - Ensure the "Response Format" is set to "Binary Data" (this is the default when
binaryDatais true in the node's configuration).
- Access Binary Data in Code Node:
- Click on the "Code" node.
- You can access the binary data from the "HTTP Request" node using an expression like `{{ $node["HTTP Request"].binary.data }}`. Replace `data` with the actual property name of the binary data if it's different.
- For example, to log the binary data's name and size:

```javascript
const binaryData = $node["HTTP Request"].binary;

if (binaryData && Object.keys(binaryData).length > 0) {
  // Assuming a single binary property on the item
  const firstBinaryItem = binaryData[Object.keys(binaryData)[0]];
  console.log(`Binary data name: ${firstBinaryItem.fileName}`);
  console.log(`Binary data size: ${firstBinaryItem.fileSize}`);
  // You can then process firstBinaryItem.data (a base64-encoded string
  // in n8n's default binary mode)
}

return items;
```
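Note that `$node["HTTP Request"]` is the legacy expression syntax. In current n8n versions, the equivalent inside a Code node is `$('HTTP Request')`; here is a minimal sketch of the same check using the newer syntax (still assuming the default binary property `data`):

```javascript
// Newer Code node syntax: $('Node Name') instead of $node["Node Name"].
const binary = $('HTTP Request').first().binary;

if (binary?.data) {
  console.log(`Binary data name: ${binary.data.fileName}`);
  console.log(`Binary data MIME type: ${binary.data.mimeType}`);
}

return $input.all();
```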
- Execute the Workflow:
- Click the "Execute Workflow" button in the "Manual Trigger" node or the "Execute Workflow" button in the n8n canvas toolbar.
- Observe the output in the "Code" node to see the re-accessed binary data.
Related Templates
Track competitor SEO keywords with Decodo + GPT-4.1-mini + Google Sheets
This workflow automates competitor keyword research using an OpenAI LLM and Decodo for intelligent web scraping.

Who this is for
- SEO specialists, content strategists, and growth marketers who want to automate keyword research and competitive intelligence.
- Marketing analysts managing multiple clients or websites who need consistent SEO tracking without manual data pulls.
- Agencies or automation engineers using Google Sheets as an SEO data dashboard for keyword monitoring and reporting.

What problem this workflow solves
Tracking competitor keywords manually is slow and inconsistent, and most SEO tools provide limited API access or lack contextual keyword analysis. This workflow solves that by:
- Automatically scraping any competitor's webpage with Decodo.
- Using OpenAI GPT-4.1-mini to interpret keyword intent, density, and semantic focus.
- Storing structured keyword insights directly in Google Sheets for ongoing tracking and trend analysis.

What this workflow does
1. Trigger — Manually start the workflow or schedule it to run periodically.
2. Input Setup — Define the website URL and target country (e.g., https://dev.to, france).
3. Data Scraping (Decodo) — Fetch competitor web content and metadata.
4. Keyword Analysis (OpenAI GPT-4.1-mini) — Extract primary and secondary keywords, identify focus topics and semantic entities, generate a keyword density summary and SEO strength score, and recommend optimization and internal linking opportunities.
5. Data Structuring — Clean and convert the GPT output into JSON format (a sketch follows below).
6. Data Storage (Google Sheets) — Append structured keyword data to a Google Sheet for long-term tracking.

Setup
- Prerequisites: If you are new to Decodo, sign up at visit.decodo.com. You also need an n8n account with workflow editor access, Decodo API credentials, an OpenAI API key, and a Google Sheets account connected via OAuth2. Make sure to install the Decodo Community node.
- Create a Google Sheet: Add columns for primarykeywords, seostrengthscore, keyworddensity_summary, etc., and share it with your n8n Google account.
- Connect Credentials: Decodo API (register, log in, and obtain the Basic Authentication token via the Decodo Dashboard), OpenAI API (for GPT-4.1-mini), and Google Sheets OAuth2.
- Configure Input Fields: Edit the "Set Input Fields" node to set your target site and region.
- Run the Workflow: Click Execute Workflow in n8n and view structured results in your connected Google Sheet.

How to customize this workflow
- Track Multiple Competitors → Use a Google Sheet or CSV list of URLs; loop through them using the Split In Batches node.
- Add Language Detection → Add a Gemini or GPT node before keyword analysis to detect content language and adjust prompts.
- Enhance the SEO Report → Expand the GPT prompt to include backlink insights, metadata optimization, or readability checks.
- Integrate Visualization → Connect your Google Sheet to Looker Studio for SEO performance dashboards.
- Schedule Auto-Runs → Use the Cron node to run weekly or monthly for competitor keyword refreshes.

Summary
This workflow combines Decodo for intelligent web scraping, OpenAI GPT-4.1-mini for keyword and SEO analysis, and Google Sheets for live tracking and reporting. It's a complete AI-powered SEO intelligence pipeline for teams that want actionable insights on keyword gaps, optimization opportunities, and content focus trends, without relying on expensive SEO SaaS tools.
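As an illustration of the Data Structuring step above, a Code node along these lines could clean the GPT reply before the Google Sheets node (the field names and the input path `json.message.content` are assumptions, not the template's actual schema):

```javascript
// Hypothetical sketch of the "Data Structuring" step: strip markdown
// fences from the model's reply, parse it as JSON, and flatten it into
// the columns the Google Sheet expects. Field names are illustrative.
const raw = $input.first().json.message?.content ?? '';

// Models often wrap JSON replies in markdown code fences; strip them.
const cleaned = raw.replace(/`{3}(?:json)?/g, '').trim();
const parsed = JSON.parse(cleaned);

return [{
  json: {
    primarykeywords: (parsed.primary_keywords ?? []).join(', '),
    seostrengthscore: parsed.seo_strength_score ?? null,
    keyworddensity_summary: parsed.keyword_density_summary ?? '',
  },
}];
```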
Synchronizing WooCommerce inventory and creating products with Google Gemini AI and BrowserAct
This sophisticated n8n template automates WooCommerce inventory management by scraping supplier data, updating existing products, and intelligently creating new ones with AI-formatted descriptions. It is essential for e-commerce operators, dropshippers, and inventory managers who need to keep product pricing and stock levels synchronized with multiple third-party suppliers, minimizing overselling and maximizing profit.

Self-Hosted Only
This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works
- The workflow is typically run by a Schedule Trigger (though a Manual Trigger is also shown) to check stock automatically.
- It reads a list of suppliers and their inventory page URLs from a central Google Sheet.
- The workflow loops through each supplier: a BrowserAct node scrapes the current stock and price data from the supplier's inventory page, and a Code node parses this bulk data into individual product items (see the sketch below).
- It then loops through each individual product found and checks WooCommerce to see whether the product already exists, based on its name.
- If the product exists: it updates the existing product's price and stock quantity.
- If the product does not exist: an If node checks whether the missing product's category matches a predefined type (optional filtering). If it passes the filter, a second BrowserAct workflow scrapes detailed product attributes from a dedicated product page (e.g., DigiKey), an AI Agent (Gemini) transforms these attributes into a specific, styled HTML table for the product description, and the product is created in WooCommerce with all scraped details and the AI-generated description.
- Error Handling: multiple Slack nodes alert your team immediately if any scraping task fails or if the product update/creation process encounters an issue.

Note: This workflow does not support image uploads for new products. To enable this functionality, you must modify both the n8n and BrowserAct workflows.

Requirements
- BrowserAct API account for web scraping
- BrowserAct n8n Community Node (n8n Nodes BrowserAct)
- BrowserAct templates named "WooCommerce Inventory & Stock Synchronization" and "WooCommerce Product Data Reconciliation"
- Google Sheets credentials for the supplier list
- WooCommerce credentials for product management
- Google Gemini account for the AI Agent
- Slack credentials for error alerts

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
STOP Overselling! Auto-Sync WooCommerce Inventory from ANY Supplier
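To give a sense of the "parses this bulk data into individual product items" step, here is a hypothetical Code node sketch (the scraper's output shape is an assumption, not BrowserAct's actual format):

```javascript
// Hypothetical sketch: fan the scraper's bulk result out into one n8n
// item per product. The input shape (json.products as an array of
// { name, price, stock } objects) is assumed for illustration.
const products = $input.first().json.products ?? [];

return products.map((product) => ({
  json: {
    name: product.name,
    price: Number(product.price),
    stock_quantity: Number(product.stock),
  },
}));
```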
Create personalized email outreach with AI, Telegram bot & website scraping
This n8n workflow is built for AI and automation agencies to promote their workflows through an interactive demo that prospects can try themselves. The featured system is a deeply personalized email demo.

🔄 How It Works
- Prospect Interaction: A prospect starts the demo via Telegram. The Telegram bot (created with BotFather) connects directly to your n8n instance.
- Demo Guidance: The RAG agent and instructor guide the user step-by-step through the demo. Instructions and responses are dynamically generated based on user input.
- Workflow Execution: When the user triggers an action (e.g., testing the email demo), n8n runs the workflow, collecting website data using Crawl4AI or standard HTTP requests.
- Email Demo: The system personalizes and sends a demo email through SparkPost, showing the automation's capability.
- Logging and Control: Each user interaction is logged in your database using their name and ID, and the workflow checks limits to prevent misuse or spam.
- Error Handling: If a low-CPU scraping method fails, the workflow automatically escalates to a higher-CPU method.

⚙️ Requirements
Before setting up, make sure you have the following:
- n8n — automation platform to run the workflow
- Docker — required to run Crawl4AI
- Crawl4AI — for intelligent website crawling
- Telegram account — to create your Telegram bot via BotFather
- SparkPost account — to send personalized demo emails
- A database (e.g., PostgreSQL, MySQL, or SQLite) — to store log data such as user name and ID

🚀 Features
- Telegram interface using the BotFather API
- Instructor and RAG agent to guide prospects through the demo
- Flow generation limits per user ID to prevent abuse
- Low-cost yet powerful web scraping, escalating from low- to high-CPU flows if earlier ones fail

💡 Development Ideas
- Replace the RAG logic with your own query-answering and guidance method
- Remove the flow limit if you're confident the demo can't be misused
- Swap the personalized email demo with any other workflow you want to showcase

🧠 Technical Notes
- Telegram bot created with BotFather
- Website crawl process: extract sub-links via /sitemap.xml, sitemap_index.xml, or standard HTTP requests; fall back to Crawl4AI if normal requests fail; fetch sub-link content via HTTPS, with Crawl4AI as backup (see the sketch below)
- SparkPost used for sending demo emails

⚙️ Setup Instructions
1. Create a Telegram Bot: Use BotFather on Telegram to create your bot and get the API token. This token connects your n8n workflow to Telegram.
2. Create a Log Data Table: In your database, create a table to store user logs. It must include at least a name column (the user's name or Telegram username) and an id column (the user's unique identifier).
3. Install Crawl4AI with Docker: Follow the installation guide from the official repository: https://github.com/unclecode/crawl4ai. Crawl4AI handles website crawling and content extraction in your workflow.

📦 Notes
This setup is optimized for low cost, easy scalability, and real-time interaction with prospects. You can customize each component — Telegram bot behavior, RAG logic, scraping strategy, and email workflow — to fit your agency's demo needs.

👉 You can try the live demo here: @emaildemobot
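As an illustration of the crawl process in the technical notes above, a Code node along these lines could try the cheap sitemap path first and fall back to Crawl4AI (the Crawl4AI endpoint, port, and response shape shown are assumptions):

```javascript
// Hypothetical sketch of sub-link extraction with escalation:
// plain HTTP sitemap fetch first, Crawl4AI as the fallback.
const baseUrl = $input.first().json.website; // e.g. "https://example.com"
let links = [];

try {
  // Cheap, low-CPU path: fetch the sitemap and pull out <loc> entries.
  const xml = await this.helpers.httpRequest({ url: `${baseUrl}/sitemap.xml` });
  links = [...String(xml).matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
} catch (error) {
  // Higher-CPU path: ask a self-hosted Crawl4AI instance instead.
  // Endpoint and payload are assumptions based on a default Docker setup.
  const crawled = await this.helpers.httpRequest({
    method: 'POST',
    url: 'http://crawl4ai:11235/crawl',
    body: { urls: [baseUrl] },
    json: true,
  });
  links = crawled?.results?.[0]?.links ?? [];
}

return links.map((url) => ({ json: { url } }));
```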