Mauricio Perera
Automation consultant with over 10 years of experience specializing in AI, no-code, and workflow optimization. I’ve delivered tailored AI and NLP solutions across real estate, healthcare, and more, enhancing efficiency and customer experiences. Proficient in tools like Make, Airtable, and Zapier, I also integrate GPT models to create scalable, innovative automations. Contact me to discuss custom n8n workflows or advanced automations to streamline your processes.
Templates by Mauricio Perera
Analyze images, videos, documents & audio with Gemini Tools and Qwen LLM Agent
📁 Analyze uploaded images, videos, audio, and documents with specialized tools, powered by a lightweight language-only agent.

---

🧭 What It Does

This workflow enables multimodal file analysis using Google Gemini tools connected to a text-only LLM agent. Users can upload images, videos, audio files, or documents via a chat interface. The workflow will:

- Upload each file to Google Gemini and obtain an accessible URL.
- Dynamically generate contextual prompts based on the file(s) and user message.
- Allow the agent to invoke Gemini tools for specific media types as needed.
- Return a concise, helpful response based on the analysis.

---

🚀 Use Cases

- Customer support: Let users upload screenshots, documents, or recordings and get helpful insights or summaries.
- Multimedia QA: Review visual, audio, or video content for correctness or compliance.
- Educational agents: Interpret content from PDFs, diagrams, or audio recordings on the fly.
- Low-cost multimodal assistants: Achieve multimodal functionality without relying on large vision-language models.

---

🎯 Why This Architecture Matters

Unlike end-to-end multimodal LLMs (such as Gemini 1.5 or GPT-4o), this template:

- Uses a text-only LLM (Qwen 32B via Groq) for reasoning.
- Delegates media analysis to specialized Gemini tools.

✅ Advantages

| Feature | Benefit |
| --- | --- |
| 🧩 Modular | LLM and tools are decoupled; they can be updated independently |
| 💸 Cost-Efficient | No need to pay for full multimodal models; tools are used only when needed |
| 🔧 Tool-based Reasoning | The agent invokes tools on demand, just like OpenAI's Toolformer setup |
| ⚡ Fast | Groq LLMs offer ultra-fast responses with low latency |
| 📚 Memory | Includes a context buffer for multi-turn chats (15 messages) |

---

🧪 How It Works

🔹 Input via Chat
Users submit a message and (optionally) files via the chatTrigger.

🔹 File Handling
- If no files are attached, the prompt is passed directly to the agent.
- If files are included, they are split and uploaded to Gemini (to obtain public URLs), and their metadata (name, type, URL) is collected and embedded into the prompt.

🔹 Prompt Construction
A new chatInput is dynamically generated (see the sketch after this section):

User message
Media: [array of file data]

🔹 Agent Reasoning
The Langchain Agent receives:
- The enriched prompt
- File URLs
- Memory context (15 turns)
- Access to 4 Gemini tools: IMG (analyze image), VIDEO (analyze video), AUDIO (analyze audio), DOCUMENT (analyze document)

The agent autonomously decides whether and how to use tools, then responds with concise output.
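For illustration, here is a minimal sketch of the kind of Code node that could build the enriched prompt. The field names used (chatInput, fileName, mimeType, url) are assumptions; map them to the actual output of the upload and aggregate nodes in your copy of the workflow.

```javascript
// Minimal sketch of a Code node that builds the enriched chatInput.
// Field names (chatInput, fileName, mimeType, url) are illustrative;
// adapt them to the actual output of the upload and aggregate nodes.
const userMessage = items[0].json.chatInput || '';

// Collect the metadata gathered for each uploaded file.
const media = items.map(item => ({
  name: item.json.fileName,
  type: item.json.mimeType,
  url: item.json.url,
}));

// Embed the file metadata into the prompt the agent will receive.
return [{
  json: {
    chatInput: `${userMessage}\nMedia: ${JSON.stringify(media)}`,
  },
}];
```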
---

🧱 Nodes & Services

| Category | Node / Tool | Purpose |
| --- | --- | --- |
| Chat Input | chatTrigger | User interface with file support |
| File Processing | splitOut, splitInBatches | Process each uploaded file |
| Upload | googleGemini | Uploads each file to Gemini, gets URL |
| Metadata | set, aggregate | Builds structured file info |
| AI Agent | Langchain Agent | Receives context + file data |
| Tools | googleGeminiTool | Analyze media with Gemini |
| LLM | lmChatGroq (Qwen 32B) | Text reasoning, high-speed |
| Memory | memoryBufferWindow | Maintains session context |

---

⚙️ Setup Instructions

🔑 Required Credentials
- Groq API key (for the Qwen 32B model)
- Google Gemini API key (PaLM / Gemini 1.5 tools)

🧩 Nodes That Need Setup
Replace the existing credentials on:
- Upload a file
- Each Gemini tool (IMG, VIDEO, AUDIO, DOCUMENT)
- lmChatGroq

⚠️ File Size & Format Considerations
Some Gemini tools have file size or format restrictions. You may add validation nodes before uploading if needed (see the MIME-type filtering sketch at the end of this template).

---

🛠️ Optional Improvements

- Add logging and error handling (e.g., for upload failures).
- Add MIME-type filtering to choose the right tool explicitly.
- Extend the workflow with OCR or transcription services before analysis.
- Integrate with Slack, Telegram, or WhatsApp for chat delivery.

---

🧪 Example Use Case

> "Hola, ¿qué dice este PDF?" ("Hi, what does this PDF say?")

The user uploads a document → the agent routes it to the Gemini DOCUMENT tool → the extracted content is returned → the LLM summarizes it in Spanish.

---

🧰 Tags

multimodal, agent, langchain, groq, gemini, image analysis, audio analysis, document parsing, video analysis, file uploader, chat assistant, LLM tools, memory, AI tools

---

📂 Files

This template is ready to use as-is in n8n. No external webhooks or integrations are required.
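If you prefer to route files deterministically rather than leaving tool choice entirely to the agent, a small Code node can tag each file with a suggested tool before analysis. A minimal sketch, assuming each item carries a mimeType field (the field name and tool labels are illustrative):

```javascript
// Minimal sketch of MIME-type filtering before upload/analysis.
// The mimeType field name and the tool labels (IMG, VIDEO, AUDIO, DOCUMENT)
// are assumptions; adjust them to match your workflow's data.
const toolFor = (mimeType) => {
  if (!mimeType) return 'DOCUMENT';
  if (mimeType.startsWith('image/')) return 'IMG';
  if (mimeType.startsWith('video/')) return 'VIDEO';
  if (mimeType.startsWith('audio/')) return 'AUDIO';
  return 'DOCUMENT'; // PDFs, text files, spreadsheets, etc.
};

return items.map(item => ({
  json: {
    ...item.json,
    suggestedTool: toolFor(item.json.mimeType),
  },
}));
```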
Scrape ProductHunt using Google Gemini
Workflow Description: Product Data Extractor

This workflow automates the extraction of product data from Product Hunt by combining webhook interactions, HTML processing, AI-based data analysis, and structured output formatting. It is designed to handle incoming requests dynamically and return detailed JSON responses for further usage.

Overview

The workflow processes a product name submitted through a webhook. It fetches the corresponding Product Hunt page, extracts and analyzes inline scripts, and structures the data into a well-defined JSON format using AI tools. The final JSON response is returned to the client through the webhook.

Workflow Steps

1. Webhook Listener
   Node: Receive Product Request
   Function: Captures incoming requests containing the product name to process.
   Details: Accepts HTTP requests and extracts the product parameter from the query string, such as <customwebhookurl>/?product=epigram.
2. Fetch Product HTML
   Node: Fetch Product HTML
   Function: Sends an HTTP request to retrieve the HTML content of the specified Product Hunt page.
   Details: Constructs a dynamic URL using the product name and fetches the page data.
3. Extract Inline Scripts
   Node: Extract Inline Scripts
   Function: Parses the HTML content to extract inline scripts located within the <head> section (see the sketch at the end of this description).
   Details: Excludes scripts containing src attributes and validates the presence of inline scripts.
4. Process Data with LLM
   Node: Process Script with LLM
   Function: Analyzes the extracted scripts using a language model to identify key product data.
   Details: Processes the script to derive structured and meaningful insights.
5. Refine Data with Google Gemini
   Node: Analyze Script with Google Gemini
   Function: Leverages Google Gemini AI for enhanced analysis of the script data.
   Details: Ensures the extracted data is precise and enriched.
6. Format Product Data to JSON
   Node: Format Product Data to JSON
   Function: Structures the processed data into a clean JSON format.
   Details: Defines a schema to ensure all relevant fields are included in the output.
7. Send JSON Response to Client
   Node: Send JSON Response to Client
   Function: Returns the final structured JSON response to the client.
   Details: Sends the response back via the same webhook that initiated the request, e.g. <customwebhookurl>.

Key Features

- Versatile Use Cases: Gather Product Hunt data for blog posts, or use the workflow as a research tool for AI agents.
- Dynamic Processing: Adapts to various product names through dynamic URL construction.
- AI Integration: Uses the Gemini 1.5 8B model for data extraction and refinement, offering reduced latency and minimal or no cost depending on the use case.
- Selector Independence: Functions even if Product Hunt's DOM structure changes, as it does not rely on direct DOM selectors.
- Reliable Data Output: A low temperature setting (0) and a precisely defined JSON schema ensure accurate, real data extraction.
- Structured Output: Ensures the output JSON adheres to a predefined schema for consistency.
- Error Handling: Includes validations to handle missing or malformed data gracefully.

Customization Options

- Modify the webhook path to suit your application.
- Adjust the prompt for the language model to include additional fields.
- Extend the JSON schema to capture more data fields as needed.

Limitations

- Dependency on Product Hunt: Significant changes to the way Product Hunt loads data on its pages might require modifications to the workflow.
- Adaptability: Even if changes occur, the workflow can be updated to maintain functionality because it relies on AI rather than direct DOM selectors.

Performance Metrics

- Response Time: Typically ~6 seconds per product.
- Accuracy: Data is extracted with >95% precision thanks to the predefined JSON schema.

Expected Output

A JSON object containing detailed information about the specified product. Below is an example of a complete response for the product Epigram:

```json
{
  "id": "861675",
  "slug": "epigram",
  "followersCount": 181,
  "name": "Epigram",
  "tagline": "Open-Source, Free, and AI-Powered News in Short",
  "reviewsRating": 0,
  "logoUuid": "735c2528-554c-467c-9dcf-745ee4b8bbdd.png",
  "postsCount": 1,
  "websiteUrl": "https://epigram.news",
  "websiteDomain": "epigram.news",
  "metaTitle": "Epigram - Open-source, free, and ai-powered news in short",
  "postName": "Epigram",
  "postTagline": "Open-source, free, and ai-powered news in short",
  "dailyRank": "3",
  "description": "An open-source, AI-powered news app for busy people. Stay updated with bite-sized news, real-time updates, and in-depth analysis. Experience balanced, trustworthy reporting tailored for fast-paced lifestyles in a sleek, user-friendly interface.",
  "pricingType": "free",
  "userName": "Fazle Rahman",
  "userHeadline": "Co-founder & CEO, Hashnode",
  "userUsername": "fazlerocks",
  "userAvatarUrl": "https://ph-avatars.imgix.net/129147/f84e1796-548b-4d6f-9dcf-745ee4b8bbdd.jpeg",
  "makerName1": "Fazle Rahman",
  "makerHeadline1": "Co-founder & CEO, Hashnode",
  "makerUsername1": "fazlerocks",
  "makerAvatarUrl1": "https://ph-avatars.imgix.net/129147/f84e1796-548b-4d6f-9dcf-745ee4b8bbdd.jpeg",
  "makerName2": "Sandeep Panda",
  "makerHeadline2": "Co-Founder @ Hashnode",
  "makerUsername2": "sandeepg33k",
  "makerAvatarUrl2": "https://ph-avatars.imgix.net/101872/80b0b618-a540-4110-a6d1-74df39675ad0.jpeg",
  "primaryLinkUrl": "https://epigram.news/",
  "media1OriginalHeight": 1080,
  "media1OriginalWidth": 1440,
  "media1ImageUuid": "ac426fd1-3854-4734-b43d-34a5e06347ea.gif",
  "media1MediaType": "video",
  "media1MetadataUrl": "https://www.loom.com/share/b1a48a9b3cac4ba89ce772a3fbcc2847?sid=75efc771-25fa-4ac0-bb1b-5e38fc447deb",
  "media1VideoId": "b1a48a9b3cac4ba89ce772a3fbcc2847",
  "media2OriginalHeight": 630,
  "media2OriginalWidth": 1200,
  "media2ImageUuid": "8521a6bd-7640-487b-abd6-29b9f65fee32",
  "media2MediaType": "image",
  "media2MetadataUrl": null,
  "launchState": "featured",
  "thumbnailImageUuid": "735c2528-554c-467c-9dcf-745ee4b8bbdd.png",
  "link1StoreName": "Website",
  "link1WebsiteName": "epigram.news",
  "link2StoreName": "Github",
  "link2WebsiteName": "github.com",
  "latestScore": 233,
  "launchDayScore": 233,
  "userId": "129147",
  "topic1": "News",
  "topic2": "Open Source",
  "topic3": "Artificial Intelligence",
  "weeklyRank": "24",
  "commentsCount": 20,
  "postUrl": "https://www.producthunt.com/posts/epigram"
}
```

Target Audience

This workflow is ideal for developers, marketers, and data analysts seeking to automate the extraction and structuring of product data from Product Hunt for analytics, reporting, or integration with other tools.
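For reference, here is a minimal sketch of the inline-script extraction step described above, written as an n8n Code node. It assumes the fetched HTML arrives as a string in a field named data; that field name and the simple regex-based parsing are illustrative only.

```javascript
// Minimal sketch of the "Extract Inline Scripts" step.
// Assumes the fetched HTML is available as a string in item.json.data;
// adjust the field name to match the HTTP Request node's output.
const html = items[0].json.data || '';

// Keep only the <head> section, if present.
const headMatch = html.match(/<head[\s\S]*?<\/head>/i);
const head = headMatch ? headMatch[0] : html;

// Collect inline <script> blocks (those without a src attribute).
const scripts = [];
const scriptRegex = /<script\b([^>]*)>([\s\S]*?)<\/script>/gi;
let match;
while ((match = scriptRegex.exec(head)) !== null) {
  const attributes = match[1] || '';
  const body = match[2].trim();
  if (!/\bsrc\s*=/i.test(attributes) && body.length > 0) {
    scripts.push(body);
  }
}

if (scripts.length === 0) {
  return [{ json: { error: 'No inline scripts found in <head>.' } }];
}

return [{ json: { scripts } }];
```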
Extract invoice data from PDFs to JSON with Gemini AI and XML transformation
This n8n workflow converts invoices in PDF format into structured, ready-to-use JSON using AI and XML transformation, without writing any code.

🚀 How it works

1. Upload form → The user uploads a PDF file.
2. Text extraction → The PDF content is extracted as plain text.
3. XML schema definition → A standard invoice structure is defined with fields such as:
   - Invoice number
   - Customer and issuer details
   - Items with description, quantity, and price
   - Totals and taxes
   - Bank account details
4. AI (Gemini) → The model rewrites the PDF text into a valid XML document that follows the predefined schema.
5. XML cleanup → Removes extra tags, line breaks, and unnecessary formatting (see the sketch at the end of this description).
6. JSON conversion → The XML is transformed into a clean, structured JSON object, ready for integrations, APIs, or storage.

✨ Benefits

- Transforms unstructured PDFs into normalized JSON data.
- No coding required, only n8n nodes.
- Scalable to different invoice formats with minimal adjustments.
- Leverages AI to interpret complex textual content.

🛠️ Use cases

- Automating invoice data capture.
- Integration with ERPs, CRMs, or databases.
- Generating financial reports from PDFs.
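As an illustration of the XML cleanup step, here is a minimal Code-node sketch. It assumes the AI node's output text is in a field named text; the field name and the specific cleanup rules are assumptions to adapt to your Gemini node's output.

```javascript
// Minimal sketch of the XML cleanup step before JSON conversion.
// Assumes the AI output text is in item.json.text; the field name is
// illustrative and should match your Gemini node's output.
const raw = items[0].json.text || '';

const cleaned = raw
  .replace(/`{3}(?:xml)?/gi, '')   // strip markdown code fences the model may add
  .replace(/<\?xml[^>]*\?>/i, '')  // drop the XML declaration if present
  .replace(/>\s+</g, '><')         // remove whitespace between tags
  .trim();

// Hand the cleaned XML string to the next node (e.g., an XML-to-JSON node).
return [{ json: { xml: cleaned } }];
```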
Currency conversion workflow
Purpose:

This workflow automates currency conversions using simple HTTP requests routed through a webhook. It receives user requests, retrieves real-time exchange rate data from Google search results, and formats the data into a ready-to-use response, removing the need for third-party currency APIs. This makes it useful for dynamic pricing in AI-driven systems, financial data automation, and real-time currency computation inside larger workflows, while staying simple enough to integrate into professional and academic projects alike.

Workflow Steps:

1. Capture Conversion Query: The workflow starts by receiving user input as a GET request through a configured webhook. Inputs should follow a structured syntax, such as 5usd to mxn, to ensure correct processing.
   Testing tip: Use tools like Postman or a browser to send GET requests and verify that the webhook receives inputs correctly.
2. Fetch Exchange Rate: An HTTP Request node runs a Google search query to retrieve current exchange rate data. This keeps the workflow economical and adaptable while avoiding API dependencies.
3. Extract Conversion Data: The returned HTML from Google's search results is processed to extract the exchange rate figure and the contextual information needed for an accurate conversion.
   Error handling: If extraction fails, verify that the input format is correct and update the CSS selectors to reflect any changes in Google's HTML structure.
4. Format Currency Response: The extracted data is refined and formatted into a structured, user-friendly string that states the conversion result clearly (see the sketch at the end of this description).
5. Send Conversion Response: The formatted response is sent back to the request origin, completing the request/response loop.

Required Configuration:

- Configure the Webhook node to accept GET requests. Query parameters should follow the format: https://your-webhook-url/currency-converter?q=5usd+to+mxn.
- Inputs must follow the predefined syntax (e.g., 5usd to mxn). Deviations may cause processing errors or incorrect outputs.

Customization Options:

- The Extract Conversion Data node's CSS selectors can be adjusted to match changes in Google's HTML structure, keeping the workflow working over time.
- The Format Currency Response node can be modified to customize the output, add metadata, or change the response structure to fit your project.

Advanced Features:

The workflow's modular design supports integration into larger systems. For example, an e-commerce platform could use it to dynamically localize product prices based on user geolocation. You can also add nodes to log conversions, monitor performance metrics, or trigger other workflows based on the conversion output.

Expected Results:

For a query like 5usd to mxn, the workflow returns a response formatted as: 5 USD = 95 MXN.

Use Case Examples:

- AI Integration: Lets AI agents perform real-time price conversions across currencies.
- Financial Operations: Automates currency conversions for corporate reports, international transactions, and market analytics.
- Personal Financial Planning: Helps users calculate conversions for investment decisions or travel budgeting with minimal manual effort.
- E-commerce Applications: Enables dynamic price adjustments on online marketplaces, displaying localized prices to improve user experience and conversion rates.
- Workflow Integration: Embeds into larger systems, such as CRMs or ERPs, to streamline financial operations.

Key Benefits:

- No API Dependency: By using publicly available data from Google, the workflow avoids the complexity and cost of API integration.
- Accuracy and Freshness: Results are obtained by querying Google directly, so they reflect current rates.
- Flexibility: Adapts to various operational contexts and input formats, making it a versatile building block for computational and commercial applications.

Tags: currency conversion, automation, webhook, data extraction, AI integration, financial automation, e-commerce, real-time data, scalable workflows.
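For illustration, here is a minimal sketch of the "Format Currency Response" step as an n8n Code node. It assumes an upstream node has already extracted fields named amount, sourceCurrency, targetCurrency, and convertedValue; all four names are illustrative.

```javascript
// Minimal sketch of the "Format Currency Response" step.
// The input field names (amount, sourceCurrency, targetCurrency, convertedValue)
// are assumptions; map them to whatever the extraction node actually returns.
const { amount, sourceCurrency, targetCurrency, convertedValue } = items[0].json;

if (convertedValue === undefined || convertedValue === null) {
  return [{ json: { error: 'Conversion value not found. Check the CSS selectors.' } }];
}

const response = `${amount} ${String(sourceCurrency).toUpperCase()} = ${convertedValue} ${String(targetCurrency).toUpperCase()}`;

return [{ json: { response } }];
```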
Calculate the centroid of a set of vectors
n8n Workflow: Calculate the Centroid of a Set of Vectors

Overview

This workflow receives an array of vectors in JSON format, validates that all vectors have the same dimensions, and computes the centroid. It is designed to be reusable across different projects.

Workflow Structure

Nodes and Their Functions:

Receive Vectors (Webhook)
- Accepts a GET request containing an array of vectors in the vectors parameter.
- Expected input: vectors parameter in JSON format.
- Example request: /webhook/centroid?vectors=[[2,3,4],[4,5,6],[6,7,8]]
- Output: Passes the received data to the next node.

Extract & Parse Vectors (Set Node)
- Converts the input string into a proper JSON array for processing.
- Ensures vectors is a valid array. If the parameter is missing, it may generate an error.
- Expected output example:

```json
{ "vectors": [[2,3,4],[4,5,6],[6,7,8]] }
```

Validate & Compute Centroid (Code Node)
- Validates vector dimensions and calculates the centroid.
- Validation: Ensures all vectors have the same number of dimensions.
- Computation: Averages each dimension to determine the centroid.
- If validation fails, it returns an error message indicating inconsistent dimensions.
- Successful output example:

```json
{ "centroid": [4,5,6] }
```

- Error output example:

```json
{ "error": "Vectors have inconsistent dimensions." }
```

Return Centroid Response (Respond to Webhook Node)
- Sends the final response back to the client.
- If the computation is successful, it returns the centroid; if an error occurs, it returns a descriptive error message.
- Example response:

```json
{ "centroid": [4, 5, 6] }
```

Inputs

A JSON array of vectors, where each vector is an array of numerical values.

Example Input

```json
{
  "vectors": [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
  ]
}
```

Setup Guide

1. Create a new workflow in n8n.
2. Add a Webhook node (Receive Vectors) to receive JSON input.
3. Add a Set node (Extract & Parse Vectors) to extract and convert the data.
4. Add a Code node (Validate & Compute Centroid) to validate dimensions and compute the centroid.
5. Add a Respond to Webhook node (Return Centroid Response) to return the result.

Function Node Script Example

```javascript
const input = items[0].json;
const vectors = input.vectors;

if (!Array.isArray(vectors) || vectors.length === 0) {
  return [{ json: { error: "Invalid input: Expected an array of vectors." } }];
}

const dimension = vectors[0].length;
if (!vectors.every(v => v.length === dimension)) {
  return [{ json: { error: "Vectors have inconsistent dimensions." } }];
}

// Sum each dimension across all vectors, then divide by the vector count.
const centroid = new Array(dimension).fill(0);
vectors.forEach(vector => {
  vector.forEach((val, index) => {
    centroid[index] += val;
  });
});
for (let i = 0; i < dimension; i++) {
  centroid[i] /= vectors.length;
}

return [{ json: { centroid } }];
```

Testing

Use a tool like Postman or the n8n UI to send sample inputs and verify the responses (see the request sketch below). Modify the input vectors to test different scenarios.

This workflow provides a simple yet flexible solution for vector centroid computation, ensuring validation and reliability.
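As a usage illustration, here is a small client script (run outside n8n) that calls the webhook with a URL-encoded vectors parameter. The webhook URL is a placeholder.

```javascript
// Example client call to the centroid webhook (Node.js 18+ has global fetch).
// Run as an ES module (e.g., save as test.mjs). Replace the URL with your own
// n8n webhook URL; this one is a placeholder.
const vectors = [[2, 3, 4], [4, 5, 6], [6, 7, 8]];
const url = `https://your-n8n-host/webhook/centroid?vectors=${encodeURIComponent(JSON.stringify(vectors))}`;

const response = await fetch(url);
console.log(await response.json()); // expected: { "centroid": [4, 5, 6] }
```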
YouTube report generator
YouTube Subtitles Report Generator

Overview

This template enables users to generate analytical reports from YouTube video subtitles, providing insights into the thematic content of the video. Designed for efficiency and simplicity, it processes video subtitles without requiring an API key, making it an accessible solution for content analysis. The system assumes videos already have subtitles available, excluding live streams and videos without subtitle data.

---

Key Features

- Trigger Webhook: Seamlessly receive video IDs through a webhook trigger.
- Fetch Video HTML: Retrieves the video's HTML content directly from YouTube.
- Extract Subtitles URLs: Processes the HTML content to find and decode subtitle URLs (see the sketch at the end of this template).
- Fetch Subtitles Content: Downloads the subtitles from the decoded URLs in XML format.
- Generate Analytical Report: Utilizes an AI model to summarize and analyze the thematic essence of the video content. The system supports models such as Google Gemini, OpenAI, and other compatible language models; the quality of the results may vary depending on the model used, with stronger models providing faster and more accurate summaries. The default prompt focuses on identifying and summarizing the main theme of the video while excluding content related to promotions, subscriptions, or sponsored content.
- Return Analytical Report: Delivers a concise analytical report as the final response to the webhook, suitable for use cases like research, content creation, or consumption as plain text.

---

Setup Instructions

Step 1: Webhook Configuration
Set up the webhook URL in your external system (e.g., app, API) to send YouTube video IDs to this workflow.

Step 2: API Access
Ensure that your environment has access to YouTube's public HTML content. An API key is not required since the workflow parses publicly available data.

Step 3: AI Integration
Verify the connection to the AI model used for report generation. Supported models include Google Gemini and OpenAI. The system can be customized by modifying the provided prompt to suit specific analysis needs.

Step 4: Testing
Run a sample test with a YouTube video ID to ensure subtitles are correctly retrieved and the report is generated successfully.

---

Expected Outcomes

- Effortless Content Analysis: Generate thematic reports with minimal setup.
- No API Key Dependency: Simplified access by leveraging YouTube's public HTML.
- Actionable Insights: Gain valuable information on video content for business, research, or personal projects.
- Cost Considerations: Execution cost depends primarily on the model used and the length of the video (which affects token usage). Leveraging the free tier of Google Gemini models is recommended to minimize costs.
- Main Theme Extraction: The default prompt excludes irrelevant promotional content, providing clear and focused thematic summaries.

Estimated setup time: 15–20 minutes with a ready environment.

---

Prompt Description

The workflow includes a customizable prompt used to process subtitles in XML format and generate analytical reports. The generated report includes:

- Title: A brief phrase capturing the essence of the main theme.
- Body: An analytical description of the main theme, structured into a maximum of three concise paragraphs. It focuses on summarizing key ideas, recurring themes, and connections without referencing the source explicitly (e.g., avoiding phrases like "this video"). The system also removes content related to sales, subscriptions, or promotions to maintain a clear thematic focus.

---

Example Output

Title: The Ethical Challenges of Artificial Intelligence

Report: Artificial intelligence presents significant challenges in areas such as privacy, fairness, and automated decision-making. Its implementation in critical sectors such as healthcare, justice, and security has sparked debates about inherent biases and lack of transparency. Additionally, there is growing concern over the ethical implications of automation, including its impact on employment and the global economy. Finally, the importance of establishing strong regulatory frameworks is highlighted to balance technological innovation with the protection of human rights.
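For reference, here is a minimal sketch of the subtitle-URL extraction step as an n8n Code node. It assumes the fetched page HTML is in a field named data and that the page embeds a captionTracks array with baseUrl entries in its player-response JSON; both are assumptions about YouTube's current public page structure and may change at any time.

```javascript
// Minimal sketch of the "Extract Subtitles URLs" step.
// Assumes the fetched page HTML is in item.json.data (an illustrative field
// name) and that the page embeds "captionTracks":[{"baseUrl":"..."}] in its
// player-response JSON. YouTube may change this structure at any time.
const html = items[0].json.data || '';

const tracksMatch = html.match(/"captionTracks":(\[.*?\])/);
if (!tracksMatch) {
  return [{ json: { error: 'No subtitle tracks found for this video.' } }];
}

// JSON.parse also decodes escape sequences such as \u0026 back to "&".
const tracks = JSON.parse(tracksMatch[1]);

const subtitleUrls = tracks.map(track => ({
  language: track.languageCode,
  url: track.baseUrl,
}));

return [{ json: { subtitleUrls } }];
```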
AI-powered reasoning and response workflow
Overview:

This workflow is designed to handle user inputs via a webhook, process them with the Google Gemini API (specifically the gemini-2.0-flash-thinking-exp-1219 model), and return a structured response to the user. The response includes three key elements: the reasoning, the final answer, and citation URLs (if applicable). This workflow provides a robust solution for integrating AI reasoning into your processes.

It can be utilized as a tool for AI-based agents, in intelligent email drafting systems, or as a standalone intelligent automation solution.

---

Setup:

1. Webhook Configuration
   - Ensure the webhook node is properly set up to accept GET requests with an input parameter.
   - Verify that the webhook path matches your application requirements.
   - Test the webhook using tools like Postman to ensure proper data formatting.
2. Google Gemini API Credentials
   - Set up your Google Gemini API account credentials in the HTTP Request node.
   - Ensure API access and permissions are valid.
3. Parameter Adjustments
   - Customize the temperature, topK, topP, and maxOutputTokens parameters to fit your use case.

---

Customization:

- Input Parameters: Modify the webhook path or parameters based on the data your application will send.
- Response Formatting: Adjust the JavaScript code in the "Process API Response" node to fit your desired output structure (see the sketch at the end of this description).
- Output Expectations: Test the response returned by the "Return Response to User" node to ensure it meets your application requirements.

---

Workflow Steps:

1. Receive User Input
   - Node type: Webhook
   - Purpose: Captures a GET request containing a user-provided input parameter. Acts as the starting point for the workflow.
2. Send Request to Google Gemini
   - Node type: HTTP Request
   - Purpose: Sends the received input to the gemini-2.0-flash-thinking-exp-1219 model for processing. The API configuration includes parameters for customizing the response.
3. Process API Response
   - Node type: Code
   - Purpose: Extracts the reasoning, the final answer, and citation URLs from the API response, and organizes the output for further use.
4. Return Response to User
   - Node type: Respond to Webhook
   - Purpose: Sends the processed and structured response back to the user via the webhook, ensuring the response format meets expectations.

---

Expected Outcomes:

- Input Handling: Successfully captures user input via a webhook.
- AI Processing: Generates a structured response using the gemini-2.0-flash-thinking-exp-1219 model, including reasoning, answers, and citations (if available).
- Output Delivery: Returns a user-friendly response formatted to your specifications.

---

Notes:

- The workflow is inactive by default.
- Each node is annotated with a Sticky Note to clarify its purpose.
- Ensure all API credentials are correctly configured before execution.
- Use this workflow to save time, improve accuracy, and automate repetitive tasks efficiently.

---

Tags: Automation, Google Gemini, AI Agents, Intelligent Automation, Content Generation, Workflow Integration
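As an illustration only, here is a sketch of what the "Process API Response" Code node might look like. The exact response shape of the gemini-2.0-flash-thinking-exp-1219 model is not documented here, so the candidates/content/parts structure and the citation fields below are assumptions; inspect a real response and adjust the paths.

```javascript
// Illustrative sketch of the "Process API Response" Code node.
// The response structure below (candidates[0].content.parts, citationMetadata)
// is an assumption; inspect an actual API response and adjust the paths.
const response = items[0].json;

const parts = response?.candidates?.[0]?.content?.parts || [];
const texts = parts.map(p => p.text).filter(Boolean);

// Convention assumed here: earlier text parts hold the model's reasoning
// and the last one holds the final answer. Adjust if your payload differs.
const reasoning = texts.length > 1 ? texts.slice(0, -1).join('\n') : '';
const answer = texts.length > 0 ? texts[texts.length - 1] : '';

// Citation URLs, if the response includes citation metadata.
const citations = (response?.candidates?.[0]?.citationMetadata?.citationSources || [])
  .map(source => source.uri)
  .filter(Boolean);

return [{ json: { reasoning, answer, citations } }];
```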
Extract links and URLs from PDF documents using PDF.co
📝 Description

This workflow extracts all links (URLs) contained in a PDF file by converting the PDF to HTML via PDF.co and then extracting the URLs present in the resulting HTML. Unlike the traditional Read PDF node, which only returns the visible link text, this flow provides the full active URLs, making further processing and analysis easier.

---

📌 Use Cases

- Extract all hyperlinks from PDF documents.
- Automate URL verification and monitoring within documents.
- Extract links from reports, contracts, catalogs, newsletters, or manuals.
- Prepare URLs for validation, classification, or storage.

---

🔗 Workflow Overview

1. The user uploads a PDF file via a web form.
2. The PDF is uploaded to PDF.co.
3. The PDF is converted to HTML (preserving links).
4. The converted HTML is downloaded.
5. URLs are extracted from the HTML using a custom code node (see the sketch after this description).

---

⚙️ Node Breakdown

- Load PDF (formTrigger): Uploads a .pdf file (single file upload).
- Upload (PDF.co API): Uploads the PDF file to PDF.co using binary data.
- PDF to HTML (PDF.co API): Converts the uploaded PDF to HTML using its URL.
- Get HTML (HTTP Request): Downloads the converted HTML from PDF.co.
- Code1 (Function / Code): Parses the HTML content and uses a regex to extract all URLs (http, https, www), outputting an array of objects containing the extracted URLs.

---

📎 Requirements

- Active PDF.co account with an API key.
- PDF.co credentials set up in n8n (PDF.co account).
- Webhook enabled to expose the upload form.

---

🛠️ Suggested Next Steps

- Add nodes to validate extracted URLs (e.g., HTTP requests to check status).
- Store URLs in a database or spreadsheet, or send them via email.
- Extend the flow to filter URLs by domain, type, or pattern.

---

📤 Importing the Template

Import this workflow into n8n via Import workflow and paste the provided JSON.
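Here is a minimal sketch of the Code1 node's URL extraction, assuming the downloaded HTML arrives as a string in a field named data (the field name is illustrative):

```javascript
// Minimal sketch of the Code1 node that extracts URLs from the converted HTML.
// Assumes the HTML string is in item.json.data; adjust the field name to match
// the "Get HTML" node's actual output.
const html = items[0].json.data || '';

// Match http://, https://, and www.-prefixed URLs; stop at quotes, whitespace,
// and angle brackets so HTML attributes don't bleed into the result.
const urlRegex = /(https?:\/\/[^\s"'<>]+|www\.[^\s"'<>]+)/g;
const matches = html.match(urlRegex) || [];

// De-duplicate and return one item per URL.
const unique = [...new Set(matches)];
return unique.map(url => ({ json: { url } }));
```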