Scrape business leads from Google Maps using OpenAI and Google Sheets
Google Maps Data Extraction Workflow for Lead Generation

This workflow is ideal for sales teams, marketers, entrepreneurs, and researchers looking to efficiently gather detailed business information from Google Maps for:
- Lead generation
- Market analysis
- Competitive research

---

Who Is This Workflow For?
- Sales professionals aiming to build targeted contact lists
- Marketers looking for localized business data
- Researchers needing organized, comprehensive business information

---

Problem This Workflow Solves
Manually gathering business contact details from Google Maps is:
- Tedious
- Error-prone
- Time-consuming

This workflow automates data extraction to increase efficiency, accuracy, and productivity.

---

What This Workflow Does
- Automates extraction of business data (name, address, phone, email, website) from Google Maps
- Crawls and extracts additional website content
- Integrates OpenAI to enhance data processing
- Stores structured results in Google Sheets for easy access and analysis
- Uses the Google Search API to fill in missing information

---

Setup
1. Import the provided n8n workflow JSON into your n8n instance.
2. Set your OpenAI and Google Sheets API credentials.
3. Provide your Google Maps Scraper and Website Content Crawler API keys.
4. Ensure SerpAPI is configured to enhance data completeness.

---

Customizing This Workflow to Your Needs
- Adjust scraping parameters: location, business category, country code
- Customize the Google Sheets output format to fit your current data structure
- Integrate additional AI processing steps or APIs for richer data enrichment

---

Final Notes
This structured approach ensures:
- Accurate and compliant data extraction from Google Maps
- Streamlined lead generation
- Actionable, well-organized data ready for business use

📄 Documentation: Notion Guide
🎥 Demo Video: Watch the full tutorial here: YouTube Demo
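As a rough illustration of what happens between the scraper and Google Sheets, a Code node can normalize and de-duplicate the scraped records before they are appended. This is a minimal sketch, not the template's actual node: the field names (`title`, `address`, `phone`, `email`, `website`) are assumptions to adjust to your scraper's real output.

```javascript
// n8n Code node ("Run Once for All Items"): de-duplicate and normalize
// scraped business records before the Google Sheets node.
// Field names are illustrative; match them to your scraper's output.
const seen = new Set();
const leads = [];

for (const item of $input.all()) {
  const biz = item.json;
  // Prefer website, then phone, then name as the dedupe key.
  const key = String(biz.website || biz.phone || biz.title || '')
    .toLowerCase()
    .trim();
  if (!key || seen.has(key)) continue; // skip empty or duplicate leads
  seen.add(key);

  leads.push({
    json: {
      name: biz.title ?? '',
      address: biz.address ?? '',
      phone: biz.phone ?? '',
      email: biz.email ?? '', // often filled in later by the website crawler
      website: biz.website ?? '',
    },
  });
}

return leads;
```

Keeping normalization in one place like this makes it easy to map the cleaned fields directly onto your Google Sheets columns.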
Self-learning AI assistant with permanent memory | GPT, Telegram & Pinecone RAG
Your AI secretary self-learns every day and remembers everything you say (text, audio, and images).

Imagine having a personal AI secretary accessible right from Telegram, ready to assist you with information and remember everything you discuss. This n8n workflow turns Telegram into your intelligent assistant, capable of understanding text, audio, and images, and continuously learning from your interactions. It integrates RAG's offline data ingestion and online querying functionalities, letting you save inspiration and key information permanently in real time, and giving you an AI assistant that remembers all your dialogues and information. It builds and queries a powerful vector database in real time, ensuring relevant and accurate responses. Video guidance on how to set up the Telegram integration is also included.

Who is this for?
This template is ideal for:
- Individuals seeking a personal AI assistant for quick information retrieval and note-taking.
- Professionals who need to keep track of important conversations and insights.
- Anyone interested in leveraging the power of Retrieval-Augmented Generation (RAG) and vector databases for personal knowledge management.
- Users who want a self-learning AI that improves over time based on their interactions.

What problem is this workflow solving?
This workflow addresses the challenge of information overload and the need for an easily accessible, personalized knowledge base. It eliminates the need to manually organize notes and search through past conversations. By automatically storing and retrieving information from a vector database, it makes the knowledge you need effortless to access, when you need it. It also retains information from various media types, such as voice notes and images.

What this workflow does:
This workflow automates the following steps:
- Instant Information Capture: Receives text messages, audio notes (transcribed), and images (with content analysis) directly from your Telegram.
- Intelligent Question Answering: When you ask a question, the AI searches its knowledge base (Pinecone vector store) for relevant information and provides a comprehensive answer. It even considers your recent conversations for context.
- Automatic Knowledge Storage: When you make a statement or provide information, the AI extracts key details and saves them in a Google Docs "memory palace."
- Daily Self-Learning: Every day, the workflow automatically takes all the information stored in the Google Doc, converts it into vector representations, and adds it to its knowledge base (Pinecone vector store). This ensures the AI continuously learns and remembers everything you've shared.
- Image Understanding: Extracts text and information from images you send.
- Audio Transcription: Automatically transcribes your voice notes into text for processing and storage.
- Short-Term Memory: Remembers recent interactions within a session for more context-aware conversations.
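As a rough illustration of the capture step, a single Code node can route incoming Telegram updates by content type before transcription or image analysis. This is a sketch, not the template's actual node; the message shape follows the Telegram Bot API, and the branch names are illustrative.

```javascript
// n8n Code node: route an incoming Telegram update by content type.
// Branch names ("text" / "audio" / "image") are illustrative.
const msg = $input.first().json.message ?? {};

let route = 'text';
if (msg.voice) route = 'audio';      // voice note -> transcription branch
else if (msg.photo) route = 'image'; // photo -> image-analysis branch

return [{
  json: {
    route,
    chatId: msg.chat?.id,
    text: msg.text ?? msg.caption ?? '',
  },
}];
```

A Switch node on the `route` field can then send each message down the matching processing branch.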
Setup:
To get started, connect the following services to your n8n instance:
- Telegram: Connect your Telegram bot API credentials. Video guidance for the Telegram integration setup is included.
- OpenAI: Provide your OpenAI API key for audio transcription and image analysis.
- Pinecone: Set up a Pinecone account and provide your API key and environment. Create a namespace in Pinecone.
- Google Docs: Connect your Google account with access to Google Docs. You'll need to create a Google Doc that will serve as the daily "memory palace" and provide its ID in the workflow.

How to customize this workflow:
- Adjust the AI Agent's personality: Modify the system prompt in the "AI Agent" node to tailor the AI's tone and behavior.
- Expand knowledge sources: Integrate other data sources into the daily learning process, such as emails or other documents, by adding more nodes to the scheduled trigger workflow.
- Add more tools for the AI Agent: Integrate additional tools, such as web search or other APIs, to further enhance its capabilities.
- Modify the daily schedule: Adjust the schedule trigger to run at a different time or interval.
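For reference, the daily self-learning step conceptually reduces to splitting the day's notes into chunks before embedding them into Pinecone. Below is a minimal sketch assuming the Google Docs node outputs the document text in a `content` field; the field name, chunk size, and overlap are all assumptions, and n8n's built-in text-splitter nodes can do this for you instead.

```javascript
// n8n Code node: naive fixed-size chunking of the daily "memory palace"
// text before it is embedded and upserted into Pinecone.
const text = $input.first().json.content ?? '';
const CHUNK_SIZE = 1000; // characters per chunk (arbitrary starting point)
const OVERLAP = 200;     // characters shared between neighbouring chunks

const chunks = [];
for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push({ json: { text: text.slice(start, start + CHUNK_SIZE) } });
}

return chunks;
```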
Automate Dutch Public Procurement Data Collection with TenderNed
TenderNed Public Procurement

What This Workflow Does
This workflow automates the collection of public procurement data from TenderNed (the official Dutch tender platform). It:
- Fetches the latest tender publications from the TenderNed API
- Retrieves detailed information in both XML and JSON formats for each tender
- Parses and extracts key information such as organization names, titles, descriptions, and reference numbers
- Filters results based on your custom criteria
- Stores the data in a database for easy querying and analysis

Setup Instructions
This template comes with sticky notes providing step-by-step instructions in Dutch, along with various query options you can customize.

Prerequisites
- TenderNed API access: register at TenderNed for API credentials

Configuration Steps
1. Set up TenderNed credentials: Add HTTP Basic Auth credentials with your TenderNed API username and password, then apply these credentials to the three HTTP Request nodes:
   - "Tenderned Publicaties"
   - "Haal XML Details"
   - "Haal JSON Details"
2. Customize filters: Modify the "Filter op ..." node to match your specific requirements (for example: specific organizations, contract values, regions).

How It Works

Step 1: Trigger
The workflow can be triggered either manually for testing or automatically on a daily schedule.

Step 2: Fetch Publications
Makes an API call to TenderNed to retrieve a list of recent publications (up to 100 per request).

Step 3: Process & Split
Extracts the tender array from the response and splits it into individual items for processing.

Step 4: Fetch Details
For each tender, the workflow makes two parallel API calls:
- XML endpoint: retrieves the complete tender documentation in XML format
- JSON endpoint: fetches metadata including reference numbers and keywords

Step 5: Parse & Merge
Parses the XML data and merges it with the JSON metadata and batch information into a single data structure.

Step 6: Extract Fields
Maps the raw API data to clean, structured fields including:
- Publication ID and date
- Organization name
- Tender title and description
- Reference numbers (kenmerk, TED number)

Step 7: Filter
Applies your custom filter criteria to focus on relevant tenders only.

Step 8: Store
Inserts the processed data into your database for storage and future analysis.

Customization Tips

Modify API Parameters
In the "Tenderned Publicaties" node, you can adjust:
- offset: starting position for pagination
- size: number of results per request (max 100)
- Additional query parameters for date ranges, status filters, etc.

Add More Fields
Extend the "Splits Alle Velden" node to extract additional fields from the XML/JSON data, such as:
- Contract value estimates
- Deadline dates
- CPV codes (procurement classification)
- Contact information

Integrate Notifications
Add a Slack, Email, or Discord node after the filter to get notified about new matching tenders.

Incremental Updates
Modify the workflow to fetch only new tenders by:
- Storing the last execution timestamp
- Adding date filters to the API query
- Only processing publications newer than the last run (see the sketch below)

Troubleshooting
No data returned?
- Verify that your TenderNed API credentials are correct
- Check that you have set up your filter properly

Need help setting this up, or interested in a complete tender analysis solution? Get in touch: 🔗 LinkedIn – Wessel Bulte
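The incremental-updates tip can be prototyped with n8n's workflow static data. Below is a minimal sketch, assuming each item carries a `publicatieDatum` field; that field name is an assumption, so match it to whatever your "Splits Alle Velden" node actually produces.

```javascript
// n8n Code node: keep only publications newer than the previous run.
// Note: workflow static data only persists across automatic (scheduled)
// executions, not manual test runs.
const staticData = $getWorkflowStaticData('global');
const lastRun = staticData.lastRun ? new Date(staticData.lastRun) : new Date(0);

const fresh = $input.all().filter((item) => {
  const published = new Date(item.json.publicatieDatum); // assumed field name
  return published > lastRun;
});

staticData.lastRun = new Date().toISOString(); // remember this run for next time

return fresh;
```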
Automated Google Sheet to CSV conversion via Slack messages
Step 1: Slack Trigger
The workflow starts whenever your Slack bot is mentioned or receives an event in a channel. The message that triggered it (including text and channel info) is passed into the workflow.

Step 2: Extract the Sheet ID
The workflow looks inside the Slack message for a Google Sheets link. If it finds one, it extracts the unique spreadsheet ID from that link (a sketch of this extraction appears at the end of this section). It also keeps track of the Slack channel the message came from. If no link is found, the workflow stops quietly.

Step 3: Read Data from Google Sheet
Using the sheet ID, the workflow connects to Google Sheets and reads the data from the chosen tab (the specific sheet inside the spreadsheet). This gives the workflow all the rows and columns of data from that tab.

Step 4: Convert Data to CSV
The rows pulled from Google Sheets are then converted into a CSV file. At this point, the workflow has the spreadsheet data neatly packaged as a file.

Step 5: Upload CSV to Slack
Finally, the workflow uploads the CSV file back into Slack. It can be sent either to a fixed channel or directly to the channel where the request came from. Slack users in that channel will see the CSV as a file upload.

---

How it works
1. The workflow is triggered when your Slack bot is mentioned or receives a message.
2. It scans the message for a Google Sheets link. If a valid link is found, the workflow extracts the unique sheet ID.
3. It then connects to Google Sheets, reads the data from the specified tab, and converts it into a CSV file.
4. Finally, the CSV file is uploaded back into Slack so the requesting user (and others in the channel) can download it.

How to use
1. In Slack, mention your bot and include a Google Sheets link in your message.
2. The workflow will automatically pick up the link and process it.
3. Within a short time, the workflow will upload a CSV file back into the same Slack channel.
4. You can then download or share the CSV file directly from Slack.

Requirements
- Slack App & Credentials: Your bot must be installed in Slack with permissions to receive mentions and upload files.
- Google Sheets Access: The Google account connected in n8n must have at least read access to the sheet.
- n8n Setup: The workflow must be imported into n8n and connected to your Slack and Google Sheets credentials.
- Correct Sheet Tab: The workflow needs to know which tab of the spreadsheet to read (set by name or by sheet ID).

Customising this workflow
- Channel Targeting: By default, the file can be sent back to the channel where the request came from. You can also set it to always post to a fixed channel.
- File Naming: Change the uploaded file name (e.g., include the sheet title or today's date).
- Sheet Selection: Adjust the configuration to read a specific tab, or allow the user to specify the tab in their Slack message.
- Error Handling: Add extra steps to send a Slack message if no valid link is detected, or if the Google Sheet cannot be accessed.
- Formatting: Extend the workflow to clean, filter, or enrich the data before converting it into CSV.
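For reference, the link-detection logic in Step 2 can be expressed in a single Code node. This is a minimal sketch assuming the Slack Trigger passes the raw Events API payload (the `event.text` and `event.channel` fields follow Slack's app_mention event shape).

```javascript
// n8n Code node: extract the spreadsheet ID and origin channel from a
// Slack app_mention event (Step 2 above).
const event = $input.first().json.event ?? {};
const match = (event.text ?? '').match(/\/spreadsheets\/d\/([a-zA-Z0-9_-]+)/);

if (!match) {
  return []; // no Google Sheets link found: stop quietly
}

return [{
  json: {
    sheetId: match[1],      // the unique spreadsheet ID
    channel: event.channel, // reply back to the requesting channel
  },
}];
```

Returning an empty array when no link is found is what lets the workflow "stop quietly" without raising an error.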