Daily Google Ads performance to Notion and Google Sheets
Description
This workflow automates the daily reporting of Google Ads campaign performance. It pulls click and conversion data from the Google Ads API, merges the two datasets, and stores the results in Notion databases and Google Sheets.
It includes a campaign-level log and a daily performance summary. The workflow is triggered automatically every day at 08:00 AM, helping marketing teams maintain a consistent and centralized reporting system without manual effort.
How It Works
Scheduled Trigger at 08:00 AM
The workflow begins with a Schedule Trigger node that runs once per day at 08:00.
Set Yesterday’s Date
The Set node defines a variable for the target date (yesterday), which is used in the API queries.
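A minimal sketch of this step as a Code node (the `reportDate` field name is an assumption — a Set node with an equivalent expression works just as well):

```javascript
// Compute yesterday's date as YYYY-MM-DD for use in the GAQL queries below.
const d = new Date();
d.setDate(d.getDate() - 1);
const reportDate = d.toISOString().slice(0, 10); // e.g. "2025-01-14"
return [{ json: { reportDate } }];
```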
Query Google Ads API – Clicks & Cost
The first HTTP request pulls campaign-level metrics: campaign.id, campaign.name, metrics.clicks, metrics.impressions, metrics.cost_micros.
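In practice this is a POST to the Google Ads searchStream endpoint with a GAQL query in the body. A hedged sketch of a Code node preparing that body — the endpoint path, API version, and the `reportDate` field are assumptions to adapt to your account:

```javascript
// Intended for: POST https://googleads.googleapis.com/v17/customers/<CUSTOMER_ID>/googleAds:searchStream
// (replace <CUSTOMER_ID> and the API version with your own values).
const body = {
  query: `
    SELECT campaign.id, campaign.name, metrics.clicks,
           metrics.impressions, metrics.cost_micros, segments.date
    FROM campaign
    WHERE segments.date = '${$json.reportDate}'`,
};
return [{ json: body }];
```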
Query Google Ads API – Conversions
The second HTTP request pulls conversion-related data: metrics.conversions and segments.conversion_action_name.
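The conversions request follows the same pattern, segmented by conversion action (same assumptions as the sketch above):

```javascript
const body = {
  query: `
    SELECT campaign.id, metrics.conversions,
           segments.conversion_action_name, segments.date
    FROM campaign
    WHERE segments.date = '${$json.reportDate}'`,
};
return [{ json: body }];
```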
Split and Merge
Both responses are split into individual campaign rows and merged on campaign.id and segments.date.
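In n8n this is typically a split-out step followed by a Merge node in "Combine" mode on the matching fields. As plain JavaScript, the join amounts to the following (row shapes and field names are illustrative):

```javascript
// Join click rows and conversion rows on the composite key campaignId|date.
const clickRows = [
  { campaignId: '123', date: '2025-01-14', clicks: 42, impressions: 900, costMicros: 1500000 },
];
const conversionRows = [
  { campaignId: '123', date: '2025-01-14', conversions: 3, conversionType: 'purchase' },
];
const key = (r) => `${r.campaignId}|${r.date}`;
const byKey = new Map(conversionRows.map((r) => [key(r), r]));
const merged = clickRows.map((r) => ({ ...r, ...(byKey.get(key(r)) ?? {}) }));
console.log(merged); // one enriched row per campaign per day
```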
Store Campaign-Level Data
- Stored in Notion database: "Google Ads Campaign Tracker"
- Appended to Google Sheets tab: "Campaign Daily Report"
Generate Daily Summary
A Code node calculates daily totals across all campaigns:
- Total impressions, clicks, conversions, cost
- Unique conversion types
The summary is stored in:
- Notion database: "Google Ads Daily Summary"
- Google Sheets tab: "Summary Report"
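A sketch of what that Code node might look like, using the illustrative field names from the merge step above (note that cost_micros is divided by 1,000,000 to get the amount in the account currency):

```javascript
// Aggregate all incoming campaign rows into a single daily-summary item.
const rows = $input.all().map((i) => i.json);
const sum = (field) => rows.reduce((s, r) => s + (r[field] ?? 0), 0);
return [{
  json: {
    date: rows[0]?.date,
    totalImpressions: sum('impressions'),
    totalClicks: sum('clicks'),
    totalConversions: sum('conversions'),
    totalCost: sum('costMicros') / 1_000_000,
    conversionTypes: [...new Set(rows.map((r) => r.conversionType).filter(Boolean))].join(', '),
  },
}];
```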
Setup Steps
1. Schedule the Workflow
- The workflow is triggered using a Schedule Trigger node
- Set the schedule to run every day at 08:00 AM
- Connect it to the Set Yesterday Date node
2. Google Ads API Access
- Create a Google Ads developer account and obtain a developer token
- Set up OAuth2 credentials with Google Ads scope
- In n8n, configure the Google Ads OAuth2 API credential
- Ensure HTTP request headers include developer-token, login-customer-id, and Content-Type: application/json, as sketched below
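For reference, the extra headers on the HTTP Request node might look like this (values are placeholders; the Authorization header itself is normally injected by the OAuth2 credential rather than set by hand):

```javascript
// Placeholder header values — substitute your own.
const headers = {
  'developer-token': 'YOUR_DEVELOPER_TOKEN',
  'login-customer-id': '1234567890', // manager account ID, digits only
  'Content-Type': 'application/json',
};
```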
3. Notion Database Setup
Create two databases in Notion:
- Google Ads Campaign Tracker
  - Fields: Campaign Name, Campaign ID, Impressions, Clicks, Cost, Conversion Type, Conversions, Date
- Google Ads Daily Summary
  - Fields: Date, Total Impressions, Total Clicks, Total Conversions, Total Cost, Conversion Types
- Share both databases with your Notion integration
4. Google Sheets Setup
- Create a spreadsheet with two tabs:
  - Campaign Daily Report → for campaign-level rows
  - Summary Report → for daily aggregated metrics
- Match all column headers to the workflow fields
- Connect your Google account to n8n using Google Sheets OAuth2
Output Summary
Notion Databases:
- Google Ads Campaign Tracker: stores individual campaign metrics
- Google Ads Daily Summary: stores daily totals and conversion types
Google Sheets Tabs:
- Campaign Daily Report: per-campaign data
- Summary Report: aggregated daily performance
Daily Google Ads Performance to Notion and Google Sheets
This n8n workflow automates the daily retrieval of Google Ads performance data, processes it, and then stores it in both a Google Sheet and a Notion database. It simplifies the process of tracking ad campaign performance across multiple platforms.
What it does
This workflow performs the following steps:
- Triggers on a schedule: The workflow is set to run automatically at predefined intervals (e.g., daily).
- Fetches Google Ads Data: Makes an HTTP request to a Google Ads API endpoint to retrieve performance metrics.
- Processes Data: Uses a "Code" node to transform or filter the raw data received from Google Ads.
- Edits Fields: Further refines the data by setting or modifying specific fields using a "Set" node.
- Merges Data: Combines the click and conversion data streams into a unified dataset for the storage steps that follow.
- Updates Google Sheet: Appends or updates rows in a specified Google Sheet with the processed Google Ads performance data.
- Updates Notion Database: Creates or updates pages in a Notion database with the same performance metrics, providing a dashboard-like view.
Prerequisites/Requirements
To use this workflow, you will need:
- n8n Instance: A running instance of n8n.
- Google Ads API Access: Credentials and configuration for accessing the Google Ads API via an HTTP Request node. This typically involves setting up a Google Cloud Project and enabling the Google Ads API.
- Google Sheets Account: A Google account with access to Google Sheets, and a specific spreadsheet and sheet configured to receive the data. You will need to set up a Google Sheets credential in n8n.
- Notion Account: A Notion account with a database set up to store the ad performance data. You will need to set up a Notion credential in n8n.
Setup/Usage
- Import the workflow: Download the provided JSON and import it into your n8n instance.
- Configure Credentials:
- Google Sheets: Set up a Google Sheets OAuth2 credential in n8n.
- Notion: Set up a Notion API Key credential in n8n.
- Google Ads (HTTP Request): Configure the "HTTP Request" node with the necessary authentication (e.g., OAuth2, API Key, or custom headers) for your Google Ads API.
- Customize Nodes:
- HTTP Request (Google Ads): Update the URL, headers, and body of the HTTP request to match your specific Google Ads API query for performance data.
- Code: Modify the JavaScript code within the "Code" node if you need to adjust data processing logic.
- Edit Fields (Set): Adjust the fields being set or modified to match your desired output structure.
- Google Sheets: Specify the Spreadsheet ID and Sheet Name where the data should be written. Configure the operation (e.g., "Append Row" or "Update Row") and map the incoming data to the correct columns.
- Notion: Specify the Database ID in Notion and map the incoming data to the properties of your Notion database pages.
- Activate the workflow: Enable the workflow to run on its schedule.