Analyze Reddit content and comments for sentiment with DeepSeek AI

Gerald Denor

Reddit Sentiment Analysis with AI-Powered Insights

Automatically analyze Reddit posts and comments to extract sentiment, emotional tone, and actionable community insights using AI.

This powerful n8n workflow combines Reddit's API with advanced AI sentiment analysis to help community managers, researchers, and businesses understand public opinion and engagement patterns on Reddit. Get structured insights including sentiment scores, toxicity levels, trending concerns, and moderation recommendations.

Features

  • Comprehensive Sentiment Analysis: Categorizes content as Positive, Negative, or Neutral with confidence scores
  • Emotional Intelligence: Detects emotional tones like excitement, frustration, concern, or sarcasm
  • Content Categorization: Identifies discussion types (questions, complaints, praise, debates)
  • Toxicity Detection: Flags potentially harmful content with severity levels
  • Community Insights: Analyzes engagement quality and trending concerns
  • Actionable Intelligence: Provides moderation recommendations and response urgency levels
  • Batch Processing: Efficiently processes multiple posts and their comments
  • Structured JSON Output: Returns organized data ready for further analysis or integration

How It Works

The workflow follows a two-stage process:

  1. Data Collection: Fetches recent posts from specified subreddits along with their comments
  2. AI Analysis: Processes content through DeepSeek AI to generate comprehensive sentiment and contextual insights

Use Cases

  • Community Management: Monitor sentiment trends and identify posts requiring moderator attention
  • Brand Monitoring: Track public opinion about your products or services on Reddit
  • Market Research: Understand customer sentiment and concerns in relevant communities
  • Content Strategy: Identify what type of content resonates positively with your audience
  • Crisis Management: Quickly detect and respond to negative sentiment spikes

Required Credentials

Before setting up this workflow, you'll need to obtain the following credentials:

Reddit OAuth2 API

  1. Go to Reddit App Preferences (https://www.reddit.com/prefs/apps)
  2. Click "Create App" or "Create Another App"
  3. Choose "web app" as the app type
  4. Fill in the required fields:
    • Name: Your app name
    • Description: Brief description of your app
    • Redirect URI: the OAuth callback URL n8n shows when you create the credential (typically https://your-n8n-instance.com/rest/oauth2-credential/callback on a hosted instance; http://localhost:8080/oauth/callback is only suitable for local testing)
  5. Note down your Client ID and Client Secret

OpenRouter API

  1. Visit OpenRouter (https://openrouter.ai)
  2. Sign up for an account
  3. Navigate to your API Keys section
  4. Generate a new API key
  5. Copy the API key for use in n8n
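
Before wiring the key into n8n, you can optionally confirm it works with a direct chat completion request to OpenRouter (the model slug below is illustrative; check OpenRouter's model list for the current DeepSeek identifier):

curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-chat",
    "messages": [{"role": "user", "content": "Reply with OK"}]
  }'

A valid key returns a normal chat completion; an invalid key returns a 401 error.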

Step-by-Step Setup Instructions

Step 1: Import the Workflow

  1. Copy the workflow JSON from this template
  2. In your n8n instance, click the "+" button to create a new workflow
  3. Select "Import from URL" or "Import from Clipboard"
  4. Paste the workflow JSON and click "Import"

Step 2: Configure Reddit Credentials

  1. Click on any Reddit node (e.g., "Get many posts")
  2. In the credentials dropdown, click "Create New"
  3. Select "Reddit OAuth2 API"
  4. Enter your Reddit app credentials:
    • Client ID: Your Reddit app client ID
    • Client Secret: Your Reddit app client secret
    • Auth URI: https://www.reddit.com/api/v1/authorize
    • Access Token URI: https://www.reddit.com/api/v1/access_token
  5. Click "Connect my account" and authorize the app
  6. Save the credentials
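
As an optional sanity check outside n8n, an app-only token request confirms the client ID/secret pair is valid (n8n performs the full OAuth flow itself, so this step is purely diagnostic; Reddit expects a descriptive User-Agent):

curl -X POST https://www.reddit.com/api/v1/access_token \
  -u "YOUR_CLIENT_ID:YOUR_CLIENT_SECRET" \
  -d "grant_type=client_credentials" \
  -A "n8n-sentiment-setup/0.1 by YOUR_REDDIT_USERNAME"

A JSON response containing an access_token confirms the credentials work.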

Step 3: Configure OpenRouter Credentials

  1. Click on the "OpenRouter Chat Model1" node
  2. In the credentials dropdown, click "Create New"
  3. Select "OpenRouter API"
  4. Enter your OpenRouter API key
  5. Save the credentials

Step 4: Test the Webhook

  1. Click on the "Webhook" node
  2. Copy the webhook URL (it will look like: https://your-n8n-instance.com/webhook/reddit-sentiment)
  3. Test the webhook using a tool like Postman or curl with this sample payload:
{
  "subreddit": "technology",
  "query": "AI",
  "limit": 5
}
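
For example, with curl (replace the URL with the one copied from your Webhook node):

curl -X POST https://your-n8n-instance.com/webhook/reddit-sentiment \
  -H "Content-Type: application/json" \
  -d '{"subreddit": "technology", "query": "AI", "limit": 5}'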

Step 5: Customize the Analysis

  1. Modify the Structured Data Generator prompt: Edit the prompt in the "Structured Data Generator" node to adjust the analysis criteria or output format
  2. Change the AI model: In the "OpenRouter Chat Model1" node, you can switch to different models like anthropic/claude-3-haiku or openai/gpt-4 based on your preferences and budget
  3. Adjust post limits: Modify the limit parameter in the "Get many posts" and "Get many comments in a post" nodes to control how much data you process

Usage Instructions

Making API Calls

Send a POST request to your webhook URL with the following parameters:

Required Parameters:

  • subreddit: The subreddit name (without r/)
  • limit: Number of posts to analyze (recommended: 5-15)

Optional Parameters:

  • query: Search term to filter posts (optional)

Example Request:

curl -X POST https://your-n8n-instance.com/webhook/reddit-sentiment \
  -H "Content-Type: application/json" \
  -d '{
    "subreddit": "CustomerService",
    "limit": 10
  }'

Understanding the Output

The workflow returns a JSON array with detailed analysis for each post:

[
  {
    "sentiment_analysis": {
      "overall_sentiment": {
        "category": "Negative",
        "confidence_score": 8
      },
      "emotional_tone": ["frustrated", "concerned"],
      "intensity_level": "High"
    },
    "content_categorization": {
      "discussion_type": "Complaint",
      "key_themes": ["billing issues", "customer support"],
      "toxicity_level": {
        "level": "Low",
        "indicators": "No offensive language detected"
      }
    },
    "contextual_insights": {
      "community_engagement_quality": "Constructive",
      "potential_issues_flagged": ["service disruption"],
      "trending_concerns": ["response time", "resolution process"]
    },
    "actionable_intelligence": {
      "moderator_attention_needed": {
        "required": "Yes",
        "reason": "Customer complaint requiring company response"
      },
      "response_urgency": "High",
      "suggested_follow_up_actions": [
        "Escalate to customer service team",
        "Monitor for similar complaints"
      ]
    }
  }
]
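
Downstream systems can consume this array directly. As a minimal sketch (Node.js; field names follow the sample above, and the input file name is hypothetical), here is one way to flag the posts that need attention:

// triage.js — assumes the webhook response was saved to analysis.json
const results = require("./analysis.json");

const urgent = results.filter((post) =>
  post.actionable_intelligence.response_urgency === "High" ||
  post.actionable_intelligence.moderator_attention_needed.required === "Yes"
);

console.log(`${urgent.length} of ${results.length} posts need attention`);
for (const post of urgent) {
  console.log(`- [${post.sentiment_analysis.overall_sentiment.category}]`,
    post.content_categorization.key_themes.join(", "));
}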

Workflow Nodes Explanation

Data Collection Nodes

  • Webhook: Receives API requests with subreddit and analysis parameters
  • Get many posts: Fetches recent posts from the specified subreddit
  • Split Out: Processes individual posts for analysis
  • Get many comments in a post: Retrieves comments for each post

Processing Nodes

  • Loop Over Items: Manages batch processing of multiple posts
  • Sentiment Analyzer: Primary AI analysis node that processes content
  • Structured Data Generator: Formats AI output into structured JSON
  • Code: Parses and cleans the AI response (a sketch of this step follows after this list)
  • Respond to Webhook: Returns the final analysis results
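
The template ships with its own parsing logic; as a rough sketch of what this Code node does (n8n Code node in "Run Once for All Items" mode; the text field name is an assumption — adjust it to match the LLM node's actual output), it strips any markdown fences around the model's reply and parses the JSON:

// Clean and parse the LLM output for each item.
return items.map((item) => {
  let raw = (item.json.text || "").trim();
  // Models often wrap JSON in ```json ... ``` fences; remove them if present.
  raw = raw.replace(/^```(?:json)?/i, "").replace(/```$/, "").trim();
  try {
    return { json: JSON.parse(raw) };
  } catch (err) {
    // Keep the raw output for debugging instead of failing the whole run.
    return { json: { error: "Could not parse AI response", raw } };
  }
});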

Customization Options

Adjusting Analysis Depth

  • Modify the limit parameters to analyze more or fewer posts/comments
  • Update the AI prompts to focus on specific aspects (e.g., product mentions, competitor analysis)

Adding Data Storage

  • Connect database nodes to store analysis results for historical tracking
  • Add email notifications for high-priority findings

Integrating with Other Tools

  • Connect to Slack/Discord for real-time alerts
  • Link to Google Sheets for easy data visualization
  • Integrate with CRM systems for customer feedback tracking

Tips for Best Results

  1. Choose Relevant Subreddits: Focus on communities where your target audience is active
  2. Monitor Regularly: Set up scheduled executions to track sentiment trends over time
  3. Customize Prompts: Tailor the AI prompts to your specific industry or use case
  4. Respect Rate Limits: Reddit API has rate limits, so avoid excessive requests
  5. Review AI Output: Periodically check the AI analysis accuracy and adjust prompts as needed

Troubleshooting

Common Issues

"Reddit API Authentication Failed"

  • Verify your Reddit app credentials are correct
  • Ensure your redirect URI matches your n8n instance
  • Check that your Reddit app is set as "web app" type

"OpenRouter API Error"

  • Confirm your API key is valid and has sufficient credits
  • Check that the selected model is available
  • Verify your account has access to the chosen model

"Webhook Not Responding"

  • Ensure the workflow is activated
  • Check that the webhook URL is correct
  • Verify the request payload format matches the expected structure

"AI Analysis Returns Errors"

  • Review the prompt formatting in the Structured Data Generator
  • Check if the selected AI model supports the required features
  • Ensure the input data is not empty or malformed

Performance Considerations

  • Rate Limits: Reddit allows 60 requests per minute for OAuth applications
  • AI Costs: Monitor your OpenRouter usage to manage costs
  • Processing Time: Larger batches will take longer to process
  • Memory Usage: Consider n8n instance resources when processing large datasets

Contributing

This workflow can be extended and improved. Consider adding:

  • Support for multiple subreddits in a single request
  • Historical sentiment tracking and trend analysis
  • Integration with visualization tools
  • Custom classification models for industry-specific analysis

Ready to start analyzing Reddit sentiment? Import this workflow and start gaining valuable insights into online community discussions!

n8n Workflow: Analyze Reddit Content and Comments for Sentiment with DeepSeek AI

This n8n workflow provides a robust solution for extracting and analyzing sentiment from Reddit posts and their associated comments using the DeepSeek AI model via OpenRouter. It's designed to be triggered by a webhook, allowing for flexible integration with other systems or manual execution.

What it does

This workflow automates the following steps:

  1. Receives a Webhook Trigger: The workflow starts when it receives an incoming webhook. This webhook is expected to contain a Reddit post ID.
  2. Fetches Reddit Post and Comments: Using the provided post ID, the workflow retrieves the main Reddit post and all its comments.
  3. Prepares Data for AI Analysis: It then processes the fetched Reddit data, extracting relevant text content (post title, self-text, and comment bodies) and formatting it into a clean, structured input for the AI model.
  4. Loops Over Items: The workflow iterates through each piece of content (the main post and individual comments) to prepare them for sentiment analysis.
  5. Performs Sentiment Analysis with DeepSeek AI: For each item, it uses a LangChain Basic LLM Chain, powered by the DeepSeek AI model (via OpenRouter), to analyze the sentiment. The AI is prompted to classify the sentiment as "Positive," "Negative," or "Neutral" and provide a brief explanation (an illustrative prompt is shown after this list).
  6. Structures AI Results: The raw AI output is then parsed and structured to extract the sentiment label and explanation.
  7. Responds to Webhook: Finally, the workflow compiles all the sentiment analysis results and sends them back as a response to the initial webhook trigger.
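
For reference, the classification instruction given to the model looks roughly like this (illustrative wording, not the template's verbatim prompt):

Analyze the sentiment of the following Reddit content.
Classify it as "Positive", "Negative", or "Neutral" and give a brief explanation.
Respond only with JSON: {"sentiment": "<label>", "explanation": "<one sentence>"}

Content: <post title, self-text, or comment body>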

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running n8n instance (cloud or self-hosted).
  • Reddit API Credentials: An authenticated Reddit credential configured in n8n to fetch posts and comments.
  • OpenRouter API Key: An API key for OpenRouter, configured as an "OpenRouter Chat Model" credential in n8n. This is used to access the DeepSeek AI model.

Setup/Usage

  1. Import the Workflow:
    • Download the provided JSON file.
    • In your n8n instance, go to "Workflows" and click "New".
    • Click the three dots next to the "New Workflow" button and select "Import from JSON".
    • Paste the workflow JSON or upload the file.
  2. Configure Credentials:
    • Reddit Node: Click on the "Reddit" node (ID: 445). Select or create a new "Reddit API" credential. You will need to authenticate n8n with your Reddit account.
    • OpenRouter Chat Model Node: Click on the "OpenRouter Chat Model" node (ID: 1281). Select or create a new "OpenRouter Chat API" credential. You will need to provide your OpenRouter API Key.
  3. Activate the Webhook:
    • Click on the "Webhook" node (ID: 47).
    • Copy the "Webhook URL". This URL will be used to trigger the workflow.
  4. Test the Workflow:
    • Activate the workflow by toggling the "Active" switch in the top right corner.
    • Send a POST request to the copied Webhook URL with a JSON body containing a redditPostId property. For example:
      {
        "redditPostId": "example_post_id"
      }
      
      Replace "example_post_id" with an actual Reddit post ID (e.g., from a URL like https://www.reddit.com/r/n8n/comments/189z29r/n8n_workflow_to_get_all_comments_from_a_post/, the ID is 189z29r).
    • Observe the execution results in n8n to see the sentiment analysis output.
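
For example, with curl (placeholder URL; use the one copied in step 3):

curl -X POST https://your-n8n-instance.com/webhook/your-webhook-path \
  -H "Content-Type: application/json" \
  -d '{"redditPostId": "189z29r"}'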

This workflow is ready to be integrated into your data pipelines for automated Reddit sentiment monitoring and analysis.
