4 templates found

Automate web research with GPT-4, Claude & Apify for content analysis and insights

This n8n template demonstrates how to automate comprehensive web research using multiple AI models to find, analyze, and extract insights from authoritative sources.

Use cases are many: try automating competitive analysis research, finding the latest regulatory guidance from official sources, gathering authoritative content for reports, or conducting market research on industry developments!

**Good to know**

- Each research query typically costs $0.08-$0.34, depending on the number of sources found and processed. The workflow includes smart filtering to minimize unnecessary API calls.
- The workflow requires multiple AI services and may need additional setup time compared to simpler templates.
- Qdrant storage is optional and can be removed without affecting performance.

**How it works**

- Your research question is transformed into optimized Google search queries that target authoritative sources while filtering out low-quality sites.
- Apify's RAG Web Browser scrapes the content and converts pages to clean markdown.
- Claude Sonnet 4 evaluates each article for relevance and quality before full processing.
- Articles that pass the filter are analyzed in parallel: one pipeline creates focused summaries while another extracts specific claims and evidence.
- GPT-4.1 Mini ranks all findings and presents the top 3 most valuable insights and summaries.
- All processed content is stored in your Qdrant vector database to prevent duplicate processing and enable future reference.

**How to use**

- The manual trigger node is used as an example, but feel free to replace it with other triggers such as webhooks, form submissions, or scheduled research.
- You can modify the configuration variables in the Set node to customize Qdrant URLs, collection names, and quality thresholds for your specific needs.
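The duplicate-prevention step can be sketched as a small Code-node-style function. In the real workflow the lookup is a Qdrant query against previously stored content; here a plain `Set` of seen URLs stands in for the vector store, and the function name and article shape are illustrative:

```javascript
// Skip any article whose URL has already been processed, so the same
// source is never summarized and stored twice.
function filterNewArticles(articles, seenUrls) {
  const seen = new Set(seenUrls.map((u) => u.toLowerCase()));
  const fresh = [];
  for (const article of articles) {
    const key = article.url.toLowerCase();
    if (seen.has(key)) continue; // already stored: skip re-processing
    seen.add(key);               // also guards against dupes within this batch
    fresh.push(article);
  }
  return fresh;
}

const batch = [
  { url: "https://example.gov/rule-update", title: "Rule update" },
  { url: "https://example.gov/rule-update", title: "Rule update (dupe)" },
  { url: "https://example.org/analysis", title: "Analysis" },
];
console.log(filterNewArticles(batch, ["https://example.org/analysis"]));
// only the first example.gov article remains
```

In the template itself, this check happens before the summarization and claim-extraction pipelines, which is what keeps the per-query API cost down.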
**Requirements**

- OpenAI API account for GPT-4.1 Mini (query optimization, summarization, ranking)
- Anthropic API account for Claude Sonnet 4 (content filtering)
- Apify account for web scraping capabilities
- Qdrant vector database instance (local or cloud)
- Ollama with the nomic-embed-text model for embeddings

**Customizing this workflow**

Web research automation can be adapted for many specialized use cases. Try focusing on specific domains such as legal research (targeting .gov and .edu sites), medical research (PubMed and health authorities), or financial analysis (SEC filings and analyst reports).
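The domain-focused customization above can be expressed as a small query-building helper. The `site:` operator syntax is standard Google search; the function name and shape are ours, not part of the template:

```javascript
// Scope a research question to specific domains, e.g. legal research
// restricted to .gov and .edu sources.
function buildScopedQuery(question, domains) {
  if (!domains || domains.length === 0) return question;
  const scope = domains.map((d) => `site:${d}`).join(" OR ");
  return `${question} (${scope})`;
}

console.log(buildScopedQuery("data privacy regulations 2024", [".gov", ".edu"]));
// → data privacy regulations 2024 (site:.gov OR site:.edu)
```

A string like this could feed the query-optimization step so that every generated search already carries the domain restriction.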

By Peter Zendzian

Create viral storytelling videos with GPT, Gemini & JsonCut from text prompts

Generate ready-to-publish short-form videos from text prompts using AI.

[Example output (Google Drive)](https://drive.google.com/file/d/1Cl0KwgRgcuBPVdGgL-nqAcheyvfVXttD/view)

Transform simple text concepts into professional short-form videos complete with AI-generated visuals, narrator voice, background music, and dynamic text overlays - all automatically generated and ready for Instagram, TikTok, or YouTube Shorts. This workflow demonstrates a cost-effective approach to video automation by combining AI-generated images with audio composition instead of expensive AI video generation. Processing takes 1-2 minutes and outputs professional 9:16 vertical videos optimized for social platforms. The template serves as both a showcase and a building block for larger automation systems, with sticky notes providing clear guidance for customization and extension.

**Who's it for**

Content creators, social media managers, and marketers who need consistent, high-quality video content without manual production work. Perfect for motivational content, storytelling videos, educational snippets, and brand campaigns.

**How it works**

The workflow uses a form trigger to collect the video theme, setting, and style preferences. ChatGPT generates cohesive scripts and image prompts, Google Gemini creates themed background images, and OpenAI TTS produces the narrator audio. Background music is sourced from Openverse for CC-licensed tracks. All assets are uploaded to the JsonCut API, which composes the final video with synchronized overlays, transitions, and professional audio mixing. Results are stored in NocoDB for management.

**How to set up**

- JsonCut API: Sign up at jsoncut.com and create an API key at app.jsoncut.com. Configure an HTTP Header Auth credential in n8n with the header name x-api-key.
- OpenAI API: Set up credentials for script generation and text-to-speech.
- Google Gemini API: Configure access for Imagen 4.0 image generation.
- NocoDB (optional): Set up an instance for video storage and configure database credentials.

**Requirements**

- JsonCut free account with API key
- OpenAI API access for GPT and TTS
- Google Gemini API for image generation
- NocoDB (optional) for result storage

**How to customize the workflow**

This template is designed as a foundation for larger automation systems. The modular structure allows easy modification of AI prompts for different content niches (business, wellness, education), replacement of the form trigger with RSS feeds or database triggers for automated content generation, integration with social media APIs for direct publishing, and customization of visual branding through JsonCut configuration. The workflow can be extended for bulk processing, A/B testing multiple variations, or integration with existing content management systems. Sticky notes throughout the workflow provide detailed guidance for common customizations and scaling options.
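The composition step can be sketched as assembling a render request for a JSON-driven video API such as JsonCut. The field names below are illustrative only, not JsonCut's actual schema - consult the docs at jsoncut.com for the real payload format:

```javascript
// Build a hypothetical 9:16 render config from the generated assets:
// themed images, narrator audio, CC-licensed music, and text overlays.
function buildRenderConfig({ imageUrls, narrationUrl, musicUrl, captions }) {
  return {
    output: { width: 1080, height: 1920, format: "mp4" }, // vertical 9:16
    tracks: [
      { type: "images", sources: imageUrls, transition: "fade" },
      { type: "audio", source: narrationUrl, role: "voiceover" },
      { type: "audio", source: musicUrl, role: "music", volume: 0.2 }, // duck under voice
      { type: "text", items: captions }, // synchronized overlays
    ],
  };
}

const config = buildRenderConfig({
  imageUrls: ["scene1.png", "scene2.png"],
  narrationUrl: "narration.mp3",
  musicUrl: "track.mp3",
  captions: [{ text: "Stay consistent", start: 0, end: 3 }],
});
console.log(JSON.stringify(config, null, 2));
```

In n8n, a payload like this would be sent via an HTTP Request node carrying the `x-api-key` header credential configured during setup.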

By 8Automator

Answer real estate questions with AI using PropertyFinder.ae, OpenRouter, and SerpAPI

AI real estate agent with OpenRouter and SerpAPI that talks with property listings from propertyfinder.ae.

This n8n template demonstrates a simple AI Agent that can:

- Scrape information from a provided propertyfinder.ae listing link.
- Answer questions about a specific property using the scraped information.
- Use SerpAPI to find details that are missing from the scraped data.
- Answer general real-estate questions using SerpAPI.

---

**Use Case**

This workflow serves as a starting point for building complex AI assistants for real estate or other domains. See the demo video.

---

**Potential Enhancements**

- Expand knowledge: augment the workflow with your own knowledge base using a vector database (RAG approach).
- Add more sources: adapt the scraper to support other real estate websites.
- Optimize speed: add a cache for scraped data to reduce response latency.
- Improve context handling: implement reliable persistence to track the current listing instead of iterating through conversation history.
- Customize prompts: write more tailored prompts for your specific needs (the current one is for demonstration only).
- Integrate channels: connect the workflow to communication channels like Instagram, Telegram, or WhatsApp.

---

**How It Works**

1. The workflow is triggered by a "When chat message received" node for simple demonstration.
2. The Chat Memory Manager node extracts the last 30 messages for the current session.
3. A Code node finds the property link, first by checking the most recent user message and then by searching the conversation history.
4. If a link is found, an HTTP Request node scrapes the HTML content from the listing page.
5. The Summarize Code node parses the HTML, retrieves key information, and passes it to the AI Agent as a temporary knowledge base.
6. The final AI Agent node answers user queries using the scraped knowledge base and falls back to the SerpAPI tool when information is missing.
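The link-detection step can be sketched as a Code-node-style function: check the newest user message first, then walk back through the conversation history. The message shape (`role`/`content`) is illustrative:

```javascript
// Match a propertyfinder.ae listing URL anywhere in a message.
const LISTING_RE = /https?:\/\/(?:www\.)?propertyfinder\.ae\/\S+/i;

function findPropertyLink(messages) {
  // Newest message is last, so scan in reverse to prefer recent links.
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role !== "user") continue;
    const match = messages[i].content.match(LISTING_RE);
    if (match) return match[0];
  }
  return null; // no listing link anywhere in the window
}

const history = [
  { role: "user", content: "Check https://www.propertyfinder.ae/en/plp/buy/apartment-123.html" },
  { role: "assistant", content: "Sure - what would you like to know?" },
  { role: "user", content: "How many bedrooms does it have?" },
];
console.log(findPropertyLink(history));
// → https://www.propertyfinder.ae/en/plp/buy/apartment-123.html
```

Scanning in reverse is what lets a follow-up question like "How many bedrooms?" still resolve to the listing shared earlier in the session.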
---

**How to Use**

You can test this workflow directly in n8n or integrate it into any social media channel or your website.

- The AI Agent node is configured to use OpenRouter. Add your OpenRouter credentials, or replace the node with your preferred LLM provider.
- Add your SerpAPI key to the SerpAPI tool within the AI Agent node.

---

**Requirements**

- An API key for OpenRouter (or credentials for your preferred LLM provider).
- A SerpAPI key. You can get one from their website; a free plan is available for testing.

---

**Need Help Building Something More?**

Contact me on:

- Telegram: @ninesfork
- LinkedIn: George Zargaryan

Happy Hacking! 🚀

By George Zargaryan

Send severe weather alerts from Visual Crossing to Telegram

**How it works**

This workflow automates fetching weather forecasts for your home location, including severe weather alerts, and sends timely notifications. It uses the Visual Crossing API for detailed weather data and integrates with Telegram (or other messaging services) for messaging and alerts.

**Step-by-step**

In summary, the workflow runs every hour, grabs the current day's weather conditions for [your city/location of interest], and returns only those items that actually contain one or more weather alerts.

📅 Step 1: Hourly Trigger

The workflow begins with the Hourly Trigger node, a scheduleTrigger. This node acts as the clock that initiates the entire process at regular hourly intervals.

🌤️ Step 2: Fetch Weather Data

Immediately after the trigger, the workflow moves to the Meteo node, an httpRequest. This node makes an external API call to fetch weather data for your specified location.

- API used: Visual Crossing Web Services
- Authentication: uses your API key (key=[API KEY])
- Response format: JSON

🌪🌀 Step 3: Check for Severe Weather

The JSON weather data is analyzed, and if severe weather conditions or alerts are detected, the workflow sends the alert via your preferred communication channel(s).

**Optional**

You can replace the Telegram node with email, WhatsApp, or SMS notifications, or add multiple notification nodes to receive severe weather alerts across all desired channels.
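The Step 3 check can be sketched as a small filter over the API response. The Visual Crossing Timeline API returns an `alerts` array when the request includes `&include=alerts`; verify the exact field names against the Visual Crossing docs for your plan, and note the message formatting below is our own:

```javascript
// Keep only responses that actually carry one or more weather alerts.
function hasSevereAlerts(weather) {
  return Array.isArray(weather.alerts) && weather.alerts.length > 0;
}

// Turn the alerts into a short notification message, or null if there
// is nothing worth sending this hour.
function formatAlertMessage(weather) {
  if (!hasSevereAlerts(weather)) return null;
  const lines = weather.alerts.map((a) => `⚠️ ${a.event}: ${a.headline}`);
  return `Severe weather for ${weather.resolvedAddress}\n${lines.join("\n")}`;
}

const sample = {
  resolvedAddress: "Austin, TX",
  alerts: [{ event: "Severe Thunderstorm Warning", headline: "Valid until 6 PM" }],
};
console.log(formatAlertMessage(sample));
```

Returning `null` when there are no alerts lets an n8n IF node stop the run, so the Telegram node only fires on genuinely severe hours.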

By Razvan Bara