
Host your own AI deep research agent with n8n, Apify and OpenAI o3

This template attempts to replicate OpenAI's DeepResearch feature which, at the time of writing, is only available to their Pro subscribers.

> An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. (Source)

Though the inner workings of DeepResearch have not been made public, it is presumed the feature relies on the ability to deep-search the web, scrape web content and invoke reasoning models to generate reports. All of which n8n is really good at! Using this workflow, n8n users can enjoy a variation of the Deep Research experience for themselves and their teams at a fraction of the cost. Better yet, they can learn from and customise this Deep Research template for their businesses and/or organisations.

Check out the generated reports here: https://jimleuk.notion.site/19486dd60c0c80da9cb7eb1468ea9afd?v=19486dd60c0c805c8e0c000ce8c87acf

How it works
- A form first captures the user's research query and how deep they'd like the researcher to go.
- Once submitted, a blank Notion page is created which will later hold the final report, and the researcher gets to work.
- The user's query goes through a recursive series of web searches and web scraping to collect data on the research topic and generate partial learnings (a minimal Python sketch of this recursion appears at the end of this description).
- Once complete, all learnings are combined and given to a reasoning LLM to generate the final report.
- The report is then written to the placeholder Notion page created earlier.

How to use
- Duplicate this Notion database template and make sure all Notion-related nodes point to it.
- Sign up for an APIFY.com API key for web search and scraping services.
- Ensure you have access to OpenAI's o3-mini model. Alternatively, switch it out for the o1 series.
- You must publish this workflow and ensure the form URL is publicly accessible.

On depth & breadth configuration
For more detailed reports, increase depth and breadth, but be warned the workflow will take exponentially longer and cost more to complete. The recommended defaults are usually good enough.
- Depth=1 & Breadth=2 will take about 5-10 mins.
- Depth=1 & Breadth=3 will take about 15-20 mins.
- Depth=3 & Breadth=5 will take about 2+ hours!

Customising this workflow
- I deliberately chose not to use AI-powered scrapers like Firecrawl as I felt these were quite costly and quotas would be quickly exhausted. However, feel free to switch to web search and scraping services which suit your environment.
- Maybe you decide not to source the web, and data collection comes from internal documents instead. This template gives you the freedom to change this.
- Experiment with different reasoning/thinking models such as DeepSeek and Google's Gemini 2.0.
- Finally, the LLM prompts could definitely be improved. Refine them to fit your use case.

Credits
This template is largely based on the work by David Zhang (dzhng) and his open-source implementation of Deep Research: https://github.com/dzhng/deep-research
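For readers who prefer code to node graphs, here is a minimal Python sketch of the depth/breadth recursion described under "How it works". It is not the workflow itself: generate_queries, search_web, scrape_page and extract_learnings are hypothetical stand-ins for the OpenAI and Apify calls the template makes.

```python
# Minimal sketch of the recursive depth/breadth research strategy.
# All four helpers are placeholders, not real API calls.

def generate_queries(topic: str, n: int) -> list[str]:
    # Placeholder: in the template an LLM expands the topic into sub-queries.
    return [f"{topic} (angle {i + 1})" for i in range(n)]

def search_web(query: str) -> list[str]:
    # Placeholder: the template uses an Apify actor for web search.
    return [f"https://example.com/{query.replace(' ', '-')}"]

def scrape_page(url: str) -> str:
    # Placeholder: the template uses an Apify actor for scraping.
    return f"content of {url}"

def extract_learnings(content: str) -> str:
    # Placeholder: the template asks o3-mini to distil partial learnings.
    return f"learning from: {content[:40]}"

def deep_research(topic: str, depth: int, breadth: int,
                  learnings: list[str] | None = None) -> list[str]:
    learnings = [] if learnings is None else learnings
    if depth == 0:
        return learnings
    for sub_query in generate_queries(topic, breadth):
        for url in search_web(sub_query):
            learnings.append(extract_learnings(scrape_page(url)))
        # Recurse one level deeper on each sub-query.
        deep_research(sub_query, depth - 1, breadth, learnings)
    return learnings

# All partial learnings are then combined and handed to a reasoning model
# to write the final report.
print(deep_research("open-source LLM inference", depth=1, breadth=2))
```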

By Jimleuk
73020

Build an OpenAI assistant with Google Drive integration

Workflow Overview
This workflow automates the creation and management of a custom OpenAI Assistant for a travel agency ("Travel with us"), leveraging Google Drive for document storage. (A rough sketch of the equivalent API calls appears at the end of this description.)

How It Works
1. Create the OpenAI Assistant
   - OpenAI node: creates a custom assistant named "Travel with us" Assistant using the gpt-4o-mini model.
   - Instructions: respond only using the provided document (e.g., agency-specific info); stay friendly, brief, and focused on travel-related queries; ignore irrelevant questions politely.
   - Credentials: requires an OpenAI API key.
2. Upload Agency Document
   - Google Drive node: downloads a Google Doc as a PDF.
   - OpenAI2 node: uploads the PDF to OpenAI with purpose: "assistants", generating a file_id.
3. Update the Assistant with the Document
   - OpenAI node: updates the assistant to include the uploaded file.
4. Chat Interaction
   - Chat Trigger: activates when a message is received ("When chat message received").
   - OpenAI Assistant node: uses the updated assistant to respond to user queries.
   - Memory: Window Buffer Memory retains chat context for coherent conversations.

Set Up Steps
1. Prepare the document: store your travel agency guide in Google Drive (e.g., as a Google Doc) and update the Google Drive node with your document's ID.
2. Configure credentials: connect Google Drive via OAuth2 (googleDriveOAuth2Api) and add your OpenAI API key to all OpenAI nodes.
3. Customize the assistant: modify the instructions in the OpenAI node to reflect your agency's needs and ensure the document includes FAQs, policies, and travel info.
4. Test the workflow: trigger manually ("Test workflow") to create the assistant and upload the file, then send a chat message (e.g., "What are your travel packages?") to test responses.

Dependencies
- Google Drive account: to store and retrieve the agency document.
- OpenAI API access: for assistant creation and file uploads.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
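Outside n8n, the same assistant setup boils down to a couple of OpenAI API calls. A rough sketch using the official openai Python SDK follows; the file name and instructions are examples, and attaching the uploaded file to the assistant goes through the assistant's tool resources, whose exact shape depends on your SDK version.

```python
# Sketch of the assistant creation and file upload done by the workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the agency guide exported from Google Drive as a PDF.
agency_file = client.files.create(
    file=open("travel-with-us-guide.pdf", "rb"),  # example file name
    purpose="assistants",
)

# Create the assistant with the same model and instruction style as the template.
assistant = client.beta.assistants.create(
    name='"Travel with us" Assistant',
    model="gpt-4o-mini",
    instructions=(
        "Answer only from the provided agency document. "
        "Stay friendly and brief, and politely decline questions "
        "that are not travel related."
    ),
)

# Attaching agency_file.id to the assistant is done via its tool resources;
# the exact parameters vary by SDK version, so they are omitted here.
print(agency_file.id, assistant.id)
```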

By Davide
14788

Convert PostgreSQL table to CSV

CSV is a super useful and universal way to transfer data between different tools. This workflow gives an example of how to take data from PostgreSQL and convert it easily into a CSV (a plain-Python equivalent appears at the end of this description).

What you need
Before running the workflow, please make sure you have access to a remote PostgreSQL server and have table data such as:

booktitle,bookauthor,read_date
Demons,Fyodor Dostoyevsky,2022-09-08
Ulysses,James Joyce,2022-05-06
Catch-22,Joseph Heller,2023-01-04
The Bell Jar,Sylvia Plath,2023-01-21
Frankenstein,Mary Shelley,2023-02-14

How it works
1. Trigger the workflow on click.
2. Declare the name of the Excel file and sheet names.
3. Remotely connect to the PostgreSQL database and specify the query to execute.
4. Write the query data to CSV.

The detailed process is explained further in the tutorial: https://blog.n8n.io/postgres-export-to-csv/
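As a point of comparison, here is roughly what the same export looks like in plain Python with psycopg2; the connection parameters and the books table name are placeholders.

```python
# Export a PostgreSQL table to CSV with psycopg2 and the csv module.
import csv
import psycopg2

conn = psycopg2.connect(
    host="your-postgres-host", dbname="your_db",
    user="your_user", password="your_password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT booktitle, bookauthor, read_date FROM books;")
    rows = cur.fetchall()

with open("books.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["booktitle", "bookauthor", "read_date"])
    writer.writerows(rows)
```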

By Alex Emerich
3157

Book appointments with voice using VAPI & Cal.com

This template uses VAPI and Cal.com to book appointments through a voice conversation. It detects whether the user wants to check availability or book an appointment, then responds naturally with real-time scheduling options.

Who is this for?
This workflow is perfect for:
- Voice assistant developers
- AI receptionists and smart concierge tools
- Service providers (salons, clinics, coaches) needing hands-free scheduling
- Anyone building voice-based customer experiences

What does it do?
This workflow turns a natural voice conversation into a working appointment system. It starts with a Webhook connected to your VAPI voice agent. The Set node extracts user intent (like "check availability" or "book now"). A Switch node branches logic based on the intent. If the user wants to check availability, the workflow fetches available times from Cal.com. If the user wants to book, it creates a new event using Cal.com's API. The final result is sent back to VAPI as a conversational voice response (a plain-Python sketch of this branching appears at the end of this description).

How to use it
1. Import this workflow into your n8n instance.
2. Set up a Webhook node and connect it to your VAPI voice agent.
3. Add your Cal.com API token as a credential (use HTTP Header Auth).
4. Deploy and test using VAPI's simulator or real phone input.
5. (Optional) Customize the OpenAI prompt if you're using it to process or moderate inputs.

Requirements
- A working VAPI agent
- A Cal.com account with API access
- n8n (cloud or self-hosted)
- An understanding of how to configure webhook and API credentials in n8n

Customization Ideas
- Swap out Cal.com with another booking API (like Calendly)
- Add a Google Sheets or Supabase node to log appointments
- Use OpenAI to summarize or sanitize voice inputs before proceeding
- Build multi-turn conversations in VAPI for more complex bookings
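The sketch below mirrors the Switch-node branching in plain Python. The Cal.com endpoints and payload fields are assumptions for illustration; check Cal.com's API documentation and adjust them to your event type.

```python
# Sketch of the intent branching behind the Switch node.
import requests

CAL_API_KEY = "your-cal-api-key"       # placeholder
BASE = "https://api.cal.com/v1"        # assumed base URL

def handle_vapi_intent(intent: str, payload: dict) -> str:
    if intent == "check_availability":
        # Assumed availability endpoint; payload might carry date range fields.
        r = requests.get(f"{BASE}/availability",
                         params={"apiKey": CAL_API_KEY, **payload})
        return f"Here are the next available times: {r.json()}"
    if intent == "book_appointment":
        # Assumed booking endpoint; payload might carry name, email, start time.
        r = requests.post(f"{BASE}/bookings",
                          params={"apiKey": CAL_API_KEY}, json=payload)
        return "Your appointment is booked." if r.ok else "Sorry, booking failed."
    # Fallback response returned to the voice agent.
    return "I can check availability or book an appointment for you."
```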

By Nabin Bhandari
2431

Notify user in Slack of quarantined email and create Jira ticket if opened

This n8n workflow serves as an incident response and notification system for handling potentially malicious emails flagged by Sublime Security. It begins with a Webhook trigger that Sublime Security uses to initiate the workflow by POSTing an alert. The workflow then extracts message details from Sublime Security using an HTTP Request node, based on the provided messageId, and subsequently splits into two parallel paths (a rough Python sketch of both paths appears at the end of this description).

In the first path, the workflow looks up a Slack user by email, aiming to find the recipient of the email that triggered the alert. If a user is found in Slack, a notification is sent to them, explaining that they have received a potentially malicious email that has been quarantined and is under investigation. This notification includes details such as the email's subject and sender.

The second path checks whether the flagged email has been opened by inspecting the read_at value from Sublime Security. If the email was opened, the workflow prepares a table summarizing the flagged rules and creates a corresponding issue in Jira Software. The Jira issue contains information about the email, including its subject, sender, and recipient, along with the flagged rules.

Issues that someone might encounter when setting up this workflow for the first time include potential problems with the Slack user lookup if the user information is not available or if the Slack API integration is not configured correctly. Additionally, the issue creation in Jira Software may not work as expected, as indicated by the note that mentions a possible need for node replacement. Thorough testing and validation with sample data from Sublime Security alerts can help identify and resolve any potential issues during setup.
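A rough Python sketch of the two parallel paths is shown below. The Sublime Security message endpoint and its response fields are hypothetical placeholders; the Slack and Jira calls use their standard REST APIs but are untested illustrations with placeholder credentials.

```python
# Sketch of the two paths: Slack notification and conditional Jira issue.
import requests

def handle_alert(message_id: str) -> None:
    # Hypothetical Sublime Security endpoint returning message details.
    msg = requests.get(
        f"https://api.sublimesecurity.com/v0/messages/{message_id}",
        headers={"Authorization": "Bearer SUBLIME_TOKEN"},
    ).json()

    # Path 1: notify the recipient in Slack, if we can find them by email.
    user = requests.get(
        "https://slack.com/api/users.lookupByEmail",
        params={"email": msg["recipient_email"]},  # assumed field name
        headers={"Authorization": "Bearer SLACK_TOKEN"},
    ).json()
    if user.get("ok"):
        requests.post(
            "https://slack.com/api/chat.postMessage",
            headers={"Authorization": "Bearer SLACK_TOKEN"},
            json={"channel": user["user"]["id"],
                  "text": (f"A potentially malicious email ('{msg['subject']}') "
                           "sent to you was quarantined and is being investigated.")},
        )

    # Path 2: if the email was already opened, raise a Jira issue.
    if msg.get("read_at"):
        requests.post(
            "https://your-domain.atlassian.net/rest/api/2/issue",
            auth=("jira-user@example.com", "jira-api-token"),
            json={"fields": {"project": {"key": "SEC"},
                             "issuetype": {"name": "Task"},
                             "summary": f"Opened quarantined email: {msg['subject']}"}},
        )
```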

By n8n Team
2001

AWS news monitoring & LinkedIn content automation with Claude 3 & Feishu

AWS News Analysis and LinkedIn Automation Pipeline
Transform AWS industry news into engaging LinkedIn content with AI-powered analysis and automated approval workflows.

Who's it for
This template is perfect for:
- Cloud architects and DevOps engineers who want to stay current with AWS developments
- Content creators looking to automate their AWS news coverage
- Marketing teams needing consistent, professional AWS content
- Technical leaders who want to share industry insights on LinkedIn
- AWS consultants building thought leadership through automated content

How it works
This workflow creates a comprehensive AWS news analysis and content generation pipeline with two main flows.

Flow 1: News Collection and Analysis
- Scheduled RSS Monitoring: automatically fetches the latest AWS news from the official AWS RSS feed daily at 8 PM.
- AI-Powered Analysis: uses AWS Bedrock (Claude 3 Sonnet) to analyze each news item, extracting a professional summary, key themes and keywords, an importance rating (Low/Medium/High), and a business impact assessment. (A minimal Python sketch of this Bedrock call appears at the end of this description.)
- Structured Data Storage: saves analyzed news to Feishu Bitable with approval status tracking.

Flow 2: LinkedIn Content Generation
- Manual Approval Trigger: a Feishu automation sends approved news items to the webhook.
- AI Content Creation: AWS Bedrock generates professional LinkedIn posts with attention-grabbing headlines, technical insights from a Solutions Architect perspective, business impact analysis, and call-to-action engagement.
- Automated Publishing: posts directly to LinkedIn with relevant hashtags.

How to set up

Prerequisites
- AWS Bedrock access with the Claude 3 Sonnet model enabled
- Feishu account with Bitable access
- LinkedIn company account with posting permissions
- n8n instance (self-hosted or cloud)

Detailed Configuration Steps

AWS Bedrock Setup
Step 1: Enable Claude 3 Sonnet Model
1. Log into your AWS Console and navigate to AWS Bedrock.
2. Go to Model access in the left sidebar.
3. Find Anthropic Claude 3 Sonnet and click Request model access.
4. Fill out the access request form (usually approved within minutes).
5. Once approved, verify the model appears in your Model access list.

Step 2: Create IAM User and Credentials
1. Go to the IAM Console and click Users → Create user (name: n8n-bedrock-user).
2. Attach the AmazonBedrockFullAccess policy (or create a custom policy with minimal permissions).
3. Go to the Security credentials tab → Create access key, choose "Application running outside AWS", and download the credentials CSV file.

Step 3: Configure in n8n
1. In n8n, go to Credentials → Add credential and select the AWS credential type.
2. Enter your Access Key ID and Secret Access Key.
3. Set Region to your preferred AWS region (e.g., us-east-1) and test the connection.

Useful links: AWS Bedrock Documentation, Claude 3 Sonnet Model Access, AWS Bedrock Pricing.

Feishu Bitable Configuration
Step 1: Create Feishu Account and App
1. Sign up at Feishu International and create a new Bitable (multi-dimensional table).
2. Go to the Developer Console → Create App.
3. Enable Bitable permissions in your app and generate an App Token and App Secret.

Step 2: Create Bitable Structure
Create a new Bitable with these columns:
- title (Text)
- pubDate (Date)
- summary (Long Text)
- keywords (Multi-select)
- rating (Single Select: Low, Medium, High)
- link (URL)
- approval_status (Single Select: Pending, Approved, Rejected)

Get your App Token and Table ID:
- App Token: found in the app settings
- Table ID: found in the Bitable URL (tbl...)

Step 3: Set Up Automation
1. In your Bitable, go to Automation → Create automation.
2. Trigger: When field value changes → select the approval_status field.
3. Condition: approval_status equals "Approved".
4. Action: Send HTTP request
   - Method: POST
   - URL: your n8n webhook URL (from Flow 2)
   - Headers: Content-Type: application/json
   - Body: {{record}}

Step 4: Configure Feishu Credentials in n8n
1. Install the Feishu Lite community node (self-hosted only).
2. Add a Feishu credential with your App Token and App Secret, then test the connection.

Useful links: Feishu Developer Documentation, Bitable API Reference, Feishu Automation Guide.

LinkedIn Company Account Setup
Step 1: Create LinkedIn App
1. Go to the LinkedIn Developer Portal and click Create App.
2. Fill in the app details: App name (e.g., AWS News Automation), LinkedIn Page (select your company page), App logo (upload your logo), and accept the legal agreement.

Step 2: Configure OAuth2 Settings
1. In your app, go to the Auth tab.
2. Add the redirect URL: https://your-n8n-instance.com/rest/oauth2-credential/callback
3. Request these scopes:
   - w_member_social (post on behalf of members)
   - r_liteprofile (read basic profile)
   - r_emailaddress (read email address)

Step 3: Get Company Page Access
1. Go to your LinkedIn company page and navigate to Admin tools → Manage admins.
2. Ensure you have a Content admin or Super admin role.
3. Note your Company Page ID (found in the page URL).

Step 4: Configure LinkedIn Credentials in n8n
1. Add a LinkedIn OAuth2 credential and enter your Client ID and Client Secret.
2. Complete the OAuth2 flow by clicking "Connect my account" and select your company page for posting.

Useful links: LinkedIn Developer Portal, LinkedIn API Documentation, LinkedIn OAuth2 Guide.

Workflow Activation
Final setup steps:
1. Import the workflow JSON into n8n.
2. Configure all credential connections: AWS Bedrock, Feishu, and LinkedIn OAuth2.
3. Update the webhook URL in the Feishu automation to match your n8n instance.
4. Activate the scheduled trigger (daily at 8 PM).
5. Test with a manual webhook trigger using sample data.
6. Verify the Feishu Bitable receives data.
7. Test the approval workflow and LinkedIn posting.

Requirements

Service Requirements
- AWS Bedrock with Claude 3 Sonnet model access: an AWS account with the Bedrock service enabled, an IAM user with Bedrock permissions, and model access approval for Claude 3 Sonnet.
- Feishu Bitable for news storage and the approval workflow: a Feishu account (International or Lark), a developer app with Bitable permissions, and automation capabilities for webhook triggers.
- LinkedIn company account for automated posting: a LinkedIn company page with admin access, a LinkedIn Developer app with posting permissions, and an OAuth2 authentication setup.
- n8n community nodes: Feishu Lite node (self-hosted only).

Technical Requirements
- n8n instance (self-hosted recommended for community nodes)
- Webhook endpoint accessible from the Feishu automation
- Internet connectivity for API calls and RSS feeds
- Storage space for workflow execution logs

Cost Considerations
- AWS Bedrock: ~$0.01-0.05 per news analysis
- Feishu: free tier available, paid plans for advanced features
- LinkedIn: free API access with rate limits
- n8n: self-hosted (free) or cloud subscription

How to customize the workflow

Content Customization
- Modify the AI prompts in the AI Agent nodes to change the tone, focus, or target audience.
- Adjust the hashtags in the LinkedIn posting node for different industries.
- Change the scheduling frequency by modifying the Schedule Trigger settings.

Integration Options
- Replace LinkedIn with Twitter/X, Facebook, or other social platforms.
- Add Slack notifications for approved content before posting.
- Integrate with CRM systems to track content performance.
- Add content calendar integration for better planning.

Advanced Features
- Multi-language support by modifying AI prompts for different regions.
- Content categorization by adding tags for different AWS services.
- Performance tracking by integrating analytics platforms.
- Team collaboration by adding approval workflows with multiple reviewers.

Technical Modifications
- Change RSS sources to monitor other AWS blogs or competitor news.
- Adjust AI models to use different Bedrock models or external APIs.
- Add data validation nodes for better error handling.
- Implement retry logic for failed API calls.

Important Notes

Service Limitations
- This template uses community nodes (Feishu Lite) and requires self-hosted n8n.
- Geo-restrictions may apply to AWS Bedrock models in certain regions.
- Rate limits may affect high-frequency posting; adjust scheduling accordingly.
- Content moderation is recommended before automated posting.
- Cost considerations: each AI analysis costs approximately $0.01-0.05 USD per news item.

Troubleshooting Common Issues
AWS Bedrock:
- Model not found: ensure Claude 3 Sonnet access is approved in your region.
- Access denied: verify IAM permissions include Bedrock service access.
- Rate limiting: implement retry logic or reduce the analysis frequency.

Feishu integration:
- Authentication failed: check that the App Token and App Secret are correct.
- Table not found: verify the Table ID matches your Bitable URL.
- Automation not triggering: ensure the webhook URL is accessible and returns a 200 status.

LinkedIn posting:
- OAuth2 errors: re-authenticate the LinkedIn credentials.
- Posting failed: verify company page admin permissions.
- Rate limits: LinkedIn has daily posting limits for company pages.

Security Best Practices
- Never hardcode credentials in workflow nodes.
- Use environment variables for sensitive configuration.
- Regularly rotate API keys and access tokens.
- Monitor API usage to prevent unexpected charges.
- Implement error handling for failed API calls.
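As referenced in Flow 1, here is a minimal sketch of the news-analysis call to Claude 3 Sonnet on AWS Bedrock using boto3. The prompt is an example, and you should confirm the exact model ID available in your region.

```python
# Minimal sketch of the Bedrock call behind the AI-Powered Analysis step.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def analyse_news(title: str, body: str) -> str:
    payload = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": (
                "Summarise this AWS news item, list key themes and keywords, "
                "rate its importance (Low/Medium/High), and assess the business "
                f"impact:\n\n{title}\n\n{body}"
            ),
        }],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # verify in your region
        body=json.dumps(payload),
    )
    # The response body is a streaming object containing the Anthropic reply.
    return json.loads(response["body"].read())["content"][0]["text"]
```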

By Li CHEN
1353

Evaluation metric example: Check if tool was called

AI evaluation in n8n
This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works
This template shows how to calculate a workflow evaluation metric: whether a specific tool was called by an agent.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one.
- We make sure that the agent outputs the list of tools that it used.
- We then check whether the expected tool (from the dataset) is in that list.
- Finally, we pass this information back to n8n as a metric.
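In code, the metric itself is tiny. A minimal Python sketch, assuming the agent reports the tools it used as a list of names:

```python
# 1.0 if the expected tool (from the dataset row) appears in the list of
# tools the agent reported using, 0.0 otherwise.
def tool_called_metric(expected_tool: str, tools_used: list[str]) -> float:
    return 1.0 if expected_tool in tools_used else 0.0

# Example: the dataset expects the "calculator" tool to be used.
print(tool_called_metric("calculator", ["search", "calculator"]))  # 1.0
print(tool_called_metric("calculator", ["search"]))                # 0.0
```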

By David Roberts
1139

Sync Zendesk tickets with subsequent comments to Asana tasks

This workflow creates an Asana task when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the task in Asana.

Prerequisites
- Zendesk account and Zendesk credentials.
- Asana account and Asana credentials.
- An Asana workspace to create tasks in.

How it works
1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new task in Asana. The Asana GID is then saved in one of the ticket's fields (in the setup we call this "Asana GID"); a rough Python sketch of this round trip appears at the end of this description.
3. The next time a comment is added to the ticket, the workflow retrieves the Asana GID from the ticket's field and adds the comment to the task in Asana.

Setup
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:

   {
     "id": "{{ticket.id}}",
     "comment": "{{ticket.latest_comment_html}}"
   }

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Asana GID. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as "Asana GID".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
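As mentioned in "How it works", the core of the workflow is a create-task-then-store-GID round trip. A rough Python sketch follows; the workspace GID, custom field ID, domain and credentials are placeholders, and in the template the n8n nodes handle these calls for you.

```python
# Sketch of creating the Asana task and saving its GID back on the ticket.
import requests

ASANA_TOKEN = "asana-personal-access-token"          # placeholder
ZENDESK_AUTH = ("you@example.com/token", "zendesk-api-token")  # placeholder
ASANA_GID_FIELD_ID = 123456  # the custom "Asana GID" ticket field (placeholder)

def on_new_ticket(ticket_id: int, subject: str) -> None:
    # Create the Asana task and capture its GID.
    task = requests.post(
        "https://app.asana.com/api/1.0/tasks",
        headers={"Authorization": f"Bearer {ASANA_TOKEN}"},
        json={"data": {"name": subject, "workspace": "your-workspace-gid"}},
    ).json()["data"]

    # Store the GID on the Zendesk ticket so later comments can find the task.
    requests.put(
        f"https://yourcompany.zendesk.com/api/v2/tickets/{ticket_id}.json",
        auth=ZENDESK_AUTH,
        json={"ticket": {"custom_fields": [
            {"id": ASANA_GID_FIELD_ID, "value": task["gid"]}]}},
    )
```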

By n8n Team
1060

Parse PDF, DOCX & images with Mistral OCR via Google Drive with Slack alerts

Use cases
- Monitor a Google Drive folder, parsing PDF, DOCX and image files into a destination folder, ready for further processing (e.g. RAG ingestion, translation, etc.).
- Keep a processing log in Google Sheets and send Slack notifications.

How it works
1. Trigger: watch a Google Drive folder for new and updated files.
2. Create a uniquely named destination folder, copying the input file.
3. Parse the file using Mistral Document, extracting content and handling non-OCRable images separately (a rough sketch of this OCR call appears at the end of this description).
4. Save the data returned by Mistral Document into the destination Google Drive folder (raw JSON file, Markdown files, and images) for further processing.

How to use
- Google Drive and Google Sheets nodes: create Google credentials with access to Google Drive and Google Sheets (read more about Google credentials), then update all Google Drive and Google Sheets nodes (14 nodes total) to use the credentials.
- Mistral node: create Mistral Cloud API credentials (read more about Mistral Cloud credentials) and update the OCR Document node to use them.
- Slack nodes: create Slack OAuth2 credentials (read more about Slack OAuth2 credentials) and update the two Slack nodes, Send Success Message and Send Error Message: set the credentials and select the channel where you want to send the notifications (channels can be different for success and errors).
- Create a Google Sheets spreadsheet following the steps in Google Sheets Configuration. Ensure the spreadsheet can be accessed as Editor by the account used by the Google credentials above.
- Create a directory for input files and a directory for output folders/files. Ensure the directories can be accessed by the account used by the Google credentials.
- Update the File Created, File Updated and Workflow Configuration nodes following the steps in the green notes.

Requirements
- Google account with Google API access.
- Mistral Cloud account with access to a Mistral API key.
- Slack account with access to a Slack client ID and secret ID.
- Basic n8n knowledge: understanding of triggers, expressions, and credential management.

Who's it for
Anyone building a data pipeline ingesting files to be OCRed for further processing.

🔒 Security
All credentials are stored as n8n credentials. The only information stored in this workflow that could be considered sensitive are the Google Drive directory and Sheet IDs. These directories and the spreadsheet should be secured according to your needs.

Need help? Reach out on LinkedIn or ask in the Forum!
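For reference, here is a rough sketch of the OCR request behind the OCR Document node. The endpoint path, model name and payload shape are assumptions based on Mistral's document OCR API at the time of writing; check the current Mistral documentation before relying on them.

```python
# Sketch of an OCR request to Mistral for a document reachable by URL.
import os
import requests

def ocr_document(document_url: str) -> dict:
    response = requests.post(
        "https://api.mistral.ai/v1/ocr",  # assumed endpoint path
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "mistral-ocr-latest",  # assumed model name
            "document": {"type": "document_url", "document_url": document_url},
        },
    )
    response.raise_for_status()
    # Raw JSON: per-page Markdown plus references to extracted images,
    # which the workflow saves into the destination Google Drive folder.
    return response.json()
```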

By Yves Tkaczyk
1034

Send a private message on Zulip

No description available.

By tanaypant
702

Automate droplet snapshots on DigitalOcean

This workflow automates the management of DigitalOcean Droplet snapshots by listing all droplets, filtering based on the number of snapshots, and deleting excess snapshots before creating new ones. It keeps your droplet snapshots organized and within a manageable limit, preventing unnecessary storage costs from an excess of snapshots.

Who is this for?
This workflow is perfect for users managing DigitalOcean Droplets who want to automate snapshot creation and cleanup to save on storage costs and maintain efficient resource management. It's useful for DevOps teams, cloud administrators, or any developer leveraging DigitalOcean for their infrastructure.

What problem is this workflow solving?
When managing multiple DigitalOcean Droplets, snapshots can quickly accumulate, taking up space and increasing storage costs. Manually deleting and creating snapshots is time-consuming and inefficient. This automation solves the problem by handling the snapshot management process automatically, ensuring that no more than a defined number of snapshots are kept per droplet.

What this workflow does
1. Runs every 48 hours: the workflow is triggered by a cron node that runs every 48 hours, ensuring timely snapshot management.
2. List all droplets: the workflow retrieves all droplets in the DigitalOcean account.
3. Retrieve snapshots: for each droplet, the workflow retrieves a list of existing snapshots.
4. Filter snapshots: if the number of snapshots exceeds 4, the workflow filters for snapshots that need to be deleted.
5. Delete snapshots: excess snapshots are automatically deleted based on the filter criteria.
6. Create new snapshot: after cleaning up, the workflow creates a new snapshot for each droplet, ensuring that backups are always up to date.

A rough Python sketch of the same housekeeping against the DigitalOcean API appears at the end of this description.

Setup
- DigitalOcean API key: configure the HTTP Request nodes with your DigitalOcean API key. This key is required for authenticating requests to list droplets, retrieve snapshots, delete snapshots, and create new ones.
- Snapshot threshold: by default, the workflow keeps no more than 4 snapshots per droplet. This can be adjusted by modifying the filter node conditions.
- Execution frequency: the cron node is set to run every 48 hours, but you can adjust the timing to suit your needs.

How to customize this workflow
- Adjust the snapshot limit: change the value in the filter node if you want to keep more or fewer snapshots.
- Modify the run frequency: the workflow runs every 48 hours by default. You can change the frequency in the cron node to run more or less often.
- Enhance with notifications: add a notification node (e.g., Slack or email) to alert you when snapshots are deleted or created.

Workflow Summary
This workflow automates the management of DigitalOcean Droplet snapshots by keeping the number of snapshots under a defined limit, deleting the oldest ones, and ensuring new snapshots are created at regular intervals.
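The sketch below shows the same housekeeping done directly against the DigitalOcean REST API in Python. The 4-snapshot limit mirrors the default filter, the deletion count is one reasonable interpretation of "excess" (leave room for the new snapshot), and the token is a placeholder.

```python
# Sketch of list-droplets -> trim-snapshots -> take-new-snapshot.
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer your-digitalocean-token"}  # placeholder
MAX_SNAPSHOTS = 4

def manage_snapshots() -> None:
    droplets = requests.get(f"{API}/droplets", headers=HEADERS).json()["droplets"]
    for droplet in droplets:
        snaps = requests.get(f"{API}/droplets/{droplet['id']}/snapshots",
                             headers=HEADERS).json()["snapshots"]
        # Delete the oldest snapshots so that, after the new one is taken,
        # the droplet has at most MAX_SNAPSHOTS.
        snaps.sort(key=lambda s: s["created_at"])
        excess = max(0, len(snaps) - MAX_SNAPSHOTS + 1)
        for snap in snaps[:excess]:
            requests.delete(f"{API}/snapshots/{snap['id']}", headers=HEADERS)
        # Take a fresh snapshot of the droplet.
        requests.post(f"{API}/droplets/{droplet['id']}/actions", headers=HEADERS,
                      json={"type": "snapshot", "name": f"auto-{droplet['name']}"})
```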

By Darryn Balanco
691

Generate meeting minutes from videos with Whisper, Ollama LLM and Notion

Automated Meeting Minutes from Video Recordings
This workflow automatically transforms video recordings of meetings into structured, professional meeting minutes in Notion. It uses local AI models (Whisper for transcription and Ollama for summarization) to ensure privacy and cost efficiency, while uploading the original video to Google Drive for safekeeping. Ideal for creative teams, production reviews, or any scenario where visual context is as important as the spoken word.

🔄 How It Works
1. Wait & Detect: the workflow monitors a local folder. When a new .mkv video file is added, it waits until the file has finished copying.
2. Prepare Audio: the video is converted into a .wav audio file optimized for transcription (under 25 MB with high clarity).
3. Transcribe Locally: the local Whisper model generates a timestamped text transcript (a minimal sketch of this step appears at the end of this description).
4. Generate Smart Minutes: the transcript is sent to a local Ollama LLM, which produces structured, summarized meeting notes.
5. Store & Share: the original video is uploaded to Google Drive, a new page is created in Notion with the notes and a link to the video, and a completion notification is sent via Discord.

⏱️ Setup Steps
Estimated time: 10–15 minutes (for technically experienced users).

Prerequisites:
- Install Python, FFmpeg, and the required packages (openai-whisper, ffmpeg-python).
- Run Ollama locally with a compatible model (e.g., gpt-oss:20b, llama3, mistral).
- Configure n8n credentials for Google Drive, Notion, and Discord.

Workflow configuration:
- Update the file paths for the helper scripts (wait-for-file.ps1, create_wav.py, transcribe_return.py) in the respective "Execute Command" nodes.
- Change the input folder path (G:\OBS\videos) in the "File" node to your own recording directory.
- Replace the Google Drive folder ID and Notion database/page ID in their respective nodes.

> 💡 Note: Detailed instructions for each step, including error handling and variable setup, are documented in the Sticky Notes within the workflow itself.

📁 Helper Scripts Documentation

wait-for-file.ps1
A PowerShell script that checks if a file is still being written to (i.e., locked by another process). It returns 0 if the file is free and 1 if it is still locked.
Usage:
.\wait-for-file.ps1 -FilePath "C:\path\to\your\file.mkv"

create_wav.py
A Python script that converts a video file into a .wav audio file. It automatically calculates the necessary audio bitrate to keep the output file under 25 MB, a common requirement for many transcription services.
Usage:
python create_wav.py "C:\path\to\your\file.mkv"

transcribe_return.py
A Python script that uses a local Whisper model to transcribe an audio file. It can auto-detect the language or use a language code specified in the filename (e.g., meeting.en.mkv for English, meeting.es.mkv for Spanish). The transcript is printed directly to stdout with timestamps, which is then captured by the n8n workflow.
Usage:
# Auto-detect language
python transcribe_return.py "C:\path\to\your\file.mkv"
# Force language via filename
python transcribe_return.py "C:\path\to\your\file.es.mkv"
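As referenced in the Transcribe Locally step, here is a minimal Python sketch of what transcribe_return.py does with the local Whisper model; the model size and the way a language override is passed are assumptions, the real script infers the language from the filename.

```python
# Transcribe an audio/video file locally with openai-whisper and print a
# timestamped transcript to stdout (captured by the n8n Execute Command node).
import sys
import whisper

def transcribe(path: str, language: str | None = None) -> None:
    model = whisper.load_model("base")                   # model size is an assumption
    result = model.transcribe(path, language=language)   # None -> auto-detect
    for segment in result["segments"]:
        print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] "
              f"{segment['text'].strip()}")

if __name__ == "__main__":
    transcribe(sys.argv[1])
```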

By Facundo Cabrera
364