Mark Shcherbakov

I am a business analyst with a development background, dedicated to helping small businesses and entrepreneurs leverage cloud services for increased efficiency. My expertise lies in automating manual workflows, integrating data from multiple cloud service providers, creating insightful dashboards, and building custom CRM systems.
LinkedIn: https://www.linkedin.com/in/marklowcoding/

Total Views: 107,011
Templates: 11

Templates by Mark Shcherbakov

AI agent to chat with Supabase/PostgreSQL DB

Video Guide
I prepared a detailed guide that shows the whole process of building this AI agent. [Youtube Link](https://youtu.be/-GgKzhCNxjk)

Who is this for?
This workflow is ideal for developers, data analysts, and business owners who want to enable conversational interactions with their database. It's particularly useful where users need to extract, analyze, or aggregate data without writing SQL queries manually.

What problem does this workflow solve?
Accessing and analyzing database data often requires SQL expertise or dedicated reports, which can be time-consuming. This workflow lets users interact with a database conversationally through an AI-powered agent. The agent dynamically generates SQL queries based on user requests, streamlining data retrieval and analysis.

What this workflow does
This workflow integrates OpenAI with a Supabase database, enabling users to interact with their data via an AI agent. The agent can:
- Retrieve records from the database.
- Extract and analyze JSON data stored in tables.
- Provide summaries, aggregations, or specific data points based on user queries.

Key capabilities:
- Dynamic SQL querying: the agent turns user prompts into SQL queries and executes them against the database.
- JSON structure understanding: the workflow infers the JSON schema from sample records, enabling the agent to parse and analyze JSON fields effectively.
- Database schema exploration: the agent has tools to retrieve table structures, column details, and relationships for precise query generation (see the schema-query sketch below).

Setup
Preparation
1. Create accounts: n8n (workflow automation), Supabase (database hosting and management), and OpenAI (the conversational AI agent).
2. Configure the database connection: set up a PostgreSQL database in Supabase and use the appropriate credentials (username, password, host, and database name) in your workflow.

N8N Workflow
The AI agent uses three tools:
- Code Tool: executes SQL queries based on user input.
- Database Schema Tool: retrieves a list of all tables in the database, using a predefined SQL query to fetch table definitions, including column names, types, and references.
- Table Definition Tool: retrieves the list of columns and their types for a single table.
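As a minimal sketch of the kind of predefined query the Database Schema Tool can run, the snippet below pulls table and column definitions from PostgreSQL's information_schema. The connection parameters are placeholders for your own Supabase credentials; in the actual workflow this query runs through n8n's Postgres nodes rather than Python.

```python
import psycopg2  # standard PostgreSQL driver; Supabase exposes a plain Postgres connection

# Placeholder credentials -- use the values from your Supabase project settings.
conn = psycopg2.connect(
    host="db.your-project.supabase.co",
    dbname="postgres",
    user="postgres",
    password="your-password",
)

SCHEMA_QUERY = """
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
"""

with conn, conn.cursor() as cur:
    cur.execute(SCHEMA_QUERY)
    for table, column, dtype, nullable in cur.fetchall():
        print(f"{table}.{column}: {dtype} (nullable: {nullable})")
```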

By Mark Shcherbakov · 18,735 views

CV screening with OpenAI

Video Guide
I prepared a detailed guide that shows the whole process of building a resume analyzer. [Youtube Link](https://youtu.be/TWuI3dOcn0E)

Who is this for?
This workflow is ideal for recruitment agencies, HR professionals, and hiring managers looking to automate the initial screening of CVs. It is especially useful for organizations handling large volumes of applications that want to streamline their recruitment process.

What problem does this workflow solve?
Manually screening resumes is time-consuming and prone to human error. This workflow automates the process, providing consistent and objective analysis of CVs against job descriptions. It helps filter out unsuitable candidates early, reducing workload and improving the overall efficiency of the recruitment process.

What this workflow does
This workflow automates resume screening using OpenAI for analysis. It provides a matching score, a summary of candidate suitability, and key insights into why the candidate fits (or doesn't fit) the job.
- Retrieve Resume: downloads CVs from a direct link (e.g., Supabase storage or Dropbox).
- Extract Data: extracts text from PDF or DOC files for analysis.
- Analyze with OpenAI: sends the extracted data and job description to OpenAI to generate a matching score, summarize candidate strengths and weaknesses, and provide actionable insights into suitability for the job.

Setup
Preparation
1. Create accounts: n8n (workflow automation) and OpenAI (AI-powered CV analysis).
2. Get a CV link: upload CV files to Supabase storage or Dropbox to generate a direct link for processing.
3. Prepare artifacts for OpenAI:
   - Define metrics: identify the metrics you want from the analysis (e.g., matching percentage, strengths, weaknesses).
   - Generate a JSON schema: use OpenAI to structure responses, ensuring compatibility with your database.
   - Write a prompt: give OpenAI a clear and detailed prompt to ensure accurate analysis.

N8N Scenario
1. Download File: fetch the CV using its direct URL.
2. Extract Data: use n8n's PDF or text extraction nodes to retrieve text from the CV.
3. Send to OpenAI: POST to OpenAI's API with the extracted CV data and the job description, using the JSON schema to structure the response (see the sketch below).

Summary
This workflow provides a seamless, automated solution for CV screening, helping recruitment agencies and HR teams save time while maintaining consistency in candidate evaluation. It enables organizations to focus on the most suitable candidates, improving the overall hiring process.
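A minimal sketch of the "Send to OpenAI" step, assuming the Chat Completions endpoint with a structured-output JSON schema. The metric fields (matching_score, strengths, weaknesses, summary) are illustrative, not the template's exact schema:

```python
import json
import requests

OPENAI_API_KEY = "sk-..."  # placeholder

# Illustrative schema -- adapt the fields to the metrics you defined above.
cv_schema = {
    "name": "cv_analysis",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "matching_score": {"type": "integer"},
            "strengths": {"type": "array", "items": {"type": "string"}},
            "weaknesses": {"type": "array", "items": {"type": "string"}},
            "summary": {"type": "string"},
        },
        "required": ["matching_score", "strengths", "weaknesses", "summary"],
        "additionalProperties": False,
    },
}

def analyze_cv(cv_text: str, job_description: str) -> dict:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-4o-2024-08-06",  # a model with JSON Schema support
            "messages": [
                {"role": "system", "content": "Score this CV against the job description."},
                {"role": "user", "content": f"Job description:\n{job_description}\n\nCV:\n{cv_text}"},
            ],
            "response_format": {"type": "json_schema", "json_schema": cv_schema},
        },
        timeout=120,
    )
    response.raise_for_status()
    # The structured answer arrives as a JSON string in the message content.
    return json.loads(response.json()["choices"][0]["message"]["content"])
```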

By Mark Shcherbakov · 14,816 views

Copy viral reels with Gemini AI

Video Guide
I prepared a detailed guide that shows the whole process of building an AI tool to analyze Instagram Reels using n8n. [Youtube Link](https://youtu.be/SQPPM0KLsrM)

Who is this for?
This workflow is ideal for social media analysts, digital marketers, and content creators who want to leverage data-driven insights from their Instagram Reels. It's particularly useful for those looking to automate the analysis of video performance to inform strategy and content creation.

What problem does this workflow solve?
Analyzing video performance on Instagram is tedious and time-consuming, requiring multiple steps of data extraction. This workflow automates fetching, analyzing, and recording insights from Instagram Reels, letting users track engagement metrics without manual intervention.

What this workflow does
This workflow integrates several services to analyze Instagram Reels, allowing users to:
- Automatically fetch recent Reels from specified creators.
- Analyze the most-watched videos for insights.
- Store and manage data in Airtable for easy access and reporting.

How it works:
- Initial Trigger: the process begins with a manual trigger that can later be replaced with scheduled automation.
- Data Retrieval: it connects to Airtable to fetch the list of creators and their respective Instagram Reels.
- Video Analysis: it handles fetching, downloading, and uploading videos for analysis by an external service, simplifying performance tracking through a structured query process (see the Gemini sketch below).
- Record Management: it saves the relevant metrics and insights to Airtable, so users can access and organize their video analytics effectively.

Setup
1. Create accounts: set up Airtable, Edify, n8n, and Gemini accounts.
2. Prepare triggers and modules: replace the credentials in each node accordingly.
3. Configure the data flow: ensure the modules fetch and analyze the data fields outlined in the guide.
4. Test the workflow: run the scenario manually to confirm that data is fetched and analyzed correctly.
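A minimal sketch of the video-analysis step, assuming the Gemini API via the google-generativeai Python SDK; the model name and prompt are illustrative, and in the actual workflow this call is made through n8n nodes:

```python
import time

import google.generativeai as genai

genai.configure(api_key="your-gemini-api-key")  # placeholder

def analyze_reel(path: str) -> str:
    """Upload a downloaded Reel and ask Gemini for engagement insights."""
    video = genai.upload_file(path=path)
    # Wait until the uploaded file is processed and ready for inference.
    while video.state.name == "PROCESSING":
        time.sleep(5)
        video = genai.get_file(video.name)
    model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice
    result = model.generate_content(
        [video, "Summarize what makes this Reel engaging: hook, pacing, topics."]
    )
    return result.text
```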

By Mark Shcherbakov · 13,065 views

AI agent for project management and meetings with Airtable and Fireflies

Video Guide
I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary. [Youtube Link](https://www.youtube.com/watch?v=0TyX7G00x3A)

Who is this for?
This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

What problem does this workflow solve?
Managing action items from meetings often leads to missed tasks and poor follow-up. This automation alleviates that by generating tasks automatically from meeting transcripts, keeping everyone informed of their responsibilities and streamlining communication.

What this workflow does
The workflow uses n8n to create a Smart Agent that listens for completed meeting transcripts, processes them with AI, and generates tasks in Airtable. Key functionalities:
- Webhook Integration: listens for meeting-completion events to trigger subsequent actions.
- API Requests for Data: pulls the necessary details, such as transcripts and participant information, from Fireflies (see the GraphQL sketch below).
- Task and Notification Generation: automatically creates tasks in Airtable and notifies clients of their responsibilities.

Setup
N8N Workflow
1. Configure the Webhook: set up a webhook to capture meeting-completion events and integrate it with Fireflies.
2. Retrieve Meeting Content: use GraphQL API requests to extract meeting details and transcripts, authenticating with a Bearer token.
3. AI Processing Setup: define the system messages for the AI tasks and configure the connection to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. Task Creation Logic: create structured tasks from the AI output, ensuring the necessary details are captured and the records are created in Airtable.
5. Client Notifications: use an email node to notify clients about their tasks, keeping communications client-specific.
6. Scheduling Follow-Up Calls: create Google Calendar events when a follow-up meeting is required, populated with details from the original meeting context.
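A minimal sketch of the Fireflies GraphQL request, assuming the Bearer-token authentication described above; the transcript field names are assumptions based on the workflow description, so check them against the Fireflies schema:

```python
import requests

FIREFLIES_API_KEY = "your-api-key"  # placeholder

# Field names are assumptions -- verify against the Fireflies GraphQL docs.
QUERY = """
query Transcript($id: String!) {
  transcript(id: $id) {
    title
    participants
    sentences { text speaker_name }
  }
}
"""

def fetch_transcript(meeting_id: str) -> dict:
    response = requests.post(
        "https://api.fireflies.ai/graphql",
        headers={"Authorization": f"Bearer {FIREFLIES_API_KEY}"},
        json={"query": QUERY, "variables": {"id": meeting_id}},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["data"]["transcript"]
```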

By Mark Shcherbakov · 11,536 views

Parse PDF with LlamaParse and save to Airtable

Video Guide
I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information. [Youtube Link](https://youtu.be/E4I0nru-fa8)

Who is this for?
This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

What problem does this workflow solve?
Manually processing invoices is time-consuming and error-prone. This automation eliminates manual data entry by capturing invoice details directly from uploaded documents and storing the structured data efficiently, enhancing productivity and accuracy across financial operations.

What this workflow does
The workflow uses n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse the essential billing details, and store the extracted data in a structured format. Key functionalities:
- Google Drive Integration: monitors a specific folder in Google Drive for new invoice uploads.
- Parsing with LlamaParse: automatically sends invoices to LlamaCloud for parsing and processes the results through webhooks (see the upload sketch below).
- Data Storage in Airtable: creates records for invoices and their associated line items, allowing for detailed tracking.

Setup
N8N Workflow
1. Google Drive Trigger: set up a trigger to detect new files in the folder dedicated to invoices.
2. File Upload to LlamaParse: create an HTTP request that sends the invoice file to LlamaParse for parsing, including the relevant header settings and webhook URL.
3. Webhook Processing: add a webhook node to handle the parsed results from LlamaParse, extracting the needed invoice details.
4. Invoice Record Creation: create initial invoice records in your database from the parsed details received via the webhook.
5. Line Item Processing: transform the string data into structured line-item arrays and create an individual record for each item, linked to its parent invoice.
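A minimal sketch of the upload step, assuming LlamaCloud's parsing upload endpoint; the endpoint path and the webhook_url form field are assumptions to verify against the current LlamaParse documentation:

```python
import requests

LLAMA_CLOUD_API_KEY = "llx-..."  # placeholder
N8N_WEBHOOK_URL = "https://your-n8n-host/webhook/invoice-parsed"  # placeholder

def submit_invoice(pdf_path: str) -> str:
    """Send an invoice to LlamaParse; parsed results arrive at the webhook."""
    with open(pdf_path, "rb") as f:
        response = requests.post(
            # Assumed endpoint -- check the LlamaCloud API reference.
            "https://api.cloud.llamaindex.ai/api/parsing/upload",
            headers={"Authorization": f"Bearer {LLAMA_CLOUD_API_KEY}"},
            files={"file": f},
            data={"webhook_url": N8N_WEBHOOK_URL},  # assumed parameter name
            timeout=120,
        )
    response.raise_for_status()
    return response.json()["id"]  # job id used to correlate webhook results
```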

By Mark Shcherbakov · 11,523 views

Telegram bot with Supabase memory and OpenAI assistant integration

Video Guide
I prepared a detailed guide that shows the whole process of building an AI bot, from the simplest version to the most complex one in this template.

Who is this for?
This workflow is ideal for developers, chatbot enthusiasts, and businesses looking to build a dynamic Telegram bot with memory capabilities. The bot leverages OpenAI's assistant to interact with users and stores user data in Supabase for personalized conversations.

What problem does this workflow solve?
Many simple chatbots lack context awareness and user memory. This workflow solves that by integrating Supabase to keep track of user sessions (via telegram_id and openai_thread_id), allowing the bot to maintain continuity and context in conversations, leading to a more human-like and engaging experience.

What this workflow does
This Telegram bot template connects with OpenAI to answer user queries while storing and retrieving user information from a Supabase database. The memory component ensures the bot can reference past interactions, making it suitable for customer support, virtual assistants, or any application where context retention is crucial.
1. Receive New Message: the bot listens for incoming messages from users in Telegram.
2. Check User in Database: the workflow checks whether the user already exists in the Supabase database using the telegram_id (see the sketch below).
3. Create New User (if necessary): if the user does not exist, a new record is created in Supabase with the telegram_id and a unique openai_thread_id.
4. Start or Continue Conversation with OpenAI: based on the user's context, the bot either creates a new thread or continues an existing one using the stored openai_thread_id.
5. Merge Data: user-specific data and conversation context are merged.
6. Send and Receive Messages: the message is sent to OpenAI, and the response is received and processed.
7. Reply to User: the bot sends OpenAI's response back to the user in Telegram.

Setup
1. Create a Telegram bot using BotFather and obtain the bot token.
2. Set up Supabase:
   - Create a new project and obtain your SUPABASE_URL and SUPABASE_KEY.
   - Create a new table named telegram_users with the following SQL query:

create table public.telegram_users (
  id uuid not null default gen_random_uuid (),
  date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
  telegram_id bigint null,
  openai_thread_id text null,
  constraint telegram_users_pkey primary key (id)
) tablespace pg_default;

3. OpenAI setup: create an OpenAI assistant and obtain the OPENAI_API_KEY. Customize your assistant's personality or use cases according to your requirements.
4. Environment configuration in n8n:
   - Configure the Telegram, Supabase, and OpenAI nodes with the appropriate credentials.
   - Set up triggers for receiving messages and handling conversation logic.
   - Set the OpenAI assistant ID in the "OPENAI - Run assistant" node.
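A minimal sketch of steps 2 and 3 (check user, create user), assuming the supabase-py client; the table and column names follow the SQL above, while the URL and key are placeholders:

```python
from supabase import create_client

# Placeholders -- use your project's SUPABASE_URL and SUPABASE_KEY.
supabase = create_client("https://your-project.supabase.co", "SUPABASE_KEY")

def get_or_create_user(telegram_id: int, openai_thread_id: str) -> dict:
    """Return the stored user row, creating it on first contact."""
    existing = (
        supabase.table("telegram_users")
        .select("*")
        .eq("telegram_id", telegram_id)
        .execute()
    )
    if existing.data:
        return existing.data[0]
    created = (
        supabase.table("telegram_users")
        .insert({"telegram_id": telegram_id, "openai_thread_id": openai_thread_id})
        .execute()
    )
    return created.data[0]
```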

By Mark Shcherbakov · 10,201 views

AI agent to chat with files in Supabase Storage and Google Drive

Video Guide
I prepared a detailed guide that illustrates the entire process of building an AI agent using Supabase and Google Drive within n8n workflows. [Youtube Link](https://youtu.be/NB6LhvObiL4)

Who is this for?
This workflow is designed for developers, data scientists, and business users who want to automate document management and enable AI-powered interactions with their stored files. It's especially beneficial where users need to process, analyze, and retrieve information from uploaded documents quickly.

What problem does this workflow solve?
Managing files across multiple platforms often involves tedious manual steps. This workflow automates file handling, making it easier to upload, parse, and interact with documents through an AI agent. It reduces redundancy and makes data retrieval and management more efficient.

What this workflow does
This workflow integrates Supabase storage with Google Drive and employs an AI agent to manage files effectively. The agent can:
- Upload files to Supabase storage and trigger processing when files change in Google Drive.
- Retrieve and parse documents, converting them into a structured format for easy querying.
- Answer user queries based on the saved document data.

How it works:
- Data Collection: the workflow initially gathers files from Supabase storage, ensuring no duplicates are processed into the 'files' table.
- File Handling: it parses files according to their type, leveraging LlamaParse for the data transformation.
- Google Drive Integration: the workflow monitors a designated Google Drive folder to upload files automatically and refresh the document records in the database with new data (see the embedding sketch below).
- AI Interaction: a webhook lets the AI agent converse with users, answering queries from the stored document knowledge.

Setup
1. Supabase Storage: create a private bucket in Supabase storage (modifying the default name in the URL) and upload your files using the provided upload options.
2. Database Configuration: create the 'files' and 'documents' tables in Supabase with the necessary fields, and run the SQL queries required to enable vector matching.
3. N8N Workflow Logic: start with a manual trigger for the initial workflow segment, or use an alternative trigger such as a webhook, and replace all credentials across the nodes with your own.
4. File Processing and Google Drive Monitoring: set up file processing to download and parse files based on their types, and create triggers that monitor the designated Google Drive folder for uploads and updates.
5. Integrate the AI Agent: configure the webhook so the agent accepts chat inputs while maintaining session context, and use PostgreSQL to store user interactions and manage conversation state.
6. Testing and Adjustments: run tests with the AI agent to validate its responses against the documents in your database, and fine-tune the workflow and AI model as needed.
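A minimal sketch of refreshing document records with embeddings, assuming OpenAI's embeddings endpoint and the supabase-py client; the 'documents' table and its column names are assumptions to match to your own schema:

```python
import requests
from supabase import create_client

OPENAI_API_KEY = "sk-..."  # placeholder
supabase = create_client("https://your-project.supabase.co", "SUPABASE_KEY")

def embed(text: str) -> list[float]:
    """Get a vector embedding for one text chunk."""
    response = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={"model": "text-embedding-3-small", "input": text},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["data"][0]["embedding"]

def store_chunk(file_id: str, chunk: str) -> None:
    # Table and column names are assumptions -- align them with your schema.
    supabase.table("documents").insert(
        {"file_id": file_id, "content": chunk, "embedding": embed(chunk)}
    ).execute()
```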

By Mark Shcherbakov · 8,771 views

Call analyzer with AssemblyAI transcription and OpenAI assistant integration

Video Guide
I prepared a detailed guide that shows the whole process of building a call analyzer.

Who is this for?
This workflow is ideal for sales teams, customer support managers, and online education services that conduct follow-up calls with clients. It's designed for those who want to leverage AI to gain deeper insights into client needs and upsell opportunities from recorded calls.

What problem does this workflow solve?
Many follow-up sales calls lack structured analysis, making it challenging to identify client needs, gauge interest levels, or uncover upsell opportunities. This workflow enables automated call transcription and AI-driven analysis to generate actionable insights, helping teams improve sales performance, refine client communication, and streamline upselling strategies.

What this workflow does
This workflow transcribes and analyzes sales calls using AssemblyAI and OpenAI, with Supabase for structured storage. Recorded calls are processed as follows:
1. Transcribe Call with AssemblyAI: converts audio into text with speaker labels for clarity.
2. Analyze Transcription with OpenAI: using a predefined JSON schema, OpenAI analyzes the transcription to extract metrics like client intent, interest score, and upsell opportunities.
3. Store and Access Results in Supabase: both the transcription and the analysis are stored in a Supabase database for further use and display in interfaces.

Setup
Preparation
1. Create accounts: set up accounts for n8n, Supabase, AssemblyAI, and OpenAI.
2. Get a call link: upload audio files to public Supabase storage or Dropbox to generate a direct link for transcription.
3. Prepare artifacts for OpenAI:
   - Define metrics: identify the business metrics you want to track from call analysis, such as client needs, interest score, and upsell potential.
   - Generate a JSON schema: use GPT to design a JSON schema for structuring OpenAI's responses, enabling efficient storage, analysis, and display.
   - Create an analysis prompt: write a detailed prompt for GPT to analyze calls against your metrics and JSON schema.

Scenario 1: Transcribe Call with AssemblyAI
Set up the request (see the sketch below):
- Header authentication: set Authorization with your AssemblyAI API key.
- URL: POST to https://api.assemblyai.com/v2/transcript/.
- Parameters:
  - audio_url: direct URL of the audio file.
  - webhook_url: URL of an n8n webhook to receive the transcription result.
- Additional settings:
  - speaker_labels (true/false): enables speaker diarization.
  - speakers_expected: the expected number of speakers.
  - language_code: the language (default: en_us).

Scenario 2: Process Transcription with OpenAI
1. Webhook configuration: set up a POST webhook to receive AssemblyAI's transcription data.
2. Get the transcription:
   - Header authentication: set Authorization with your AssemblyAI API key.
   - URL: GET https://api.assemblyai.com/v2/transcript/<transcript_id>.
3. Send to OpenAI:
   - URL: POST to https://api.openai.com/v1/chat/completions.
   - Header authentication: set Authorization with your OpenAI API key.
   - Body parameters:
     - Model: use gpt-4o-2024-08-06 for JSON Schema support, or gpt-4o-mini as a less costly option.
     - Messages: system contains the main analysis prompt; user contains the combined speakers' utterances in text form.
     - Response format: type json_schema, with your JSON schema for structured responses.
4. Save results in Supabase:
   - Operation: create a new record in the demo_calls table.
   - Fields: input (transcription text, audio URL, and transcription ID) and output (the parsed JSON response from OpenAI's analysis).
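A minimal sketch of Scenario 1's transcription request, using the AssemblyAI parameters listed above; the API key and webhook URL are placeholders:

```python
import requests

ASSEMBLYAI_API_KEY = "your-api-key"  # placeholder
N8N_WEBHOOK_URL = "https://your-n8n-host/webhook/transcript-ready"  # placeholder

def submit_transcription(audio_url: str) -> str:
    """Kick off an AssemblyAI transcription job; the result arrives via webhook."""
    response = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers={"authorization": ASSEMBLYAI_API_KEY},
        json={
            "audio_url": audio_url,
            "webhook_url": N8N_WEBHOOK_URL,
            "speaker_labels": True,   # enable speaker diarization
            "speakers_expected": 2,
            "language_code": "en_us",
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["id"]  # transcript id, used later in the GET request
```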

By Mark Shcherbakov · 7,750 views

Extract insights & analyze YouTube comments via AI agent chat

Video Guide
I prepared a detailed guide to help you set up your workflow effectively, enabling you to extract insights from YouTube for content generation using an AI agent. [Youtube Link](https://youtu.be/6RmLZS8Yl4E)

Who is this for?
This workflow is ideal for content creators, marketers, and analysts looking to enhance their YouTube strategies through data-driven insights. It's particularly beneficial for anyone wanting to understand audience preferences and improve their video content.

What problem does this workflow solve?
Content generation and optimization are hard to navigate without solid audience insight. This workflow automates the extraction of insights from YouTube videos and comments, empowering users to create more engaging and relevant content.

What this workflow does
The workflow integrates several APIs to gather insights from YouTube videos, enabling automated comment analysis, video transcription, and thumbnail evaluation:
- AI Insights Extraction: automatically pulls comments and metrics from selected YouTube creators to evaluate trends and gaps.
- Dynamic Video Planning: uses transcriptions to help creators outline video scripts and topics based on audience interest.
- Thumbnail Assessment: analyzes thumbnail designs to improve click-through rates and viewer attraction.

Setup
N8N Workflow
1. API setup: create a Google Cloud project and enable the YouTube Data API, then generate an API key to include in your workflow requests.
2. YouTube creator and video selection: define a request to identify top creators by video views, and capture the YouTube video IDs for further analysis of comments and other metrics.
3. Comment analysis: gather the comments associated with the selected videos and analyze them for user insights (see the sketch below).
4. Video transcription: use the insights from transcriptions to formulate content plans.
5. Thumbnail analysis: evaluate your video thumbnails by submitting their URLs through the OpenAI API to gain insights into their effectiveness.
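A minimal sketch of the comment-gathering step, fetching top-level comments through the YouTube Data API v3 commentThreads endpoint with the API key from your Google Cloud project:

```python
import requests

YOUTUBE_API_KEY = "your-api-key"  # from your Google Cloud project

def fetch_comments(video_id: str, max_results: int = 100) -> list[str]:
    """Fetch top-level comments for one video via the YouTube Data API v3."""
    response = requests.get(
        "https://www.googleapis.com/youtube/v3/commentThreads",
        params={
            "part": "snippet",
            "videoId": video_id,
            "maxResults": max_results,
            "order": "relevance",
            "key": YOUTUBE_API_KEY,
        },
        timeout=60,
    )
    response.raise_for_status()
    return [
        item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
        for item in response.json().get("items", [])
    ]
```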

By Mark Shcherbakov · 4,809 views

Automate cryptocurrency funding fee tracking with Binance API and Airtable

Video Guide
I prepared a detailed guide that shows the whole process of integrating the Binance API and storing data in Airtable to manage funding statements associated with tokens in a wallet. [Youtube Link](https://youtu.be/GBZRduOzOzg)

Who is this for?
This workflow is ideal for developers, financial analysts, and cryptocurrency enthusiasts who want to automate the management of funding statements and token prices. It's particularly useful for anyone who needs a systematic way to track and report the funding fees associated with the tokens in their wallet.

What problem does this workflow solve?
Managing funding statements and token prices across multiple platforms is cumbersome and error-prone. This workflow automates the process: it fetches funding fees from Binance and records them alongside token prices in Airtable, minimizing manual data entry and discrepancies.

What this workflow does
This workflow integrates the Binance API with an Airtable database to store and manage funding statements linked to tokens in a wallet. It can:
- Fetch funding fees and current positions from Binance.
- Aggregate the data into structured funding statements.
- Insert records into Airtable, with proper linkage between funding data and tokens.

How it works:
- API Authentication: the workflow authenticates with the Binance API using a Crypto node to handle API keys and signatures, ensuring secure, verified requests (see the signing sketch below).
- Data Collection: it retrieves funding fees and current positions with properly formatted API requests.
- Airtable Integration: it inserts the aggregated funding statements and token data into the corresponding Airtable records, checking for existing tokens to avoid duplicate entries.

Setup
1. Set up the Airtable database: create an Airtable base with tables for funding statements and tokens.
2. Generate a Binance API key: log in and create an API key with the appropriate permissions.
3. Set up authentication in n8n: use a Crypto node to sign Binance API requests.
4. Configure the API request to Binance: set the request method and headers.
5. Fetch funding fees and current positions from Binance.
6. Aggregate the data into detailed funding statements.
7. Insert the structured data into Airtable and manage the token records.
8. Use a Get Price node to keep token prices current without additional setup.
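A minimal sketch of the signed request that the Crypto node performs, assuming the USDT-M futures income endpoint with incomeType=FUNDING_FEE; the API key and secret are placeholders:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

import requests

BINANCE_API_KEY = "your-api-key"    # placeholder
BINANCE_API_SECRET = "your-secret"  # placeholder

def fetch_funding_fees() -> list[dict]:
    """Fetch funding-fee income records (assumes USDT-M futures)."""
    params = {
        "incomeType": "FUNDING_FEE",
        "timestamp": int(time.time() * 1000),  # Binance requires a ms timestamp
    }
    query = urlencode(params)
    # Binance signs the query string with HMAC-SHA256 over the API secret.
    signature = hmac.new(
        BINANCE_API_SECRET.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    response = requests.get(
        f"https://fapi.binance.com/fapi/v1/income?{query}&signature={signature}",
        headers={"X-MBX-APIKEY": BINANCE_API_KEY},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```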

By Mark Shcherbakov · 2,033 views

Two-stage document retrieval chatbot with OpenAI and Supabase vector search

Video Guide
I prepared a comprehensive guide demonstrating how to build a multi-level retrieval AI agent in n8n that first narrows down search results by file descriptions, then retrieves detailed vector data, improving relevance and answer quality. [Youtube Link](https://www.youtube.com/watch?v=asXVOHg89hs)

Who is this for?
This workflow suits developers, AI enthusiasts, and data engineers working with vector stores and large document collections who want to sharpen AI retrieval by applying metadata-based filtering before the deep content search. It helps users who manage many files and want to reduce noise and input-size limits in AI queries.

What problem does this workflow solve?
Running vector searches directly over a large number of document chunks degrades AI input quality and introduces noise. This workflow implements a two-stage retrieval process: it first searches file descriptions to filter the relevant files, then runs vector searches only within those files. This reduces irrelevant data, improves answer accuracy, and keeps performance manageable when dealing with dozens or hundreds of files split into many pieces.

What this workflow does
This n8n workflow connects to a Supabase vector store to perform:
- Multi-level retrieval:
  - File description search: calls a Supabase RPC function to find files whose descriptions (metadata) best match the user query, filtering and limiting the number of relevant files by similarity score.
  - Document chunk retrieval: uses the retrieved file IDs in a second RPC call that fetches detailed vector pieces only within those files, again filtered by similarity thresholds.
- OpenAI integration: the filtered document chunks and their metadata (such as file names and URLs) are passed to an OpenAI message node whose system instructions guide the AI to leverage the knowledge base and linked resources in its responses.
- Custom code functions: two code nodes call the Supabase stored procedures match_files and match_documents to perform the semantic searches with metadata filtering that is unavailable in the default vector filters.
- Helper flows and SQL setup: templates and SQL scripts prepare the database tables and functions, and an additional flow generates embeddings from file description summaries using OpenAI.

N8N Workflow
Preparation:
1. Create or verify a Supabase account with vector store capability.
2. Set up the necessary database tables and RPC functions (match_files and match_documents) using the provided SQL scripts.
3. Replace all credentials in the n8n nodes to connect to your Supabase and OpenAI accounts.
4. Optionally, upload document files and generate their vector embeddings and description summaries in a separate helper workflow.

Main workflow logic (see the sketch below):
1. Code Function Node 1: receives the user query and calls the match_files RPC to retrieve file IDs, searching file descriptions with vector similarity thresholds and file limits.
2. Code Function Node 2: takes the filtered file IDs and invokes the match_documents RPC to fetch vector document chunks only from those files, with additional similarity filtering and count limits.
3. OpenAI Message Node: combines the fetched document pieces, their metadata (file URLs, similarity scores), and the system prompt to generate precise AI-powered answers that reference the documents.

This multi-tiered retrieval process improves search relevance and contextual understanding by smartly limiting the vector search scope, first to relevant files, then to specific document chunks, refining the user's query results.
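A minimal sketch of the two-stage retrieval, assuming the supabase-py client; the RPC parameter names (query_embedding, match_threshold, match_count, file_ids) and the threshold values are assumptions that must match your match_files and match_documents function signatures:

```python
from supabase import create_client

# Placeholders -- use your project's URL and key.
supabase = create_client("https://your-project.supabase.co", "SUPABASE_KEY")

def two_stage_search(query_embedding: list[float]) -> list[dict]:
    # Stage 1: find the most relevant files by their description embeddings.
    files = supabase.rpc(
        "match_files",
        {"query_embedding": query_embedding, "match_threshold": 0.4, "match_count": 5},
    ).execute()
    file_ids = [row["id"] for row in files.data]

    # Stage 2: vector-search document chunks only within those files.
    chunks = supabase.rpc(
        "match_documents",
        {
            "query_embedding": query_embedding,
            "file_ids": file_ids,
            "match_threshold": 0.5,
            "match_count": 20,
        },
    ).execute()
    return chunks.data
```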

By Mark Shcherbakov · 1,076 views