Posting from WordPress to Medium
Usage
This workflow gets all the posts from your WordPress site and sorts them into a clear format before publishing them to Medium.
Step 1. Set up the HTTP node and set the URL of the source. This will be the URL of the blog you want to use; we will use https://mailsafi.com/blog here.
Step 2. Extract the URLs of all the blog posts on the page. This gets all the blog titles and their URLs. It's an easy way to sort out which blogs to share and which not to share.
Step 3. Split the entries for easy sorting and a cleaner view.
Step 4. Add a new HTTP node with all the blog URLs collected in the previous steps.
Step 5. Extract the contents of each blog post.
Step 6. Add the Medium node and set the contents that you want to share. Execute your workflow and you are good to go.
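In the n8n workflow, Step 2 is handled by an HTML extract node. As a rough illustration of what that extraction does, here is a minimal Python sketch using only the standard library; the sample HTML and link paths are made up for the example:

```python
from html.parser import HTMLParser

class BlogLinkExtractor(HTMLParser):
    """Collects (title, url) pairs from anchors whose href starts
    with the given blog path prefix."""
    def __init__(self, prefix):
        super().__init__()
        self.prefix = prefix      # only keep links under this path
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith(self.prefix):
                self._href = href

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href, self._text = None, []

# Stand-in for the HTML fetched by the HTTP node in Step 1.
html = """
<div class="posts">
  <a href="https://mailsafi.com/blog/email-security">Email security basics</a>
  <a href="https://mailsafi.com/about">About us</a>
  <a href="https://mailsafi.com/blog/spf-dkim">SPF and DKIM explained</a>
</div>
"""

extractor = BlogLinkExtractor("https://mailsafi.com/blog/")
extractor.feed(html)
for title, url in extractor.links:
    print(f"{title} -> {url}")
```

Note how the non-blog "About us" link is filtered out by the prefix check, which is the same sorting idea the workflow uses to decide which posts to share.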
Get data from Hacker News and send to Airtable or via SMS
This n8n workflow automates sending out SMS notifications via Vonage that include new tech-related vocabulary every day. To build this handy vocabulary improver, you'll need the following:
n8n – You can find details on how to install n8n on the Quickstart page.
LingvaNex account – You can create a free account here. Up to 200,000 characters are included in the free plan when you generate your API key.
Airtable account – You can register for free.
Vonage account – You can sign up free of charge if you aren't already registered.
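The Vonage node handles SMS delivery for you; purely as an illustration of what it sends, here is a minimal sketch of the JSON body for Vonage's SMS endpoint. The field names follow Vonage's (Nexmo) SMS API as I understand it, and the word, definition, and credentials are placeholder values:

```python
def build_sms_payload(api_key, api_secret, sender, recipient, word, definition):
    """Assemble the JSON body for a POST to Vonage's SMS endpoint
    (https://rest.nexmo.com/sms/json). Key names per the Vonage SMS API."""
    return {
        "api_key": api_key,
        "api_secret": api_secret,
        "from": sender,
        "to": recipient,
        "text": f"Word of the day: {word} - {definition}",
    }

payload = build_sms_payload(
    "KEY", "SECRET", "n8nVocab", "14155550100",
    "idempotent", "producing the same result when applied repeatedly",
)
print(payload["text"])
```

In the actual workflow you never build this by hand; the Vonage node takes your credentials and the message text and does the rest.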
Vision RAG and image embeddings using Cohere Command-A and Embed v4
Cohere's new multimodal model releases make building your own Vision RAG agent a breeze. If you're new to multimodal RAG: for the purposes of this template, it means embedding and retrieving only the document scans relevant to a query, then having a vision model read those scans to answer. The benefits are (1) the vision model doesn't need to keep all document scans in context (expensive) and (2) the ability to query graphical content such as charts, graphs and tables.
How it works
Page extracts from a technology report containing graphs and charts are downloaded, converted to base64 and embedded using Cohere's Embed v4 model. This produces embedding vectors, which we associate with the original page URL and store in our Qdrant vector store collection using the Qdrant community node.
Our Vision RAG agent is split into two parts: a regular AI agent for chat, and a second Q&A agent powered by Cohere's Command-A-Vision model, which is required to read the contents of images.
When a query requires access to the technology report, the Q&A agent branch is activated. This branch performs a vector search on our image embeddings and returns a list of matching image URLs. These URLs are then used as input for our vision model along with the user's original query. The Q&A vision agent can then reply to the user using the "respond to chat" node. Because both agents share the same memory space, the user experiences a single conversation.
How to use
Ensure you have a Cohere account and sufficient credit to avoid rate limits or token usage restrictions.
For embeddings, swap out the page extracts for your own. You may need to split and convert document pages to images if you want to use image embeddings.
For chat, you may want to structure the agent(s) in another way that makes sense for your environment, e.g. using MCP servers.
Requirements
Cohere account for embeddings and LLM
Qdrant for the vector store
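The retrieval step above boils down to a nearest-neighbour search over image embeddings, with each vector's payload carrying the original page URL. The following self-contained sketch mimics that with toy three-dimensional vectors and plain cosine similarity; in the real workflow the vectors come from Cohere Embed v4 and the search is done by Qdrant, and the URLs here are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "image embeddings": in the workflow these come from Cohere
# Embed v4 and live in a Qdrant collection, with each page URL
# stored as payload alongside its vector.
index = [
    ("https://example.com/report/page-3.png", [0.9, 0.1, 0.0]),
    ("https://example.com/report/page-7.png", [0.1, 0.8, 0.3]),
    ("https://example.com/report/page-9.png", [0.0, 0.2, 0.9]),
]

def search(query_vec, top_k=2):
    """Return the top_k page URLs ranked by cosine similarity,
    mirroring what the Qdrant vector search returns to the Q&A agent."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [url for url, _ in ranked[:top_k]]

query = [0.85, 0.15, 0.05]   # stand-in for the embedded user query
print(search(query))
```

The returned URLs are exactly what gets handed to the vision model together with the user's original question.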
Construction blueprint to Google Sheets automation with VLM Run and Google Drive
Automatically process construction blueprints into structured Google Sheets entries with VLM extraction.
What this workflow does
Monitors Google Drive for new blueprints in a target folder
Downloads the file inside n8n for processing
Sends the file to VLM Run for VLM analysis
Fetches details from the construction.blueprint domain as JSON
Appends normalized fields to a Google Sheet as a new row
Setup
Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.
Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.
Quick Setup:
Create the Drive folder you want to watch and copy its Folder ID
Create a Google Sheet with headers like: timestamp, filename, fileid, mimetype, sizebytes, uploaderemail, documenttype, documentnumber, issuedate, authorname, drawingtitlenumbers, revisionhistory, jobname, address, drawingnumber, revision, drawnby, checkedby, scaleinformation, agencyname, documenttitle, blueprintid, blueprintstatus, blueprintowner, blueprint_url
Configure Google Drive OAuth2 for the trigger and download nodes
Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node
Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab
Test by uploading a sample file to the watched Drive folder, then activate the workflow
Perfect for
Converting uploaded construction blueprint documents into clean text
Organizing extracted blueprint details into structured sheets
Quickly accessing key attributes from technical files
Keeping a centralized archive of blueprint-to-text conversions
Key Benefits
End-to-end automation from Drive upload to structured Sheet entry
Accurate text extraction from construction blueprint documents
Organized attribute mapping for consistent records
Searchable archives directly in Google Sheets
Hands-free processing after setup
How to customize
Extend by adding:
Version control that links revisions of the same drawing and highlights superseded rows
Confidence scores per extracted field, with threshold-based routing to manual or AI review
An auto-generated, human-readable summary column for quick scanning of blueprint details
Splitting large multi-sheet PDFs into per-drawing rows with individual attributes
Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
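The "appends normalized fields" step amounts to flattening the extracted JSON plus file metadata into one row whose cells line up with the sheet headers. A minimal sketch of that normalization, using a subset of the headers listed above (the extracted field values are invented, and the key names are illustrative rather than the exact VLM Run schema):

```python
# Subset of the sheet headers from the setup section above.
HEADERS = ["timestamp", "filename", "documenttype", "drawingnumber",
           "revision", "blueprintstatus"]

def normalize_row(extracted, file_meta):
    """Flatten the VLM Run JSON plus Drive file metadata into one list
    of cells ordered to match the sheet headers; missing fields become
    empty strings so every row has the same shape."""
    merged = {**file_meta, **extracted}
    return [str(merged.get(h, "")) for h in HEADERS]

row = normalize_row(
    {"documenttype": "floor plan", "drawingnumber": "A-101", "revision": "C"},
    {"timestamp": "2024-05-01T09:30:00Z", "filename": "tower-a.pdf"},
)
print(row)
```

Padding absent fields with empty strings keeps the Google Sheets append consistent even when the VLM extraction returns only some of the attributes for a given drawing.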