
SOL trading recommendations w/ multi-timeframe analysis using Gemini & Telegram

AFK Crypto
2/3/2026

This workflow builds a Telegram-based Solana (SOL/USDT) Multi-Timeframe AI Market Analyzer. It automatically pulls live candlestick data for SOL/USDT, runs structured multi-timeframe technical analysis (1-minute, 5-minute, 1-hour) through an AI Agent, and posts a professional, JSON-structured analysis and trading recommendation straight to your Telegram chat. It combines market data aggregation, LLM-driven interpretation, and instant Telegram reporting, giving you concise, actionable market intelligence every hour.

How It Works

Hourly Trigger – The workflow runs once per hour to pull fresh market data.

Market Data Fetch – Three HTTP requests gather candlesticks from CryptoCompare:

1-minute (last 60 candles)

5-minute aggregate (last 60 aggregated candles)

1-hour (last 60 candles)

Merge & Transcribe – The three feeds are merged and a lightweight code node extracts:

symbol, current price, arrays for data_1m, data_5m, data_1h.
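The Transcribe step maps naturally onto an n8n Code node. The sketch below is a minimal, standalone version of that logic; it assumes the three HTTP responses use CryptoCompare's standard shape (candles under `Data.Data`), and the field names `data_1m`, `data_5m`, `data_1h` simply mirror the description above.

```javascript
// Minimal sketch of the "Transcribe" code-node logic. Assumption: each
// CryptoCompare response exposes its candle array under Data.Data.
function buildPayload(res1m, res5m, res1h) {
  const candles = (res) => (res && res.Data && res.Data.Data) || [];
  const oneMin = candles(res1m);
  // Use the most recent 1-minute close as the "current" price.
  const lastClose = oneMin.length ? oneMin[oneMin.length - 1].close : null;
  return {
    symbol: 'SOL/USDT',
    current_price: lastClose,
    data_1m: oneMin,
    data_5m: candles(res5m),
    data_1h: candles(res1h),
  };
}

// Example with two fake candles:
const fake = { Data: { Data: [{ close: 210.1 }, { close: 211.5 }] } };
const payload = buildPayload(fake, fake, fake);
console.log(payload.current_price); // 211.5
```

Inside n8n you would feed it the three merged items (e.g., via `$input.all()`) instead of function arguments, but the extraction logic is the same.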

AI Agent Analysis – The LLM (configured via your model node) receives the merged payload and runs a structured multi-timeframe technical analysis, returning a strict JSON report containing:

Per-timeframe analysis (momentum, volume, S/R, MA, volatility)

Market structure / confluence findings

Trading recommendation (action, entry, stop, TPs, position sizing)

A final disclaimer

Parse AI Output – Extracts the JSON block from the agent’s reply and validates/parses it for downstream formatting.
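A sketch of that parse step, assuming the agent wraps its report in a fenced JSON code block (the `recommendation` field check is an illustrative assumption, not the workflow's exact schema):

```javascript
// Sketch of "Parse AI Output": pull the fenced JSON block out of the
// agent's reply and validate it before anything downstream runs.
function parseAgentReply(text) {
  // Match a fenced block that starts with "json" (three backticks on each side).
  const match = text.match(/`{3}json\s*([\s\S]*?)`{3}/i);
  const raw = match ? match[1] : text; // fall back to the whole reply
  const report = JSON.parse(raw);      // throws if malformed
  // Assumed top-level field; adjust to your agent's actual JSON structure.
  if (!report.recommendation) {
    throw new Error('Missing "recommendation" field in AI report');
  }
  return report;
}
```

Throwing on malformed output lets n8n's error handling stop the run instead of posting a broken message to Telegram.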

Telegram Reporting – Sends two nicely formatted Telegram messages:

Multi-timeframe breakdown (1m / 5m / 1h)

Market structure + Trading Recommendation (TPs, SL, position size, disclaimer)
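The two messages can be assembled in a small formatting function before the Telegram nodes. This is a hedged sketch; the report field names (`timeframes`, `recommendation`, `disclaimer`, etc.) are illustrative assumptions and should be aligned with your agent's actual JSON schema.

```javascript
// Sketch of the Telegram formatting step. Field names are assumptions
// for illustration, not the workflow's exact schema.
function formatMessages(report) {
  const tfLines = Object.entries(report.timeframes || {})
    .map(([tf, a]) => `${tf}: ${a.trend} | momentum ${a.momentum}`);
  const msg1 = ['SOL/USDT Multi-Timeframe Breakdown', ...tfLines].join('\n');

  const r = report.recommendation || {};
  const msg2 = [
    `Action: ${r.action}`,
    `Entry: ${r.entry}  SL: ${r.stop}`,
    `TPs: ${(r.take_profits || []).join(', ')}`,
    `Size: ${r.position_size}`,
    report.disclaimer || '',
  ].join('\n');

  return [msg1, msg2]; // one payload per Telegram node
}
```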

How to Use

Import the workflow into your n8n workspace (or replicate the nodes shown in the JSON).

Add credentials:

CryptoCompare API Key — for reliable candlestick data.

LLM model credentials — e.g., Google Gemini / OpenAI, configured in the LangChain/LM node.

Telegram Bot Token & Chat ID — to send messages.

(Optional) AFK Crypto API key if you want to enrich data with wallet info later.

Node mapping & endpoints:

Fetch_1m → GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60

Fetch_5m → GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60&aggregate=5

Fetch_1h → GET https://min-api.cryptocompare.com/data/v2/histohour?fsym=SOL&tsym=USDT&limit=60

Merge → combine the three responses into a single payload.

Transcribe (code) → extract last close as current price and attach the arrays.

AI Agent → pass the structured prompt (system message instructs exact JSON structure).

Parse AI Output → extract the fenced JSON code block from the agent's reply and JSON.parse it.

Telegram nodes → format and send two messages (timeframes and recommendation).

Adjust analysis frequency: default is hourly — change the Schedule Trigger node as desired.

Deploy and activate: the workflow will post an AI-driven SOL/USDT market analysis to your Telegram hourly.

(Optional) Extend This Workflow

Add price / orderbook enrichment (e.g., AFK price endpoints or exchange orderbook) to improve context.

Add wallet exposure checks (AFK wallet balances) to tailor position sizing suggestions.

Store AI reports in Notion / Google Sheets for historical auditing and backtesting.

Add alert filtering to only post when the LLM flags high-confidence signals or confluence across timeframes.

Expose Telegram commands to request on-demand analysis (e.g., /analyze now 5m).

Add risk management logic to convert LLM recommendation into automated orders (careful — requires manual review and stronger safety controls).
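The alert-filtering extension above could be a short Code or IF node gate before the Telegram nodes. A sketch, assuming the agent reports a numeric `confidence` and a per-timeframe `bias` field (both hypothetical names):

```javascript
// Hypothetical filter: only forward the report when confidence is high
// and at least two timeframes agree on direction. Field names are
// illustrative assumptions.
function shouldPost(report, minConfidence = 0.7) {
  const biases = Object.values(report.timeframes || {}).map((t) => t.bias);
  const bullish = biases.filter((b) => b === 'bullish').length;
  const bearish = biases.filter((b) => b === 'bearish').length;
  const confluence = Math.max(bullish, bearish) >= 2;
  return (report.confidence || 0) >= minConfidence && confluence;
}
```

Returning `false` here would route the run to a no-op branch, so Telegram only receives high-conviction setups.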

Safety Mechanisms

Explicit system prompt — forces AI to output only the exact JSON structure to avoid free-form text parsing errors.

JSON parser node — validates the agent response and throws if malformed before any downstream action.

Read-only market analysis — the workflow only reports by default (no auto-trading), reducing operational risk.

Credentials gated — ensure LLM and Telegram credentials are stored securely in n8n.

Disclaimer — every report includes a legal/financial disclaimer from the agent.

Requirements

CryptoCompare API Key (for minute/hour candlesticks)

LLM model credentials (Google Gemini / OpenAI / other supported model in your LangChain node)

Telegram Bot Token + Chat ID (where analysis messages are posted)

Optional: AFK Crypto API key if you plan to add wallet/position context

n8n instance with: HTTP Request, Code, Merge, LangChain/Agent, and Telegram nodes enabled

AFK / External APIs Used

CryptoCompare Candles:

GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60 (1m)

GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60&aggregate=5 (5m)

GET https://min-api.cryptocompare.com/data/v2/histohour?fsym=SOL&tsym=USDT&limit=60 (1h)

Telegram Bot API – via n8n Telegram node.

LLM / LangChain – your chosen LLM provider (configured in the workflow).

Summary

The Solana (SOL/USDT) Multi-Timeframe AI Market Analyzer (Telegram) gives you hourly, professional multi-timeframe technical analysis generated by an LLM agent using real candlestick data from CryptoCompare. It combines the speed of automated data collection with the structure and reasoning of an AI analyst, delivering clear trading recommendations and a timestamped analysis to your Telegram chat — ideal for traders who want reliable, concise market intelligence without manual charting.


Our website: https://afkcrypto.com/
Our blog: https://www.afkcrypto.com/blog

Solana (SOL) Trading Recommendations with Multi-Timeframe Analysis using Gemini & Telegram

This n8n workflow provides automated trading recommendations for Solana (SOL) by performing multi-timeframe analysis using the Google Gemini AI model and delivering the insights directly to a Telegram chat. It simplifies the process of staying informed about potential trading opportunities without constant manual monitoring.

What it does

This workflow automates the following steps:

  1. Schedules Execution: The workflow runs on a predefined schedule (e.g., daily, hourly) to fetch the latest data.
  2. Fetches Market Data: It makes an HTTP request to a cryptocurrency API to retrieve real-time market data for Solana (SOL).
  3. Analyzes with AI: The fetched data is then passed to a Google Gemini Chat Model via an AI Agent, which performs multi-timeframe analysis to identify potential trading recommendations.
  4. Generates Recommendations: The AI Agent processes the analysis and generates actionable trading recommendations based on the market data.
  5. Formats Output: The raw output from the AI is formatted into a clear and readable message.
  6. Delivers to Telegram: Finally, the formatted trading recommendations are sent as a message to a specified Telegram chat.

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running instance of n8n.
  • Telegram Bot Token: A Telegram bot token and the chat ID where you want to receive messages.
  • Google Gemini API Key: An API key for the Google Gemini AI model.
  • Cryptocurrency API Endpoint: An API endpoint to fetch Solana (SOL) market data. (The current HTTP Request node is generic; you'll need to configure it for a specific crypto API such as CoinGecko or Binance.)

Setup/Usage

  1. Import the Workflow: Download the provided JSON and import it into your n8n instance.
  2. Configure Credentials:
    • Telegram: Set up a Telegram credential with your Bot Token.
    • Google Gemini Chat Model: Configure the Google Gemini Chat Model node with your Gemini API key.
  3. Configure HTTP Request:
    • Update the "HTTP Request" node with the URL of your chosen cryptocurrency API to fetch Solana (SOL) data.
    • Adjust headers or query parameters as needed for authentication or specific data requests.
  4. Configure AI Agent:
    • Review the "AI Agent" node's prompt to ensure it aligns with the type of multi-timeframe analysis and recommendations you expect. You may need to refine the prompt based on the specific data structure returned by your chosen crypto API.
  5. Configure Telegram:
    • In the "Telegram" node, specify the Chat ID where you want the recommendations to be sent.
  6. Activate the Workflow: Enable the workflow to start running on its defined schedule.
  7. Adjust Schedule (Optional): Modify the "Schedule Trigger" node to change how often the workflow runs (e.g., every hour, every 4 hours, daily).
