Make.com Automation

My 16-Module Make.com Automation That Publishes Itself Every Week

Every Monday, a scenario I built fires automatically. It queries Google Search Console. It pulls Google Analytics 4. It saves a snapshot to Airtable, reads back the full historical record, feeds everything into Claude with a precision prompt, gets back a complete HTML dashboard with Chart.js trend lines, and pushes that file to GitHub — which triggers a deploy. The dashboard is live. I clicked nothing. Here's the full architecture.

Why I Built This

For the first year of running this site, checking performance meant logging into Google Search Console, opening GA4, looking at the numbers, and trying to hold the picture together in my head long enough to decide what it meant. It worked, in the way that manually doing something always technically works. But it was fragmented, irregular, and easy to skip when the week got busy — which is most weeks.

What I wanted was something closer to a CFO's weekly dashboard: a single document that gave me the current state of the site, trend lines going back several weeks, and one clear callout about what I should pay attention to. A document that was always fresh, always current, and required no effort to produce.

That's what this automation does. It produces the document. Every week. Without me touching it.

16
Make.com modules in the scenario that runs weekly — pulling live data from Google Search Console and GA4, saving historical snapshots, calling Claude, generating a Chart.js dashboard, and pushing it live to GitHub automatically.

The Full Pipeline — Module by Module

Here's the complete data flow. I'll walk through each stage in plain language — no prior Make.com experience required to follow this.

Stage 1: Trigger and Data Pull

1
Scheduler (weekly trigger)

The scenario fires every Monday morning on a schedule. No webhook, no manual trigger — just a clock.

2
Google Search Console — Query Performance

Pulls the past 28 days of data: total clicks, total impressions, average CTR, average position. Also pulls the top 10 queries by click volume.
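Under the hood this module is a call to the Search Console `searchAnalytics.query` endpoint. A minimal sketch of the request body it sends — function name and row limit are illustrative, and Make.com handles the OAuth token for you:

```python
import datetime

def gsc_query_payload(days=28, row_limit=10):
    """Build a searchAnalytics.query body covering the last N days."""
    end = datetime.date.today()
    start = end - datetime.timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["query"],  # the top-queries breakdown
        "rowLimit": row_limit,
    }

# POST https://searchconsole.googleapis.com/webmasters/v3/sites/{SITE_URL}/searchAnalytics/query
# with an Authorization: Bearer <token> header and this payload as JSON.
```

Clicks, impressions, CTR, and position come back per row; summing and averaging over the rows gives the headline totals.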

3
Google Search Console — Page Performance

Pulls performance broken down by page URL — so I can see which specific posts are driving traffic vs. which are sitting flat.

4
Google Analytics 4 — Core Metrics

Pulls users, sessions, pageviews, engagement rate, and average engagement time for the same 28-day window.
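The GA4 side is one `runReport` call against the Data API. A sketch of the request body, assuming the v1beta endpoint — the exact metric set here is illustrative, though these are all real GA4 metric names:

```python
def ga4_report_body(days=28):
    """Request body for the GA4 Data API runReport method (v1beta)."""
    return {
        "dateRanges": [{"startDate": f"{days}daysAgo", "endDate": "today"}],
        "metrics": [
            {"name": "activeUsers"},
            {"name": "sessions"},
            {"name": "screenPageViews"},
            {"name": "engagementRate"},
            {"name": "userEngagementDuration"},  # divide by sessions for avg engagement time
        ],
    }

# POST https://analyticsdata.googleapis.com/v1beta/properties/{PROPERTY_ID}:runReport
```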

Stage 2: Snapshot and History

5
Data Aggregator

Combines the GSC and GA4 data into a single structured object with a timestamp. This is what gets saved to Airtable.
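Conceptually the aggregator is a flatten-and-stamp step. A minimal sketch — the field names are illustrative, not the actual Airtable schema:

```python
import datetime

def aggregate_snapshot(gsc, ga4):
    """Merge the GSC and GA4 pulls into one flat record keyed by timestamp."""
    return {
        "timestamp": datetime.date.today().isoformat(),
        "clicks": gsc["clicks"],
        "impressions": gsc["impressions"],
        "ctr": gsc["ctr"],
        "position": gsc["position"],
        "users": ga4["users"],
        "sessions": ga4["sessions"],
        "pageviews": ga4["pageviews"],
        "engagement_rate": ga4["engagement_rate"],
    }
```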

6
Airtable — Save Snapshot

Writes the aggregated data as a new record in the site tracker table, with the timestamp as the primary field. Each weekly run adds one record.

7
Airtable — Read Full History

Reads back all rows in the table — not just this week, but the entire history. This is what enables trend lines in the dashboard rather than just a point-in-time snapshot.

8
History Aggregator

Structures the historical data into the format Claude's prompt expects — ordered arrays of weekly values for each metric, ready to be embedded in the API call.
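The transformation is a pivot: rows of weekly snapshots become one array per metric. A sketch under the same illustrative schema as above:

```python
def history_to_series(rows):
    """Pivot Airtable rows (one dict per week) into per-metric arrays, oldest first."""
    rows = sorted(rows, key=lambda r: r["timestamp"])
    metrics = [k for k in rows[0] if k != "timestamp"]
    return {
        "labels": [r["timestamp"] for r in rows],
        **{m: [r[m] for r in rows] for m in metrics},
    }
```

The `labels` array becomes the x-axis of every Chart.js chart; each metric array becomes one dataset.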

Stage 3: Claude API Call

9
HTTP — Claude API Request

This is the centerpiece. A precision-engineered 400-word prompt sends the current week's data plus the full historical arrays to Claude. The prompt specifies exactly what to produce: a complete, self-contained HTML file using a defined CSS structure, Chart.js for the trend charts, a key insight callout box, and a specific format for the data table.
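In HTTP terms this module is a POST to the Anthropic Messages API. A sketch of the request body it assembles — the model name, token limit, and template variables are illustrative, not the scenario's actual values:

```python
import json

def claude_request_body(current, history, prompt_template):
    """Build a Messages API body embedding this week's data plus the history."""
    return {
        "model": "claude-sonnet-4-5",   # illustrative model name
        "max_tokens": 8000,
        "messages": [{
            "role": "user",
            "content": prompt_template.format(
                current=json.dumps(current),
                history=json.dumps(history),
            ),
        }],
    }

# POST https://api.anthropic.com/v1/messages
# headers: x-api-key, anthropic-version: 2023-06-01, content-type: application/json
```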

10
Response Parser

Extracts the HTML content from Claude's JSON response. Claude returns the full dashboard HTML — sometimes 300+ lines — and this module pulls it out cleanly.
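The Messages API returns the text inside a `content` array, so the parser is mostly a path lookup, plus a guard for the case where the model wraps the HTML in a markdown fence anyway. A minimal sketch:

```python
def extract_html(response_json):
    """Pull the dashboard HTML out of a Messages API response."""
    text = response_json["content"][0]["text"]
    # Strip a markdown code fence if the model added one despite instructions.
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return text.strip()
```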

Stage 4: GitHub Publish

11
GitHub — Get Current File SHA

Before you can update a file via the GitHub API, you need the SHA of the current version. This module fetches it.

12
Base64 Encoder

GitHub's API requires file content to be Base64-encoded. This module encodes the HTML from Claude.

13
GitHub — Update File

Pushes the encoded HTML to the repo as a commit, using the SHA from module 11 to identify the file being replaced. The commit triggers GitHub Actions.
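Modules 11–13 together implement GitHub's contents API update flow. A sketch of the payload side, assuming the standard `PUT /repos/{owner}/{repo}/contents/{path}` endpoint — the commit message is illustrative:

```python
import base64

def encode_content(html: str) -> str:
    """GitHub's contents API expects Base64 text, not raw bytes."""
    return base64.b64encode(html.encode("utf-8")).decode("ascii")

def update_payload(html, sha, message="weekly dashboard update"):
    """Body for PUT /repos/{owner}/{repo}/contents/{path}."""
    return {"message": message, "content": encode_content(html), "sha": sha}

# Step 1 (module 11): GET the same /contents/{path} URL and read "sha" from the JSON.
# Step 2 (modules 12-13): PUT update_payload(html, sha) with an
#   Authorization: Bearer <token> header. The resulting commit triggers Actions.
```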

14
GitHub Actions (automatic)

The push triggers the existing deploy workflow — FTP upload to the host. The dashboard file goes live within about 60 seconds of the commit.

15
Error Handler

If the GitHub update fails (rate limit, auth issue, network error), this module catches the error and logs it to Airtable rather than silently failing.

16
Completion Log

Writes a completion record to Airtable with timestamp, status, and a note on whether the Claude response was within expected length bounds.

~60s
From scheduled trigger to live dashboard. The entire pipeline — data pull, Airtable snapshot, Claude API call, GitHub push, FTP deploy — completes in about a minute every week with no human involvement.

The Prompt Engineering That Makes It Work

The most important single module in the whole scenario is the Claude API call — and the most important part of that module is the prompt. It took real iteration to get right.

The prompt needs to do several things simultaneously: instruct Claude to produce valid HTML (not markdown, not prose), specify the exact CSS class names that match the dashboard's design system, tell it how to structure the Chart.js data arrays from the historical records I'm passing in, define the format and content of the insight callout, and constrain the response to a specific length range that the response parser can handle reliably.

Early versions produced dashboards with chart data that wouldn't render — the timestamp parsing was off and Chart.js couldn't interpret the date labels. Claude Code helped me diagnose this: I described the rendering failure, it read the generated HTML, identified that the timestamp format in the arrays didn't match what Chart.js expected, and suggested a fix to both the aggregator logic and the prompt instruction. That fix has held for every weekly run since.

The prompt is now stable enough that I rarely touch it. When the output drifts — a chart label that's slightly off, a callout that misses the key trend — I update the prompt, run a test execution, and confirm the output is back on target. That's a 10-minute session, not a rebuild.

What It Actually Feels Like to Run This

Honestly? Like nothing. That's the point. On Monday mornings I'll sometimes remember to check whether the new dashboard is up, and it always is. I didn't do anything. The scenario ran. The data is fresh. The chart lines updated.

There's something genuinely strange about seeing a Chart.js trend line that reflects last week's traffic data, knowing that no human produced it — that the entire pipeline from raw API data to rendered HTML was assembled and published by a system running on a schedule. I built that system, but I didn't run it. I don't have to run it. It just runs.

For someone building in the margins of a full-time job and family life, this is what automation at this level actually delivers: not time savings in the sense of doing the same task faster, but the complete removal of a recurring task from your mental queue. The dashboard exists. It's current. It didn't cost me Monday morning.

How Claude Code Contributed to This Build

This scenario didn't emerge fully formed. It was built iteratively, and several of the harder problems were solved with Claude Code in the loop.

The Airtable schema — deciding which fields to store, how to structure the historical arrays, what format would work cleanly in the Claude prompt — was sketched out in a Cowork session before I moved to Code for the implementation. When the Chart.js rendering broke, it was a Claude Code session that diagnosed it. When I needed to add the error handler after a one-off GitHub API failure caused the scenario to run silently without a dashboard, Claude wrote the error-logging module map and explained exactly how to wire it in.

The scenario is documented in full in my site reference doc — every module, every field, every data structure. That documentation was also produced with Claude, from a description of the architecture. It's the kind of structured technical writeup that takes hours manually and took one session with Claude to produce.

If you're thinking about building something like this, start with the reference doc system first — that's what makes iterating on a complex multi-module scenario tractable across sessions.

⚡ Try Make.com Free — No Credit Card Required

Free plan: 1,000 operations/month.