Why I Built This
For the first year of running this site, checking performance meant logging into Google Search Console, opening GA4, looking at the numbers, and trying to hold the picture together in my head long enough to decide what it meant. It worked, in the way that doing something manually always technically works. But it was fragmented, irregular, and easy to skip when the week got busy, which is most weeks.
What I wanted was something closer to a CFO's weekly dashboard: a single document that gave me the current state of the site, trend lines going back several weeks, and one clear callout about what I should pay attention to. A document that was always current and required no effort to produce.
That's what this automation does. It produces the document. Every week. Without me touching it.
The Full Pipeline — Module by Module
Here's the complete data flow. I'll walk through each stage in plain language — no prior Make.com experience required to follow this.
Stage 1: Trigger and Data Pull
The scenario fires every Monday morning on a schedule. No webhook, no manual trigger — just a clock.
From Search Console, it pulls the past 28 days of data: total clicks, total impressions, average CTR, and average position, along with the top 10 queries by click volume.
It also pulls Search Console performance broken down by page URL, so I can see which specific posts are driving traffic and which are sitting flat.
From GA4, it pulls users, sessions, pageviews, engagement rate, and average engagement time for the same 28-day window.
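For orientation, here is a rough sketch of the shapes this stage hands downstream, plus the window calculation. The field names, and the assumption that the window ends the day before the run, are mine, not the literal Make.com module outputs.

```typescript
// Rough TypeScript shapes for what Stage 1 collects. Field names and the
// exact window boundaries are my own shorthand, not the Make.com outputs.

interface SearchConsoleSummary {
  clicks: number;           // total clicks over the 28-day window
  impressions: number;      // total impressions over the 28-day window
  ctr: number;              // average click-through rate
  position: number;         // average position
  topQueries: { query: string; clicks: number }[];               // top 10 by clicks
  byPage: { url: string; clicks: number; impressions: number }[];
}

interface AnalyticsSummary {
  users: number;
  sessions: number;
  pageviews: number;
  engagementRate: number;
  avgEngagementTimeSec: number;
}

// The 28-day reporting window, assumed here to end the day before the run.
function reportWindow(runDate: Date): { start: string; end: string } {
  const end = new Date(runDate);
  end.setDate(end.getDate() - 1);
  const start = new Date(end);
  start.setDate(start.getDate() - 27);
  const iso = (d: Date) => d.toISOString().slice(0, 10);
  return { start: iso(start), end: iso(end) };
}
```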
Stage 2: Snapshot and History
Combines the GSC and GA4 data into a single structured object with a timestamp. This is what gets saved to Airtable.
Writes the aggregated data as a new row in the site tracker table. The timestamp is the primary key. Each weekly run adds one row.
Reads back all rows in the table — not just this week, but the entire history. This is what enables trend lines in the dashboard rather than just a point-in-time snapshot.
Structures the historical data into the format Claude's prompt expects — ordered arrays of weekly values for each metric, ready to be embedded in the API call.
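A minimal sketch of that aggregation step, assuming a simplified row shape for the Airtable records; the real fields include more metrics and different names.

```typescript
// Turn the full Airtable history into ordered per-metric arrays for the prompt.
// Row and field names here are illustrative assumptions, not the real schema.

interface WeeklyRow {
  timestamp: string;    // primary key of the tracker table (format assumed ISO-like)
  clicks: number;
  impressions: number;
  users: number;
  sessions: number;
}

interface PromptSeries {
  labels: string[];     // one entry per weekly run, oldest first
  clicks: number[];
  impressions: number[];
  users: number[];
  sessions: number[];
}

function buildSeries(rows: WeeklyRow[]): PromptSeries {
  // Sort oldest to newest so the chart arrays read left to right in time.
  const ordered = [...rows].sort((a, b) => a.timestamp.localeCompare(b.timestamp));
  return {
    labels: ordered.map((r) => r.timestamp),
    clicks: ordered.map((r) => r.clicks),
    impressions: ordered.map((r) => r.impressions),
    users: ordered.map((r) => r.users),
    sessions: ordered.map((r) => r.sessions),
  };
}
```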
Stage 3: Claude API Call
This is the centerpiece. The module sends the current week's data plus the full historical arrays to Claude, wrapped in a carefully tuned 400-word prompt. The prompt specifies exactly what to produce: a complete, self-contained HTML file using a defined CSS structure, Chart.js for the trend charts, a key insight callout box, and a specific format for the data table.
Extracts the HTML content from Claude's JSON response. Claude returns the full dashboard HTML — sometimes 300+ lines — and this module pulls it out cleanly.
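Outside Make.com, the call and the extraction step together would look roughly like this against the Anthropic Messages API. The model name and token limit are placeholders, not the scenario's actual settings.

```typescript
// Plain-fetch sketch of the Claude call and the HTML extraction.

async function generateDashboardHtml(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5",   // placeholder model ID
      max_tokens: 8000,             // placeholder; the dashboard can run 300+ lines
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Claude API error: ${res.status}`);

  const data = await res.json();
  // The dashboard comes back as the text of the first content block.
  const html: string = data.content?.[0]?.text ?? "";
  if (!html.toLowerCase().includes("<html")) {
    throw new Error("Response did not look like a complete HTML document");
  }
  return html;
}
```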
Stage 4: GitHub Publish
Before you can update a file via the GitHub API, you need the SHA of the current version. This module fetches it.
GitHub's API requires file content to be Base64-encoded. This module encodes the HTML from Claude.
Pushes the encoded HTML to the repo as a commit, using the SHA from module 11 to identify the file being replaced. The commit triggers GitHub Actions.
The push triggers the existing deploy workflow — FTP upload to the host. The dashboard file goes live within about 60 seconds of the commit.
If the GitHub update fails (rate limit, auth issue, network error), this module catches the error and logs it to Airtable rather than silently failing.
Writes a completion record to Airtable with timestamp, status, and a note on whether the Claude response was within expected length bounds.
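Collapsed into plain code, the GitHub stage looks roughly like this. Repository, file path, and commit message are placeholders; the thrown errors mark the point where the scenario's error handler logs to Airtable instead of failing silently.

```typescript
// Sketch of the publish stage: read the current file's SHA, then PUT the
// Base64-encoded HTML back through the GitHub contents API.

async function publishDashboard(html: string, token: string): Promise<void> {
  const repo = "owner/site-repo";          // placeholder
  const path = "dashboard/index.html";     // placeholder
  const url = `https://api.github.com/repos/${repo}/contents/${path}`;
  const headers = {
    Authorization: `Bearer ${token}`,
    Accept: "application/vnd.github+json",
  };

  // The contents API refuses an update without the existing file's SHA.
  const current = await fetch(url, { headers });
  if (!current.ok) throw new Error(`Could not read current file: ${current.status}`);
  const { sha } = await current.json();

  // GitHub requires the new content to be Base64-encoded.
  const update = await fetch(url, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      message: "Weekly dashboard update",  // placeholder commit message
      content: Buffer.from(html, "utf-8").toString("base64"),
      sha,
    }),
  });
  if (!update.ok) throw new Error(`GitHub update failed: ${update.status}`);
  // From here, the existing GitHub Actions workflow handles the FTP deploy.
}
```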
The Prompt Engineering That Makes It Work
The single most important module in the whole scenario is the Claude API call, and the most important part of that module is the prompt. It took real iteration to get right.
The prompt needs to do several things simultaneously: instruct Claude to produce valid HTML (not markdown, not prose), specify the exact CSS class names that match the dashboard's design system, tell it how to structure the Chart.js data arrays from the historical records I'm passing in, define the format and content of the insight callout, and constrain the response to a specific length range that the response parser can handle reliably.
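To make that concrete, here is a skeleton of the kind of prompt the module assembles. The wording, the CSS class names, and the length limit are illustrative stand-ins, not the real 400-word prompt.

```typescript
// Skeleton of the dashboard prompt. Class names and constraints are examples only.

function buildPrompt(
  labels: string[],
  clicks: number[],
  users: number[],
  currentWeekJson: string,
): string {
  return [
    "Produce a complete, self-contained HTML file. Output only HTML, no markdown, no commentary.",
    "Use these CSS class names: .dashboard, .metric-card, .insight-callout, .data-table.",
    "Render trend charts with Chart.js, using these arrays exactly as given:",
    `labels: ${JSON.stringify(labels)}`,
    `clicks: ${JSON.stringify(clicks)}`,
    `users: ${JSON.stringify(users)}`,
    "Include one key-insight callout summarising the most important week-over-week change.",
    `Current week data: ${currentWeekJson}`,
    "Keep the document under 400 lines.",   // stands in for the real length constraint
  ].join("\n");
}
```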
Early versions produced dashboards with chart data that wouldn't render — the timestamp parsing was off and Chart.js couldn't interpret the date labels. Claude Code helped me diagnose this: I described the rendering failure, it read the generated HTML, identified that the timestamp format in the arrays didn't match what Chart.js expected, and suggested a fix to both the aggregator logic and the prompt instruction. That fix has held for every weekly run since.
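For anyone who hits the same wall, the general shape of the fix is to normalise every timestamp into a plain date string before it reaches the prompt, and let the dashboard use Chart.js's default category axis, which treats labels as opaque strings. This sketch is my reconstruction of that idea, not the exact change made in the scenario.

```typescript
// One possible shape of the fix: normalise timestamps to plain "YYYY-MM-DD"
// strings before they go into the prompt, so the generated dashboard can use
// Chart.js's default category axis rather than parsing dates itself.

function toChartLabel(raw: string): string {
  const d = new Date(raw);                     // handles ISO timestamps from the tracker table
  if (Number.isNaN(d.getTime())) return raw;   // leave anything unparseable untouched
  return d.toISOString().slice(0, 10);         // e.g. "2024-06-03"
}

// Example: full ISO timestamps become short date labels.
const labels = ["2024-06-03T08:00:00.000Z", "2024-06-10T08:05:12.000Z"].map(toChartLabel);
console.log(labels); // ["2024-06-03", "2024-06-10"]
```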
The prompt is now stable enough that I rarely touch it. When the output drifts — a chart label that's slightly off, a callout that misses the key trend — I update the prompt, run a test execution, and confirm the output is back on target. That's a 10-minute session, not a rebuild.
What It Actually Feels Like to Run This
Honestly? Like nothing. That's the point. On Monday mornings I'll sometimes remember to check whether the new dashboard is up, and it always is. I didn't do anything. The scenario ran. The data is fresh. The chart lines updated.
There's something genuinely strange about seeing a Chart.js trend line that reflects last week's traffic data, knowing that no human produced it — that the entire pipeline from raw API data to rendered HTML was assembled and published by a system running on a schedule. I built that system, but I didn't run it. I don't have to run it. It just runs.
For someone building in the margins of a full-time job and family life, this is what automation at this level actually delivers: not time savings in the sense of doing the same task faster, but the complete removal of a recurring task from your mental queue. The dashboard exists. It's current. It didn't cost me Monday morning.
How Claude Code Contributed to This Build
This scenario didn't emerge fully formed. It was built iteratively, and several of the harder problems were solved with Claude Code in the loop.
The Airtable schema — deciding which fields to store, how to structure the historical arrays, what format would work cleanly in the Claude prompt — was sketched out in a Cowork session before I moved to Code for the implementation. When the Chart.js rendering broke, it was a Claude Code session that diagnosed it. When I needed to add the error handler after a one-off GitHub API failure caused the scenario to run silently without a dashboard, Claude wrote the error-logging module map and explained exactly how to wire it in.
The scenario is documented in full in my site reference doc — every module, every field, every data structure. That documentation was also produced with Claude, from a description of the architecture. It's the kind of structured technical writeup that takes hours manually and took one session with Claude to produce in full.
If you're thinking about building something like this, start with the reference doc system first — that's what makes iterating on a complex multi-module scenario tractable across sessions.