Automation

Building a Workflow Automation System That Runs While You Sleep

A single scenario is just a script. A system is alive. Here's the 5-layer architecture I use to build automation that actually handles real work.

🔗Affiliate disclosure: Some links here are affiliate links. If you sign up for Make.com through my link, I earn a small commission at no extra cost to you.

System vs. Scenario: What's the Difference?

A scenario is a path. Point A to Point B. Trigger happens, stuff runs, output appears.

A system is an organism. Data flows in from multiple sources. Gets processed by multiple scenarios. Gets stored for future reference. Gets delivered to different places. Gets monitored so you know when something breaks.

When I built my first automation, it was a scenario: webhook → parse data → send email. Done. Worked for one thing. Broke in seven ways I didn't anticipate.

By the time I built Shadow Hound, I was thinking like an architect, not a scripter. I needed something that could:

  • Accept resumes from multiple sources (web form, email, API)
  • Process them consistently even when OpenAI had rate limits
  • Store results so I could see what I'd already analyzed
  • Deliver feedback to users in different formats
  • Alert me when something failed

That's a system. And it looks completely different from a scenario.

The Architecture: Five Layers

Here's the model I use for everything now. Think of it like a building: foundation, walls, utilities, interior, security.

  1. Trigger Layer — How does work enter the system?
  2. Processing Layer — What happens to the work once it's inside?
  3. Storage Layer — Where do we remember what we did?
  4. Delivery Layer — How does the result get back to the user?
  5. Monitoring Layer — How do we know if something broke?

Most people skip layer 5. That's why they build systems that fail silently, and nobody finds out for a week.

73% of automation failures could be caught immediately with proper monitoring. Instead, we find out when a customer complains.

Trigger Layer: How Work Enters

The trigger is your system's front door. Be intentional about it. You can't process something you never see.

Make.com webhooks are my favorite trigger. They're flexible, reliable, and you control the input format completely. When someone submits a resume through my web form, it POSTs to a Make.com webhook with structured data. No ambiguity. No manual parsing.

But webhooks aren't the only option:

  • Scheduled triggers — Run this every Tuesday at 9am (good for batch processing, bad for responsiveness)
  • Email triggers — When an email arrives at this address, do this (good for distributed teams, hard to debug)
  • Watch folder triggers — When a file appears in Dropbox, process it (good for file-based workflows)
  • API polling — Check this API every 5 minutes for new data (necessary evil, inefficient but works)

I use webhooks for my public tools (Shadow Hound, Social Spark) because they're immediate and I control the interface. I use scheduled triggers for internal stuff (like weekly reports) because they're predictable.

The key rule: make your trigger explicit. Don't let work hide. If a user submits something, you should see it immediately (or know why it's delayed).
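To make that concrete, here is a sketch of what a structured webhook submission could look like. The field names, accepted sources, and validation rules are all illustrative assumptions, not anything Make.com requires:

```python
import json

def build_webhook_payload(email: str, resume_text: str, source: str) -> str:
    """Package one submission as the JSON body POSTed to the Make.com
    webhook. Rejecting bad input at the front door keeps ambiguity out
    of the processing layer."""
    if not resume_text.strip():
        raise ValueError("empty resume")
    if source not in ("web_form", "email", "api"):
        raise ValueError(f"unknown source: {source}")
    return json.dumps({
        "email": email,
        "resume_text": resume_text,
        "source": source,  # lets later layers see which front door was used
    })
```

Because every source funnels through the same payload shape, the processing layer never has to guess where a resume came from.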

Processing Layer: The Core Logic

This is where the actual work happens. For Shadow Hound, it's: validate resume → call OpenAI API → extract improvements → format response.

Keep processing simple. I mean it. Every branching condition you add is another place for the system to fail.

Shadow Hound's processing layer is straightforward:

  1. Check if resume is empty (it might be)
  2. Send it to OpenAI
  3. Parse the response
  4. Store it (more on this next)
  5. Signal completion to delivery layer

That's it. Five steps. No fancy conditional logic trying to detect "is this a good resume" or "should we process this differently based on the industry." Keep it dumb. Let OpenAI be smart.
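Those five steps can be sketched as one plain function. This is a minimal illustration, not Shadow Hound's actual implementation: the OpenAI call, storage, and delivery are passed in as stand-in callables, and all names are assumptions.

```python
def process_resume(resume_text, call_model, store, notify_delivery):
    """Run one resume through the five processing steps."""
    # 1. Check the resume isn't empty (it might be)
    if not resume_text.strip():
        return {"status": "rejected", "reason": "empty resume"}
    # 2. Send it to the model (call_model stands in for the OpenAI API call)
    raw = call_model(resume_text)
    # 3. Parse the response into a structured format
    result = {"status": "processed", "suggestions": raw.strip().splitlines()}
    # 4. Store it for the storage layer
    store(result)
    # 5. Signal completion to the delivery layer
    notify_delivery(result)
    return result
```

Note there is no industry detection and no quality scoring: the function stays dumb and lets the model be smart.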

Social Spark's processing layer is slightly more complex because it needs to generate different content for different platforms. But it's still straightforward: parse input → call OpenAI with platform-specific prompt → validate output → store → deliver.

When your processing layer has 25 conditional branches, your system is brittle. Redesign it.

Storage Layer: The System's Memory

This is the layer most solo builders skip, and it costs them dearly. You need to remember what you've done. For debugging. For deduplication. For analytics.

I use Airtable for everything. It's not fancy, but it's reliable and I can query it from anywhere.

Shadow Hound stores:

  • Raw resume data (for audit purposes)
  • Processing timestamp
  • OpenAI response
  • Which user requested it (so I don't double-process)
  • Delivery status (was it sent successfully?)

This sounds like overhead, but it's not. When a user says "I never got my results," I can see exactly what happened: was the resume received? Was it processed? Did delivery fail? Or did the user just lose the email?

Without storage, you're guessing. With storage, you're debugging.

Use a database. Doesn't have to be expensive. Airtable's free tier handles thousands of records. Google Sheets works for smaller operations (and it's searchable). The point is: build memory into your system from day one.
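With that memory in place, the "I never got my results" question becomes a lookup instead of a guess. Here is a sketch, using an in-memory list as a stand-in for the Airtable base; the field names are illustrative:

```python
def trace_submission(records, email):
    """Walk the logged runs for one user and report where things stopped."""
    runs = [r for r in records if r["email"] == email]
    if not runs:
        return "never received"
    last = max(runs, key=lambda r: r["timestamp"])
    if not last.get("response"):
        return "received but never processed"
    if not last.get("delivered"):
        return "processed but delivery failed"
    return "delivered"
```

Each branch corresponds to one of the debugging questions above: received, processed, delivered, or lost along the way.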

Delivery Layer: Results Going Out

Your system processed the work. Now what? Get the result back to the user.

Delivery mechanisms I use:

  • Email — Most reliable for one-off results. Subject lines matter. Include context.
  • Webhook back — If the original request came via API, POST the response back to the originating system.
  • Database entry — Store the result somewhere the user can query it (API, dashboard, etc.).
  • Google Doc/Sheet — Good for collaborative results that need human review.

Shadow Hound delivers via email because that's what users expect. Social Spark delivers back to Google Sheets because content creators want to review and edit before posting.

Here's the crucial part: build error handling into delivery. Email sending fails sometimes. Webhooks time out. The Google Sheets API hits rate limits. If delivery fails, you need to:

  1. Know it failed (not silent failure)
  2. Retry it (with backoff)
  3. Eventually alert a human if it keeps failing
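That retry-then-alert pattern is small enough to sketch directly. The default wait times echo the 5-minute / 30-minute schedule I use for Shadow Hound; `send` is any delivery function that raises on failure, and the names are illustrative:

```python
import time

def deliver_with_retry(send, payload, delays=(0, 300, 1800), alert=print):
    """Attempt delivery; on failure wait, retry, and finally alert a human.
    `delays` holds the seconds to sleep before each attempt."""
    last_error = None
    for delay in delays:
        time.sleep(delay)
        try:
            send(payload)
            return True            # delivered; no alert needed
        except Exception as exc:
            last_error = exc       # remember why, for the alert
    alert(f"delivery failed after {len(delays)} attempts: {last_error}")
    return False
```

The point is the shape, not the numbers: every failure either gets retried or reaches a human. Nothing disappears silently.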

I learned this the hard way. Launched Blog Post Generator with simple email delivery. Email provider had an outage one evening. Users didn't get their content. I found out three days later when someone complained. Now there's retry logic and I get an alert if three consecutive deliveries fail.

Monitoring Layer: How You Know When Things Break

It's tempting to build a system and forget about it. Don't. It will fail, and you won't notice.

My monitoring approach:

  • Execution logs — Every scenario run logs: input, output, any errors. I keep 90 days. Costs nothing. Saves everything.
  • Failure alerts — If a scenario fails three times in a row, I get an email immediately. Not a digest. Immediately.
  • Latency monitoring — If processing takes longer than expected (slower API, more data), I log it. Not an error, but something to watch.
  • Monthly reports — Total runs, success rate, top errors. I review it every month. Takes 10 minutes. Catches trends.
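The "three failures in a row" rule works anywhere you can keep a counter. A minimal sketch, where the alert callback stands in for sending the email and the class name is my own invention:

```python
class FailureAlerter:
    """Alert after N consecutive failures; a success resets the streak."""

    def __init__(self, threshold=3, alert=print):
        self.threshold = threshold
        self.alert = alert
        self.streak = 0

    def record(self, scenario, ok):
        """Log one run's outcome and fire the alert exactly once per streak."""
        if ok:
            self.streak = 0
            return
        self.streak += 1
        if self.streak == self.threshold:
            self.alert(f"{scenario}: {self.streak} consecutive failures")
```

A single failure is noise; three in a row is a pattern, and the `==` check means you get one immediate alert per streak rather than a flood.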

Make.com has built-in execution history. Use it. Every scenario you build should have its error path logged to a database or email. When something breaks, you want context, not just "failed."

The KPI Dashboard actually started as a monitoring layer tool. I was building so many scenarios, I needed visibility into how they were all performing. Now it's a product in its own right.

Putting It All Together: Shadow Hound Architecture

Let me walk you through exactly how Shadow Hound works as a complete system:

Trigger: User submits resume via web form. Form uses JavaScript to validate file type and POST resume text to a Make.com webhook.

Processing: Webhook receives data. Scenario validates it's not empty. Calls OpenAI API with a structured prompt asking for improvement suggestions. Parses response into structured format.

Storage: Everything gets written to Airtable: timestamp, resume text, OpenAI response, user email, processing status. If the same email submits twice in one week, we flag it (optional: skip reprocessing).
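The duplicate flag is just a one-week window over the stored runs. A sketch, with an in-memory list standing in for the Airtable rows and illustrative field names:

```python
from datetime import datetime, timedelta

def is_resubmission(records, email, now, window_days=7):
    """True if this email already has a processed run inside the window."""
    cutoff = now - timedelta(days=window_days)
    return any(r["email"] == email and r["processed_at"] >= cutoff
               for r in records)
```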

Delivery: Format OpenAI response nicely and send via email with a link back to the dashboard. Delivery is wrapped in a retry function — if email fails, try again in 5 minutes, then 30 minutes, then give up and alert me.

Monitoring: Every run creates an execution log. Failed scenarios send me an alert. Monthly, I review success rates and API costs.

That's a complete system. It handles failures. It's debuggable. It scales. It runs while I sleep.

Building Toward a System From Your First Scenario

You don't need to build all five layers on day one. But design for it from the start.

Build the scenario first. Make it work. Then:

  1. Week 1: Scenario is live and working (no storage yet, just producing results).
  2. Week 2: Add storage. Start logging every run to a spreadsheet. Costs nothing, saves debugging.
  3. Week 3: Add error handling to delivery. Make sure failures don't happen silently.
  4. Week 4: Add monitoring. Email yourself when something fails. Review logs once a week.

This progression took me years to figure out because I was building scenarios, not systems. Now I build like this automatically.

Your first system will be simpler than Shadow Hound. Maybe it's: trigger → one API call → email result. That's fine. Just ensure all five layers exist, even if they're minimal.
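Even that minimal version can keep all five layers visible. A sketch in one function: the trigger is whatever invokes `run`, and everything else is a plain callable you can swap out later (all names are illustrative):

```python
def run(payload, call_api, send_email, log, alert):
    """One end-to-end pass: process -> store -> deliver -> monitor."""
    entry = {"input": payload, "error": None, "delivered": False}
    try:
        result = call_api(payload)      # processing: the one API call
        entry["output"] = result        # storage: remember what happened
        send_email(result)              # delivery: result goes to the user
        entry["delivered"] = True
    except Exception as exc:
        entry["error"] = str(exc)       # monitoring: never fail silently
        alert(entry["error"])
    log(entry)                          # every run leaves a record
    return entry
```

Each layer is one line here, but it's a line that exists, which is the whole argument of this post.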

Because the moment you forget layer 5 (monitoring), something will break, and you won't know until a customer tells you.

⚡ Try Make.com Free — No Credit Card Required

Free plan: 1,000 operations/month.