
I'm a Working Dad With a Full-Time Job. Here's Everything I've Shipped With Claude.

I want to be upfront about something before this post gets going: I am not a developer. I work in technology — I run various programs and projects at a large financial institution — but I'm not a coder. I have a family. I build in the evenings, in small blocks, when the house is quiet. What follows is the honest inventory of what I've shipped using Claude over the past several months — and what I think it means for anyone who has ever said "I would build something, but I don't have the coding skills or the time."

The List, Up Front

I keep having to say this out loud to believe it. Here's what exists now:

  • A completely custom static website — migrated off WordPress and rebuilt from scratch, with a bespoke design system, no CMS, no framework, nothing inherited
  • 10 live, working apps deployed at lancecalamita.com
  • A 16-module Make.com automation that pulls Google Search Console and GA4 data weekly, runs it through Claude, and publishes a live analytics dashboard to GitHub automatically
  • An SEO audit dashboard and a complete Make.com pipeline documentation suite
  • A blog-post factory with a 10-assertion Python eval harness that generates verified, publish-ready HTML from a two-sentence brief
  • 40+ published blog posts
  • Two Progressive Web Apps installable on your phone

None of this existed before I started using Claude seriously. None of it was possible within my previous constraints — limited evenings, no dev background, no server-side code knowledge, no budget to hire anyone.

Here's the story of how it happened.

10
Live apps deployed and working — built by a non-developer in evening blocks around a full-time job and family, using Claude as the primary building tool.

The 10 Apps

Every app is a Make.com automation on the backend — a webhook receives the user's input, triggers a multi-step scenario that calls Claude, saves the output to Airtable, and delivers the result via both the web and a follow-up email. The front ends are pure HTML, CSS, and JavaScript. Nothing server-side. Nothing to maintain.
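For readers who want the shape of that front-end-to-webhook handoff, here's a minimal Python sketch. The URL and field names are placeholders — a real Make.com webhook has its own unique ID, and the payload keys must match whatever the scenario's webhook module is mapped to. (My actual front ends do this with a JavaScript fetch; Python just keeps the examples in this post consistent.)

```python
import json
import urllib.request

# Placeholder URL -- a real Make.com webhook has its own unique ID.
WEBHOOK_URL = "https://hook.us1.make.com/your-unique-webhook-id"

def build_payload(email: str, user_input: str) -> bytes:
    """Shape the JSON body the scenario's webhook module will parse.
    Field names here are illustrative; they must match your scenario."""
    return json.dumps({"email": email, "input": user_input}).encode("utf-8")

def submit(email: str, user_input: str) -> int:
    """POST to the webhook. Make.com acknowledges immediately while the
    scenario keeps running (Claude call, Airtable save, follow-up email)."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_payload(email, user_input),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The design choice worth noting: because the webhook acknowledges instantly and the heavy lifting happens asynchronously inside Make.com, the front end never waits on Claude — which is why a static page with no server can feel like a real app.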

What the list doesn't capture: each of these has real users. Real Airtable rows. Real email deliveries. They're not demos — they're tools people actually use, some of which I use myself every week.

The Site Rebuild

Before I started building seriously with Claude, my site was WordPress. Kadence theme. Reasonable enough — it did the job. But every change was a fight with the CMS. Plugins conflicting. Template overrides stacking. Zero control over what the HTML actually looked like.

I migrated everything to a fully custom static site — pure HTML, CSS, and JavaScript, with a design system built from scratch. Color palette defined as CSS custom properties. Two fonts: Playfair Display for headings, Inter for body. Grid system. Card components. A glassmorphism nav. Responsive mobile layouts with accessible hamburger menus. PWA manifests and service workers at the repo root. GitHub Actions FTP deployment that publishes changes in under 60 seconds.

Every line of that is intentional. Nothing inherited from a theme. Nothing I don't understand. When something breaks, I know where to look. That shift — from a site I was configuring to a site I own — is one of the things I'm most proud of in this whole project.

The Self-Publishing Pipeline

The automation I still can't quite believe I built: a 16-module Make.com scenario that runs every week and publishes a live analytics dashboard without any input from me.

The pipeline: query Google Search Console → pull GA4 → save snapshot to Airtable → read back full historical record → feed to Claude API with a precision prompt → receive complete Chart.js HTML dashboard → Base64 encode → push to GitHub via API → trigger deploy → live in 60 seconds.
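To make the tail end of that chain concrete, here's a rough Python sketch of the Base64-encode-and-push step against the GitHub Contents API. In the real scenario this is a Make.com HTTP module, not Python, and `owner`, `repo`, and `token` are placeholders; updating an existing file also requires the current file's SHA, which I've omitted for brevity.

```python
import base64
import json
import urllib.request

def encode_dashboard(html: str) -> str:
    """GitHub's Contents API requires file bodies as Base64 text."""
    return base64.b64encode(html.encode("utf-8")).decode("ascii")

def push_to_github(owner: str, repo: str, path: str, html: str, token: str) -> dict:
    """PUT the encoded dashboard to the repo via the GitHub Contents API.
    A commit to the default branch then triggers the deploy workflow."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    body = {
        "message": "Weekly analytics dashboard update",
        "content": encode_dashboard(html),
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```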

I did not write the code for any of this. I described what I wanted. Claude helped me architect the module map, wrote the canonical API prompt, diagnosed a Chart.js rendering bug and fixed the aggregator logic, and produced the full technical documentation for the scenario. I wired the modules together in Make.com.

The dashboard updates itself. Every. Single. Week. I do nothing.

40+
Blog posts published — produced with a factory system that takes a two-sentence brief and returns a verified, publish-ready HTML file with 10 automated quality checks. Built without writing Python.
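To give a flavor of what that harness does, here's a trimmed-down Python sketch. These four checks are illustrative stand-ins, not the actual ten — the production harness is tailored to my site's templates — but the pattern is the same: every generated HTML file must return an empty failure list before it's allowed to publish.

```python
import re

def run_quality_checks(html: str) -> list[str]:
    """Return a list of failed checks; an empty list means publish-ready.
    These checks are illustrative stand-ins for the real ten."""
    failures = []
    if "<title>" not in html:
        failures.append("missing <title> tag")
    if not re.search(r'<meta\s+name="description"', html):
        failures.append("missing meta description")
    if html.count("<h1") != 1:
        failures.append("expected exactly one <h1>")
    if re.search(r"lorem ipsum|TODO", html, re.IGNORECASE):
        failures.append("placeholder text left in body")
    return failures
```

The point of the list-of-failures design (rather than a pass/fail boolean) is that when a post fails, the failure messages go straight back to Claude as revision instructions — the harness doubles as a feedback prompt.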

How the Time Actually Works

People ask: when do you find the time? The honest answer is: I don't find it. I use what's there.

I build in two to three evening blocks per week, usually 45 to 90 minutes each. I don't always have the energy for a productive session. Some evenings I open VS Code, stare at a half-finished feature, and close it again. That's fine. The sessions that work are productive enough that the output compounds.

The key thing Claude changed isn't the amount of time I have — it's what I can do with the time I have. Before, an evening session might produce a rough working draft of one small feature. Now, a well-prepared session with good context in place can produce a tested, deployed feature or a complete published blog post. The output per hour went up, not the hours.

That said: this system has a setup cost. The reference document, the CLAUDE.md file, the blog-post skill, the eval harness — none of those were free. They took sessions to build. But they're infrastructure investments: you pay once, they pay back across every session after. The ratio is very favorable once the system is running.

What This Is Not

I want to be clear about what I'm not claiming here. I am not saying Claude does all the work and I'm just along for the ride. Every piece of this system required judgment calls that only I could make: what to build, what architecture makes sense, what the quality bar is, when something is good enough to ship and when it needs another iteration.

Claude makes mistakes. I've debugged AI-generated HTML that had invisible content on mobile. I've fixed broken JavaScript from unescaped characters. I've rewritten prompts that produced outputs that technically ran but weren't what I wanted. The eval harness exists because I can't trust AI-generated code without checking it.

What I'm claiming is this: the gap between having an idea and having a working implementation is dramatically smaller than it used to be. Not gone. Smaller. Smaller in a way that, for someone building in the margins of a full life, is the difference between "I might get to that someday" and "I shipped that last Tuesday."

What This Means for You

If you've been following the AI hype cycle and wondering whether any of it is real: some of it is. Specifically, the part that says non-developers can build real, working software in reasonable amounts of time — that part is real. I'm living it.

The part that isn't real: the idea that you can use AI tools casually and get serious results. The people getting serious results are the ones investing in the infrastructure — the context systems, the quality checks, the iteration discipline. That investment is real and it takes time.

But if you're willing to make that investment, in the margins of a full life, the ceiling of what you can build has genuinely shifted. I don't know exactly how high it goes. I'm still finding out. Every week, something exists that didn't exist the week before.

If you want to go deeper on the specific tools and systems behind all of this: start with Claude Code in VS Code, then read how I handle persistent context across sessions. Those two things, more than anything else, are what make the inventory above possible.

⚡ Try Make.com Free — No Credit Card Required

Free plan: 1,000 operations/month.