Workflow automation isn't a no-code problem
The pitch is always 'drag boxes, ship workflows.' The reality is that durable, debuggable, multi-week processes need real engineering — even when the surface is visual.
Trillion Thoughts Engineering
Every two or three years, a new wave of "no-code workflow automation" tools sweeps through. The pitch is always the same: drag boxes onto a canvas, connect them with arrows, ship a workflow. No engineers required.
We've seen this play out across maybe two dozen customer engagements now, and the pattern is depressingly consistent. The first three workflows are a delight. The fourth is "weird". The seventh is unmaintainable. The tenth is the catalyst for an emergency project to rip the platform out and replace it with code.
What goes wrong
The failure mode is rarely the tool. It's the modelling. Specifically:
- Branches multiply. Real business processes have conditional paths. A canvas with seven decision diamonds and forty arrows is not easier to read than the equivalent function — it's harder.
- Versioning is hostile. What does it mean to "deploy" a new version of a workflow when there are 800 in-flight instances of the old one? Most no-code tools answer this badly.
- Testing is missing. You cannot unit-test a canvas. You can click through the happy path manually, but you can't lock in regression tests the way you can with a function.
- Observability is a black box. When something goes wrong, the logs say "step failed" and that's it. The hop from symptom to root cause goes through the vendor's UI, not through your tooling.
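To make the branching and testing points concrete, here's a minimal sketch. The refund-routing rules are invented for illustration, but the shape is the point: as a plain function, every decision diamond becomes one readable line, and every branch becomes one assertion you can keep forever.

```python
from dataclasses import dataclass

@dataclass
class Refund:
    amount: float
    customer_tier: str

def route_refund(refund: Refund) -> str:
    # The same logic a canvas would draw as four decision
    # diamonds and a dozen arrows, in fifteen readable lines.
    if refund.amount <= 50:
        return "auto_approve"
    if refund.customer_tier == "enterprise" and refund.amount <= 500:
        return "auto_approve"
    if refund.amount > 5000:
        return "finance_review"
    return "manual_review"

# Regression tests that lock every branch in place -- the thing
# a drag-and-drop canvas gives you no way to write.
assert route_refund(Refund(30, "free")) == "auto_approve"
assert route_refund(Refund(400, "enterprise")) == "auto_approve"
assert route_refund(Refund(400, "free")) == "manual_review"
assert route_refund(Refund(9000, "enterprise")) == "finance_review"
```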
What actually works
Three principles, learned the hard way:
- Workflows are code. The orchestration layer should be a programming language with a workflow library on top, not a drawing tool with code escape hatches. Temporal, AWS Step Functions (with Lambda), and Inngest are good examples.
- Visual is for users, not for engineers. A visual builder for non-technical operators (ops people building notification rules, marketers building drip flows) is a perfectly legitimate product. A visual builder for engineers building revenue-critical processes is a trap.
- Durability is non-negotiable. Whatever you pick has to handle "the worker died mid-workflow" without your team thinking about it. If retries and resumes are something you have to design, you'll get it wrong eventually.
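What "the worker died mid-workflow" handling looks like under the hood can be sketched with a step journal: completed steps persist their results, and a resumed run skips anything already journaled instead of re-executing side effects. Real engines persist this journal server-side; the in-memory dict and the step names below are purely illustrative.

```python
class CrashedWorker(Exception):
    pass

def run_workflow(steps, journal, fail_at=None):
    """Run (name, fn) steps, skipping any step already journaled.

    If the worker dies partway through, the journal keeps the
    completed results; calling run_workflow again resumes rather
    than re-charging the card.
    """
    for name, fn in steps:
        if name in journal:      # finished on a previous attempt
            continue
        if name == fail_at:      # simulate the worker dying here
            raise CrashedWorker(name)
        journal[name] = fn()     # execute once, persist the result

executed = []
steps = [
    ("charge_card", lambda: executed.append("charge_card") or "txn_123"),
    ("send_receipt", lambda: executed.append("send_receipt") or "ok"),
]

journal = {}
try:
    run_workflow(steps, journal, fail_at="send_receipt")  # crash mid-workflow
except CrashedWorker:
    pass

run_workflow(steps, journal)  # resume: charge_card is NOT re-run
assert executed == ["charge_card", "send_receipt"]
```

The reason to buy rather than build is that everything this sketch hand-waves (where the journal lives, concurrent workers, timeouts, retries with backoff) is exactly the part you'll get wrong.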
The honest middle ground
We've shipped products where the public surface is a visual builder and the runtime underneath is durable code. That's our preferred shape: end-users get the canvas they want, and engineers get the runtime that doesn't wake them up. Zuzuflow is exactly this, and a good chunk of our consulting work is helping customers move toward it.
The right question isn't "code or no-code?" It's "who is using the builder, and who is on call when it breaks?"
If you're evaluating tools
Three questions to ask vendors that usually separate the wheat from the chaff:
- What happens to in-flight workflows when I deploy a new version?
- How do I write an automated test for a workflow before I deploy it?
- Show me the audit log for a single workflow execution. Where do the inputs and outputs of each step live?
If the answers are "we don't really do that" or "you can see it in our UI" — keep shopping.
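On the first question, the answer you want to hear is some form of version pinning: every execution records the definition version it started on, and a deploy only changes what new executions get. A toy sketch of that idea (the registry, instance ids, and step names are all invented for illustration):

```python
registry = {}   # version -> step list (the "workflow definition")
instances = {}  # instance id -> pinned version

def deploy(version, steps):
    registry[version] = steps

def start(instance_id):
    latest = max(registry)        # new instances pin the latest version
    instances[instance_id] = latest
    return latest

def steps_for(instance_id):
    # In-flight work keeps running against the version it started on,
    # so a deploy never strands the 800 executions already underway.
    return registry[instances[instance_id]]

deploy(1, ["validate", "charge", "email"])
start("order-1")                  # pinned to v1

deploy(2, ["validate", "fraud_check", "charge", "email"])
start("order-2")                  # pinned to v2

assert steps_for("order-1") == ["validate", "charge", "email"]
assert steps_for("order-2") == ["validate", "fraud_check", "charge", "email"]
```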