Inngest and durable workflows: a practical take
I’ve been using Inngest for about a year now at Four13 Group for ETL pipelines. I have opinions.
What Inngest actually is
Inngest is a durable workflow engine. You write functions composed of steps (discrete, independently retryable units of work), and Inngest ensures the function runs to completion even if individual steps fail. State is persisted between steps automatically.
Think of it as a managed queue + state machine + retry system that you interact with by writing regular TypeScript functions.
Why I reached for it
At Four13, we had ETL jobs that could take hours. Pricing syncs across thousands of SKUs, migrations of 90,000+ records, inventory updates that needed to hit multiple APIs in sequence. The previous approach was cron jobs with manual error handling — if something failed at step 47 of 200, you’d restart from the beginning and hope.
Inngest let me write each step as an isolated unit:
```typescript
const syncPricing = inngest.createFunction(
  { id: "sync-pricing" },
  { event: "pricing/sync.requested" },
  async ({ event, step }) => {
    const items = await step.run("fetch-items", () =>
      fetchPricingItems(event.data.clientId)
    );

    for (const batch of chunk(items, 50)) {
      await step.run(`transform-batch-${batch[0].id}`, () =>
        transformAndLoad(batch)
      );
    }
  }
);
```
If the transform fails at batch 15, Inngest retries batch 15. Batches 1-14 don’t re-run. The state from the fetch step is cached. This is the fundamental value proposition.
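The `chunk` helper in the example isn't part of Inngest; it's an assumed utility (lodash ships one, or you can write it in a few lines):

```typescript
// Minimal chunk helper: split an array into fixed-size batches.
// Dependency-free sketch; lodash's chunk behaves the same way.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```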
What works well
Step-level retries are the killer feature. In integration work, most failures are transient — rate limits, network blips, temporary API outages. Retrying the failed step (with backoff) handles 90% of these automatically.
Event-driven architecture fits naturally. Rather than calling Inngest functions directly, you emit events. This decouples producers from consumers and makes it easy to add new workflows triggered by existing events.
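The producer side is one SDK call. A sketch using the event name from the example above (`makePricingSyncEvent` is a hypothetical helper, just to keep the payload shape in one place):

```typescript
// Hypothetical payload builder for the pricing sync event.
// The actual emit is a single SDK call:
//   await inngest.send(makePricingSyncEvent("acme"));
type PricingSyncEvent = {
  name: "pricing/sync.requested";
  data: { clientId: string };
};

function makePricingSyncEvent(clientId: string): PricingSyncEvent {
  return { name: "pricing/sync.requested", data: { clientId } };
}
```

Any function subscribed to `pricing/sync.requested` picks this up; the producer never needs to know which workflows exist.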
Local development is solid. The Inngest dev server runs locally and gives you visibility into function execution, step states, and event history.
What’s tricky
Step granularity is an art. Too many fine-grained steps and you’re paying overhead for state serialization. Too few coarse steps and you lose the retry benefits. I’ve settled on: one step per external API call, one step per batch transformation, and one step per significant state change.
Cold starts with Cloudflare Workers. Inngest calls your function via HTTP. On Workers, this is fast. But if your function imports heavy dependencies, the initial parse time can eat into your execution budget. I’ve had to be thoughtful about what I import.
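One mitigation I lean on (a general pattern, not Inngest-specific): defer heavy imports until the first step that actually needs them, so the parse cost is paid once, on first use, rather than on every request. A memoized lazy loader sketch:

```typescript
// Memoized lazy loader: load() runs once, on first call, and the
// promise is cached for every call after that. Pair it with a
// dynamic `import(...)` to defer parsing a heavy dependency.
function lazy<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= load());
}

// Usage sketch (hypothetical module name):
// const getTransformer = lazy(() => import("./heavy-transform"));
// ...inside a step: const { transform } = await getTransformer();
```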
Debugging failed steps requires reading Inngest’s dashboard, not your application logs. The mental model shift takes time — your application is no longer the source of truth for execution state.
The bigger picture
Durable workflows changed how I think about backend architecture. Instead of asking “what happens if this fails?” for every operation, I can assume individual steps will eventually succeed and focus on the business logic.
It’s not magic — you still need to think about idempotency, data consistency, and error handling. But the infrastructure-level concerns (retry, resume, state management) are handled. For a solo engineer building production systems, that leverage is enormous.
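On the idempotency point: my usual move is deterministic keys, deriving a stable identifier from the record itself so a retried write upserts instead of duplicating. A minimal sketch (the key scheme is illustrative, not an Inngest feature):

```typescript
// Deterministic idempotency key: same run + same record => same key,
// so a retried load step can upsert rather than insert a duplicate.
function idempotencyKey(runId: string, sku: string): string {
  return `pricing-sync:${runId}:${sku}`;
}

// A load step would then do something like (db.upsert is hypothetical):
// await db.upsert({ key: idempotencyKey(event.id, item.sku), ...item });
```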
For the right workload (long-running, multi-step, failure-prone), it’s hard to beat.