Transactionally triggering Temporal Workflows, or: how to guarantee consistency between database changes and Temporal

Hello Temporal Community!

I’m from the Sequin Stream team, and I wanted to share a tutorial we’ve created that addresses a common challenge: how to reliably trigger a Temporal workflow when data changes in your database.

The Problem You Might Have

You need to guarantee consistency between your database and Temporal workflows, such that anytime a change happens in your database, you know a Temporal workflow is executed.

As noted in a discussion about external DB events driving workflows, “There are situations where this becomes less clear though and there’s value in the ability to separate consumers of changes from the initial action.”

Traditional approaches often involve:

  • Manually implementing the outbox pattern
  • Setting up complex message brokers like Kafka or RabbitMQ
  • Creating polling mechanisms that may miss events or be inefficient

Using CDC with Temporal

The tutorial demonstrates how to transactionally trigger a Temporal workflow when data changes in your database, ensuring consistency across services and enabling complex, transactionally consistent workflows.

The approach uses:

  1. Sequin to detect database changes and deliver webhooks (though you can use any CDC tool or database trigger)
  2. A worker that receives these webhooks
  3. Temporal workflows that handle the business logic

In our example, we show how to trigger a workflow that ensures a user is deleted across multiple services as soon as the user is removed from the database. Sequin guarantees that the workflow is always triggered regardless of how the change is made, and Temporal ensures the workflow executes reliably.
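To make the shape of this concrete, here’s a minimal sketch of the webhook worker in TypeScript. This is not the tutorial’s exact code: the payload shape, the `deleteUserWorkflow` name, the task queue, and the endpoint path are all assumptions.

```ts
import express from "express";
import { Client, Connection } from "@temporalio/client";
import { WorkflowExecutionAlreadyStartedError } from "@temporalio/common";
// Placeholder -- substitute the workflow that performs the
// cross-service deletion in your codebase.
import { deleteUserWorkflow } from "./workflows";

// Assumed shape of the Sequin webhook payload; check the Sequin docs
// for the exact field names.
interface SequinChangeEvent {
  action: "insert" | "update" | "delete";
  record: { id: string };
}

async function main() {
  const client = new Client({
    connection: await Connection.connect(), // localhost:7233 by default
  });

  const app = express();
  app.use(express.json());

  app.post("/webhooks/user-deleted", async (req, res) => {
    const event = req.body as SequinChangeEvent;
    if (event.action !== "delete") {
      res.sendStatus(200); // not a delete; acknowledge and ignore
      return;
    }
    try {
      // Key the workflow id on the row's primary key so retried
      // deliveries of the same change map to the same workflow.
      await client.workflow.start(deleteUserWorkflow, {
        taskQueue: "user-deletion",
        workflowId: `delete-user-${event.record.id}`,
        args: [event.record.id],
      });
      // Acknowledge only after Temporal accepts the start, so a
      // failure here makes Sequin redeliver the webhook.
      res.sendStatus(200);
    } catch (err) {
      if (err instanceof WorkflowExecutionAlreadyStartedError) {
        res.sendStatus(200); // duplicate delivery; workflow already exists
      } else {
        res.sendStatus(500); // let Sequin retry
      }
    }
  });

  app.listen(3000);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The key property is that the webhook is acknowledged only after Temporal has accepted the workflow start, so Sequin’s at-least-once delivery plus keying the workflow id on the row’s primary key gives you end-to-end reliability.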

Why We Think This Matters

We believe this pattern is valuable because:

  • It creates a clean separation between data changes and workflow triggers
  • It’s often hard to ensure that every change to the database flows through the API that triggers the Temporal workflow (or to make Temporal the source of truth); this approach makes the database the ultimate source of truth
  • It avoids many of the challenges of managing event queues in systems like Kafka
  • You get transactional guarantees without having to hand-rebuild the outbox pattern

Check Out Our Tutorial

If you’re interested in implementing this pattern, check out our complete tutorial here:

We wanted to post in the community so we can provide support here too and be present to answer any questions!

@Eric_Goldman in your example, consider adding a Workflow Id Reuse Policy of “Reject Duplicate”. If there’s a problem between the webhook handler calling signalWithStart and the webhook success being acknowledged by Sequin, and the workflow has finished processing by the time Sequin retries the webhook, the workflow could be started a second time. In your example users.id is a random UUID (globally unique, and it won’t be reused as long as the application uses the default gen_random_uuid()), and you use it in your workflow id, so it can serve as an idempotent deduplication key. This gives you “exactly once” processing.
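For anyone following along, here’s a sketch of what that looks like in the TypeScript SDK; the workflow and signal names are placeholders:

```ts
import { Client, Connection } from "@temporalio/client";
import { WorkflowIdReusePolicy } from "@temporalio/common";
// Placeholder names -- substitute your actual workflow and signal.
import { userDeletedWorkflow, userDeletedSignal } from "./workflows";

async function triggerDeletion(userId: string) {
  const client = new Client({ connection: await Connection.connect() });

  await client.workflow.signalWithStart(userDeletedWorkflow, {
    taskQueue: "user-deletion",
    // users.id is gen_random_uuid(), so the workflow id doubles as an
    // idempotent deduplication key.
    workflowId: `delete-user-${userId}`,
    args: [userId],
    signal: userDeletedSignal,
    signalArgs: [userId],
    // With Reject Duplicate, a webhook retry that arrives after the
    // workflow has completed is rejected rather than starting a
    // second execution.
    workflowIdReusePolicy:
      WorkflowIdReusePolicy.WORKFLOW_ID_REUSE_POLICY_REJECT_DUPLICATE,
  });
}
```

The caller should catch WorkflowExecutionAlreadyStartedError and still acknowledge the webhook: a rejected duplicate means the deletion already happened.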

More generally, is there something in Sequin which uniquely identifies a database change? For example, if I set a particular value in the database to 1, later change it to 2, and then later set it back to 1 again, is there something which identifies the two updates as being two distinct updates (maybe a timestamp, or a change id which is different for each database change)? Perhaps something like that could be used for a more general idempotent deduplication solution. Or if all application tables have an updated_at column, maybe that could be used as well.
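If each change did carry a unique identifier, it could be folded into the workflow id. A hypothetical sketch (the `commit_lsn` field name is a guess at what such per-change metadata might be called):

```ts
// Hypothetical: with a unique per-change identifier, "set to 1, set to
// 2, set back to 1" yields three distinct workflow ids, while
// redeliveries of the same change still deduplicate to one workflow.
interface ChangeMetadata {
  commit_lsn: string; // assumed field name -- use whatever Sequin provides
}

function workflowIdFor(recordId: string, metadata: ChangeMetadata): string {
  return `user-updated-${recordId}-${metadata.commit_lsn}`;
}
```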