I’ve seen threads like Pattern for ensuring consistency between DB save and Workflow, where it’s clear to me that if Temporal is the source of truth, follow-up actions / triggers get easier to manage. There are situations where this is less clear, though, and there’s value in being able to separate consumers of changes from the initial action.
In my immediate case, I’m looking to keep an external search system eventually consistent with changes to products in our DB. These products can be updated from a variety of sources, as data ingest pipelines are always a joy. This makes it not particularly practical to retrofit our entire model and make Temporal the source of truth.
We currently have eventing tables alongside some of these tables in our DB, where a row is inserted with context on each change. This doesn’t exist for all the tables I care about, but it’s a start. So one concept would be to periodically read the events to see if there were any updates, and then feed them to an activity or child workflow for further processing if they meet some condition.
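To make the polling idea concrete, here’s a minimal sketch of the cursor-based read I have in mind. The names (`ChangeEvent`, `poll_events`, the cursor-on-event-id scheme) are all hypothetical; in Temporal, this loop would live in a workflow, with the event read and the downstream handling done in activities (or a child workflow), and the cursor carried in workflow state.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ChangeEvent:
    # Hypothetical shape of a row in one of our eventing tables.
    id: int            # monotonically increasing event id, used as the cursor
    table: str         # which source table changed
    payload: dict = field(default_factory=dict)

def poll_events(events: list[ChangeEvent],
                cursor: int,
                condition: Callable[[ChangeEvent], bool],
                handle: Callable[[ChangeEvent], None]) -> int:
    """Read events newer than `cursor`, hand matching ones to `handle`,
    and return the new cursor. In a real setup `events` would be a DB
    query (activity) and `handle` an activity or child-workflow start."""
    new_cursor = cursor
    for ev in events:
        if ev.id <= cursor:
            continue  # already processed in a previous poll
        if condition(ev):
            handle(ev)
        new_cursor = max(new_cursor, ev.id)
    return new_cursor
```

For example, polling for product changes only would pass `condition=lambda e: e.table == "products"` and a handler that kicks off the re-index for that product.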
But I would like to make this a more regular pattern in our system for other use cases, to enable change-driven processing vs. full re-indexes. I’m concerned about how this pattern scales: either every workflow maintains its own mechanism/logic to determine “were there changes,” or, if I centralize it into a single workflow, I end up with a monolithic workflow querying a ton of event tables with a pile of “if x and y, signal this workflow” logic lumped together.
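One shape I’ve been toying with for the centralized version, sketched below, is a rule registry: each use case contributes a (condition, target) pair instead of the dispatcher hardcoding the branches. Everything here (`ChangeRouter`, the rule shape, the workflow-id targets) is a hypothetical sketch, not Temporal API; in practice `route` would feed an activity that signals or starts the target workflows.

```python
from typing import Callable

# A rule pairs a predicate over the event with the workflow it should feed.
Rule = tuple[Callable[[dict], bool], str]

class ChangeRouter:
    """Central dispatch without a monolithic if/else: use cases register
    rules, and the poller just routes each event through them."""

    def __init__(self) -> None:
        self.rules: list[Rule] = []

    def register(self, condition: Callable[[dict], bool], workflow_id: str) -> None:
        self.rules.append((condition, workflow_id))

    def route(self, event: dict) -> list[str]:
        # Returns the workflow ids to signal for this event; the caller
        # would then signal/start each one (e.g. via an activity).
        return [wf for cond, wf in self.rules if cond(event)]

router = ChangeRouter()
router.register(lambda e: e.get("table") == "products", "reindex-products")
router.register(lambda e: e.get("priority") == "high", "high-priority-alerts")
```

The dispatcher itself stays dumb; the “if x and y” knowledge lives with each consumer’s registration rather than in one giant workflow body.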
What I like is that it avoids a lot of the pains of managing event queues in Kafka (e.g. all the things mentioned here: When to use SQS? - #3 by maxim). But I’m not sure yet if this is an anti-pattern for the expected use cases of Temporal. Is there a different paradigm for approaching this with Temporal that I should consider, or is this generally the path I should go down?