Pattern for ensuring consistency between DB save and Workflow

We have a use case where a microservice (e.g. order management) should first persist the order in its own database and then trigger a fulfillment workflow (using Temporal). We want to make sure that the overall process is transactionally consistent, e.g. if saving the order fails the workflow should not be started, and vice versa.

There are patterns like the outbox pattern that are used in similar scenarios (such as publishing an event to Kafka). I wanted to understand whether there is a recommended pattern we should be using with Temporal.

The recommended pattern is to use the workflow as the source of truth while the workflow is open. In this case, you start the workflow first and let the workflow update the DB through an activity.

All queries and future updates (in the form of signals) go against the workflow, which is fully consistent, so no inconsistency is possible. Before completing, the workflow updates the DB with the latest state, which can then be used for historical purposes.
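A minimal sketch of that pattern with the Temporal Java SDK, assuming hypothetical OrderWorkflow and OrderActivities types and the Order class that appears later in this thread; the workflow drives fulfillment and persists the final state through an activity before it completes:

import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityOptions;
import io.temporal.workflow.Workflow;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;
import java.time.Duration;

@ActivityInterface
interface OrderActivities {
    // Persists the latest order state; Temporal retries this activity on failure.
    void saveOrder(Order order);
}

@WorkflowInterface
interface OrderWorkflow {
    @WorkflowMethod
    void fulfill(Order order);
}

class OrderWorkflowImpl implements OrderWorkflow {
    private final OrderActivities activities = Workflow.newActivityStub(
            OrderActivities.class,
            ActivityOptions.newBuilder()
                    .setStartToCloseTimeout(Duration.ofSeconds(30))
                    .build());

    @Override
    public void fulfill(Order order) {
        // ... fulfillment steps; while the workflow is open it is the source of truth ...

        // Persist the final state to the service's DB through an activity
        // before the workflow completes.
        activities.saveOrder(order);
    }
}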


Thanks @maxim! Overall, from a consistency PoV this approach makes sense. One trade-off with this approach, IMHO, is that the complete entity/aggregate payload would need to be passed to the workflow, whereas with something like the outbox pattern the workflow can be based on just a reference to the persisted entity (a key/id).

I’m not sure why you need to pass the whole entity to the workflow. In most cases, the workflow state itself is such an entity and is built step by step during workflow execution.
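For illustration (my wording, not from the thread): a variant of the workflow interface above where the start call only carries a key, and the entity is built up inside the workflow through signals and read back through a query. The method names are hypothetical.

import io.temporal.workflow.QueryMethod;
import io.temporal.workflow.SignalMethod;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;

@WorkflowInterface
interface OrderWorkflowByKey {
    @WorkflowMethod
    void fulfill(String orderId);   // only a key is passed at start

    @SignalMethod
    void addItem(Item item);        // the entity is built step by step inside the workflow

    @QueryMethod
    Order getOrder();               // consistent reads go against the workflow while it is open
}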

Interesting. What about checking the validity of the trigger request synchronously?

Suppose I implement a create <id> API where the id must be unique within a customer account.
My API gateway could atomically check and insert this id into the database as part of the API call and notify the user if something is wrong. It seems I cannot do that if I move this part into the asynchronous workflow.

Temporal provides workflow id uniqueness guarantees, so it is not possible to have two open workflows that share the same id. Your use case is therefore trivially implementable with Temporal.
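A sketch of leaning on that guarantee from the API handler, reusing the OrderWorkflow interface from the earlier sketch; the task queue name and the id scheme (account id + order id) are assumptions:

import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowExecutionAlreadyStarted;
import io.temporal.client.WorkflowOptions;
import io.temporal.serviceclient.WorkflowServiceStubs;

class OrderStarter {
    private final WorkflowClient client =
            WorkflowClient.newInstance(WorkflowServiceStubs.newLocalServiceStubs());

    void createOrder(String accountId, String orderId, Order order) {
        OrderWorkflow workflow = client.newWorkflowStub(
                OrderWorkflow.class,
                WorkflowOptions.newBuilder()
                        .setTaskQueue("orders")                     // assumed task queue name
                        .setWorkflowId(accountId + "/" + orderId)   // unique per account + order id
                        .build());
        try {
            // Starts the workflow asynchronously; Temporal rejects a second
            // start with the same workflow id while this one is open.
            WorkflowClient.start(workflow::fulfill, order);
        } catch (WorkflowExecutionAlreadyStarted e) {
            // Duplicate create request: report the conflict to the caller synchronously.
        }
    }
}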


@maxim What if the size of the domain object grows to multiple MBs? Let's say Order is the domain object:

public class Order {
    List<Item> items; // Item is a complex object with hundreds of primitive fields
}

There could be huge orders where the list contains thousands of items even at the creation stage and continues to grow with subsequent updates, e.g. adding another 10,000 items to the Order.

Below is my understanding; please correct me if I'm wrong, and suggest the best possible approach here.

I see Temporal's 4 MB gRPC input limit as a constraint on designing the solution you suggest, since the workflow cannot accept large data inputs:

  1. I won't be able to start the workflow to create an order with a large number of items
  2. I can't signal the workflow to update the order with many items

Another use case: what if creation sends a smaller number of items, but over a period of time the item list grows and becomes huge as small batches of items are added several thousand times? Will the workflow be able to function with such a large domain object inside it?
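Not a conclusion from this thread, just a sketch of the key/id idea mentioned earlier as one way to stay under the payload limits: keep the heavy item list in the service's own DB and let activities work on it by order id, returning only small summaries, so the full list never passes through workflow inputs, signals, or activity results. The names below are hypothetical.

import io.temporal.activity.ActivityInterface;

@ActivityInterface
interface BulkItemActivities {
    // Hypothetical activity: processes the items directly in the service's DB
    // by order id and returns only a small count, so thousands of items never
    // travel through workflow payloads.
    int reserveItems(String orderId);
}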

Could you ask your question in a new thread? It doesn’t look related to the original one.

@maxim I have created a new thread here: Request size limitations to create workflow. Please share your input. Thank you.

@Pradeep @maxim What was the conclusion? The link is broken, or the thread has been deleted.
I have a similar question.

@andrey The thread was deleted; would you mind posting your question in a new thread?

@antonio.perez I will create a new thread, thanks!