Is there any way in Temporal to resequence messages so they are processed as a group identified by a functional key in the message?
Here is the proposed approach.
Have a single workflow per unique functional key. It receives messages (possibly via SignalWithStart) and queues them; the workflow processes everything in the queue, then goes into a wait state until more messages arrive.
The actual processing is implemented as a child workflow or directly inside the parent workflow.
After processing all the events in the queue, I move the workflow into an await state until it receives a new request via the getEntry signal.
Any new entry wakes the workflow out of the wait state; once it has processed everything in the WorkflowQueue, the workflow awaits new requests again.
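The drain-then-wait loop above can be modeled in plain Java. This is only a sketch of the control flow, not Temporal code: in a real workflow the queue would be a `WorkflowQueue`, `signal` would be a `@SignalMethod` (the target of SignalWithStart), and the blocking between drains would be `Workflow.await(() -> !queue.isEmpty())`.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Plain-Java model of the per-key workflow's drain-then-wait loop.
// All names here are illustrative stand-ins for the Temporal constructs
// named in the lead-in; only the ordering guarantee is demonstrated.
class KeyedResequencer {
    private final Queue<String> queue = new ArrayDeque<>();
    private final List<String> processed = new ArrayList<>();

    // Corresponds to the signal handler: enqueue and return immediately.
    void signal(String entry) {
        queue.add(entry);
    }

    // Corresponds to one wake-up of the workflow: drain everything
    // currently queued in arrival order, then (in Temporal) the workflow
    // would block on Workflow.await until the next signal arrives.
    void drain() {
        while (!queue.isEmpty()) {
            processed.add(queue.poll()); // actual processing or child workflow
        }
    }

    List<String> processed() {
        return processed;
    }
}
```

Because each functional key gets its own workflow instance, messages for the same key are always drained in the order they were signaled, while different keys proceed independently.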
When I checked the logs, I saw that the workflow thread ID is the same every time a new request comes in and the workflow starts running.
Does this mean the thread is not released when the workflow goes into the await state?
We are planning to run a million instances of such workflows (not all active at once) and would like to understand the performance and memory impact.
Hi, regarding the original question: does await() block the thread, or does it suspend the workflow and later recover it from the event source?
It suspends the workflow execution and later recovers it. So a workflow that is blocked for a long time doesn’t consume resources on the worker.
We plan to use it as a batch aggregator: a signal adds a message to the batch, and as soon as the batch meets the size or time-window requirement, the messages are processed by a child workflow.
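The size-or-time flush condition can be sketched as a small predicate. The class and parameter names here are hypothetical, not Temporal API; in an actual workflow the time trigger would typically be expressed with the timed overload `Workflow.await(timeout, () -> batch.size() >= maxSize)`, which unblocks on whichever condition fires first.

```java
// Hypothetical flush policy for the batch aggregator described above:
// flush when the batch reaches maxSize, or when maxWaitMillis have
// elapsed since the first entry of the current batch.
class BatchPolicy {
    final int maxSize;
    final long maxWaitMillis;

    BatchPolicy(int maxSize, long maxWaitMillis) {
        this.maxSize = maxSize;
        this.maxWaitMillis = maxWaitMillis;
    }

    boolean shouldFlush(int batchSize, long firstEntryAtMillis, long nowMillis) {
        if (batchSize == 0) return false;                        // nothing to flush
        if (batchSize >= maxSize) return true;                   // size trigger
        return nowMillis - firstEntryAtMillis >= maxWaitMillis;  // time trigger
    }
}
```

Keeping the trigger logic in one predicate makes it easy to reason about the worst-case batch latency (at most `maxWaitMillis` after the first message) independently of the signal rate.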
Keep in mind that a single workflow instance cannot grow very large or sustain a high update rate. What batch size and how many signals per second do you expect a single workflow execution (instance) to process?