Hi! I’m looking for a solution for handling sequential workflow executions triggered by HTTP requests, where each workflow is bound to a user ID. The requirement is that these user-tied workflows run one after another: no two workflows for the same user should run concurrently.
Here’s the scenario: if a workflow for a user is running and a new request comes in for that user, I want to queue the new request and start it only after the current one finishes. Workflows for different users, however, should run in parallel without blocking each other.
Is there an existing feature in Temporal that can queue tasks per user (workflow id) and ensure their sequential execution?
Should I use a custom queue or database table and check whether a workflow for that user is currently processing? Or send signals to the currently processing workflow?
Thanks for any help!
My general suggestion for controlling the logic of what runs when is to use a workflow, unless there’s a reason you need something more complicated.
Thus, have a control workflow tied to the user ID which receives requests and runs a child workflow to process each request.
The HTTP request sends a signal with the request to the control workflow. The control workflow runs a child workflow, one at a time, to handle each request.
A single workflow instance can handle something like 10 requests per second, so this simple approach works as long as you don’t need to process more requests per second than that for any one user.
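A minimal sketch of that pattern using the TypeScript SDK (the `Request` shape, signal name, and workflow names here are illustrative, not anything built in):

```ts
import * as wf from '@temporalio/workflow';

// Illustrative request payload; the real shape depends on your HTTP API.
interface Request {
  payload: string;
}

export const requestSignal = wf.defineSignal<[Request]>('request');

// One control workflow per user, e.g. with workflow ID `user-${userId}`.
export async function controlWorkflow(userId: string): Promise<void> {
  const queue: Request[] = [];
  wf.setHandler(requestSignal, (req) => {
    queue.push(req);
  });

  for (;;) {
    await wf.condition(() => queue.length > 0);
    const req = queue.shift()!;
    // Awaiting each child before taking the next request is what makes
    // processing sequential for this user.
    await wf.executeChild(processRequest, { args: [userId, req] });
  }
}

// Child workflow that does the per-request work.
export async function processRequest(userId: string, req: Request): Promise<void> {
  // ... run activities here ...
}
```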
I can’t let workflows wait indefinitely because there’s a chance a user might stop sending requests.
So, I initiate a new workflow with each incoming request if none is currently running for that user. The workflow processes the request and then enters a loop where it listens for signals. If no additional signals are received within a certain timeout period, the workflow finishes.
Something like that?
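In code I imagine something roughly like this (TypeScript SDK; the `Request` type, the 5-minute timeout, and the `handleRequest` activity are placeholders):

```ts
import * as wf from '@temporalio/workflow';
import type * as activities from './activities'; // hypothetical activities module

interface Request {
  payload: string;
}

const { handleRequest } = wf.proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

export const requestSignal = wf.defineSignal<[Request]>('request');

export async function userWorkflow(userId: string): Promise<void> {
  const queue: Request[] = [];
  wf.setHandler(requestSignal, (req) => queue.push(req));

  for (;;) {
    // Wait up to 5 minutes for the next request; condition() resolves to
    // false if the timeout elapses first.
    const gotOne = await wf.condition(() => queue.length > 0, '5 minutes');
    if (!gotOne) return; // user went quiet: let the workflow complete
    while (queue.length > 0) {
      await handleRequest(userId, queue.shift()!);
    }
  }
}
```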
Oh, you want to run a single workflow for each request, but ensure that requests for a particular user are executed one at a time?
There isn’t actually a cost to having a workflow wait on a signal for a long time. The workflow execution history is simply stored in the database and replayed if needed when a new signal comes in; it doesn’t consume worker resources long term.
I think we would want to use continueAsNew to avoid having the workflow execution history grow too large.
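In the TypeScript SDK that could look roughly like this at the bottom of the signal loop (a sketch; carrying any still-queued requests across the restart as an argument):

```ts
import * as wf from '@temporalio/workflow';

interface Request {
  payload: string;
}

export async function userWorkflow(userId: string, pending: Request[] = []): Promise<void> {
  const queue: Request[] = [...pending];
  // ... register the signal handler and process requests as above ...

  // When the server reports that the history is getting large, restart
  // with a fresh history, handing unprocessed requests to the next run.
  if (wf.workflowInfo().continueAsNewSuggested) {
    await wf.continueAsNew<typeof userWorkflow>(userId, queue);
  }
}
```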
@maxim is this what you’d recommend? As I understand it, the requirements are:
each incoming request should be processed in a workflow
for a particular user ID, processing of requests needs to occur one at a time
requests for different user IDs can be processed in parallel
One solution might be to use signal-with-start with the workflow ID based on the user ID. This ensures that all requests for a particular user ID are routed to the same workflow. The workflow would receive the requests as signals and execute them one at a time. It would need to call continueAsNew periodically to avoid growing the workflow history too much.
To avoid having the workflow run indefinitely (and cluttering the UI, if nothing else), the workflow could, as @ilya1 suggests, set a timer and stop executing if no requests have been received within a timeout. If a new request does come in later, the workflow would simply be started again with signal-with-start as usual.
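On the client side, the signal-with-start call might look like this in the TypeScript SDK (the workflow and signal names refer to the sketches above; the task queue name is made up):

```ts
import { Client } from '@temporalio/client';
import { userWorkflow, requestSignal, type Request } from './workflows'; // hypothetical module

async function submitRequest(client: Client, userId: string, req: Request): Promise<void> {
  // Starts userWorkflow if no workflow with this ID is currently running,
  // then delivers the signal; otherwise just signals the running workflow.
  // Either way, every request for this user lands on the same workflow.
  await client.workflow.signalWithStart(userWorkflow, {
    workflowId: `user-${userId}`,
    taskQueue: 'user-requests',
    args: [userId],
    signal: requestSignal,
    signalArgs: [req],
  });
}
```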
I have a similar use case.
I have several clients executing workflows but a limited number of workers (#workflows > #workers), so some workflows are queued on the server and eventually get executed. The problem is that there is no fairness: the order of submission has nothing to do with which one gets picked up.
I’ve read that Temporal doesn’t support ordering/fairness, so what is the best way to maintain some sort of fairness? If I have a single WF that invokes a child WF upon each signal, I’ll get the same result (these multiple child WFs will reach the server and eventually run in any order).
We are working on directly supporting fairness and priorities. But until then, you might use different task queues for different priorities. Once workflows have started, fairness is not supported.
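Until then, a sketch of the task-queue workaround (the queue names and priority flag are made up; you’d run separate worker pools, one polling each queue, and give the high-priority pool more capacity):

```ts
import { randomUUID } from 'node:crypto';
import { Client, Connection } from '@temporalio/client';
import { someWorkflow } from './workflows'; // hypothetical

async function startWithPriority(highPriority: boolean): Promise<void> {
  const client = new Client({ connection: await Connection.connect() });
  // Same workflow code, routed to different queues by priority.
  await client.workflow.start(someWorkflow, {
    workflowId: `job-${randomUUID()}`,
    taskQueue: highPriority ? 'jobs-high' : 'jobs-low',
  });
}
```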