send each signal to each workflow type, something like:
send to workflow-id="type1-userId-5"
send to workflow-id="type2-userId-5"
send to workflow-id="type3-userId-5"
…
send to workflow-id="type20-userId-5"
which means sending 2000 × 20 = 40,000 signals every second;
or send one batch signal, something like:
temporal workflow signal \
  --name NewUserTransactionSignal \
  --input '{"Input": "As-JSON"}' \
  --query 'ExecutionStatus = "Running" AND (WorkflowId = "type1-userId-5" OR WorkflowId = "type2-userId-5" OR WorkflowId = "type3-userId-5" OR … OR WorkflowId = "type20-userId-5")'
Thanks for the explanation. Processing 40k requests per second is possible, but it would require a pretty beefy cluster.
I recommend having a single workflow per user and implementing each “business workflow” as a function/class inside that workflow. This way, adding a new “business workflow” will not increase the load on the cluster.
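A minimal plain-Python model of that collocation pattern (this is a sketch, not Temporal SDK code; all names here are hypothetical): one per-user workflow object holds the "business workflows" as registered handlers, and a single incoming signal fans out to all of them in-process instead of to 20 separate Temporal workflows.

```python
# Hypothetical sketch of the per-user workflow pattern (plain Python,
# not Temporal SDK code). One instance per user dispatches each incoming
# signal to the "business workflows" registered inside it.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class PerUserWorkflow:
    user_id: str
    # business-workflow name -> handler; adding a new business workflow
    # adds an entry here, not a new Temporal workflow on the cluster.
    handlers: Dict[str, Callable[[dict], None]] = field(default_factory=dict)
    processed: List[str] = field(default_factory=list)

    def register(self, name: str, handler: Callable[[dict], None]) -> None:
        self.handlers[name] = handler

    def on_signal(self, payload: dict) -> None:
        # One signal per user fans out internally, so the cluster sees
        # 2000 signals/second instead of 40k.
        for name, handler in self.handlers.items():
            handler(payload)
            self.processed.append(name)


wf = PerUserWorkflow(user_id="userId-5")
for i in range(1, 4):  # three of the hypothetical 20 types
    wf.register(f"type{i}", lambda payload: None)
wf.on_signal({"Input": "As-JSON"})
print(wf.processed)  # → ['type1', 'type2', 'type3']
```

Inside a real Temporal workflow, `on_signal` would be a signal handler and each registered function a child routine of the same workflow execution.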
At Uber, we had a similar use case where the number of “business workflows” was up to 100 per user. The incoming stream had up to 10k requests per second. Back then, we couldn’t process 1 million requests per second. However, collocating all these 100 “business workflows” inside a per-user Temporal workflow allowed us to support the load.
You don’t need to run a workflow for a user that has no “business workflows” running at the time. You can use signalWithStart to start the user workflow on demand.
You could use signals for this. But exiting the user workflow when there are no running "business workflows" is simpler, IMHO.
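The on-demand lifecycle described above can be modeled in a few lines (again a plain-Python sketch with hypothetical names, not the Temporal SDK): the first signal for a user starts their workflow, mirroring signalWithStart, and the workflow exits once its last business workflow completes.

```python
# Hypothetical model of signalWithStart plus exit-on-idle (plain Python,
# not Temporal SDK code). A user workflow exists only while at least one
# "business workflow" is active for that user.
from typing import Dict, Set


class UserWorkflowRegistry:
    def __init__(self) -> None:
        # user_id -> set of active business workflows for that user
        self.running: Dict[str, Set[str]] = {}

    def signal_with_start(self, user_id: str, business_wf: str) -> None:
        # Start the per-user workflow if it isn't running yet, then
        # deliver the signal -- mirroring Temporal's signalWithStart.
        self.running.setdefault(user_id, set()).add(business_wf)

    def complete(self, user_id: str, business_wf: str) -> None:
        active = self.running.get(user_id, set())
        active.discard(business_wf)
        if not active:
            # Last business workflow finished: the user workflow exits,
            # so idle users consume no cluster resources.
            self.running.pop(user_id, None)


reg = UserWorkflowRegistry()
reg.signal_with_start("userId-5", "type1")
reg.complete("userId-5", "type1")
print("userId-5" in reg.running)  # → False
```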