Hi Team,
- We have a few use-cases where we want to block workflows on external signals that are triggered by user actions (a minimal sketch of what I mean is after this list). However, since we can't put an upper bound on how long those user actions take, we might end up with many workflows blocked on signals indefinitely. These workflows could number anywhere from 2 to 30 million; it's difficult to predict at this point.
- We have a few use-cases where we want to put workflows to sleep (in the middle of execution) until a specific time of day, say 11 PM, and then execute activities. This is typically to perform a few activities during non-business hours (a sketch of this is also shown below). There could be 20-30 million workflows asleep at any given point, and the timers for all of them might fire at roughly the same time.
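
For reference, this is roughly how the first use case looks for us; a minimal Go SDK sketch, where the signal name and payload are illustrative, not our actual ones:

```go
package approval

import (
	"go.temporal.io/sdk/workflow"
)

// ApprovalWorkflow blocks until a user-action signal arrives.
// The signal name "user-approval" and the bool payload are illustrative.
func ApprovalWorkflow(ctx workflow.Context) error {
	var approved bool
	ch := workflow.GetSignalChannel(ctx, "user-approval")
	// Blocks the workflow until the signal is delivered; there is no
	// upper bound on how long the user may take to act.
	ch.Receive(ctx, &approved)
	// ...continue with activities once the user has acted...
	return nil
}
```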
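And a sketch of the second use case; the 23:00 target and timezone handling are illustrative assumptions:

```go
package nightly

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// NightlyWorkflow sleeps until 11 PM (per the workflow clock) and then
// runs its off-hours activities. The 23:00 target is illustrative.
func NightlyWorkflow(ctx workflow.Context) error {
	now := workflow.Now(ctx) // deterministic workflow clock
	target := time.Date(now.Year(), now.Month(), now.Day(), 23, 0, 0, 0, now.Location())
	if !target.After(now) {
		// Already past 11 PM today, so wait for tomorrow's window.
		target = target.Add(24 * time.Hour)
	}
	// A single durable timer per workflow; millions of these could be
	// pending at once and fire around the same time.
	if err := workflow.Sleep(ctx, target.Sub(now)); err != nil {
		return err
	}
	// ...execute the non-business-hours activities here...
	return nil
}
```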
I understand from this thread that the cost of putting a workflow to sleep is negligible from a scaling point of view, and that the same should apply to workflows waiting on signals.
However, I wanted to understand the long-term scaling implications of having such workflows. Are there any recommendations in this regard?
PS: For the second case above, assume we have already done proper worker performance tuning.