Here’s some context to help understand why I’m asking this:
We want to start with a simple deployment where all workers run in the same JVM, and then scale out as needed. This means I want all workflows and activities specified in a way that lets me divide them across different workers at deployment time, without changing workflow implementation code. It would also let us configure different deployments as needed (for example, a generic worker launch implementation that receives the task list name, workflow class, activity class and other settings via configuration/parameters).
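To make the idea concrete, the generic launcher could be driven by a deployment config along these lines (all names here are hypothetical, just to illustrate the shape):

```yaml
# Hypothetical deployment config: each entry becomes one Worker in the JVM.
# Scaling out later means splitting this list across deployments,
# without touching any workflow implementation code.
workers:
  - taskList: orders-task-list
    workflowClasses: [com.example.OrderWorkflowImpl]
    activityClasses: [com.example.OrderActivitiesImpl]
  - taskList: billing-task-list
    workflowClasses: [com.example.BillingWorkflowImpl]
    activityClasses: [com.example.BillingActivitiesImpl]
```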
One way of doing this dynamically would be to use a single task list name. However, that cannot work, because each Worker would then have to register the workflow and activity implementations for every type of task, which defeats the original goal.
So the only way to allow such separation is to have independent task list names from the very beginning.
In this case, to start with a single JVM containing all workflow and activity workers, I was planning to start a single WorkerFactory and then create one Worker per task list name.
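As a sketch of the intended wiring, here is a minimal self-contained model in plain Java. The class names (`WorkerFactory`, `Worker`, `newWorker`) only mirror the Cadence/Temporal Java client; this is not the SDK, just an illustration of "one factory per JVM, one worker per task list, pool shared at the factory level" under that assumption:

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative model only: class names mirror the Cadence/Temporal Java
// client, but this is NOT the real SDK API.
public class FactoryModel {

    public static class Worker {
        final String taskList;
        final List<String> registrations = new ArrayList<>();
        Worker(String taskList) { this.taskList = taskList; }
        // Stand-in for registerWorkflowImplementationTypes / 
        // registerActivitiesImplementations in the real client.
        void register(String implementation) { registrations.add(implementation); }
    }

    public static class WorkerFactory {
        // One executor shared by every worker this factory creates: this is
        // the resource-usage property the question is about.
        final ExecutorService sharedPool = Executors.newFixedThreadPool(8);
        final Map<String, Worker> workers = new LinkedHashMap<>();

        // One Worker per task list name, created on demand.
        Worker newWorker(String taskList) {
            return workers.computeIfAbsent(taskList, Worker::new);
        }
    }

    public static void main(String[] args) {
        WorkerFactory factory = new WorkerFactory();
        // Task list names and implementations would come from configuration,
        // not be hard-coded like this.
        factory.newWorker("orders-task-list").register("OrderWorkflowImpl");
        factory.newWorker("billing-task-list").register("BillingActivitiesImpl");
        System.out.println(factory.workers.keySet());
        factory.sharedPool.shutdown();
    }
}
```

Splitting into separate deployments later would then just mean instantiating this launcher with a subset of the configured task lists in each JVM.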
But my question is whether I can optimize hardware resource usage with this approach.
If there is a single thread pool in the WorkerFactory that is shared by all Workers created from that factory, then thread usage should be efficient. But if each Worker has its own thread pool, I would need to shrink each worker's pool to avoid overloading the JVM, and I might then end up with most of the threads in the other pools sitting idle.
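To illustrate the concern, here is a self-contained `java.util.concurrent` sketch (no workflow SDK involved) comparing the peak concurrency a burst of work from one busy worker can reach on a shared pool versus a smaller per-worker pool:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simulates 8 threads of total JVM capacity: either one pool shared across
// 4 workers, or 4 private pools of 2 threads each. A burst of 8 tasks from
// a single busy worker can use all 8 shared threads, but only 2 of its own.
public class PoolSharingDemo {

    // Submits 8 simultaneous blocking tasks and returns the peak number of
    // tasks that were actually running at the same time.
    public static int peakConcurrency(ExecutorService pool) throws Exception {
        int tasks = 8;
        AtomicInteger running = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                int now = running.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try { Thread.sleep(300); } catch (InterruptedException e) { }
                running.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        return peak.get();
    }

    public static void main(String[] args) throws Exception {
        // Shared pool: all 8 threads available to whichever worker is busy.
        ExecutorService shared = Executors.newFixedThreadPool(8);
        // Per-worker pool: the busy worker owns only 2 of the 8 threads;
        // the other 6 sit idle in the other workers' pools.
        ExecutorService perWorker = Executors.newFixedThreadPool(2);
        System.out.println("shared pool peak:     " + peakConcurrency(shared));
        System.out.println("per-worker pool peak: " + peakConcurrency(perWorker));
        shared.shutdown();
        perWorker.shutdown();
    }
}
```

With the shared pool the burst runs 8-wide; with the per-worker pool it is capped at 2 while the remaining capacity idles. That is exactly the waste I am worried about if pools are per Worker rather than per WorkerFactory.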
I would bet that the thread pool is per WorkerFactory, but in the API I can only specify thread pool settings per Worker, so I'm not sure how it is actually implemented.
Note: if you have a better suggestion on how to implement this use case, please let me know.
Thank you in advance!