We’re in the process of prototyping an architecture for handling workflows and would appreciate your insights.
We want a system where different service owners can define their own activities/workflows and trigger them as needed. We are prototyping this using dynamic workers, that is, short-lived workers that are created and started dynamically when a request arrives.
In our dynamic worker framework, the following happens when a workflow request is received:

1. A worker is started after the requested activities are dynamically imported and registered with it.
2. The workflow is started on that worker's task queue.
3. The workflow is watched, and once it is done (completed, terminated, etc.) the worker is shut down.
So in summary, each request results in the creation of a worker that lives for the duration of the workflow and is terminated once the workflow is done.
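The lifecycle above can be sketched with the `temporalio` Python SDK. This is a minimal sketch, not our actual implementation: `DSLWorkflow` stands in for the workflow from the DSL sample, and `load_activities` is a hypothetical helper that extracts activity functions from the request.

```python
import uuid

from temporalio.client import Client
from temporalio.worker import Worker

from dsl_workflow import DSLWorkflow   # assumption: workflow from the DSL sample
from registry import load_activities   # assumption: extracts activities from the request


async def handle_request(dsl_input) -> object:
    client = await Client.connect("localhost:7233")

    # One dedicated task queue per request, so only this worker polls it.
    task_queue = f"dsl-{uuid.uuid4()}"
    activities = load_activities(dsl_input)

    # The worker runs only while this block is open and shuts down on exit.
    async with Worker(
        client,
        task_queue=task_queue,
        workflows=[DSLWorkflow],
        activities=activities,
    ):
        handle = await client.start_workflow(
            DSLWorkflow.run,
            dsl_input,
            id=f"dsl-wf-{uuid.uuid4()}",
            task_queue=task_queue,
        )
        # Awaiting the result doubles as "watch until done"; it raises on
        # failure/termination, which still exits the block and stops the worker.
        return await handle.result()
```

One thing this sketch surfaces: if the process handling the request dies mid-workflow, no worker is left polling that task queue, so the workflow stalls until something recreates a worker for it.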
Since this approach differs from the standard long-running Temporal workers, we'd like the community's opinion: is this an anti-pattern that should be avoided? And are there other issues we could run into with this approach in the long run?
Our use case is to provide a Temporal-based workflow platform that can serve requests from different service owners, so anyone can define their own activities and then send a request to execute a workflow. Since workflow requests with new activities can arrive from anywhere at any time, we need a way to register those activities dynamically without restarting the workers.
Just for reference, we're using the Python DSL sample recently released by Temporal. We expect a request to arrive with a DSL input listing the activities; we extract the activities from the input, dynamically import and register them with a worker, and start that dynamic worker for that specific request.
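Assuming the DSL input names each activity as a dotted `module.function` path (an assumption about our request format, not part of the sample), the dynamic import step can be a small resolver like this hypothetical helper:

```python
import importlib
from typing import Callable


def resolve_activity(path: str) -> Callable:
    """Resolve a dotted 'module.function' path from the DSL input to a callable."""
    module_name, _, attr = path.rpartition(".")
    if not module_name:
        raise ValueError(f"expected 'module.function', got {path!r}")
    module = importlib.import_module(module_name)
    fn = getattr(module, attr)
    if not callable(fn):
        raise TypeError(f"{path} is not callable")
    return fn
```

The resolved callables would then be passed to the worker's `activities` list. Anything like this needs care around trust: importing arbitrary modules named by a request is code execution, so the platform would have to restrict which modules service owners can reference.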
The one-worker/task-queue-per-workflow model seemed to fit our needs.
But please let us know if you have any other ideas based on our use case. Thanks!