For Temporal workers, it would be very nice to have autoscaling and per-task resource allocation.
Using a k8s "Deployment" type worker and autoscaling based on task/worker metrics is not enough. Resource requirements can differ depending on the task input, and it is hard for the autoscaler to avoid killing a pod that is still in the middle of a task (which may be a really long-running one).
An ideal pattern might be to use the k8s "Job" pattern: inspect the task input, run a resource cost estimator, and submit a k8s Job type worker that polls and completes the task. But it seems hard to achieve in Temporal.
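Roughly what I have in mind for the submission side (just a sketch using client-go; `estimateResources`, the task-queue-specific worker image, and the namespace are made up):

```go
package main

import (
	"context"
	"log"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// estimateResources is a made-up cost estimator: a real one would inspect
// the task input and map it to resource requests for the one-off worker Job.
func estimateResources(input []byte) corev1.ResourceList {
	return corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("2"),
		corev1.ResourceMemory: resource.MustParse("4Gi"),
	}
}

// submitWorkerJob creates a k8s Job that runs a single Temporal worker,
// sized according to the estimate, which polls the task queue and exits when done.
func submitWorkerJob(ctx context.Context, input []byte) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "temporal-job-worker-"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "example.com/temporal-job-worker:latest", // hypothetical image
						Resources: corev1.ResourceRequirements{
							Requests: estimateResources(input),
						},
					}},
				},
			},
		},
	}
	_, err = clientset.BatchV1().Jobs("default").Create(ctx, job, metav1.CreateOptions{})
	return err
}

func main() {
	if err := submitWorkerJob(context.Background(), []byte(`{"size":"large"}`)); err != nil {
		log.Fatal(err)
	}
}
```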
I think long term we will provide some sort of basic worker auto-scaling documentation that users can extend to their needs. Every deployment is different and apps have different scaling needs, so I think it would be hard to provide something that works for everyone.
For now, this post might help you get an idea of what SDK and server metrics you could use to make scaling decisions, along with the worker tuning guide in the docs.
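For example, with a recent Go SDK the main per-worker tuning knobs live in `worker.Options`; a minimal sketch (the task queue name and values are placeholders, tune them against your metrics):

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	c, err := client.Dial(client.Options{}) // defaults to localhost:7233
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()

	// Knobs you would adjust based on the SDK/server metrics mentioned above.
	w := worker.New(c, "my-task-queue", worker.Options{
		MaxConcurrentActivityExecutionSize:     10, // parallel activities per worker
		MaxConcurrentWorkflowTaskExecutionSize: 5,  // parallel workflow tasks per worker
		MaxConcurrentActivityTaskPollers:       4,  // pollers hitting the task queue
	})

	// w.RegisterWorkflow(...) and w.RegisterActivity(...) would go here.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalf("worker exited: %v", err)
	}
}
```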
I was thinking about job patterns and came to this idea:
- (optional) Before enqueuing a job in Temporal, check whether the node group has enough resources and, if not, spin up a node.
- For a heavy task that eats all of a node's resources, run a Kubernetes Job with anti-affinity set so that only one job runs per node.
- The k8s Job is just a worker process that exits with a successful exit status code after n minutes of no tasks (see the sketch after this list).
- The node is autoscaled down when no jobs are running on it.
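A rough sketch of that last part with the Go SDK, assuming a worker interceptor that records the last activity task and a hypothetical 10-minute idle timeout (it only tracks activity tasks, not workflow tasks, to keep it short):

```go
package main

import (
	"context"
	"log"
	"sync/atomic"
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/interceptor"
	"go.temporal.io/sdk/worker"
)

// lastTask holds the Unix time of the most recently started activity task.
var lastTask atomic.Int64

// idleTracker is a worker interceptor that stamps lastTask on every activity.
type idleTracker struct {
	interceptor.WorkerInterceptorBase
}

func (t *idleTracker) InterceptActivity(
	ctx context.Context, next interceptor.ActivityInboundInterceptor,
) interceptor.ActivityInboundInterceptor {
	return &trackedActivityInbound{
		ActivityInboundInterceptorBase: interceptor.ActivityInboundInterceptorBase{Next: next},
	}
}

type trackedActivityInbound struct {
	interceptor.ActivityInboundInterceptorBase
}

func (a *trackedActivityInbound) ExecuteActivity(
	ctx context.Context, in *interceptor.ExecuteActivityInput,
) (interface{}, error) {
	lastTask.Store(time.Now().Unix())
	return a.ActivityInboundInterceptorBase.ExecuteActivity(ctx, in)
}

func main() {
	const idleTimeout = 10 * time.Minute // hypothetical "n minutes of no tasks"

	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()

	lastTask.Store(time.Now().Unix())
	w := worker.New(c, "heavy-task-queue", worker.Options{
		Interceptors: []interceptor.WorkerInterceptor{&idleTracker{}},
	})
	// w.RegisterWorkflow(...) / w.RegisterActivity(...) for the heavy task go here.

	if err := w.Start(); err != nil {
		log.Fatalf("unable to start worker: %v", err)
	}

	// Check once a minute; stop the worker after idleTimeout with no activity tasks.
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		if time.Since(time.Unix(lastTask.Load(), 0)) > idleTimeout {
			break
		}
	}
	w.Stop()
	// Exiting main with status 0 marks the Kubernetes Job as succeeded,
	// so the cluster autoscaler can reclaim the node.
}
```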