We are exploring Temporal to see if it can serve as the workflow engine for our requirements.
Based on the experiments done so far, Temporal looks like a great fit for our project.
However, one of our major requirements is a very low memory footprint for the workflow engine.
We see the following memory footprint while running temporal:
- Temporal server: 112 MB. It runs 23 threads by default and holds 12 TCP connections in the ESTABLISHED state.
- Worker process with 2 threads: 35 MB. We have to launch a worker process with at least 2 threads so workflows can execute in parallel.
- Total memory required in steady state, with no workflow executions: 150 MB.
Apart from this, the memory footprint could grow further while the workflow engine is running.
We would like to know if there is any way to reduce the memory footprint through whatever settings Temporal offers.
We don't need to run very heavy workflows, but a low memory footprint is the key requirement.
On the SDK side, tune the worker cache size (see docs here). The size of the workflow state your executions hold will also be a factor in how big your worker's in-memory cache grows.
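For the Go SDK specifically, a minimal sketch of a memory-conscious worker might look like the following. The task queue name and the numeric values are illustrative, not recommendations; `SetStickyWorkflowCacheSize` must be called before any worker is created.

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Shrink the process-wide sticky workflow cache (default is 10,000
	// cached workflow executions). Smaller cache = lower memory, but more
	// workflow replays, so tune to your workload.
	worker.SetStickyWorkflowCacheSize(10)

	// Connects to localhost:7233 by default.
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client:", err)
	}
	defer c.Close()

	w := worker.New(c, "low-memory-task-queue", worker.Options{
		// Bound concurrency so in-flight tasks can't balloon memory use.
		MaxConcurrentWorkflowTaskExecutionSize: 2,
		MaxConcurrentActivityExecutionSize:     2,
	})

	// Register workflows/activities here, then run until interrupted.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited:", err)
	}
}
```

This is a configuration sketch against a live Temporal server, so it is not runnable standalone; the trade-off to watch is that a tiny sticky cache forces more workflow-history replays, trading CPU for memory.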
When I run the command below, I don't see any improvement in memory:
temporal server start-dev --dynamic-config-value history.cacheInitialSize=* --dynamic-config-value history.cacheMaxSize=* --dynamic-config-value history.eventsCacheInitialSize=* --dynamic-config-value history.eventsCacheMaxSize=*
Am I starting the server with the right params?
Or should I download the source code, make the suggested changes, build it, and then run that?
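One thing worth checking: these flags expect concrete numeric values, so a literal `*` would not be a valid setting. A concrete invocation might look like this (the sizes are illustrative, not tuned recommendations):

```shell
temporal server start-dev \
  --dynamic-config-value history.cacheInitialSize=32 \
  --dynamic-config-value history.cacheMaxSize=64 \
  --dynamic-config-value history.eventsCacheInitialSize=32 \
  --dynamic-config-value history.eventsCacheMaxSize=64
```

Note that these caches bound how many entries the history service keeps, so shrinking them mainly limits growth under load rather than reducing the idle baseline — which may be why no steady-state improvement is visible. No rebuild from source should be needed to apply dynamic config values.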
Temporal is written in Go, whose runtime is garbage collected. The garbage collection pacer can now be tuned to target a soft memory limit. I'm not a Temporal expert, but this tuning should be accessible through the GOMEMLIMIT and GOGC environment variables. These GC parameters are only available with Go 1.19 and newer, but they make the collector sweep unused memory more aggressively to stay within a soft limit.
That said, I’d be very curious to know the origin of the “very low memory footprint” requirement.