Hi! I am using the Go SDK to build Temporal workflows. I have many integration workflows being executed by the worker, and I have allocated around 8 GB of memory to one Kubernetes pod / one worker. Suddenly we are seeing pod restarts due to out-of-memory (OOM) errors, and we are not able to identify which Temporal workflow is causing them. Are there any metrics through which we can identify the root cause? As a workaround, we have increased the memory limit to 16 GB. I am new to Temporal.
You can reduce the size of the workflow cache through the worker.SetStickyWorkflowCacheSize function.
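For context, a minimal sketch of where that call fits in a Go worker's startup (the cache size of 2000 and the task queue name here are illustrative, not recommendations):

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Must be called before any worker is started.
	// The value is illustrative -- tune it against your memory budget
	// and the typical size of your workflow state.
	worker.SetStickyWorkflowCacheSize(2000)

	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client", err)
	}
	defer c.Close()

	// "integration-task-queue" is a placeholder task queue name.
	w := worker.New(c, "integration-task-queue", worker.Options{})
	// Register your workflows/activities here, e.g. w.RegisterWorkflow(...).

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited", err)
	}
}
```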
@maxim what does the above method call actually do? We are facing similar issues where our pods start crashing after deployment and then re-stabilize after 5-10 minutes.
This call limits the number of cached workflows.