With the default workflow cache size on a worker (1,000 workflows), the memory used by the worker gradually increases over time. This is expected, since these are long-running workflows.
It would be helpful to know how much memory one cached workflow occupies, so that the amount of memory to reserve for a worker can be pre-calculated.
This is a bit hard to obtain in Python. While a cached workflow is an object you can pass to
sys.getsizeof, that call does not follow references, and the many recursive object graphs involved make an accurate measurement difficult. Different workflows also have different memory footprints at any given time, depending on how they are written, how far along they are, what has happened, what is pending, etc.
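To illustrate why sys.getsizeof alone is not enough, here is a minimal sketch of a recursive size estimator. It is an assumption-laden approximation (it only descends into containers and `__dict__` attributes, and it will miss slots, buffers held in C extensions, and interpreter-internal state), not an accurate accounting of a workflow's footprint:

```python
import sys

def deep_sizeof(obj, seen=None):
    """Rough recursive size estimate in bytes.

    Tracks object ids in `seen` so cycles and shared objects
    are counted only once; without this, recursive object
    graphs (common in workflow state) would loop forever.
    """
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_sizeof(k, seen) + deep_sizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_sizeof(item, seen) for item in obj)
    elif hasattr(obj, "__dict__"):
        size += deep_sizeof(vars(obj), seen)
    return size
```

Even with cycle handling like this, the number you get is only a lower bound on what the process actually holds, which is why benchmarking is the more reliable approach.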
We recommend benchmarking with your actual workflow code and expected workload to determine the worker resources you need. That is the practical way to pre-calculate.
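As a starting point for such a benchmark, you can measure the average Python-heap cost of caching N instances of some state object with tracemalloc. This is a sketch only: the `factory` callable is a hypothetical stand-in for whatever builds your real workflow state, and tracemalloc sees Python allocations but not native memory, so treat the result as a lower bound and validate against process RSS under real load:

```python
import tracemalloc

def estimate_per_item_bytes(factory, count=1000):
    """Allocate `count` objects from `factory` while tracing and
    return the average Python-heap bytes attributed to each one."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    cache = [factory() for _ in range(count)]  # stands in for the worker's cache
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    del cache
    return (after - before) / count
```

Running this with a factory that approximates your workflow state, and then multiplying by your cache size (1,000 by default), gives a rough floor for the memory to reserve per worker.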