Cassandra consuming a lot of memory

Hi Folks

We are using Cassandra as the database for our Temporal deployment. In the week since we went live, Cassandra's memory usage has steadily increased. I understand this is not a Temporal-specific issue, but I was wondering whether you have ever experienced such behaviour and whether there are any Temporal knobs we can tweak.

We’ve experienced the same behaviour with both of the following Cassandra configurations:

  1. 4 CPU, 16 GB, 3-node cluster
  2. 8 CPU, 32 GB, 3-node cluster

Thanks
Chitresh

@tihomir @Wenquan_Xing

I have a similar query. I have deployed Cassandra on a 3-node GKE cluster with 16 cores and 64 GB of memory each, and have configured Cassandra (3 pods) with a max heap size of 16 GB.

As you can see in the screenshot above, all the pods are using their max heap memory (16 GB × 3) even when no workflows are being executed, and we don’t see any GC occurring either.
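A note in case it helps (this is general Cassandra behaviour, not anything Temporal-specific, and the file paths below are assumptions based on standard Cassandra packaging): by default Cassandra starts the JVM with `-Xms` equal to `-Xmx`, so the process reserves the entire heap up front and the pod's reported memory sits near the configured maximum even when idle, with correspondingly little GC activity. `nodetool info` reports heap *used* versus *total*, which distinguishes heap actually in use from heap merely committed. The heap size itself is set in the JVM options file, along these lines:

```
# conf/jvm.options (Cassandra 3.x) or conf/jvm*-server.options (4.x).
# Illustrative values only, not a recommendation; by default
# cassandra-env.sh sets -Xms equal to -Xmx, which is why committed
# heap equals max heap from startup.
-Xms16G
-Xmx16G
```

If the heap *used* figure (rather than committed) keeps climbing toward the maximum while no workflows are running, that would be worth investigating separately.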

For our workflow runs (worker execution time fixed at 120 ms):
30 TPS: 278.56 ms
50 TPS: 340.49 ms

But at anything higher than that, the response time climbs into seconds, so we suspect Cassandra is the bottleneck.

We would appreciate your suggestions.

Hi @Ruchir, would you mind creating a new post for your question (rather than reusing this one)? Thanks!