Temporal.io restart problem on Kubernetes node restart

Hi Maxim,

Just to clarify, the Cassandra DB on K8s that is used for the Temporal keyspaces was not deployed using the Temporal Helm charts. It’s a separate deployment that Temporal connects to.

Can you provide details on why a Helm-chart-based Cassandra deployment can suffer data loss? Although we did not use the Temporal Helm charts to deploy Cassandra, it would be handy to know what could cause the losses so I can check our Cassandra configuration for similar issues.
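For context on what I’d be checking: one failure mode I’m aware of is Cassandra data living on ephemeral volumes (emptyDir/hostPath) instead of PersistentVolumeClaims; emptyDir contents are wiped when a pod is rescheduled after a node restart. Here’s a rough sketch of the kind of check I have in mind, assuming the official `kubernetes` Python client and a StatefulSet named `cassandra` in a `cassandra` namespace (both placeholders, adjust to the actual deployment):

```python
# Sketch: flag Cassandra volumes that would not survive a node restart.
# Assumes the official `kubernetes` Python client (pip install kubernetes)
# and a StatefulSet named "cassandra" in namespace "cassandra".
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside a pod
apps = client.AppsV1Api()

sts = apps.read_namespaced_stateful_set(name="cassandra", namespace="cassandra")

# Ephemeral volume types lose their contents when the pod is rescheduled.
for vol in sts.spec.template.spec.volumes or []:
    if vol.empty_dir is not None or vol.host_path is not None:
        print(f"WARNING: volume {vol.name!r} is ephemeral (emptyDir/hostPath)")

# Durable data should come from per-replica PersistentVolumeClaims instead.
for pvc in sts.spec.volume_claim_templates or []:
    print(f"OK: PVC template {pvc.metadata.name!r} provides durable storage")
```

If the StatefulSet has no volumeClaimTemplates and the data directory maps to an emptyDir, that alone would explain data disappearing on node restarts, but I’d still like to hear whether there are other causes specific to the Helm chart setup.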

I’m not an expert in K8s, but AFAIK running production-quality persistence on it is very non-trivial. And I believe a Helm chart is not the right technology for managing databases on K8s.

Thanks, Maxim.