Helm Deployment: Pod unable to reach Cassandra during Init

I'm trying to run Temporal using the Helm chart:

helm install --set server.replicaCount=1 --set cassandra.config.cluster_size=1 --set prometheus.enabled=false --set grafana.enabled=false --set elasticsearch.enabled=false temporal . --timeout 150m

Several pods are stuck in the Init state.

I let it try to complete for over an hour, but there was no progress.

On further inspection, I found that all of the stuck pods are waiting on Cassandra. These are the commands their init containers run:

until nslookup temporal-cassandra.default.svc.cluster.local; do echo waiting for cassandra service; sleep 1; done;

until cqlsh temporal-cassandra.default.svc.cluster.local 9042 -e "SHOW VERSION"; do echo waiting for cassandra to start; sleep 1; done;

until cqlsh temporal-cassandra.default.svc.cluster.local 9042 -e "SELECT keyspace_name FROM system_schema.keyspaces" | grep temporal$; do echo waiting for default keyspace to become ready; sleep 1; done;
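For reference, the init containers on a stuck pod and their individual logs can be inspected like this (the pod and init container names below are placeholders):

kubectl get pods
kubectl describe pod <stuck-pod-name>
kubectl logs <stuck-pod-name> -c <init-container-name> --follow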

I tried with the latest master and with release 1.12 (to try an older setup); both run into the same issue.

Docker: Community Version 23.0.1
Minikube: v1.29.0
Helm: v3.11.2
OS: CentOS 7

Can you check the k8s logs to see why Cassandra is unreachable? e.g.

minikube logs -f
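A couple of other things worth checking: whether the Cassandra pod itself is up and what it is logging, and whether the service name resolves from inside the cluster. Assuming the chart created the Cassandra StatefulSet for release temporal (so the pod is temporal-cassandra-0), and using busybox only as an example image:

kubectl get pods
kubectl logs temporal-cassandra-0 --tail=50
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup temporal-cassandra.default.svc.cluster.local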

I'm not sure if you changed the release name, but the one you used in helm install is temporaltest, which should produce the Cassandra host temporaltest-cassandra.default.svc.cluster.local. However, the commands you posted contain a different address: temporal-cassandra.default.svc.cluster.local.
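You can verify which Cassandra service name the chart actually created, and whether it has endpoints yet, with a couple of standard kubectl checks:

kubectl get svc | grep cassandra
kubectl get endpoints | grep cassandra

The host the init containers wait on should be <release-name>-cassandra.default.svc.cluster.local.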

I see a lot of these logs under minikube logs | grep cassandra:


Are these helpful?

About the deployment name: I accidentally copied the command from the docs when pasting here, but I have been using "temporal" as the deployment name. I've updated the command in the post.

On cloning the VM I was facing this issue on, the Helm chart came online with zero changes to the configuration; it turned out to be some networking error in the cloned VM.

Thanks @tao for dedicating time to this issue!