Temporal Worker pod crashing in local Kubernetes environment configured with MySQL

Hi,
I am trying to deploy Temporal in a local k8s cluster using the Helm charts and MySQL. After deployment, the worker pod keeps crashing.


The command used to install:

helm install -f values/values.mysql.yaml temporaltest . \
  --set server.replicaCount=3 \
  --set server.config.persistence.default.sql.user=<MYSQLUSER> \
  --set server.config.persistence.default.sql.password=<MYSQLPWD> \
  --set server.config.persistence.visibility.sql.user=<MYSQLUSER> \
  --set server.config.persistence.visibility.sql.password=<MYSQLPWD> \
  --set server.config.persistence.default.sql.host=<MYSQLHOST> \
  --set server.config.persistence.visibility.sql.host=<MYSQLHOST> \
  --set server.config.persistence.default.sql.database=temporal \
  --set server.config.persistence.visibility.sql.database=temporal_visibility \
  --set elasticsearch.enabled=false \
  --set kafka.enabled=false \
  --timeout 15m --debug -n temporal
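(For reference, the chart README expects the MySQL databases and schemas to be created before installing when an external MySQL is used; run from the admintools pod, the steps look roughly like the sketch below. The exact temporal-sql-tool flags and schema paths depend on the tool version, so treat these as assumptions rather than exact commands.)

# sketch of the schema setup from the chart README; flags/paths may differ by version
export SQL_PLUGIN=mysql
export SQL_HOST=<MYSQLHOST>
export SQL_PORT=3306
export SQL_USER=<MYSQLUSER>
export SQL_PASSWORD=<MYSQLPWD>

temporal-sql-tool create-database -database temporal
SQL_DATABASE=temporal temporal-sql-tool setup-schema -v 0.0
SQL_DATABASE=temporal temporal-sql-tool update-schema -d ./schema/mysql/v57/temporal/versioned

temporal-sql-tool create-database -database temporal_visibility
SQL_DATABASE=temporal_visibility temporal-sql-tool setup-schema -v 0.0
SQL_DATABASE=temporal_visibility temporal-sql-tool update-schema -d ./schema/mysql/v57/visibility/versioned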

Output of tctl cluster health: context deadline exceeded

Are you using the latest from the helm-charts repo? Are you able to see the pod logs, and could you share them?
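If it helps to rule out basic connectivity, you can also check the frontend directly through a port-forward. This assumes the default service naming for a release called temporaltest; adjust the service name if yours differs:

kubectl port-forward -n temporal svc/temporaltest-frontend 7233:7233
# then, in a second terminal
tctl --address 127.0.0.1:7233 cluster health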

Yes, using the latest version. Yes, I can access the pod logs. These are the logs for the worker pod:

{"level":"info","ts":"2022-04-08T09:03:38.087Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"frontend","addresses":["172.16.22.234:7233"],"logging-call-at":"rpServiceResolver.go:266"}
{"level":"info","ts":"2022-04-08T09:03:38.087Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"history","addresses":["172.16.22.232:7234"],"logging-call-at":"rpServiceResolver.go:266"}
{"level":"info","ts":"2022-04-08T09:03:38.087Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"matching","addresses":["172.16.22.235:7235"],"logging-call-at":"rpServiceResolver.go:266"}
{"level":"info","ts":"2022-04-08T09:03:38.087Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"worker","addresses":["172.16.22.226:7239"],"logging-call-at":"rpServiceResolver.go:266"}
{"level":"info","ts":"2022-04-08T09:03:38.087Z","msg":"RuntimeMetricsReporter started","service":"worker","logging-call-at":"runtime.go:139"}
{"level":"info","ts":"2022-04-08T09:03:38.087Z","msg":"worker starting","service":"worker","component":"worker","logging-call-at":"service.go:317"}
{"level":"info","ts":"2022-04-08T09:03:38.088Z","msg":"Get dynamic config","name":"worker.ScannerMaxConcurrentActivityExecutionSize","value":10,"default-value":10,"logging-call-at":"config.go:79"}
{"level":"info","ts":"2022-04-08T09:03:38.088Z","msg":"Get dynamic config","name":"worker.ScannerMaxConcurrentWorkflowTaskExecutionSize","value":10,"default-value":10,"logging-call-at":"config.go:79"}
{"level":"info","ts":"2022-04-08T09:03:38.088Z","msg":"Get dynamic config","name":"worker.ScannerMaxConcurrentActivityTaskPollers","value":8,"default-value":8,"logging-call-at":"config.go:79"}
{"level":"info","ts":"2022-04-08T09:03:38.088Z","msg":"Get dynamic config","name":"worker.ScannerMaxConcurrentWorkflowTaskPollers","value":8,"default-value":8,"logging-call-at":"config.go:79"}
{"level":"info","ts":"2022-04-08T09:03:38.088Z","msg":"Get dynamic config","name":"worker.executionsScannerEnabled","value":false,"default-value":false,"logging-call-at":"config.go:79"}
{"level":"info","ts":"2022-04-08T09:03:38.088Z","msg":"Get dynamic config","name":"worker.taskQueueScannerEnabled","value":true,"default-value":true,"logging-call-at":"config.go:79"}
{"level":"fatal","ts":"2022-04-08T09:03:48.219Z","msg":"error starting scanner","service":"worker","error":"context deadline exceeded","logging-call-at":"service.go:436","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Fatal\n\t/temporal/common/log/zap_logger.go:150\ngo.temporal.io/server/service/worker.(*Service).startScanner\n\t/temporal/service/worker/service.go:436\ngo.temporal.io/server/service/worker.(*Service).Start\n\t/temporal/service/worker/service.go:343\ngo.temporal.io/server/service/worker.ServiceLifetimeHooks.func1.1\n\t/temporal/service/worker/fx.go:79"}

Let me know if you want to see logs from the other pods as well.
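If needed, I can pull them the same way as the worker logs, e.g. (pod names here are placeholders):

kubectl get pods -n temporal
kubectl logs -n temporal <frontend-pod> --tail=200
kubectl logs -n temporal <history-pod> --tail=200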