Temporal worker cannot connect to Temporal server

Hello, I am trying to start Temporal services using Docker containers. So far I have been able to start the frontend, history, and matching services successfully, but when I start the worker service it throws an error like this:

{"level":"warn","ts":"2023-05-17T22:12:47.804Z","msg":"error creating sdk client","service":"worker","error":"failed reaching server: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused\"","logging-call-at":"factory.go:114"}
{"level":"fatal","ts":"2023-05-17T22:12:47.804Z","msg":"error creating sdk client","service":"worker","error":"failed reaching server: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused\"","logging-call-at":"factory.go:121","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Fatal\n\t/home/builder/temporal/common/log/zap_logger.go:174\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient.func1\n\t/home/builder/temporal/common/sdk/factory.go:121\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:74\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:65\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient\n\t/home/builder/temporal/common/sdk/factory.go:108\ngo.temporal.io/server/service/worker/scanner.(*Scanner).Start\n\t/home/builder/temporal/service/worker/scanner/scanner.go:179\ngo.temporal.io/server/service/worker.(*Service).startScanner\n\t/home/builder/temporal/service/worker/service.go:504\ngo.temporal.io/server/service/worker.(*Service).Start\n\t/home/builder/temporal/service/worker/service.go:392\ngo.temporal.io/server/service/worker.ServiceLifetimeHooks.func1.1\n\t/home/builder/temporal/service/worker/fx.go:139"}

Here are the ports I am using in config_template.yaml:

global:
    membership:
        maxJoinDuration: 30s

publicClient:
    hostPort: "127.0.0.1:7233"

clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 11
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:7233"

services:
    history:
        rpc:
            grpcPort: 7234
            membershipPort: 6934
    matching:
        rpc:
            grpcPort: 7235
            membershipPort: 6935
            bindOnLocalHost: true

    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnLocalHost: true

    worker:
        rpc:
            grpcPort: 7239
            membershipPort: 6939
            bindOnLocalHost: true
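(Note for anyone hitting the same error: when each service runs in its own container, 127.0.0.1 inside the worker container refers to the worker container itself, so a publicClient.hostPort of 127.0.0.1:7233 can never reach a frontend running in another container. A hedged sketch of cross-container addressing — the hostname temporal-frontend is an assumption, use whatever DNS name your frontend container has on the Docker network, and double-check the bindOnIP key against your server version's config schema:)

```yaml
# Sketch only: "temporal-frontend" is an assumed container/DNS name.
publicClient:
    # Must be reachable from every service container, not just the frontend's.
    hostPort: "temporal-frontend:7233"

services:
    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            # bindOnLocalHost: true makes the frontend listen only on its own
            # container's loopback; binding on all interfaces lets other
            # containers reach it.
            bindOnIP: "0.0.0.0"
```

The same reasoning applies to clusterMetadata.clusterInformation.active.rpcAddress, which above also points at 127.0.0.1.)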

Can you share your full generated static config from the worker container? It's at
/etc/temporal/config/docker.yaml

Also, which server version are you using?

And what do you get via

tctl cl h

or, using the temporal CLI:

temporal operator cluster health
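If you run the CLI from somewhere other than the frontend's network namespace, its default target of 127.0.0.1:7233 won't resolve to the frontend; you can point it at the frontend explicitly (the hostname here is an assumption, substitute your frontend container's address):

```shell
# Hypothetical frontend address; substitute your frontend container's hostname.
temporal operator cluster health --address temporal-frontend:7233
```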

Here is my docker.yaml file:

numHistoryShards: 4
defaultStore: postgres-default
advancedVisibilityStore: es-visibility
datastores:
    postgres-default:
        sql:
            pluginName: "postgres"
            databaseName: "<>"
            connectAddr: "<>"
            connectProtocol: "tcp"
            user: "<>"
            password: "<>"
            maxConns: 10
            maxIdleConns: 10
            maxConnLifetime: "1h"
    es-visibility:
        elasticsearch:
            version: ""
            url:
                scheme: "http"
                host: "<>"
            username: "<>"
            password: "<>"
            indices:
                visibility: "temporal_visibility_v1_dev"

global:
    membership:
        maxJoinDuration: 30s

I am running server version 1.20.3.0 (from the Docker tag).

On running temporal operator cluster health I get:

~ $ temporal operator cluster health
Error: unable to health check "temporal.api.workflowservice.v1.WorkflowService" service: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused"
('export TEMPORAL_CLI_SHOW_STACKS=1' to see stack traces)

I think the more interesting piece of information is this part, where I define which ports my services listen on:

publicClient:
    hostPort: "127.0.0.1:8933"

clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 11
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:7233"

services:
    history:
        rpc:
            grpcPort: 7234
            membershipPort: 6934
    matching:
        rpc:
            grpcPort: 7235
            membershipPort: 6935
            bindOnLocalHost: true

    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnLocalHost: true

    worker:
        rpc:
            grpcPort: 7239
            membershipPort: 6939
            bindOnLocalHost: true

Also, I noticed that if I change publicClient.hostPort, the worker fails to connect to the new address as well.

Weird. Could you compare the generated static config for your worker service with what's generated by this docker compose? I upgraded it to 1.20.3 as well so the server versions match. Just note it does not use Elasticsearch for advanced visibility, but the new SQL-based advanced visibility feature.
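For reference, a minimal compose fragment along those lines might look like this. This is a sketch, not the referenced file: the service names are assumptions, and SERVICES, BIND_ON_IP, and PUBLIC_FRONTEND_ADDRESS are environment variables I believe the server image's config template consumes — verify them against your own config_template.yaml:

```yaml
# Sketch: service names and env vars are assumptions to verify against
# your config_template.yaml.
version: "3.5"
services:
    temporal-frontend:
        image: temporalio/server:1.20.3
        environment:
            - SERVICES=frontend
            - BIND_ON_IP=0.0.0.0   # listen beyond the container's own loopback
        networks:
            - temporal-net
    temporal-worker:
        image: temporalio/server:1.20.3
        environment:
            - SERVICES=worker
            # Point the worker's SDK client at the frontend container,
            # not at 127.0.0.1.
            - PUBLIC_FRONTEND_ADDRESS=temporal-frontend:7233
        networks:
            - temporal-net
networks:
    temporal-net:
        driver: bridge
```

On a shared bridge network, Compose's built-in DNS resolves each service name to its container, which is what makes temporal-frontend:7233 reachable from the worker.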