Temporal worker cannot connect to Temporal server

Hello, I am trying to start the Temporal services in Docker containers. So far, I have been able to start the frontend, history, and matching services successfully, but when I start the worker service it throws an error like this:

{"level":"warn","ts":"2023-05-17T22:12:47.804Z","msg":"error creating sdk client","service":"worker","error":"failed reaching server: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused\"","logging-call-at":"factory.go:114"}
{"level":"fatal","ts":"2023-05-17T22:12:47.804Z","msg":"error creating sdk client","service":"worker","error":"failed reaching server: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused\"","logging-call-at":"factory.go:121","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Fatal\n\t/home/builder/temporal/common/log/zap_logger.go:174\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient.func1\n\t/home/builder/temporal/common/sdk/factory.go:121\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:74\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:65\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient\n\t/home/builder/temporal/common/sdk/factory.go:108\ngo.temporal.io/server/service/worker/scanner.(*Scanner).Start\n\t/home/builder/temporal/service/worker/scanner/scanner.go:179\ngo.temporal.io/server/service/worker.(*Service).startScanner\n\t/home/builder/temporal/service/worker/service.go:504\ngo.temporal.io/server/service/worker.(*Service).Start\n\t/home/builder/temporal/service/worker/service.go:392\ngo.temporal.io/server/service/worker.ServiceLifetimeHooks.func1.1\n\t/home/builder/temporal/service/worker/fx.go:139"}

Here are the ports that I am using in config_template.yaml

global:
    membership:
        maxJoinDuration: 30s

publicClient:
    hostPort: "127.0.0.1:7233"

clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 11
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:7233"

services:
    history:
        rpc:
            grpcPort: 7234
            membershipPort: 6934
    matching:
        rpc:
            grpcPort: 7235
            membershipPort: 6935
            bindOnLocalHost: true

    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnLocalHost: true

    worker:
        rpc:
            grpcPort: 7239
            membershipPort: 6939
            bindOnLocalHost: true
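For context: with bindOnLocalHost: true the services only listen on 127.0.0.1 inside their own container, and publicClient.hostPort is the address the worker's system SDK client dials. If the worker runs in a different container than the frontend, 127.0.0.1 points at the worker's own container, so the dial is refused. A minimal sketch of the split-container variant, assuming the frontend container is reachable under the hostname temporal-frontend on a shared Docker network (that hostname is an assumption, not something from this setup):

services:
    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnIP: 0.0.0.0   # listen on all interfaces instead of loopback only

publicClient:
    hostPort: "temporal-frontend:7233"   # an address other containers can resolve and reach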

Can you share your full static config (generated) from the worker container? It’s
/etc/temporal/config/docker.yaml

Also which server version are you using?

Also what do you get via

tctl cl h

or using temporal cli:

temporal operator cluster health
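If the CLI runs outside the frontend container it will also default to 127.0.0.1:7233, so it can help to pass the frontend address explicitly. A hedged example, where temporal-frontend:7233 is an assumed address rather than one from your setup:

temporal operator cluster health --address temporal-frontend:7233
tctl --address temporal-frontend:7233 cluster health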

Here is my docker.yaml file:

numHistoryShards: 4
defaultStore: postgres-default
advancedVisibilityStore: es-visibility
datastores:
    postgres-default:
        sql:
            pluginName: "postgres"
            databaseName: "<>"
            connectAddr: "<>"
            connectProtocol: "tcp"
            user: "<>"
            password: "<>"
            maxConns: 10
            maxIdleConns: 10
            maxConnLifetime: "1h"
    es-visibility:
        elasticsearch:
            version: ""
            url:
                scheme: "http"
                host: "<>"
            username: "<>"
            password: "<>"
            indices:
                visibility: "temporal_visibility_v1_dev"

global:
    membership:
        maxJoinDuration: 30s

I am running server version 1.20.3.0 (from the Docker tag).

On running temporal operator cluster health I get:

~ $ temporal operator cluster health
Error: unable to health check "temporal.api.workflowservice.v1.WorkflowService" service: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused"
('export TEMPORAL_CLI_SHOW_STACKS=1' to see stack traces)
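That output suggests nothing is listening on 127.0.0.1:7233 from wherever the CLI runs. A couple of hedged checks, assuming the containers are named temporal-worker and temporal-frontend (placeholder names) and that nc is available in the image:

# Is any container publishing port 7233 on the host?
docker ps --filter "publish=7233"

# Can the worker container reach the frontend's gRPC port?
docker exec temporal-worker nc -zv temporal-frontend 7233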

I think the more interesting piece of information is this section, where I define which ports my services run on:

publicClient:
    hostPort: "127.0.0.1:8933"

clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 11
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:7233"

services:
    history:
        rpc:
            grpcPort: 7234
            membershipPort: 6934
    matching:
        rpc:
            grpcPort: 7235
            membershipPort: 6935
            bindOnLocalHost: true

    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnLocalHost: true

    worker:
        rpc:
            grpcPort: 7239
            membershipPort: 6939
            bindOnLocalHost: true

Also, I noticed that if I change publicClient.hostPort, the worker then fails to connect to the new address.

Weird. Could you compare the generated static config for your worker service with what’s generated by this Docker Compose setup? I upgraded it to 1.20.3 as well, so the server versions match. Just note that it does not use Elasticsearch for advanced visibility but the new SQL-based advanced visibility feature.
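One way to do that comparison; the container names here (my-temporal-worker for your setup, compose-temporal-worker for the reference compose) are placeholders, not names from either setup:

docker exec my-temporal-worker cat /etc/temporal/config/docker.yaml > mine.yaml
docker exec compose-temporal-worker cat /etc/temporal/config/docker.yaml > reference.yaml
diff -u reference.yaml mine.yaml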

I also encountered the same problem. After checking, I found that the dependency configuration is missing. What should I do next? How do you configure the dependencies? Any suggestions or configuration examples?

{"level":"fatal","ts":"2023-10-11T18:12:27.784+0800","msg":"error getting system sdk client","service":"worker","error":"unable to create SDK client: failed reaching server: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 0.0.0.0:7233: connect: connection refused\"","logging-call-at":"factory.go:107","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Fatal\n\t/data/apps/temporal-eco/temporal/common/log/zap_logger.go:150\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient.func1\n\t/data/apps/temporal-eco/temporal/common/sdk/factory.go:107\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:68\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:59\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient\n\t/data/apps/temporal-eco/temporal/common/sdk/factory.go:104\ngo.temporal.io/server/service/worker.(*Service).startScanner\n\t/data/apps/temporal-eco/temporal/service/worker/service.go:466\ngo.temporal.io/server/service/worker.(*Service).Start\n\t/data/apps/temporal-eco/temporal/service/worker/service.go:374\ngo.temporal.io/server/service/worker.ServiceLifetimeHooks.func1.1\n\t/data/apps/temporal-eco/temporal/service/worker/fx.go:155"}

/etc/temporal/config/docker.yaml

log:
    stdout: true
    level: info

persistence:
    numHistoryShards: 512
    defaultStore: default
    visibilityStore: visibility
#    advancedVisibilityStore: es-visibility
    datastores:
        default:
            sql:
                pluginName: "mysql"
                databaseName: "temporal_eco"
                connectAddr: "..."
                connectProtocol: "tcp"
                user: "temporal_eco"
                password: "..."
                connectAttributes:
                    tx_isolation: "'READ-COMMITTED'"
                maxConns: 20
                maxIdleConns: 20
                maxConnLifetime: 1h
                tls:
                    enabled: false
                    caFile:
                    certFile: 
                    keyFile: 
                    enableHostVerification: false
                    serverName: 
        visibility:
            sql:
                pluginName: "mysql"
                databaseName: "temporal_eco_vis"
                connectAddr: "..."
                connectProtocol: "tcp"
                user: "temporal_eco_vis"
                password: "..."
                connectAttributes:
                    tx_isolation: "'READ-COMMITTED'"
                maxConns: 10
                maxIdleConns: 10
                maxConnLifetime: 1h
                tls:
                    enabled: false
                    caFile:
                    certFile: 
                    keyFile: 
                    enableHostVerification: false
                    serverName: 
        es-visibility:
            elasticsearch:
                version: v7
                url:
                    scheme: http
                    host: "..."
                username: "..."
                password: "..."
                indices:
                    visibility: "temporal-visibility-online"

global:
    membership:
        maxJoinDuration: 30s
        broadcastAddress: "..."
    authorization:
        jwtKeyProvider:
            keySourceURIs:
                - 
                - 
            refreshInterval: 1m
        permissionsClaimName: permissions
        authorizer: 
        claimMapper: 
services:
    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnIP: 0.0.0.0

    matching:
        rpc:
            grpcPort: 7235
            membershipPort: 6935
            bindOnIP: 0.0.0.0

    history:
        rpc:
            grpcPort: 7234
            membershipPort: 6934
            bindOnIP: 0.0.0.0

    worker:
        rpc:
            grpcPort: 7239
            membershipPort: 6939
            bindOnIP: 0.0.0.0

clusterMetadata:
    enableGlobalNamespace: false
    failoverVersionIncrement: 10
    masterClusterName: "active"
    currentClusterName: "active"
    clusterInformation:
        active:
            enabled: true
            initialFailoverVersion: 1
            rpcName: "frontend"
            rpcAddress: 127.0.0.1:7233

dcRedirectionPolicy:
    policy: "noop"
    toDC: ""

archival:
    history:
        state: "enabled"
        enableRead: true
        provider:
            filestore:
                fileMode: "0666"
                dirMode: "0766"
    visibility:
        state: "enabled"
        enableRead: true
        provider:
            filestore:
                fileMode: "0666"
                dirMode: "0766"

namespaceDefaults:
    archival:
        history:
            state: "disabled"
            URI: "file:///tmp/temporal_archival/development"
        visibility:
            state: "disabled"
            URI: "file:///tmp/temporal_vis_archival/development"

publicClient:
    hostPort: "0.0.0.0:7233"

dynamicConfigClient:
    filepath: "/etc/temporal/dynamicconfig/docker.yaml"
    pollInterval: "60s

I am running server version 1.17.x.

Compared to https://github.com/tsurdilo/my-temporal-dockercompose/blob/main/compose-services.yml, I found fewer component dependencies in my configuration.

Why is a split deployment recommended when there are dependencies between the frontend, history, matching, and worker components?
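For reference, the linked compose expresses the inter-service dependencies with Docker Compose depends_on and runs one Temporal service per container. A minimal sketch of what that looks like; the service names, image tag, and environment variables below (SERVICES, PUBLIC_FRONTEND_ADDRESS) follow the Temporal Docker image conventions as I understand them and are illustrative assumptions, not lines copied from that repo:

version: "3.5"
services:
  temporal-frontend:
    image: temporalio/server:1.20.3.0
    environment:
      - SERVICES=frontend          # run only the frontend service in this container
  temporal-worker:
    image: temporalio/server:1.20.3.0
    environment:
      - SERVICES=worker                                  # run only the worker service
      - PUBLIC_FRONTEND_ADDRESS=temporal-frontend:7233   # what publicClient.hostPort should resolve to
    depends_on:
      - temporal-frontend          # start order only; history, matching, and the database are elided here

Note that depends_on only controls start order: the worker still has to be able to resolve and reach the frontend address it dials, which is why publicClient.hostPort matters more than the dependency graph itself.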