How do I deploy each service as a separate pod/container? The Temporal worker cannot connect to the Temporal server

I am trying to deploy each Temporal service as a separate pod/container. All the other pods (frontend, history, admintools) are running fine; only the worker pod keeps crashing.

Server version: 1.17.x

{"level":"fatal","ts":"2023-10-11T18:12:27.784+0800","msg":"error getting system sdk client","service":"worker","error":"unable to create SDK client: failed reaching server: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 0.0.0.0:7233: connect: connection refused\"","logging-call-at":"factory.go:107","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Fatal\n\t/data/apps/temporal-eco/temporal/common/log/zap_logger.go:150\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient.func1\n\t/data/apps/temporal-eco/temporal/common/sdk/factory.go:107\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:68\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:59\ngo.temporal.io/server/common/sdk.(*clientFactory).GetSystemClient\n\t/data/apps/temporal-eco/temporal/common/sdk/factory.go:104\ngo.temporal.io/server/service/worker.(*Service).startScanner\n\t/data/apps/temporal-eco/temporal/service/worker/service.go:466\ngo.temporal.io/server/service/worker.(*Service).Start\n\t/data/apps/temporal-eco/temporal/service/worker/service.go:374\ngo.temporal.io/server/service/worker.ServiceLifetimeHooks.func1.1\n\t/data/apps/temporal-eco/temporal/service/worker/fx.go:155"}

/etc/temporal/config/docker.yaml

log:
    stdout: true
    level: info

persistence:
    numHistoryShards: 512
    defaultStore: default
    visibilityStore: visibility
#    advancedVisibilityStore: es-visibility
    datastores:
        default:
            sql:
                pluginName: "mysql"
                databaseName: "temporal_eco"
                connectAddr: "..."
                connectProtocol: "tcp"
                user: "temporal_eco"
                password: "..."
                connectAttributes:
                    tx_isolation: "'READ-COMMITTED'"
                maxConns: 20
                maxIdleConns: 20
                maxConnLifetime: 1h
                tls:
                    enabled: false
                    caFile:
                    certFile: 
                    keyFile: 
                    enableHostVerification: false
                    serverName: 
        visibility:
            sql:
                pluginName: "mysql"
                databaseName: "temporal_eco_vis"
                connectAddr: "..."
                connectProtocol: "tcp"
                user: "temporal_eco_vis"
                password: "..."
                connectAttributes:
                    tx_isolation: "'READ-COMMITTED'"
                maxConns: 10
                maxIdleConns: 10
                maxConnLifetime: 1h
                tls:
                    enabled: false
                    caFile:
                    certFile: 
                    keyFile: 
                    enableHostVerification: false
                    serverName: 
        es-visibility:
            elasticsearch:
                version: v7
                url:
                    scheme: http
                    host: "..."
                username: "..."
                password: "..."
                indices:
                    visibility: "temporal-visibility-online"

global:
    membership:
        maxJoinDuration: 30s
        broadcastAddress: "..."
    authorization:
        jwtKeyProvider:
            keySourceURIs:
                - 
                - 
            refreshInterval: 1m
        permissionsClaimName: permissions
        authorizer: 
        claimMapper: 
services:
    frontend:
        rpc:
            grpcPort: 7233
            membershipPort: 6933
            bindOnIP: 0.0.0.0

    matching:
        rpc:
            grpcPort: 7235
            membershipPort: 6935
            bindOnIP: 0.0.0.0

    history:
        rpc:
            grpcPort: 7234
            membershipPort: 6934
            bindOnIP: 0.0.0.0

    worker:
        rpc:
            grpcPort: 7239
            membershipPort: 6939
            bindOnIP: 0.0.0.0

clusterMetadata:
    enableGlobalNamespace: false
    failoverVersionIncrement: 10
    masterClusterName: "active"
    currentClusterName: "active"
    clusterInformation:
        active:
            enabled: true
            initialFailoverVersion: 1
            rpcName: "frontend"
            rpcAddress: 127.0.0.1:7233

dcRedirectionPolicy:
    policy: "noop"
    toDC: ""

archival:
    history:
        state: "enabled"
        enableRead: true
        provider:
            filestore:
                fileMode: "0666"
                dirMode: "0766"
    visibility:
        state: "enabled"
        enableRead: true
        provider:
            filestore:
                fileMode: "0666"
                dirMode: "0766"

namespaceDefaults:
    archival:
        history:
            state: "disabled"
            URI: "file:///tmp/temporal_archival/development"
        visibility:
            state: "disabled"
            URI: "file:///tmp/temporal_vis_archival/development"

publicClient:
    hostPort: "0.0.0.0:7233"

dynamicConfigClient:
    filepath: "/etc/temporal/dynamicconfig/docker.yaml"
    pollInterval: "60s"
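
For reference, each service runs as its own Deployment from the same image, selected with the SERVICES environment variable (my assumption, based on how the stock docker-compose setup selects services). A minimal sketch of my worker Deployment; the names and image tag are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
    name: temporal-worker            # placeholder name
spec:
    replicas: 1
    selector:
        matchLabels:
            app: temporal-worker
    template:
        metadata:
            labels:
                app: temporal-worker
        spec:
            containers:
                - name: temporal
                  image: temporalio/server    # 1.17.x in my case
                  env:
                      - name: SERVICES
                        value: "worker"       # run only the worker service in this pod
                  ports:
                      - containerPort: 7239   # worker gRPC
                      - containerPort: 6939   # worker membership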

If I deploy in the sequence history -> matching -> frontend -> worker, everything starts normally. Does this indicate that there are start-up dependencies between the components? If so, how should the deployment be split?
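
One detail I am unsure about: global.membership.broadcastAddress is a single static value in my config, but with one pod per service each instance needs to advertise its own pod IP to the membership ring. A sketch of what I am considering, using the downward API (and assuming the image's config template reads TEMPORAL_BROADCAST_ADDRESS, as the stock docker config template appears to):

env:
    - name: POD_IP
      valueFrom:
          fieldRef:
              fieldPath: status.podIP            # each pod advertises its own IP
    - name: TEMPORAL_BROADCAST_ADDRESS
      value: "$(POD_IP)"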


Update: after further checking, there does seem to be a dependency between frontend, history, matching, and worker. Given that, how should the components be split and deployed separately?
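
While checking I also noticed that clusterInformation.active.rpcAddress is 127.0.0.1:7233, which each pod would resolve to itself. If components in separate pods have to reach the frontend through this address, I assume it should also point at the frontend Service (again, temporal-frontend is a placeholder):

clusterMetadata:
    clusterInformation:
        active:
            rpcAddress: "temporal-frontend:7233"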