How to properly isolate internal-frontend from public frontend with JWT auth enabled? Worker fails with "Request unauthorized"

Context:
I’m running Temporal Server 1.28.1 on Kubernetes with the following setup:

  • Public frontend (temporal-frontend): JWT authentication enabled for external applications
  • Internal frontend (temporal-internal-frontend): No authentication for internal components (worker, history, matching)
  • Both frontends share the same Cassandra and Elasticsearch backend

Goal:
I want to:

  1. Keep JWT authentication on the public frontend for external client applications
  2. Have internal Temporal components (especially worker service) communicate through internal-frontend WITHOUT authentication
  3. Avoid using separate namespaces or complex network policies if possible

Problem:
The worker service keeps failing with a “Request unauthorized” error even though:

  • Worker is configured to use temporal-internal-server-config (which has NO authorization block)
  • clusterMetadata.clusterInformation.active.rpcAddress points to temporal-internal-frontend:7233
  • TEMPORAL_GRPC_ADDRESS env var is set to temporal-internal-frontend:7233

Root Cause (After Investigation):
Through logs, I discovered that the worker is discovering ALL frontend members via Ringpop, including the JWT-enabled public frontend:

{"level":"info","msg":"Current reachable members","component":"service-resolver","service":"frontend","addresses":["10.240.15.55:7233","10.240.16.127:7233","10.240.22.102:7233","10.240.30.248:7233"]}

Where:

  • 10.240.22.102 = temporal-frontend (JWT auth enabled)
  • 10.240.16.127, 10.240.30.248 = temporal-internal-frontend (no auth)

The worker round-robins requests to all discovered frontends, and when it hits the public frontend, it gets “Request unauthorized”.

What I Tried:

  1. Separate Ringpop Clusters - Created separate Ringpop configuration in internal-configmap.yaml with a different cluster name (temporal-cbt-internal-cluster) and separate headless service (temporal-internal-cluster-nodes). Result: Worker still discovers all frontends because they all eventually join the same Ringpop gossip network.

  2. Changed rpcName - Changed clusterMetadata.clusterInformation.active.rpcName from “frontend” to “internal-frontend”. Result: Didn’t help. Worker still uses service resolver to find “frontend” members.
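Both attempts above try to split the membership ring, but Temporal (since server 1.20, so available in 1.28.1) also has a dedicated internal-frontend service role that registers in the ring under its own role rather than as a generic "frontend" member, which is what internal clients look for when it is enabled. A minimal sketch of what that looks like; the port numbers mirror the development configs in the temporalio/temporal repository and are assumptions to verify against your version:

```yaml
# config.yaml (sketch): declare internal-frontend as its own service role
# alongside the public frontend, in the same server configuration.
services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnIP: "0.0.0.0"
  internal-frontend:
    rpc:
      grpcPort: 7236
      membershipPort: 6936
      bindOnIP: "0.0.0.0"
# The internal-frontend pods are then started with that role, e.g.:
#   temporal-server --config <dir> start --service=internal-frontend
```

With this arrangement the internal-frontend members are not part of the "frontend" service-resolver set, so the system worker should stop round-robining onto the JWT-protected public frontend, and the public frontend keeps its authorization block.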

Questions:

  1. Is there a proper way to isolate internal-frontend from public frontend when they share the same Ringpop/membership cluster?

  2. Can we configure the worker service to ONLY use a specific frontend address (bypassing Ringpop service discovery)?

  3. Is there a dynamic config or setting to force worker’s frontend client to use direct gRPC address instead of Ringpop membership?

  4. Should I be using a completely different architecture? For example:

    • Running worker in a separate Temporal cluster?
    • Using mTLS for internal components instead of disabling auth?
    • Disabling JWT auth entirely and moving authentication to ingress/gateway layer?
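On the mTLS option: Temporal's TLS configuration distinguishes internode traffic from frontend traffic, so internal components can be required to present client certificates while external applications keep using JWT against the public frontend. A rough sketch, with illustrative certificate paths and server names (check the TLS configuration reference for your server version):

```yaml
# Sketch: mTLS for internal traffic, separate cert for the public frontend.
global:
  tls:
    internode:
      server:
        certFile: /certs/internode/tls.crt
        keyFile: /certs/internode/tls.key
        clientCaFiles:
          - /certs/ca/ca.crt
        requireClientAuth: true   # internal callers must present a client cert
      client:
        serverName: internode.temporal.cluster
        rootCaFiles:
          - /certs/ca/ca.crt
    frontend:
      server:
        certFile: /certs/frontend/tls.crt
        keyFile: /certs/frontend/tls.key
```

This keeps authorization concerns (JWT) on the public frontend only, while internal components authenticate with certificates instead of being left entirely open.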

Current Workaround (Not Ideal):
The only solution I have found so far is to disable JWT auth on the public frontend as well and handle authentication at the ingress/gateway level. But I would prefer to keep JWT auth at the Temporal server level if possible.
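For completeness, if authentication does move to the edge, the same JWKS endpoint can be validated there. As one illustration, an Envoy jwt_authn filter sketch; the provider name, cluster name, and route match are assumptions, not taken from your setup:

```yaml
# Sketch: validate JWTs at the gateway instead of in the Temporal server.
http_filters:
  - name: envoy.filters.http.jwt_authn
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
      providers:
        sso:
          issuer: https://sso.my.company.net
          remote_jwks:
            http_uri:
              uri: https://sso.my.company.net/oauth2/jwks
              cluster: sso_jwks        # upstream cluster for the JWKS endpoint
              timeout: 5s
            cache_duration: 300s
      rules:
        - match:
            prefix: /                  # require a valid JWT on all routes
          requires:
            provider_name: sso
```

The trade-off is the one you note: the Temporal server itself no longer enforces claims, so per-namespace authorization inside Temporal is lost unless a claim mapper runs server-side.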

Configuration Summary:

internal-configmap.yaml (abbreviated):
global:
  ringpop:
    name: temporal-cbt-internal-cluster
    bootstrapMode: k8s
    k8s:
      serviceName: temporal-internal-cluster-nodes
      namespace: temporal
# No authorization block here
clusterMetadata:
  clusterInformation:
    active:
      rpcAddress: "temporal-internal-frontend:7233"

public configmap.yaml (abbreviated):
global:
  ringpop:
    name: temporal-cbt-cluster
    bootstrapMode: k8s
    k8s:
      serviceName: temporal-cluster-nodes
      namespace: temporal
  authorization:
    authorizer: "default"
    claimMapper: "default"
    permissionsClaimName: "scope"
    jwtKeyProvider:
      keySourceURIs:
        - "https://sso.my.company.net/oauth2/jwks"

worker.yaml:
volumes:
  - name: config
    configMap:
      name: temporal-internal-server-config
env:
  - name: TEMPORAL_GRPC_ADDRESS
    value: "temporal-internal-frontend:7233"

Any guidance would be greatly appreciated!

Hi,

just wondering which worker you are referring to: the Temporal system worker or your own worker? The former connects, by default, to the internal frontend if it is enabled; see common/resource/fx.go in the temporalio/temporal repository on GitHub (commit 6a67cb75e01c4482b08fde4824f0540da11cbf83).

The system worker also uses the standard SDK client to communicate with the server. It makes standard Temporal API requests, to either the frontend or, if enabled, the internal frontend. Ringpop membership does not play a role here.

Not sure which Temporal setup you are using or how exactly it is configured. We successfully run with internal-frontend enabled: actual workers connect via JWT to the frontend, whereas the system worker connects to the internal frontend. In our case, with mTLS enabled.

–Hardy