How to properly isolate internal-frontend from public frontend with JWT auth enabled? Worker fails with “Request unauthorized”
Context:
I’m running Temporal Server 1.28.1 on Kubernetes with the following setup:
- Public frontend (temporal-frontend): JWT authentication enabled for external applications
- Internal frontend (temporal-internal-frontend): No authentication for internal components (worker, history, matching)
- Both frontends share the same Cassandra and Elasticsearch backend
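For reference, each frontend is exposed through its own Kubernetes Service on port 7233. A minimal sketch of the internal one (the selector labels are illustrative, not my exact manifests):

apiVersion: v1
kind: Service
metadata:
  name: temporal-internal-frontend
  namespace: temporal
spec:
  selector:
    app.kubernetes.io/name: temporal               # illustrative labels
    app.kubernetes.io/component: internal-frontend
  ports:
    - name: grpc-rpc
      port: 7233
      targetPort: 7233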
Goal:
I want to:
- Keep JWT authentication on the public frontend for external client applications
- Have internal Temporal components (especially worker service) communicate through internal-frontend WITHOUT authentication
- Avoid using separate namespaces or complex network policies if possible
Problem:
The worker service keeps failing with a “Request unauthorized” error even though:
- Worker is configured to use temporal-internal-server-config (which has NO authorization block)
- clusterMetadata.clusterInformation.active.rpcAddress points to temporal-internal-frontend:7233
- TEMPORAL_GRPC_ADDRESS env var is set to temporal-internal-frontend:7233
Root Cause (After Investigation):
Through the logs, I found that the worker discovers ALL frontend members via Ringpop, including the JWT-enabled public frontend:
{"level":"info","msg":"Current reachable members","component":"service-resolver","service":"frontend","addresses":["10.240.15.55:7233","10.240.16.127:7233","10.240.22.102:7233","10.240.30.248:7233"]}
Where:
- 10.240.22.102 = temporal-frontend (JWT auth enabled)
- 10.240.16.127, 10.240.30.248 = temporal-internal-frontend (no auth)
The worker round-robins requests across all discovered frontends, and whenever a request lands on the public frontend it is rejected with “Request unauthorized”.
What I Tried:
- Separate Ringpop clusters: Created a separate Ringpop configuration in internal-configmap.yaml with a different cluster name (temporal-cbt-internal-cluster) and a separate headless service (temporal-internal-cluster-nodes). Result: The worker still discovers all frontends, because they all eventually join the same Ringpop gossip network.
- Changed rpcName: Changed clusterMetadata.clusterInformation.active.rpcName from “frontend” to “internal-frontend” (sketched below). Result: Didn’t help; the worker still uses the service resolver to find “frontend” members.
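For concreteness, attempt 2 amounted to this change in internal-configmap.yaml (abbreviated):

clusterMetadata:
  clusterInformation:
    active:
      rpcName: "internal-frontend"   # was "frontend"
      rpcAddress: "temporal-internal-frontend:7233"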
Questions:
- Is there a proper way to isolate internal-frontend from the public frontend when they share the same Ringpop/membership cluster?
- Can the worker service be configured to use ONLY a specific frontend address, bypassing Ringpop service discovery?
- Is there a dynamic config or other setting that forces the worker’s frontend client to use a direct gRPC address instead of Ringpop membership? (See the sketch after this list for what I have in mind.)
- Should I be using a completely different architecture? For example:
  - Running the worker in a separate Temporal cluster?
  - Using mTLS for internal components instead of disabling auth?
  - Disabling JWT auth entirely and moving authentication to the ingress/gateway layer?
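On question 3: Temporal’s static config does include a publicClient block, and what I’d want is something like the sketch below, though I don’t know whether it overrides membership-based discovery for the worker’s frontend client (which is exactly my question):

publicClient:
  hostPort: "temporal-internal-frontend:7233"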
Current Workaround (Not Ideal):
The only solution I’ve found is to disable JWT auth on the public frontend as well and handle authentication at the ingress/gateway level, but I’d prefer to keep JWT auth at the Temporal server level if possible.
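If I do go the gateway route, the JWT check would move to something like this (a sketch assuming an Istio mesh; the issuer and pod labels are illustrative, and only the JWKS URI is taken from my config):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: temporal-frontend-jwt
  namespace: temporal
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: frontend   # illustrative label
  jwtRules:
    - issuer: "https://sso.my.company.net"    # illustrative issuer
      jwksUri: "https://sso.my.company.net/oauth2/jwks"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: temporal-frontend-require-jwt
  namespace: temporal
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: frontend
  action: ALLOW
  rules:
    - from:
        - source:
            requestPrincipals: ["*"]   # only requests carrying a valid JWT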
Configuration Summary:
internal-configmap.yaml (abbreviated):
global:
  ringpop:
    name: temporal-cbt-internal-cluster
    bootstrapMode: k8s
    k8s:
      serviceName: temporal-internal-cluster-nodes
      namespace: temporal
  # No authorization block here

clusterMetadata:
  clusterInformation:
    active:
      rpcAddress: "temporal-internal-frontend:7233"
public configmap.yaml (abbreviated):
global:
  ringpop:
    name: temporal-cbt-cluster
    bootstrapMode: k8s
    k8s:
      serviceName: temporal-cluster-nodes
      namespace: temporal
  authorization:
    authorizer: "default"
    claimMapper: "default"
    permissionsClaimName: "scope"
    jwtKeyProvider:
      keySourceURIs:
        - "https://sso.my.company.net/oauth2/jwks"
worker.yaml:
volumes:
  - name: config
    configMap:
      name: temporal-internal-server-config
env:
  - name: TEMPORAL_GRPC_ADDRESS
    value: "temporal-internal-frontend:7233"
Any guidance would be greatly appreciated!