How to expose temporal-frontend running on port 7233 so workers can connect to it?

Hi all,

We are working on a production setup of Temporal. We were able to use Docker for our PoC, but now we have set up Temporal on Kubernetes following GitHub - temporalio/helm-charts: Temporal Helm charts.

What we are not clear on is how to expose temporal-frontend running on port 7233 so workers can connect to it. Our assumption is that we will run workers on separate machines.

We tried a load balancer to proxy traffic to this port, but this did not work.

How do we properly set up Temporal workers? Is there any documentation / tutorial on approaching this?

Thanks for any help in advance!

For the Go SDK, you can set HostPort when creating a new client, for example:

c, err := client.NewClient(client.Options{
		HostPort: client.DefaultHostPort,
})

(client.DefaultHostPort is "127.0.0.1:7233", but you can change it to yourExternalFrontendServiceIP:7233 in case you have a load balancer service set up).

For the Java SDK, you can use setTarget on WorkflowServiceStubsOptions, for example:

WorkflowServiceStubs service =
    WorkflowServiceStubs.newServiceStubs(
        WorkflowServiceStubsOptions.newBuilder()
            .setTarget("yourExternalFrontendServiceIP:7233")
            .build());

Note that the Temporal frontend is deployed as a headless service by the provided Helm charts. It does expose endpoints, but each endpoint points directly at a single pod rather than at a load-balancer service that could balance across pods in a production setup.
You can use kubectl get endpoints to get that info.
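One way to get a stable, load-balanced address is to add your own LoadBalancer Service in front of the frontend pods. This is only a sketch: the Service name is illustrative, and the selector labels are assumptions about the chart's pod labels that you should verify with kubectl get pods --show-labels.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: temporal-frontend-lb   # illustrative name
  namespace: temporal
spec:
  type: LoadBalancer
  selector:
    # Assumed labels -- verify against your chart's frontend pods.
    app.kubernetes.io/name: temporal
    app.kubernetes.io/component: frontend
  ports:
    - name: grpc-rpc
      port: 7233
      targetPort: 7233
```

Workers would then connect to the external IP/hostname of this Service on port 7233.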

Thank you for this hint. Will try it.

I connect a worker running on the same cluster to the temporal-frontend service (not the headless service) by using the following env var (I am running workers in another namespace, connecting to Temporal running in the temporal namespace).

TEMPORAL_ADDRESS: temporal-frontend.temporal:7233
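In the worker Deployment this looks like the following sketch; the Deployment, namespace, and image names are illustrative, and only the env entry comes from the setup above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-worker        # illustrative
  namespace: workers     # illustrative
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-worker}
  template:
    metadata:
      labels: {app: my-worker}
    spec:
      containers:
        - name: worker
          image: my-worker:latest   # illustrative
          env:
            # Cross-namespace DNS name of the frontend service.
            - name: TEMPORAL_ADDRESS
              value: temporal-frontend.temporal:7233
```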

If your workers are running outside the cluster (e.g. a laptop connecting to the cloud over mTLS, or another VPC), then you need an ingress.

On AWS you can add annotations like this (requires the AWS Load Balancer Controller). It does not look like the chart currently supports an Ingress for the frontend. If you needed an Ingress (because on AWS you want an ALB, not an NLB), you could add an Ingress object separately after deploying the chart (at least to validate), or fork and modify the chart.

      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        # or "internal" for an internal NLB
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        # Requires external DNS
        service.beta.kubernetes.io/aws-load-balancer-name: k8s-temporal
      type: LoadBalancer

The chart does support an ingress for the Web UI (via Helm values). I exposed it via an ALB ingress like this. (I also had to modify the chart to set "pathType: Prefix" to fix 404s.)

    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      # Access over VPN
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/certificate-arn: <ARN>
      alb.ingress.kubernetes.io/group.name: temporal-internal-ingress
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'

It is a bit weird, but on AWS you expose an NLB via Service annotations and you expose an ALB via Ingress annotations. If you are not using AWS, maybe this provides hints.


@Liam_Murray thank you very much. This would be a great addition to the official docs for worker deployment so one can find it easily. This worked! Appreciate your help!
