How to expose temporal-frontend running on port 7233 so workers can connect to it?

Hi all,

We are working on a production setup of Temporal. We were able to use Docker for our PoC, but now we have set up Temporal following GitHub - temporalio/helm-charts: Temporal Helm charts, so the setup is done with Kubernetes.

What we are not clear on is how to expose the temporal-frontend running on port 7233 so workers can connect to it. Our assumption is that we will run workers on separate machines.

We tried a load balancer to proxy traffic to this port, but it did not work.

How do we properly set up Temporal workers? Is there any documentation / tutorial on approaching this?

Thanks for any help in advance!

For the Go SDK, you can set HostPort when creating a new client, for example:

c, err := client.NewClient(client.Options{
	HostPort: client.DefaultHostPort,
})

(client.DefaultHostPort is 127.0.0.1:7233, but you can change it to yourExternalFrontendServiceIP:7233 if you have a load balancer service set up.)
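
To tie this back to the original question, here is a minimal Go worker sketch that connects to an externally exposed frontend; the host name, task queue, and commented-out registrations are placeholders I am assuming, not values from the chart:

package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Point the client at the externally exposed frontend instead of the default 127.0.0.1:7233.
	// "temporal-frontend.example.com:7233" stands in for your load balancer / DNS name.
	c, err := client.NewClient(client.Options{
		HostPort: "temporal-frontend.example.com:7233",
	})
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()

	// Workers only poll the frontend over this connection; no inbound ports are needed on the worker machine.
	w := worker.New(c, "your-task-queue", worker.Options{})
	// w.RegisterWorkflow(YourWorkflow)
	// w.RegisterActivity(YourActivity)

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalf("worker exited: %v", err)
	}
}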

For the Java SDK, you can use setTarget on WorkflowServiceStubsOptions, for example:

WorkflowServiceStubs service =
        WorkflowServiceStubs.newInstance(
            WorkflowServiceStubsOptions.newBuilder()
                .setTarget(targetEndpoint)
                .build());

Note that the temporal-frontend is deployed as a headless service (via the provided Helm charts). It does expose an endpoint, but that endpoint resolves directly to a single pod rather than to a load-balancer service that could balance across pods in a production setup.
You can use kubectl get endpoints to get that info.

Thank you for this hint. Will try it.

I connect a worker running on the same cluster to the temporal-frontend service (not the headless service) by using the following env var (I am running workers in another namespace, connecting to Temporal running in the temporal namespace).

TEMPORAL_ADDRESS: temporal-frontend.temporal:7233
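
If it helps, this is roughly how a Go worker could pick that variable up; the TEMPORAL_ADDRESS name is just the convention used above (the SDK does not read it automatically), and newClientFromEnv is a helper made up for this sketch:

package main

import (
	"log"
	"os"

	"go.temporal.io/sdk/client"
)

// newClientFromEnv builds a client from the TEMPORAL_ADDRESS env var set on the worker Deployment.
func newClientFromEnv() (client.Client, error) {
	hostPort := os.Getenv("TEMPORAL_ADDRESS") // e.g. "temporal-frontend.temporal:7233"
	if hostPort == "" {
		hostPort = client.DefaultHostPort // fall back to 127.0.0.1:7233 for local runs
	}
	return client.NewClient(client.Options{HostPort: hostPort})
}

func main() {
	c, err := newClientFromEnv()
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()
	// Register and run workers against this client as usual.
}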

If your workers are running outside the cluster (e.g. a laptop connecting to the cloud over mTLS, or another VPC), then you need an ingress.

On AWS you can add annotations (requires the AWS Load Balancer Controller) like this. It does not look like the chart currently supports an Ingress for the frontend. If you needed an Ingress (because on AWS you want an ALB, not an NLB), you could add an Ingress object separately after deploying the chart (at least to validate), or fork and modify the chart.

frontend:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: external
      # or "internal" for an internal NLB
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-name: k8s-temporal
      # Requires external-dns
      external-dns.alpha.kubernetes.io/hostname: temporal-frontend.example.com
    type: LoadBalancer

The chart does support an ingress for the Web UI (via Helm values). I exposed it via an ALB ingress like this. (I also had to modify the chart to set "pathType: Prefix" to fix 404s.)

web:
  ingress:
    enabled: true
    annotations:
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/certificate-arn: <ARN>
      # Access over VPN
      alb.ingress.kubernetes.io/group.name: temporal-internal-ingress
      external-dns.alpha.kubernetes.io/hostname: temporal.example.com
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'

It is a bit weird, but on AWS you expose an NLB via Service annotations and you expose an ALB via Ingress annotations. If you are not using AWS, maybe this still provides hints.


@Liam_Murray thank you very much. This would be a great addition to the official docs for worker deployment so one can find it easily. This worked! Appreciate your help!
