Our dev team is experimenting with Temporal. We currently have the Temporal services installed on Kubernetes, and in our environment we have an NGINX cluster in front of Kubernetes. I wonder whether there is an example configuration file.
I've found one of the earlier comments. What would the load balancer look like?
Use the approach you are already using. Temporal doesn’t impose any requirements on the client tier deployment besides that it should be able to reach the service gRPC frontends through some sort of load balancer.
Here is another comment from a previous question.
They usually communicate through some sort of application-specific proxy. If all the applications are part of the same organization they can communicate through the service gRPC API directly or through the Java or Go SDKs.
Now I am thinking: since it's using gRPC, would that mean I need to enable HTTP/2 at the NGINX level?
I don’t think this is really a Temporal question and more of a gRPC one, but I'm more than happy to answer. The most straightforward route is to simply take in gRPC traffic over HTTP/2 as ingress at the L7 load balancer. I know that NGINX has supported gRPC over HTTP/2 for a while, and IIRC even AWS ALB supports it now. If your load balancer doesn't, there are a few other options. The first is using TLS passthrough with an L4 load balancer like AWS NLB. That way you can pass gRPC traffic untouched directly to your services. It's not a perfect solution and comes with its own problems.
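For the straightforward route, a minimal NGINX sketch might look like the one below. The hostnames, certificate paths, and upstream addresses are placeholders for your environment; 7233 is Temporal's default frontend gRPC port. Treat it as a starting point, not a drop-in config.

```
# Sketch of an L7 gRPC proxy (http context). Placeholder names throughout.
upstream temporal_frontend {
    server temporal-frontend-1.internal:7233;
    server temporal-frontend-2.internal:7233;
}

server {
    listen 443 ssl http2;                 # gRPC requires HTTP/2
    server_name temporal.example.com;     # placeholder hostname

    ssl_certificate     /etc/nginx/tls/temporal.crt;
    ssl_certificate_key /etc/nginx/tls/temporal.key;

    location / {
        grpc_pass grpc://temporal_frontend;   # plaintext gRPC to the frontends
        # Temporal workers long-poll for tasks, so raise the default timeouts
        grpc_read_timeout 1h;
        grpc_send_timeout 1h;
    }
}
```

Use grpcs:// in grpc_pass if the frontends themselves serve TLS.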
There are also likely libraries that proxy traffic from HTTP to gRPC transparently, but I'm not sure of any off the top of my head. Hope this helps.
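As a side note, if you wanted the L4/TLS-passthrough option mentioned above but inside NGINX itself rather than an NLB, the stream module can forward the encrypted gRPC connections untouched. Again just a sketch with placeholder addresses:

```
# Sketch of L4 TLS passthrough via the NGINX stream module (outside the http block).
# Connections are forwarded as-is, so the Temporal frontends terminate TLS themselves.
stream {
    upstream temporal_frontend {
        server temporal-frontend-1.internal:7233;
        server temporal-frontend-2.internal:7233;
    }

    server {
        listen 7233;
        proxy_pass temporal_frontend;
    }
}
```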
So far we know gRPC is the best of its kind and NGINX does a great job of proxying it.
I am looking into a tech stack that would wrap and automate Temporal, and I believe we've got points in common. You can check my recent post where I'm asking for comments and suggestions.
It would be nice if we could share findings with the community.
Let me know if you found it at least interesting.