I want to understand how gRPC connections to the Temporal server are load balanced internally. Say we use a Kubernetes Service to expose an endpoint for the underlying Temporal server pods and configure that endpoint in the Temporal client: how are the gRPC connections established and maintained, and how is load distributed evenly across the Temporal server pods?
Also, please let me know whether any additional configuration is required to ensure this. Currently I am supplying a DNS name and port in the hostPort config.
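For reference, the setup looks roughly like this (the Service name, namespace, selector, and port below are hypothetical placeholders, not my exact manifest):

```yaml
# Hypothetical Kubernetes Service fronting the Temporal frontend pods
apiVersion: v1
kind: Service
metadata:
  name: temporal-frontend
  namespace: temporal
spec:
  selector:
    app: temporal-frontend   # matches the Temporal frontend pod labels
  ports:
    - name: grpc
      port: 7233             # default Temporal frontend gRPC port
      targetPort: 7233
```

The client's hostPort is then set to the Service's DNS name, e.g. `temporal-frontend.temporal.svc.cluster.local:7233`.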
I went through the topics posted through July and did not find this covered, so I am posting it as a new topic.