Unable to Start Multiple Instances of temporal-frontend


I’m trying to horizontally scale the temporal-frontend service and running into some issues. I’ve scaled all the other services (worker, history, matching) with no problems. For some reason, though, with the frontend service, every time I increase the number of Kubernetes pods above 1, the additional pods get shut down via an interrupt.
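For context, scaling here just means raising the replica count on the Deployment; a minimal sketch, assuming a Deployment named `temporal-frontend` (the resource name, labels, and image tag are assumptions, not taken from our actual manifests):

```yaml
# Hypothetical Deployment fragment; replicas > 1 is what triggers the shutdown.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: temporal-frontend   # assumed name
spec:
  replicas: 3               # raising this above 1 reproduces the issue
  selector:
    matchLabels:
      app: temporal-frontend
  template:
    metadata:
      labels:
        app: temporal-frontend
    spec:
      containers:
        - name: temporal-frontend
          image: temporalio/server:1.20.0
```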

The weird thing is that we have a few different environments and this issue is only happening in our release environment. So I don’t think it’s a configuration issue, since they are all configured the same.

It’s getting interrupted by this code block.
We are currently running Temporal 1.20.0, and I know this code was changed a bit in 1.21.0. Maybe upgrading would help? Any other thoughts?

Were you able to resolve this issue? Do you have load balancing in front of your frontends? Are you setting a different grpcPort for each replica?
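For anyone hitting this, the per-service gRPC port lives in the server’s static config; a minimal sketch of the frontend section, with assumed port values (7233 is the conventional frontend port, but everything below should be checked against your own config):

```yaml
# Hypothetical fragment of Temporal's static config (e.g. development.yaml).
services:
  frontend:
    rpc:
      grpcPort: 7233        # assumed; must not collide across co-located replicas
      membershipPort: 6933  # assumed; used for ringpop membership gossip
      bindOnIP: "0.0.0.0"
```

In Kubernetes each replica gets its own pod IP, so replicas normally share the same port numbers; distinct ports per replica mainly matter when running multiple instances on one host.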

Yeah, I actually just realized earlier today that we had autoscaling enabled on our Temporal infra…
So no issue on the Temporal side :slight_smile: