I was able to root-cause and resolve this issue. It was the result of connection-age-based connection cycling by the Temporal server, controlled by these params:
frontend.keepAliveMaxConnectionAge:
  - value: 5m
frontend.keepAliveMaxConnectionAgeGrace:
  - value: 70s
What was happening was that the Temporal server was closing its connections to the AWS ALB once they aged out, so clients received 502 Bad Gateway errors on their subsequent requests. I raised these values above the maximum lifespan of my Temporal server instances, which I redeploy nightly. My 502 error rate dropped from 0.1% with the default settings to 0%.
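For reference, an illustrative dynamic config override that comfortably outlasts a nightly (roughly 24-hour) redeploy cycle might look like the following; the exact durations here are examples, not necessarily the values I settled on:

frontend.keepAliveMaxConnectionAge:
  - value: 48h
frontend.keepAliveMaxConnectionAgeGrace:
  - value: 5m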
If you do something similar, I would recommend watching for hot-spotting, since the connection-age limits exist partly to force clients to rebalance across frontends, and longer-lived connections can concentrate load on a subset of hosts. I did not see any hot-spotting or connection growth as a result of this change, but your mileage may vary depending on your load balancer and traffic.