Hi guys, I have a Temporal cluster (version 1.21.6) running on GKE and the workers are flooded with errors like this:

2024-10-29T09:46:58.066017Z ERROR temporal_client::retry: gRPC call poll_workflow_task_queue retried 361 times error=Status { code: Internal, message: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 504 Gateway Timeout", metadata: MetadataMap { headers: {"date": "Tue, 29 Oct 2024 09:46:58 GMT", "content-type": "text/html", "content-length": "160", "strict-transport-security": "max-age=15724800; includeSubDomains"} }, source: None }
I bumped the frontend ingress controller timeouts to 3600 seconds and also bumped the backend service timeout on GCP to 3600 seconds, but I'm still getting the error. Roughly what I changed is sketched below.
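For context, the timeout changes were along these lines. This is only a sketch: the annotation names assume an NGINX ingress controller in front of the Temporal frontend plus a GKE BackendConfig for the GCP backend service timeout, and the hostnames/resource names are illustrative, so adjust to your actual setup.

# Illustrative Ingress for the Temporal frontend (assumes ingress-nginx).
# Raises proxy timeouts so long-poll gRPC calls aren't cut off early.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: temporal-frontend
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: temporal.example.com   # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: temporal-frontend
                port:
                  number: 7233
---
# Illustrative GKE BackendConfig raising the GCP backend service timeout.
# It gets attached to the frontend Service via the
# cloud.google.com/backend-config annotation on that Service.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: temporal-frontend-backendconfig
spec:
  timeoutSec: 3600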
The workflows themselves don't show any errors, and the frontend pod doesn't have any errors in its logs either; it's just that the volume of these error logs on the workers is huge.
I know this was asked a couple of years back, but I couldn't really find whether there's a solution to it.
Thanks.