Continuous "Failure in poller thread Activity Poller taskQueue" when there is no task

I’ve deployed my Temporal worker in prod, and I found that when there are no tasks in the queue, the workers continuously log debug output like the following:

22:50:29.272 [grpc-default-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2d883d51, L:/100.68.154.79:45148 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] OUTBOUND HEADERS: streamId=7 headers=GrpcHttp2OutboundHeaders[:authority: temporal-api-staging.us-east-1.dpegrid.net:443, :path: /temporal.api.workflowservice.v1.WorkflowService/PollWorkflowTaskQueue, :method: POST, :scheme: https, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.50.2, client-version: 1.17.0, supported-server-versions: >=0.31.0 <2.0.0, client-name: temporal-java, x-datadog-trace-id: 5599978682582106858, x-datadog-parent-id: 4804484857005478224, x-datadog-sampling-priority: 1, x-datadog-tags: _dd.p.upstream_services=bXVzaHUtd29ya2Zsb3ctc3RhZ2luZw|1|1|1, grpc-accept-encoding: gzip, grpc-timeout: 69999927u] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
22:50:29.272 [grpc-default-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2d883d51, L:/100.68.154.79:45148 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] OUTBOUND DATA: streamId=7 padding=0 endStream=true length=161 bytes=000000009c0a056d75736875125d0a5931406d757368752d776f726b666c6f772d73746167696e672d6465706c6f796d656e742d353864646366343735362d70...
22:50:33.411 [grpc-connection-manager-thread-0] DEBUG io.temporal.serviceclient.ChannelManager - Resetting gRPC connection backoff on the gRPC channel ManagedChannelOrphanWrapper{delegate=ManagedChannelImpl{logId=1, target=temporal-api-staging.us-east-1.dpegrid.net:443}}
22:50:42.957 [grpc-connection-manager-thread-0] DEBUG io.temporal.serviceclient.ChannelManager - Entering IDLE state on the gRPC channel ManagedChannelOrphanWrapper{delegate=ManagedChannelImpl{logId=1, target=temporal-api-staging.us-east-1.dpegrid.net:443}}
22:50:43.411 [grpc-connection-manager-thread-0] DEBUG io.temporal.serviceclient.ChannelManager - Resetting gRPC connection backoff on the gRPC channel ManagedChannelOrphanWrapper{delegate=ManagedChannelImpl{logId=1, target=temporal-api-staging.us-east-1.dpegrid.net:443}}
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND SETTINGS: ack=false settings={ENABLE_PUSH=0, MAX_CONCURRENT_STREAMS=0, INITIAL_WINDOW_SIZE=1048576, MAX_HEADER_LIST_SIZE=4194304}
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=983041
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND SETTINGS: ack=false settings={MAX_CONCURRENT_STREAMS=128, INITIAL_WINDOW_SIZE=65536, MAX_FRAME_SIZE=16777215}
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND SETTINGS: ack=true
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=2147418112
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND HEADERS: streamId=3 headers=GrpcHttp2OutboundHeaders[:authority: temporal-api-staging.us-east-1.dpegrid.net:443, :path: /temporal.api.workflowservice.v1.WorkflowService/PollWorkflowTaskQueue, :method: POST, :scheme: https, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.50.2, client-version: 1.17.0, supported-server-versions: >=0.31.0 <2.0.0, client-name: temporal-java, x-datadog-trace-id: 332905237983563094, x-datadog-parent-id: 5460966639672769176, x-datadog-sampling-priority: 1, x-datadog-tags: _dd.p.upstream_services=bXVzaHUtd29ya2Zsb3ctc3RhZ2luZw|1|1|1, grpc-accept-encoding: gzip, grpc-timeout: 69991017u] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND DATA: streamId=3 padding=0 endStream=true length=161 bytes=000000009c0a056d75736875125d0a5931406d757368752d776f726b666c6f772d73746167696e672d6465706c6f796d656e742d353864646366343735362d70...
22:50:44.483 [grpc-default-worker-ELG-1-20] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xb94a0972, L:/100.68.154.79:44964 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND SETTINGS: ack=true
22:50:44.484 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
22:50:44.484 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] OUTBOUND SETTINGS: ack=false settings={ENABLE_PUSH=0, MAX_CONCURRENT_STREAMS=0, INITIAL_WINDOW_SIZE=1048576, MAX_HEADER_LIST_SIZE=4194304}
22:50:44.484 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] OUTBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=983041
22:50:44.484 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] INBOUND SETTINGS: ack=false settings={MAX_CONCURRENT_STREAMS=128, INITIAL_WINDOW_SIZE=65536, MAX_FRAME_SIZE=16777215}
22:50:44.484 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] OUTBOUND SETTINGS: ack=true
22:50:44.484 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] INBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=2147418112
22:50:44.484 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
22:50:44.484 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] OUTBOUND SETTINGS: ack=false settings={ENABLE_PUSH=0, MAX_CONCURRENT_STREAMS=0, INITIAL_WINDOW_SIZE=1048576, MAX_HEADER_LIST_SIZE=4194304}
22:50:44.484 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] OUTBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=983041
22:50:44.484 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] INBOUND SETTINGS: ack=false settings={MAX_CONCURRENT_STREAMS=128, INITIAL_WINDOW_SIZE=65536, MAX_FRAME_SIZE=16777215}
22:50:44.484 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] OUTBOUND SETTINGS: ack=true
22:50:44.484 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] INBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=2147418112
22:50:44.485 [grpc-default-worker-ELG-1-18] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2f534b8e, L:/100.68.154.79:55038 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] INBOUND SETTINGS: ack=true
22:50:44.485 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] INBOUND SETTINGS: ack=true
22:50:47.959 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND GO_AWAY: lastStreamId=0 errorCode=0 length=0 bytes=
22:50:47.961 [grpc-default-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x2d883d51, L:/100.68.154.79:45148 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.121.115:443] OUTBOUND GO_AWAY: lastStreamId=0 errorCode=0 length=0 bytes=
22:50:47.961 [grpc-default-worker-ELG-1-4] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xfc7a91d8, L:/100.68.154.79:55604 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] OUTBOUND GO_AWAY: lastStreamId=0 errorCode=0 length=0 bytes=
22:50:53.412 [grpc-connection-manager-thread-0] DEBUG io.temporal.serviceclient.ChannelManager - Resetting gRPC connection backoff on the gRPC channel ManagedChannelOrphanWrapper{delegate=ManagedChannelImpl{logId=1, target=temporal-api-staging.us-east-1.dpegrid.net:443}}
22:50:53.432 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND HEADERS: streamId=3 headers=GrpcHttp2ResponseHeaders[:status: 200, server: awselb/2.0, date: Tue, 06 Dec 2022 22:50:53 GMT, content-type: application/grpc] padding=0 endStream=false
22:50:53.432 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND DATA: streamId=3 padding=0 endStream=false length=5 bytes=0000000000
22:50:53.432 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] OUTBOUND PING: ack=false bytes=1234
22:50:53.432 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND HEADERS: streamId=3 headers=GrpcHttp2ResponseHeaders[grpc-status: 0, grpc-message: ] padding=0 endStream=true
22:50:53.433 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND PING: ack=true bytes=1234
22:50:53.433 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] OUTBOUND HEADERS: streamId=3 headers=GrpcHttp2OutboundHeaders[:authority: temporal-api-staging.us-east-1.dpegrid.net:443, :path: /temporal.api.workflowservice.v1.WorkflowService/PollActivityTaskQueue, :method: POST, :scheme: https, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.50.2, client-version: 1.17.0, supported-server-versions: >=0.31.0 <2.0.0, client-name: temporal-java, x-datadog-trace-id: 866544010018320827, x-datadog-parent-id: 1747858842706298679, x-datadog-sampling-priority: 1, x-datadog-tags: _dd.p.upstream_services=bXVzaHUtd29ya2Zsb3ctc3RhZ2luZw|1|1|1, grpc-accept-encoding: gzip, grpc-timeout: 69999954u] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
22:50:53.433 [grpc-default-worker-ELG-1-19] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xbd48e8fe, L:/100.68.154.79:42464 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.5.81:443] OUTBOUND DATA: streamId=3 padding=0 endStream=true length=86 bytes=00000000510a056d7573687512120a106d757368755f7461736b5f71756575651a3431406d757368752d776f726b666c6f772d73746167696e672d6465706c6f...
22:50:58.078 [grpc-default-worker-ELG-1-5] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0xeeb64c2f, L:/100.68.154.79:45156 - R:temporal-api-staging.us-east-1.dpegrid.net/10.231.170.174:443] INBOUND HEADERS: streamId=5 headers=GrpcHttp2ResponseHeaders[:status: 504, server: awselb/2.0, date: Tue, 06 Dec 2022 22:50:58 GMT, content-type: application/grpc, content-length: 0, grpc-status: 14, grpc-message: unavailable] padding=0 endStream=true
22:50:58.078 [Activity Poller taskQueue="default_task_queue", namespace="default": 2] WARN io.temporal.internal.worker.Poller - Failure in poller thread Activity Poller taskQueue="default_task_queue", namespace="default": 2
io.grpc.StatusRuntimeException: UNAVAILABLE: unavailable
	at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
	at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
	at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
	at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.pollActivityTaskQueue(WorkflowServiceGrpc.java:3799)
	at io.temporal.internal.worker.ActivityPollTask.poll(ActivityPollTask.java:100)
	at io.temporal.internal.worker.ActivityPollTask.poll(ActivityPollTask.java:40)
	at io.temporal.internal.worker.Poller$PollExecutionTask.run(Poller.java:274)
	at io.temporal.internal.worker.Poller$PollLoopTask.run(Poller.java:237)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:831)

But when I do send a task, it is still polled and run successfully. I wonder why debug information like this is printed when there are no tasks…
And here is my worker code:

package com.disneystreaming.cmp.mushu.worker;

import com.disneystreaming.cmp.client.MushuStorageClient;
import com.disneystreaming.cmp.mushu.activity.impl.BumperActivityImpl;
import com.disneystreaming.cmp.mushu.workflow.impl.BumperWorkflowImpl;
import com.disneystreaming.cmp.utils.PropertiesUtil;
import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowClientOptions;
import io.temporal.serviceclient.SimpleSslContextBuilder;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;
import io.temporal.worker.WorkerFactory;

import javax.net.ssl.SSLException;

public class Worker {

    private static final String TASK_QUEUE = PropertiesUtil.get("task_queue");
    private static final String TARGET_ENDPOINT = PropertiesUtil.get("target_endpoint");
    private static final String NAMESPACE = PropertiesUtil.get("namespace");

    public static void main(String[] args) throws SSLException {

        WorkflowServiceStubs service;
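        // Outside of dev, connect to the remote Temporal frontend over TLS; in dev, use a local Temporal server.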
        if (!PropertiesUtil.getEnv().equals("dev")) {
            service =
                    WorkflowServiceStubs.newServiceStubs(
                        WorkflowServiceStubsOptions.newBuilder()
                                .setTarget(TARGET_ENDPOINT)
                                .setSslContext(SimpleSslContextBuilder.noKeyOrCertChain().build())
                                .build());
        } else {
            service = WorkflowServiceStubs.newLocalServiceStubs();
        }

        WorkflowClient client =
                WorkflowClient.newInstance(
                        service, WorkflowClientOptions.newBuilder().setNamespace(NAMESPACE).build());

        WorkerFactory factory = WorkerFactory.newInstance(client);

        io.temporal.worker.Worker worker = factory.newWorker(TASK_QUEUE);

        // Register the workflow and activity implementations imported above.
        worker.registerWorkflowImplementationTypes(BumperWorkflowImpl.class);

        MushuStorageClient storageClient = new StorageClient();
        worker.registerActivitiesImplementations(new BumperActivityImpl(storageClient));

        factory.start();
    }
}

Thanks in advance for any reply!

I’ve deployed my Temporal worker in prod

I assume this is for a self-deployed cluster, right?
Workers should not time out on PollWorkflowTaskQueue/PollActivityTaskQueue that fast. My guess is a very small gRPC connection timeout set in your load balancer config. If that's the case, try setting this timeout to 75 seconds and test again.
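
For context on where these numbers come from: the grpc-timeout: 69999927u headers in the logs above are the Java SDK's default long-poll deadline of roughly 70 seconds, while the server holds each poll open for up to about 60 seconds, so anything sitting between worker and frontend needs a timeout larger than that. Below is a minimal worker-side sketch (the target endpoint is hypothetical, and the explicit setRpcLongPollTimeout call only makes the SDK default visible; normally you would leave it alone and raise the load balancer timeout instead):

import java.time.Duration;

import io.temporal.serviceclient.SimpleSslContextBuilder;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

import javax.net.ssl.SSLException;

public class PollDeadlineSketch {

    public static void main(String[] args) throws SSLException {
        // PollWorkflowTaskQueue / PollActivityTaskQueue are long polls: the frontend
        // holds them open for up to ~60 s, and the Java SDK sends them with a ~70 s
        // gRPC deadline by default (the grpc-timeout: 69999927u visible in the logs).
        // Any load balancer between worker and frontend therefore needs an idle/request
        // timeout larger than that, otherwise the poll is cut off with 504 / UNAVAILABLE.
        WorkflowServiceStubs service =
                WorkflowServiceStubs.newServiceStubs(
                        WorkflowServiceStubsOptions.newBuilder()
                                .setTarget("temporal-frontend.example.com:443") // hypothetical endpoint
                                .setSslContext(SimpleSslContextBuilder.noKeyOrCertChain().build())
                                // The SDK default, shown here only for illustration.
                                .setRpcLongPollTimeout(Duration.ofSeconds(70))
                                .build());
    }
}

With a 60 s idle timeout in front of a ~70 s poll deadline, the load balancer closes the connection first, which matches the 504 / grpc-status: 14 responses in the logs above.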

Thanks for the reply! Both the Temporal cluster and the worker are deployed through k8s on our own cluster. Is the timeout you mentioned a worker-side setting, or a Temporal cluster setting?

From the logs it seems requests/responses go through server: awselb/2.0, so my guess would be a gRPC timeout setting that's too low in the LB config (workers long-poll the frontend service). I'm not expert enough with that LB config to say exactly which property to change.

Thanks Tihomir! I work with Shean, and based on your feedback I was able to identify the root cause as the load balancer that fronts the Temporal server cluster in our deployment. The ALB idle timeout was sitting at the default of 60s, which doesn't play well with Temporal's default task queue poll duration, which is also 60s. Increasing the ALB idle timeout was indeed the solution:
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=75

Thanks for the quick update, David! The 504s have disappeared on our side as well. And thanks Tihomir for the concise and correct diagnosis.

@Shean @dlamont I am trying to set up an AWS ALB in front of the Temporal frontend and I am running into problems. @tihomir suggested I ask here. Can you help me figure out how you set up your ALB with the Temporal frontend gRPC? Are you using EKS?

Any help is much appreciated.

Hi @chaitanya, I'm not very familiar with the details of our cluster deployment. We use Spinnaker to manage the k8s pods (AWS resources), and as David said, he set the load balancer variable alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=75 and it works well. The annotation is presumably mapped to the corresponding ALB attribute on the AWS side.