Service latency too slow and error breakdown

I have two questions:

  1. The Temporal frontend shows a warn log. What does this mean?
    {"level":"warn","ts":"2022-09-19T15:23:00.983Z","msg":"Context timeout is lower than critical value for long poll API.","service":"frontend","wf-handler-name":"PollWorkflowTaskQueue","wf-poll-context-timeout":18.530926065,"logging-call-at":"util.go:628"}
  2. When monitoring the Temporal dashboard, the service latency and service error breakdown values are too high.

Context timeout is lower than critical value for long poll API

This is a server warning message coming from here. The critical gRPC timeout is set to 20s.

It seems that you have a timeout set that's lower than that. Do you by chance have a gRPC proxy that your client calls go through which has a timeout set to <20s?
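If such a proxy is in the path, its per-stream timeout needs to comfortably exceed the 20s long-poll threshold. A minimal sketch for nginx (the listen port, upstream address, and the 70s value are assumptions; adjust to your deployment):

```nginx
# gRPC proxy in front of the Temporal frontend (7233 is the default frontend port).
# Long-poll calls like PollWorkflowTaskQueue hold the stream open for a while,
# so the proxy's read/send timeouts must be well above the 20s critical value.
server {
    listen 8233 http2;
    location / {
        grpc_pass grpc://temporal-frontend:7233;
        grpc_read_timeout 70s;   # > 20s long-poll critical threshold
        grpc_send_timeout 70s;
    }
}
```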

For 2), what are the top service latency operations? I can't really tell from your graph.
I think if you wanted to go over some performance tuning, you would need to describe your deployment in more detail.
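To find the top operations, assuming the server's Prometheus metrics are scraped with default names, a query along these lines should rank operations by p95 latency (metric and label names are assumptions based on Temporal server defaults; verify against your setup):

```promql
# Top 5 operations by 95th-percentile service latency over the last 5 minutes.
topk(5,
  histogram_quantile(
    0.95,
    sum(rate(service_latency_bucket[5m])) by (operation, le)
  )
)
```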

Hi @tihomir ,
For 1, after I increased the workflow timeout, the Temporal frontend keeps logging this error:
{"level":"error","ts":"2022-09-20T15:11:01.876Z","msg":"unavailable error","operation":"PollWorkflowTaskQueue","wf-namespace":"temporal-system","error":"error reading from server: read tcp> read: connection reset by peer","logging-call-at":"telemetry.go:187","stacktrace":"
I don't know what the ports 33692->7235 refer to.

For 3, I have a question. When my system has no requests, the worker_task_slots_available value increases from 10 to 1000. It seems that some process is using the worker's slots.