Why do we see too many logs for the visibility-queue-processor?


We see thousands of the log lines below in our Temporal cluster; to be precise, 11K logs in 10 seconds.

{"level":"info","ts":"2023-03-16T16:21:29.999Z","msg":"queue reader started","shard-id":467,"component":"visibility-queue-processor","queue-reader-id":0,"lifecycle":"Started","logging-call-at":"reader.go:166"}

{"level":"info","ts":"2023-03-16T16:21:29.999Z","msg":"queue reader stopped","shard-id":5353,"component":"timer-queue-processor","queue-reader-id":0,"lifecycle":"Stopped","logging-call-at":"reader.go:184"}

The question is: is this something we need to be concerned about? Could it explain some of the performance issues we have with our Temporal deployment?

We are on server version 1.18.5, and our cluster is backed by Aurora RDS.
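For anyone triaging the same symptom, one quick check is to bucket the lines by `component` to see which queue processor dominates the volume. A minimal sketch over the two sample lines above (in practice you would stream the real log file instead of a hardcoded list):

```python
import json
from collections import Counter

# Illustrative sample lines matching the log shape in question.
sample = [
    '{"level":"info","msg":"queue reader started","shard-id":467,'
    '"component":"visibility-queue-processor","queue-reader-id":0,"lifecycle":"Started"}',
    '{"level":"info","msg":"queue reader stopped","shard-id":5353,'
    '"component":"timer-queue-processor","queue-reader-id":0,"lifecycle":"Stopped"}',
]

# Count "queue reader" lifecycle logs per component.
counts = Counter(
    json.loads(line)["component"]
    for line in sample
    if "queue reader" in line
)
print(counts.most_common())
```

Running this over a 10-second capture would show whether the 11K lines are really all from the visibility queue or spread across all queue processors and shards.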

Hi Kenand,
This issue is fixed in server version 1.19. You will still see these logs on service startup; they indicate that background task processing has started for a given queue (one log line per shard).
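Until you upgrade, one way to quiet these info-level lifecycle messages is to raise the server log level in the static configuration. A hedged sketch, assuming the usual `log` section of the Temporal server config (check your deployment's config layout before applying):

```yaml
# Raising the level from "info" to "warn" suppresses the
# "queue reader started/stopped" lifecycle messages.
log:
  stdout: true
  level: warn
```

Note this also hides all other info-level logs, so it is a blunt workaround rather than a targeted filter.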