High volume of context deadline exceeded from visiblity_manager_metrics.go

I have a high volume of these errors in the temporal server logs. I’m using Temporal 1.22.2 with Visibility on Mysql. It’s around ~100 logs / sec so basically my temporal log is completely dominated by these. Does anyone have any suggestions on what I could be looking at the fix this?

{
    "level": "error",
    "ts": "2024-02-13T01:18:48.686Z",
    "msg": "Operation failed with an error.",
    "error": "context deadline exceeded",
    "logging-call-at": "visiblity_manager_metrics.go:264",
    "stacktrace": "go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/runner/work/temporal/temporal/common/log/zap_logger.go:156\ngo.temporal.io/server/common/persistence/visibility.(*visibilityManagerMetrics).updateErrorMetric\n\t/home/runner/work/temporal/temporal/common/persistence/visibility/visiblity_manager_metrics.go:264\ngo.temporal.io/server/common/persistence/visibility.(*visibilityManagerMetrics).UpsertWorkflowExecution\n\t/home/runner/work/temporal/temporal/common/persistence/visibility/visiblity_manager_metrics.go:118\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).upsertExecution\n\t/home/runner/work/temporal/temporal/service/history/visibility_queue_task_executor.go:333\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).processUpsertExecution\n\t/home/runner/work/temporal/temporal/service/history/visibility_queue_task_executor.go:234\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).Execute\n\t/home/runner/work/temporal/temporal/service/history/visibility_queue_task_executor.go:118\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/runner/work/temporal/temporal/service/history/queues/executable.go:236\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).executeTask.func1\n\t/home/runner/work/temporal/temporal/common/tasks/fifo_scheduler.go:223\ngo.temporal.io/server/common/backoff.ThrottleRetry.func1\n\t/home/runner/work/temporal/temporal/common/backoff/retry.go:119\ngo.temporal.io/server/common/backoff.ThrottleRetryContext\n\t/home/runner/work/temporal/temporal/common/backoff/retry.go:145\ngo.temporal.io/server/common/backoff.ThrottleRetry\n\t/home/runner/work/temporal/temporal/common/backoff/retry.go:120\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).executeTask\n\t/home/runner/work/temporal/temporal/common/tasks/fifo_scheduler.go:233\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).processTask\n\t/home/runner/work/temporal/temporal/common/tasks/fifo_scheduler.go:211"
}

After a temporal-server restart we are getting hundreds of these. We never had this issue before. How can I debug this? Temporal itself is working, workflows are running, the Temporal UI can run queries, and the DB is not under load.
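
In case it helps narrow this down, below is a minimal latency probe I plan to run from the same network segment as the history service. It is just a sketch: it assumes the standard SQL visibility schema (table executions_visibility), the DSN and database name are placeholders for my real connection settings, and the 2-second deadline is only meant to roughly mirror a short persistence call timeout.

package main

import (
	"context"
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN: replace with the visibility store host/credentials.
	db, err := sql.Open("mysql", "user:pass@tcp(visibility-host:3306)/temporal_visibility")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Short deadline so a slow round trip fails the same way it does in the
	// server logs (context deadline exceeded).
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Cheap connectivity check first.
	start := time.Now()
	if err := db.PingContext(ctx); err != nil {
		log.Fatalf("ping failed after %v: %v", time.Since(start), err)
	}
	log.Printf("ping round trip: %v", time.Since(start))

	// Then a full count, which also hints at how large the visibility table is.
	start = time.Now()
	var rows int64
	if err := db.QueryRowContext(ctx, "SELECT COUNT(*) FROM executions_visibility").Scan(&rows); err != nil {
		log.Fatalf("count failed after %v: %v", time.Since(start), err)
	}
	log.Printf("executions_visibility rows=%d, query took %v", rows, time.Since(start))
}

If the count itself takes seconds to complete, that would at least tell me the visibility table has grown large enough that even simple scans are slow, even though overall DB load looks fine.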