Can't create more than max_prepared_stmt_count statements MySQL error

I have Temporal deployed using the latest Helm chart (v1.17.4) with MySQL as its persistence store. I’m periodically seeing the following error in our logs:

Error: Error 1461: Can't create more than max_prepared_stmt_count statements (current value: 16382)

This usually seems to happen when trying to refresh the namespace cache or when trying to lock the task queue.

I understand that we are hitting a limit on the number of prepared statements our MySQL server can accept, and we’ll look into increasing that limit. I’m wondering, though, whether there is any way to reduce the number of prepared statements that Temporal is creating.
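In case it helps anyone hitting the same error: a quick way to see how close the server is to the cap is to compare the Prepared_stmt_count status variable against max_prepared_stmt_count. A minimal sketch in Go (the DSN is a placeholder, not our real connection string):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; point it at the MySQL instance backing Temporal.
	db, err := sql.Open("mysql", "temporal:password@tcp(mysql-host:3306)/temporal")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var name string
	var limit, current int64

	// Server-wide cap on prepared statements (the value quoted in the 1461 error).
	if err := db.QueryRow("SHOW VARIABLES LIKE 'max_prepared_stmt_count'").Scan(&name, &limit); err != nil {
		log.Fatal(err)
	}

	// How many prepared statements currently exist across all connections.
	if err := db.QueryRow("SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count'").Scan(&name, &current); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("prepared statements in use: %d of %d allowed\n", current, limit)
}
```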

The following post seems related but I’m not sure if any of the mentioned performance improvements have been implemented.

These errors don’t seem to be causing any problems, and Temporal is up and running and working as expected. We would, however, like to avoid filling our logs with these errors.
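For what it’s worth, here is a generic database/sql sketch of the client-side knobs that bound how many server-side prepared statements a Go process can accumulate. This is not Temporal’s code; whether the equivalent settings (maxConns/maxIdleConns/maxConnLifetime, and connectAttributes for extra DSN parameters) are exposed through the Helm chart’s persistence values is an assumption I haven’t verified.

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN. With go-sql-driver/mysql, interpolateParams=true makes the
	// driver substitute placeholder arguments client-side for one-off queries
	// instead of issuing server-side PREPARE/EXECUTE/CLOSE round trips, so those
	// queries never count against max_prepared_stmt_count. (Explicitly prepared
	// statements still go to the server, and the option is disallowed with some
	// multibyte charsets.)
	dsn := "temporal:password@tcp(mysql-host:3306)/temporal?interpolateParams=true"

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Prepared statements are per-connection objects on the MySQL side, so a
	// smaller, shorter-lived pool puts an upper bound on how many this process
	// can hold open at once.
	db.SetMaxOpenConns(10)
	db.SetMaxIdleConns(10)
	db.SetConnMaxLifetime(time.Hour)
}
```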

What makes you believe “refresh namespace cache” or “lock task queue” could lead to this issue? And what exactly is “lock task queue”?

These are the most common error logs we are seeing:

"msg" : "Error refreshing namespace cache",
"error" : "ListNamespaces operation failed. Failed to get namespace rows. Error: Error 1461: Can't create more than max_prepared_stmt_count statements (current value: 16382)",
"stacktrace" : "go.temporal.io/server/common/log.(*zapLogger).Error
	/home/builder/temporal/common/log/zap_logger.go:142
go.temporal.io/server/common/namespace.(*registry).refreshLoop
	/home/builder/temporal/common/namespace/registry.go:407
go.temporal.io/server/internal/goro.Go.func1
	/home/builder/temporal/internal/goro/goro.go:56"
"msg" : "Persistent store operation failure",
"error" :  "Failed to lock task queue. Error: Error 1461: Can't create more than max_prepared_stmt_count statements (current value: 16382)",
"stacktrace" : "go.temporal.io/server/common/log.(*zapLogger).Error
	/home/builder/temporal/common/log/zap_logger.go:142
go.temporal.io/server/service/matching.(*taskReader).getTasksPump
	/home/builder/temporal/service/matching/taskReader.go:200
go.temporal.io/server/internal/goro.(*Group).Go.func1
	/home/builder/temporal/internal/goro/group.go:57"

I don’t necessarily think the issue is specific to these operations. We don’t have any workflow executions on this cluster yet, so it might be that things like refreshing the namespace cache account for most of the database access.

What is your numHistoryShards?
And how many service instances (k8s pods) do you have for each Temporal service (frontend/history/matching/worker)?

numHistoryShards is set to the default of 512.

We are using 2 replicas for each of frontend/history/matching/worker.
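For a rough back-of-envelope: prepared statements are per-connection objects, but the 16382 cap is server-wide, so the total is roughly (pods talking to MySQL) × (connections per pool) × (distinct statements cached per connection). With 8 pods (2 replicas × 4 services), each holding a pool for the default store (plus visibility, if that is also on MySQL), even a modest number of statements per connection multiplies up toward the limit.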