Data serialization error for Temporal on Azure Test

We are deploying Temporal.io on our clusters using a third-party Helm chart. We’ve successfully deployed Temporal on AWS clusters, but not on Azure.

Temporal has a number of columns of type mediumblob, which store data encoded in the proto3 format. Our application logs show errors similar to the following:

> Unable to start server. Error: could not build arguments for function
"go.temporal.io/server/temporal".ServerLifetimeHooks (/home/builder/temporal/temporal/fx.go:753): failed to build temporal.Server: could not build arguments for function
"go.temporal.io/server/temporal".glob..func9 (/home/builder/temporal/temporal/server_impl.go:63): failed to build *temporal.ServerImpl: could not build arguments for function
"go.temporal.io/server/temporal".NewServerFxImpl (/home/builder/temporal/temporal/server_impl.go:67): could not build value group *temporal.ServicesMetadata[group="services"]: could not build arguments for function
"go.temporal.io/server/temporal".HistoryServiceProvider (/home/builder/temporal/temporal/fx.go:340): failed to build config.Persistence: received non-nil error from function
"go.temporal.io/server/temporal".ApplyClusterMetadataConfigProvider (/home/builder/temporal/temporal/fx.go:577): error while fetching cluster metadata: proto: ClusterMetadata: wiretype end group for non-group

For tables with static data, we’ve been able to manually update the stored values by copying the properly encoded values from other successful releases. Those values were then deserialized correctly and helped the application move past some startup errors. From some point onwards, though, the application generates data dynamically, and obviously we cannot keep correcting it manually. What this indicates is that the application behaves correctly when the stored data is encoded properly, but the serialization process ends up creating corrupted values.

We’ve been able to replicate the same errors by running the application on a local machine but pointing it at the Azure DB. This seems to indicate that the DB is the culprit and that this is not an issue with the Kubernetes cluster.
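
To check whether the database layer itself mangles binary data independently of Temporal, we can run a simple round-trip test against the same Azure instance. This is just a minimal sketch; the DSN and the blob_roundtrip scratch table are placeholders:

```go
package main

import (
	"bytes"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN pointing at the Azure MySQL instance.
	db, err := sql.Open("mysql", "user:pass@tcp(my-azure-mysql:3306)/temporal_test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Scratch table (hypothetical name) using the same column type Temporal uses.
	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS blob_roundtrip (id INT PRIMARY KEY, data MEDIUMBLOB)"); err != nil {
		log.Fatal(err)
	}

	// Bytes that tend to get mangled by charset conversion (0x00, high bytes, CR/LF).
	in := []byte{0x0a, 0x0d, 0x00, 0x1a, 0x80, 0xff, 0xc3, 0x28}

	if _, err := db.Exec("REPLACE INTO blob_roundtrip (id, data) VALUES (1, ?)", in); err != nil {
		log.Fatal(err)
	}

	var out []byte
	if err := db.QueryRow("SELECT data FROM blob_roundtrip WHERE id = 1").Scan(&out); err != nil {
		log.Fatal(err)
	}

	if bytes.Equal(in, out) {
		log.Println("round-trip OK: bytes came back unchanged")
	} else {
		log.Printf("round-trip CORRUPTED: wrote %x, read back %x", in, out)
	}
}
```

If the bytes come back altered, that would point at the database or driver configuration rather than Temporal's serialization.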

Please let me know if you have any info or experience regarding this issue; any input is greatly appreciated. Thanks!

First time seeing this issue. Could you try to clean all data in the cluster_metadata_info table and start the service again? Will it encounter the same issue?
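
If it helps, a minimal sketch of that cleanup step (the DSN is a placeholder; consider backing up the table first):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; point it at the Temporal database on Azure.
	db, err := sql.Open("mysql", "user:pass@tcp(my-azure-mysql:3306)/temporal")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Remove the (possibly corrupted) cluster metadata rows so they can be
	// re-created on the next startup, per the suggestion above.
	res, err := db.Exec("DELETE FROM cluster_metadata_info")
	if err != nil {
		log.Fatal(err)
	}
	n, _ := res.RowsAffected()
	log.Printf("deleted %d rows from cluster_metadata_info", n)
}
```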