Multi Cluster metadata

How do I configure multi-cluster metadata?

I would like to deploy the same code to two Kubernetes clusters (mt-t1.carl and mt-t2.cdc2), with a CockroachDB backend.

TEMPORAL_CLUSTERNAME=mt-t1.carl
TEMPORAL_PUBLICCLIENT_HOSTPORT=server-asyncworkflow-test1.apps.mt-t1.carl.gkp.net:443

deployment.yaml configuration

clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: "##TEMPORAL_CLUSTERNAME##"
  currentClusterName: "##TEMPORAL_CLUSTERNAME##"
  clusterInformation:
    "##TEMPORAL_CLUSTERNAME##":
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      rpcAddress: "##TEMPORAL_PUBLICCLIENT_HOSTPORT##"

The first cluster deployed successfully, and a new record was inserted into the cluster metadata table. The second cluster's deployment fails with the error below, since currentClusterName is immutable once recorded.

panic: Current cluster is not specified in cluster info
goroutine 1 [running]:
go.temporal.io/server/common/cluster.NewMetadata(0x274fa01, 0xa, 0xc000048302, 0xa, 0xc000511220, 0xa, 0xc000624fc0, 0xa, 0x2752090)
/temporal/common/cluster/metadata.go:114 +0x425
go.temporal.io/server/common/resource.New(0xc00000a1e0, 0x23a091b, 0x8, 0xc000623f40, 0xc000623f60, 0xc000790180, 0xc00053cf48, 0xc000580c00, 0x0, 0xc000ba6cd0)
/temporal/common/resource/resourceImpl.go:191 +0x4a5
go.temporal.io/server/service/frontend.NewService(0xc00000a1e0, 0xc000942528, 0x8, 0xc000485440)
/temporal/service/frontend/service.go:250 +0x24e
go.temporal.io/server/temporal.(*Server).Start(0xc000614360, 0xc000614360, 0x7)
/temporal/temporal/server.go:202 +0x11fa
main.buildCLI.func2(0xc000956540, 0x0, 0x0)
/temporal/cmd/server/main.go:160 +0xbe8
github.com/urfave/cli/v2.(*Command).Run(0xc00016c360, 0xc000956340, 0x0, 0x0)
/temporal/vendor/github.com/urfave/cli/v2/command.go:163 +0x4dd
github.com/urfave/cli/v2.(*App).RunContext(0xc00064e1a0, 0x273ab00, 0xc000048060, 0xc00003a180, 0x3, 0x3, 0x0, 0x0)
/temporal/vendor/github.com/urfave/cli/v2/app.go:313 +0x810
github.com/urfave/cli/v2.(*App).Run(...)
/temporal/vendor/github.com/urfave/cli/v2/app.go:224
main.main()
/temporal/cmd/server/main.go:50 +0x66

The configuration is not accepted for both clusters. How should this be configured for multiple clusters?
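For reference, the panic in common/cluster/metadata.go fires when currentClusterName is not among the keys of clusterInformation. A sketch of a configuration that lists both clusters explicitly, so that either deployment's currentClusterName matches a key — note that cluster-2's frontend address is not given in this thread, so it is left as a placeholder:

```yaml
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: "mt-t1.carl"
  # still templated per deployment; resolves to mt-t1.carl or mt-t2.cdc2
  currentClusterName: "##TEMPORAL_CLUSTERNAME##"
  clusterInformation:
    "mt-t1.carl":
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      rpcAddress: "server-asyncworkflow-test1.apps.mt-t1.carl.gkp.net:443"
    "mt-t2.cdc2":
      enabled: true
      initialFailoverVersion: 2
      rpcName: "frontend"
      rpcAddress: "..."  # cluster-2 frontend endpoint, not given in the thread
```

This only addresses the name check in the metadata validation; as the replies below explain, it does not make two independent clusters over one database a supported setup.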

Hi @maxim, @alex

I have deployed the temporal-server in two k8s clusters (mt-t1.carl and mt-t2.cdc2). Load balancing is not set up yet. Both clusters use the same CockroachDB backend and the same namespace.

When I create a workflow against cluster-1, I can see it only in cluster-1's temporal-web, and likewise for cluster-2. Since both clusters share the same database and namespace, shouldn't the workflow appear in temporal-web on both clusters? That is not happening. The configuration is below — what is missing, and why doesn't the data sync between the two clusters?

clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: "mt-t1.carl"
  currentClusterName: "mt-t2.cdc2"
  clusterInformation:
    mt-t1.carl:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      rpcAddress: "frontend:7233"
    mt-t2.cdc2:
      enabled: true
      initialFailoverVersion: 2
      rpcName: "frontend"
      rpcAddress: "frontend:7233"

Running two independent Temporal clusters on top of a single DB is not supported.

You could run a single Temporal cluster across multiple availability zones, but that would require full point-to-point connectivity among all of the hosts.

@maxim
can you please elaborate on a single Temporal cluster across multiple availability zones?
How can it be configured?

All nodes have the same configuration, share the same DB, and can communicate with each other directly.
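To illustrate, a minimal sketch of the shared clusterMetadata block that every node in every zone would use identically — the cluster name and frontend address here are assumptions for illustration, not values from this thread:

```yaml
clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 10
  # a single logical cluster: master and current are the same name
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      # a frontend address reachable from every zone (hypothetical)
      rpcAddress: "temporal-frontend.internal.example:7233"
```

The key point is that there is only one entry under clusterInformation: the nodes in different zones are members of one cluster, not two clusters pointed at the same database.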

I have updated the rpcAddress to Temporal_Pod_IP:7233.

I have the same configuration on both the clusters.

Now I can see a few of cluster-2's requests in cluster-1, and even fewer of cluster-1's requests in cluster-2, but the two clusters are not in sync.

MasterCluster is Cluster-1.
CurrentCluster is Cluster-2.

Try tctl admin membership and tctl admin cluster describe to troubleshoot the cluster membership.

I have tried most of the possibilities: swapping the clusters between master and current, using the same configuration on both clusters, and even the default configuration ("active") for both. The data still doesn't sync.

clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: "mt-d1.carl"
  currentClusterName: "mt-d2.carl"
  clusterInformation:
    "mt-d1.carl":
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      rpcAddress: "100.127.0.243:7233"
    "mt-d2.carl":
      enabled: true
      initialFailoverVersion: 2
      rpcName: "frontend"
      rpcAddress: "100.127.0.243:7233"

tctl admin cluster describe

{
  "supportedClients": {
    "temporal-cli": "\u003c2.0.0",
    "temporal-go": "\u003c2.0.0",
    "temporal-java": "\u003c2.0.0",
    "temporal-server": "\u003c2.0.0"
  },
  "serverVersion": "1.12.0",
  "membershipInfo": {
    "currentHost": {
      "identity": "100.127.0.243:7233"
    },
    "reachableMembers": [
      "100.127.0.243:6933",
      "100.127.0.201:6934",
      "100.127.0.196:6935",
      "100.127.0.247:6939"
    ],
    "rings": [
      {
        "role": "frontend",
        "memberCount": 1,
        "members": [
          {
            "identity": "100.127.0.243:7233"
          }
        ]
      },
      {
        "role": "history",
        "memberCount": 1,
        "members": [
          {
            "identity": "100.127.0.201:7234"
          }
        ]
      },
      {
        "role": "matching",
        "memberCount": 1,
        "members": [
          {
            "identity": "100.127.0.196:7235"
          }
        ]
      },
      {
        "role": "worker",
        "memberCount": 1,
        "members": [
          {
            "identity": "100.127.0.247:7239"
          }
        ]
      }
    ]
  }
}

tctl admin membership

{
  "ActiveMembers": [
    {
      "Role": 4,
      "HostID": "1090b629-1bac-11ec-8761-fe83d08b49fa",
      "RPCAddress": "100.127.17.14",
      "RPCPort": 6939,
      "SessionStart": "2021-09-22T13:50:36.83646Z",
      "LastHeartbeat": "2021-09-22T18:23:51.33193Z",
      "RecordExpiry": "2021-09-24T18:23:51.33193Z"
    },
    {
      "Role": 3,
      "HostID": "190ca612-1bac-11ec-a7e2-6efe79455199",
      "RPCAddress": "100.127.17.59",
      "RPCPort": 6935,
      "SessionStart": "2021-09-22T13:50:51.065879Z",
      "LastHeartbeat": "2021-09-22T18:23:50.232357Z",
      "RecordExpiry": "2021-09-24T18:23:50.232357Z"
    },
    {
      "Role": 2,
      "HostID": "1e93a68b-1bac-11ec-83e1-0e6ca2b56073",
      "RPCAddress": "100.127.17.26",
      "RPCPort": 6934,
      "SessionStart": "2021-09-22T13:51:00.544684Z",
      "LastHeartbeat": "2021-09-22T18:23:46.255067Z",
      "RecordExpiry": "2021-09-24T18:23:46.255067Z"
    },
    {
      "Role": 1,
      "HostID": "3e3399f9-1bac-11ec-9e02-524832b90dc3",
      "RPCAddress": "100.127.17.53",
      "RPCPort": 6933,
      "SessionStart": "2021-09-22T13:51:53.612378Z",
      "LastHeartbeat": "2021-09-22T18:23:48.05551Z",
      "RecordExpiry": "2021-09-24T18:23:48.05551Z"
    },
    {
      "Role": 4,
      "HostID": "42336f64-1bcc-11ec-a840-7e3b6798f7d9",
      "RPCAddress": "100.127.126.98",
      "RPCPort": 6939,
      "SessionStart": "2021-09-22T17:41:03.970385Z",
      "LastHeartbeat": "2021-09-22T18:06:35.025661Z",
      "RecordExpiry": "2021-09-24T18:06:35.025661Z"
    },