Cannot achieve consistency level LOCAL_QUORUM with Replication Factor 3 with Cassandra

Hi Team,

Need help. I have a Cassandra cluster with 3 nodes in data center 1 and another 3 nodes in data center 2.
Whenever I start Temporal in Docker with Replication Factor = 3 (with the default LOCAL_QUORUM and SERIAL_QUORUM), I get the error below (Cannot achieve consistency level LOCAL_QUORUM). How do I fix it?

Failed: could not build arguments for function "go.temporal.io/server/common/resource".RegisterBootstrapContainer (/temporal/common/resource/fx.go:342): failed to build *archiver.HistoryBootstrapContainer: could not build arguments for function "go.temporal.io/server/common/resource".HistoryBootstrapContainerProvider (/temporal/common/resource/fx.go:328): failed to build client.Bean: received non-nil error from function "go.temporal.io/server/common/resource".PersistenceBeanProvider (/temporal/common/resource/fx.go:269): Cannot achieve consistency level LOCAL_QUORUM
Unable to start server. Error: unable to construct service "history": could not build arguments for function "go.temporal.io/server/common/resource".RegisterBootstrapContainer (/temporal/common/resource/fx.go:342): failed to build *archiver.HistoryBootstrapContainer: could not build arguments for function "go.temporal.io/server/common/resource".HistoryBootstrapContainerProvider (/temporal/common/resource/fx.go:328): failed to build client.Bean: received non-nil error from function "go.temporal.io/server/common/resource".PersistenceBeanProvider (/temporal/common/resource/fx.go:269): Cannot achieve consistency level LOCAL_QUORUM

Hi

Are you able to resolve this issue?
We are facing the same issue.

Thanks
phani

I think this has nothing to do with Temporal; it's really a Cassandra question. I'm not an expert on Cassandra, but I will try to answer based on my knowledge.
LOCAL_QUORUM requires that a majority of replicas (floor(RF / 2) + 1) ack in the local data center, i.e. the data center of the coordinator node. However, you have 2 data centers, and your replicas are distributed between them without regard to data center. So it is possible that, for some portion of the data, all of the replicas live in a single DC different from the coordinator's, and the coordinator won't receive enough acks from replicas within its own DC.
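You can check where a keyspace's replicas are placed with a query like the one below (a sketch; the keyspace name `temporal` is an assumption, substitute your own). If `replication` shows `SimpleStrategy`, placement ignores data centers entirely:

```sql
-- Inspect the replication settings of the keyspace.
-- SimpleStrategy places replicas around the ring with no DC awareness;
-- NetworkTopologyStrategy lets you pin a replica count per data center.
SELECT keyspace_name, replication
  FROM system_schema.keyspaces
 WHERE keyspace_name = 'temporal';
```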

Hi ,

Thank you for the response. We have three Cassandra nodes in the same data center with replication factor 3 (we used to have 2 and changed it to 3). When one node goes down, the whole Temporal cluster goes down.

We have below values when installing temporal.

replicationFactor: 1
consistency:
  default:
    consistency: "local_quorum"
    serialConsistency: "local_serial"

Thanks
phani

Is this a typo? You said you have 3 replicas?

Hi Phani,

In my case it turned out the keyspace cannot use SimpleStrategy; it needs to be changed to NetworkTopologyStrategy.
LOCAL_QUORUM also cannot work with a replication factor of 1; it needs to be changed to 3.
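A change like that can be sketched in CQL roughly as below. The keyspace name `temporal` and the data center names `dc1`/`dc2` are assumptions; use your actual keyspace and the DC names reported by `nodetool status`:

```sql
-- Switch the keyspace from SimpleStrategy to NetworkTopologyStrategy
-- with 3 replicas in each data center.
ALTER KEYSPACE temporal
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 3
  };
```

After changing the replication settings, running `nodetool repair` on each node streams existing data to its new replicas; until then, some replicas won't actually hold the data.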

You can also see this in the reference link:

regards,

ridwan
