Not enough hosts to serve the request

While spinning up a new cluster, I am running into "Error looking up host for shardID" with the error "Not enough hosts to serve the request" and the following stack trace:

go.temporal.io/server/common/log.(*zapLogger).Error
	/home/builder/temporal/common/log/zap_logger.go:156
go.temporal.io/server/service/history/shard.(*ControllerImpl).acquireShards.func2
	/home/builder/temporal/service/history/shard/controller_impl.go:388
go.temporal.io/server/service/history/shard.(*ControllerImpl).acquireShards.func3
	/home/builder/temporal/service/history/shard/controller_impl.go:430

Eventually it hits "All services are stopped" and the cluster appears to be non-functional.

The ring looks fine – it is able to pick up all the services.

➜  ~  tctl  adm cl d
{
  "supportedClients": {
    "temporal-cli": "\u003c2.0.0",
    "temporal-go": "\u003c2.0.0",
    "temporal-java": "\u003c2.0.0",
    "temporal-php": "\u003c2.0.0",
    "temporal-server": "\u003c2.0.0",
    "temporal-typescript": "\u003c2.0.0",
    "temporal-ui": "\u003c3.0.0"
  },
  "serverVersion": "1.21.2",
  "membershipInfo": {
    "currentHost": {
      "identity": "[2602:fb33:8:2:99d6::e]:7233"
    },
    "reachableMembers": [
      "[2602:fb33:8:2:99d6::f]:6939",
      "[2602:fb33:8:2:99d6::e]:6933",
      "[2602:fb33:8:2:c2e8::6]:6934",
      "[2602:fb33:8:2:99d6::11]:6935"
    ],
    "rings": [
      {
        "role": "frontend",
        "memberCount": 1,
        "members": [
          {
            "identity": "[2602:fb33:8:2:99d6::e]:7233"
          }
        ]
      },
      {
        "role": "history",
        "memberCount": 1,
        "members": [
          {
            "identity": "[2602:fb33:8:2:c2e8::6]:7234"
          }
        ]
      },
      {
        "role": "matching",
        "memberCount": 1,
        "members": [
          {
            "identity": "[2602:fb33:8:2:99d6::11]:7235"
          }
        ]
      },
      {
        "role": "worker",
        "memberCount": 1,
        "members": [
          {
            "identity": "[2602:fb33:8:2:99d6::f]:7239"
          }
        ]
      }
    ]
  },
  "clusterId": "7c56e517-87a9-448c-a89c-d5057f394d01",
  "clusterName": "active",
  "historyShardCount": 2048,
  "persistenceStore": "mysql8",
  "visibilityStore": "elasticsearch",
  "versionInfo": {
    "current": {
      "version": "1.21.2",
      "releaseTime": "2023-07-15T02:00:00Z"
    },
    "recommended": {
      "version": "1.25.0",
      "releaseTime": "2024-09-09T00:00:00Z"
    },
    "alerts": [
      {
        "message": "🪐 A new release is available!",
        "severity": "Low"
      }
    ],
    "lastUpdateTime": "2024-10-02T16:48:54.032707384Z"
  },
  "failoverVersionIncrement": "10",
  "initialFailoverVersion": "1"
}

Any guidance here? Thanks in advance.

If you have by chance set worker.perNamespaceWorkerCount in your dynamic config to something other than 1, I would make sure it is set to 1 (or remove this knob from your dynamic config entirely).
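
For reference, assuming you are using the file-based dynamic config YAML, the entry would look roughly like this (a sketch, not taken from your setup):

# either set the value back to 1 or drop this key entirely
worker.perNamespaceWorkerCount:
  - value: 1
    constraints: {}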

Otherwise, I would look at your server metrics, specifically the restarts counter metric, as well as any persistence errors via the persistence_error_with_type metric, to get more info on whether this is related to possible shard reloading.
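
If you scrape the server's Prometheus endpoint, queries along these lines would surface both (a sketch only; the exact exposed metric names and labels can vary with your metrics setup, e.g. some exporters append a _total suffix):

# service restarts over the last 5 minutes
sum(rate(restarts[5m]))

# persistence errors over the last 5 minutes
sum(rate(persistence_error_with_type[5m]))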