What is the equivalent of Cadence's bootstrapHosts in Temporal?

Looking to define a property for the list of hosts.

Thanks,

Kasi

This was removed in Temporal – Each service now heartbeats into the ClusterMembership table which is then used to derive the current bootstrap hosts.

Also note the new broadcastAddress configuration value that can be used to change the IP address that is reported to the table above - See https://docs.temporal.io/docs/configure-temporal-server#membership---required
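For reference, a minimal sketch of where that value lives in the server config (the IP shown is a placeholder; name and maxJoinDuration are included only for context):

global:
  membership:
    name: temporal
    maxJoinDuration: 30s
    broadcastAddress: "10.0.0.1"  # the IP this host advertises to the ClusterMembership table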

Thanks, is there a sample definition I can look at? If I have host1, host2, host3, and host4 running Temporal, how do I define this property?


You do not need to define anything unless bindOnIp is different from the address remote nodes need to use to reach you.

Are you experiencing any problems without specifying any options in this area?

Right now this is what I have for the frontend. Do I need to define a bindOnIp? Also, what would the bindOnIp value be? 127.0.0.1? How do the hosts know of each other to communicate if I do not specify all the hosts? Thanks.

frontend:
  rpc:
    grpcPort: 7233
    membershipPort: 6933
    bindOnLocalHost: true

Is bindOnIP the actual IP address of the host that Temporal is running on?

Are you running all services on a single machine or multiple machines through something like Kubernetes?

If you are running everything on a single machine, you can specify bindOnLocalHost: true (as you have) for all services and not specify bindOnIp or broadcastAddress.

All the services (frontend, history, matching, worker) run on host1, host2, host3, and host4. How will each of them discover the others, and is there a way to validate it?

Thanks a lot,

Kasi

This is my current configuration; can you please verify it? Thanks.

services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnLocalHost: true
    metrics:
      tags:
        type: frontend
      prometheus:
        timerType: "histogram"
        listenAddress: "0.0.0.0:9090"
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
      bindOnIP: 127.0.0.1
    metrics:
      tags:
        type: history
      prometheus:
        timerType: "histogram"
        listenAddress: "0.0.0.0:9090"
  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
      bindOnIP: 127.0.0.1
    metrics:
      tags:
        type: matching
      prometheus:
        timerType: "histogram"
        listenAddress: "0.0.0.0:9090"
  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
      bindOnIP: 127.0.0.1
    metrics:
      tags:
        type: worker
      prometheus:
        timerType: "histogram"
        listenAddress: "0.0.0.0:9090"
clusterMetadata:
  enableGlobalDomain: false
  failoverVersionIncrement: 10
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      rpcAddress: "localhost:7233"
dcRedirectionPolicy:
  policy: "noop"
  toDC: ""
archival:
  history:
    status: "disabled"
  visibility:
    status: "disabled"
domainDefaults:
  archival:
    history:
      status: "disabled"
    visibility:
      status: "disabled"
publicClient:
  hostPort: "localhost:7233"
dynamicConfigClient:
  filepath: "config/dynamicconfig/development.yaml"
  pollInterval: "10s"
global:
  pprof:
    port: 7936
  membership:
    name: temporal
    maxJoinDuration: 30s
    broadcastAddress: "127.0.0.1"

In your configuration, where all services run on all nodes, the simplest approach would be to change your configuration as follows (see the sketch after this list):

  • Remove references to bindOnLocalHost
  • Define all services with bindOnIp: 0.0.0.0
  • Set broadcastAddress: $hostIP - this should be the unique IP of the host
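For example, here is a sketch of what the relevant parts could look like on host1, where 10.0.0.1 is a placeholder for host1's routable IP (the bindOnIP key casing follows the config you pasted above; keep your metrics and other sections as they are):

services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnIP: 0.0.0.0          # replaces bindOnLocalHost: true
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
      bindOnIP: 0.0.0.0
  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
      bindOnIP: 0.0.0.0
  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
      bindOnIP: 0.0.0.0
global:
  pprof:
    port: 7936
  membership:
    name: temporal
    maxJoinDuration: 30s
    broadcastAddress: "10.0.0.1"  # this host's own IP; use each host's own IP on host2/3/4

On host2 through host4 the only value that changes is broadcastAddress.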

Also note: you can paste code without losing formatting by wrapping it in the [code][/code] delimiters.

Thanks a ton, will try it out


Hi Shawn,

One question I have is: how do the hosts discover each other, and how does the table get built? Previously we listed the hosts explicitly; in this case, with only the host IP and membership port defined, how do the hosts discover each other?

I am looking for the logic/mechanism by which the hosts discover each other.

Thanks,

Kasi

Hey Shawn,

This is my understanding: under membership we have defined a ‘name’ attribute, which determines all the other participating nodes based on their registered IP addresses, correct?

Thanks,

Kasi

Even if the above is true, how do the hosts discover each other? Is there a port they communicate/broadcast on so others can listen? How does a host find out who else is registered with that ‘name’?

Thanks a lot,

Kasi

The ‘name’ config should be the same across all hosts in your cluster and is being deprecated to be handled internally - A value like temporal should suffice.

The functionality you are interested in is represented by the rpc_address and rpc_port values that are reported into the membership database table.

  • rpc_port is derived from the rpc.membershipPort config value for each service
  • rpc_address defaults to the rpc.bindOnIp value defined for the service; however, it is overridden by global.membership.broadcastAddress

The combination of these two values uniquely identifies each host for discovery. Each service writes these values into the database at regular intervals, which newly initializing hosts then use to bootstrap the ringpop/membership layer.
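To make that mapping concrete, here is a commented sketch using the history service as an example (10.0.0.1 is again a placeholder IP; the comments describe which config value ends up in the membership table):

services:
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934       # reported to the membership table as rpc_port for the history role
      bindOnIP: 0.0.0.0          # rpc_address defaults to this value...
global:
  membership:
    name: temporal
    broadcastAddress: "10.0.0.1" # ...but is overridden by this, so rpc_address = 10.0.0.1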

Thanks shawn really appreciate it

Happy to help - This behavior requires a bit of digging to discover and could be better documented 🙂

Let me know if you have any other questions or run into any issues.

Hi Shawn,

What is the polling interval, and when does the table get purged?

Thanks,

Kasi

For reads, the table is only read once during initialization of the service role – as you only need to join the membership layer once.

A row is created on each startup of a role and then that row is updated every 10-15 seconds - Only nodes that have heartbeated in the last 20 seconds are considered bootstrap candidates on the read side. Each row is cleaned up after 48 hours of no updates.

Hi Shawn,

After this change, I do not see the frontend service listening on port 7233. If I run netstat, there is no LISTEN on port 7233 and I see several CLOSE_WAIT on 7239. Can you please advise?