Matching service high QPS on persistence

After adding a cluster-specific membershipPort under services.service.rpc, I am still seeing the gRPC ports of services in a different cluster being discovered. Do I need to use different gRPC ports per cluster?
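To illustrate the setup: both clusters keep the same grpcPort, and only membershipPort differs per cluster (the cluster 02 value below is illustrative, not our exact port):

# cluster 01 config file
history:
  rpc:
    grpcPort: 7934
    membershipPort: 6734

# cluster 02 config file
history:
  rpc:
    grpcPort: 7934
    membershipPort: 6735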

I also tried removing the DB keyspace in the other region (it used to be in both us-west and us-east; now it is us-west only). That brought the QPS load on the existing clusters down from 10X to 3X in clusters 01 and 02, but the shard stealing is still there. The staging environment, however, does not have this issue. Any ideas I could try?
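For context, the persistence side now points only at us-west, roughly like this (the host and keyspace name are placeholders, and the visibility datastore is omitted):

persistence:
  defaultStore: default
  datastores:
    default:
      cassandra:
        hosts: "cassandra.us-west.internal"  # placeholder; the us-east endpoints/keyspace were removed
        keyspace: "temporal"                 # placeholder name; now exists in us-west only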

history:
  rpc:
    grpcPort: 7934
    membershipPort: 6734
    bindOnIP: "0.0.0.0"

{"level":"info","ts":"2021-07-30T05:29:57.931Z","msg":"Current reachable members","service":"frontend","component":"service-resolver","service":"history","addresses":"[10.32.187.66:7934 10.32.36.243:7934]","logging-call-at":"rpServiceResolver.go:266"}