Hot Partition in Matching?

Hi,

I'm currently running performance benchmarks against Cadence and have hit an unusual issue; it may be down to how we are using Cadence (not quite sure).

As of now, I have 5 cadence-matching pods running in my Kubernetes setup. The issue arises when I begin loading the system: the CPU and memory of one matching pod is heavily skewed compared to the others, and the skew grows as the performance run progresses. My hunch is that there is a hot partition in my setup related to how I am creating the ID. We use a UUID to generate the ID that we send to Cadence: payout-approval:payout_HmUsiAUriMhJqn

Was wondering if someone could point out what I am doing wrong.

EDIT: Have tried with a plain UUID as well. Still the same issue.

By default, a task list uses a single partition. You can increase the number of partitions for a specific task list via dynamic config; the relevant config looks like the following. The constraints can include specific task list and domain names.

server:
  dynamicConfig:
    matching.numTaskqueueReadPartitions:
    - value: 5
      constraints: {}
    matching.numTaskqueueWritePartitions:
    - value: 5
      constraints: {}
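
For reference, here is a sketch of scoping the override to one domain and task list instead of applying it globally. The constraint keys (domainName, taskListName) and the example values "payments" and "payout-approval" are my assumptions based on Cadence's file-based dynamic config format; verify the exact key names against the dynamic config reference for your Cadence version.

server:
  dynamicConfig:
    matching.numTaskqueueReadPartitions:
    - value: 10
      constraints:
        domainName: "payments"        # hypothetical domain name
        taskListName: "payout-approval" # hypothetical task list name
    matching.numTaskqueueWritePartitions:
    - value: 10
      constraints:
        domainName: "payments"
        taskListName: "payout-approval"

Keep the read and write partition counts in sync, and note the trade-off: more partitions spread pollers and tasks across more matching hosts, at the cost of extra polling overhead per partition.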