High DB CPU utilization with a suboptimal number of history shards?

When you're not getting enough throughput from your Temporal Workflows and DB CPU utilization is still low, the usual advice is to increase the number of history shards, since the shard is the unit of parallelism.
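For reference, this is a minimal sketch of where the shard count lives in a self-hosted Temporal server's static configuration (the surrounding datastore settings are placeholders). Note that `numHistoryShards` is fixed at cluster creation and cannot be changed afterward:

```yaml
persistence:
  # Unit of parallelism for the history service.
  # Immutable once the cluster has been created.
  numHistoryShards: 512
  defaultStore: default
  datastores:
    default:
      sql:
        pluginName: "mysql"   # placeholder datastore settings
        databaseName: "temporal"
```

Because the value is immutable, changing it for a test means bootstrapping a fresh cluster (or persistence store) with the new shard count.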

Is there any scenario where DB CPU utilization is very high for a given number of history shards, and drops as soon as the same test is run with a higher shard count?