In our project we have successfully implemented a couple of relatively small workflows that fit into a db.r6g.large (2 vCPU, 16 GiB memory) AWS Aurora cluster (1 writer, 1 reader). Now we want to implement a heavier task: ~1M workflow instances and ~400 signals per second on average, with peaks up to 800. An initial load test showed that with a db.r6g.xlarge DB node (4 vCPU, 32 GiB memory) we can only handle 5% of the expected load; CPU utilization on the writer node went from 30% to 80%. Does this database load look normal, or is something wrong with our workflow implementation? Is there a way to utilize the reader database instances with Temporal?
We’re running Temporal v1.0.9 with the Java SDK.