DB size in Temporal

Hi Maxim/Tihomir

Is there any way to check the size of the database that Temporal creates during installation for its integrated DB, whether it is MySQL, PostgreSQL, etc.? If yes, please help, and how can we increase the size if required?


Hi, not sure I understand the question. The size of the DB is something you set up yourself, and the capacity needed depends largely on your workload size.
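For the "check the size" part: you can query your DBMS directly with standard MySQL/PostgreSQL queries, nothing Temporal-specific. The database names `temporal` and `temporal_visibility` below assume a default setup; adjust them if you named yours differently:

```sql
-- MySQL: on-disk size (data + indexes) of each Temporal schema, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema IN ('temporal', 'temporal_visibility')
GROUP BY table_schema;

-- PostgreSQL: total size of each Temporal database
SELECT pg_size_pretty(pg_database_size('temporal'));
SELECT pg_size_pretty(pg_database_size('temporal_visibility'));
```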


Hi @tihomir, do you mean that while installing Temporal we can define the size, i.e. the capacity, of the database?

Say in the future the integrated database where Temporal logs its history runs out of space (the size assigned during installation) and I want to increase it; I would also like to know the default size it was given.

Is there any command or config that can help me with this?

I don’t think there is a great answer for your question currently.

If you are doing a new setup, you would imo have to run a load test and measure the DB size difference before and after. That gives you an estimate of the storage cost per workflow, and from there an idea of what to expect given your expected workloads in the future.
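To make that extrapolation concrete, here is a back-of-the-envelope sketch. All numbers are placeholders from a hypothetical load test, not Temporal defaults, and it assumes closed workflow histories are kept for the retention period and deleted afterwards:

```python
# Rough capacity estimator: measure DB size before/after a load test,
# then extrapolate to the expected production workload.
# All inputs are placeholder numbers, not Temporal defaults.

def bytes_per_workflow(db_size_before: int, db_size_after: int,
                       workflows_run: int) -> float:
    """Average storage cost of one workflow execution during the test."""
    return (db_size_after - db_size_before) / workflows_run

def projected_db_size(per_wf: float, wf_per_day: int,
                      retention_days: int) -> float:
    """Steady-state storage while closed workflows stay within retention."""
    return per_wf * wf_per_day * retention_days

# Hypothetical load test: DB grew by 50 MB over 10,000 workflows.
per_wf = bytes_per_workflow(1_000_000_000, 1_050_000_000, 10_000)
size = projected_db_size(per_wf, wf_per_day=50_000, retention_days=30)
print(f"{per_wf:.0f} bytes/workflow -> ~{size / 1e9:.1f} GB at steady state")
# -> 5000 bytes/workflow -> ~7.5 GB at steady state
```

Repeat the measurement for each of the workflow shapes below, since history size grows with the number of activities and child workflows per execution.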

For example, consider testing with the following simple workflows:
a) a workflow that executes 1, 4, 16, etc. sequential activities
b) a workflow that executes 1, 4, 16, etc. parallel child workflows

If you already have an existing setup, look at your DB CPU and memory utilization and disk I/O, and measure your persistence latencies (server metrics):

histogram_quantile(0.95, sum(rate(persistence_latency_bucket{}[1m])) by (operation, le))

What to aim for varies by DB; for example, shoot for p99 being less than 50ms for Cassandra and MySQL. For PostgreSQL you might want this to be even lower (as it's typically a single-DB setup).
Distributed DBs (for example CockroachDB) might be slower at larger scales, but you would have to measure.
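If you want to watch that target continuously rather than query it by hand, a Prometheus alerting rule on the same histogram could look like this (a sketch: the group name, durations, and threshold are assumptions to adjust for your setup):

```yaml
groups:
  - name: temporal-persistence  # hypothetical rule group name
    rules:
      - alert: TemporalPersistenceLatencyHigh
        # p99 of Temporal persistence operations over the last 5 minutes
        expr: |
          histogram_quantile(0.99, sum(rate(persistence_latency_bucket{}[5m])) by (operation, le)) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Temporal persistence p99 above 50ms for {{ $labels.operation }}"
```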

Hope this helps

Thanks @tihomir