Temporal and Cadence performance comparison

Hello.

I ran a performance comparison between Temporal and Cadence and saw different latency patterns between them. Could you give me some advice on interpreting the results?

If you need more information to analyze the results, please let me know.

[Test environment]

  • 3 hosts
  • OS version: CentOS 8.5.2111
  • Cassandra version: 3.11.12-1
  • Elasticsearch version: 7.17.1
  • Temporal-server version: 1.15.2
  • Cadence-server version: 0.23.1
  • Kafka/ZooKeeper for Cadence
  • 12 workers (4 workers per host)

[Temporal/Cadence configuration]

  • Number of history shards: 100
  • Enable advanced visibility
  • persistenceMaxQPS for the history, matching, and worker services: 20000 (see the sketch below)
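
For reference, a minimal sketch of how these settings map onto Temporal's config files, assuming temporal-server v1.15's standard static and dynamic config layout (double-check the key names against your version; the Cadence setup used its equivalent keys):

```yaml
# Static config (e.g. development.yaml): the shard count is fixed when
# the cluster is first created and cannot be changed afterwards.
persistence:
  numHistoryShards: 100

# Dynamic config file: per-service persistence QPS limits.
history.persistenceMaxQPS:
  - value: 20000
    constraints: {}
matching.persistenceMaxQPS:
  - value: 20000
    constraints: {}
worker.persistenceMaxQPS:
  - value: 20000
    constraints: {}
```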

[Test method]

  • Create 1000, 5000, and 8000 cron workflows (in separate runs) that are triggered every minute (see the sketch after this list).
  • Measure the workflow latency (end time - execution time).
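
For context, the cron workflows were created with a small loop along these lines; this is a simplified sketch using the Temporal Go SDK, where the workflow name, task queue, and client options are placeholders rather than my exact harness (the Cadence runs used the equivalent Cadence client API):

```go
// Rough sketch of the load generator (Temporal side, Go SDK).
package main

import (
	"context"
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Dial the frontend service (defaults to localhost:7233).
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create client:", err)
	}
	defer c.Close()

	const n = 1000 // 1000 / 5000 / 8000 depending on the run
	for i := 0; i < n; i++ {
		_, err := c.ExecuteWorkflow(context.Background(), client.StartWorkflowOptions{
			ID:           fmt.Sprintf("cron-bench-%d", i), // placeholder ID scheme
			TaskQueue:    "bench-queue",                   // placeholder task queue
			CronSchedule: "* * * * *",                     // trigger every minute
		}, "BenchWorkflow") // placeholder workflow name
		if err != nil {
			log.Fatalln("failed to start workflow:", err)
		}
	}
}
```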

[Test result]

  • Average latency was similar between the two.
  • However, Temporal’s maximum latency was noticeably longer than Cadence’s; there was a long tail in the Temporal results.

[Test data]

(latency comparison charts attached)

From memory, at least the default configs for task queue partitions are different (if your worker poller count is not large enough, this config can affect latency):

By default, Cadence uses 1 task queue partition: cadence/config.go at v0.23.1 · uber/cadence · GitHub

By default, Temporal uses 4 task queue partitions:

Can you try using the same number, either all 4 or all 1?
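
For example, on the Temporal side the partition count can be pinned through the dynamic config file; a sketch, assuming the v1.15 key names (the Cadence counterparts should be matching.numTasklistReadPartitions / matching.numTasklistWritePartitions):

```yaml
# Dynamic config: set both read and write partitions to 1
# to match Cadence's default.
matching.numTaskqueueReadPartitions:
  - value: 1
    constraints: {}
matching.numTaskqueueWritePartitions:
  - value: 1
    constraints: {}
```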

@Wenquan_Xing Thank you for your comment.
I changed the number of task queue partitions to 1, and the result is now almost the same as Cadence’s. :+1:

Here is the comparison data and chart.

One more thing to adjust is the default rate limit of task dispatching (defaults to 1000): temporal/config.go at v1.15.2 · temporalio/temporal · GitHub

Based on the chart, I believe the above config should be increased.
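
If I am reading the matching service config right, that limit should be adjustable through dynamic config; a sketch, with the key name and target value taken as assumptions from the v1.15 source rather than something I have verified:

```yaml
# Dynamic config: raise the per-task-queue dispatch rate limit
# (assumed default: 1000).
admin.matchingNamespaceTaskqueueToPartitionDispatchRate:
  - value: 10000
    constraints: {}
```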

That worked well; adjusting that configuration improved performance further.
Thank you for the suggestion.

During the test, I found another issue regarding advanced visibility; I’ll open a new post about it.