Hello!
Together with my team, we are implementing Temporal. The workflow consists of 4 activities (1 of them local). When generating a load of 200 RPS, Temporal works, but I'm not satisfied with its speed. During the test, 12,000 workflows are launched, and most of them get stuck in the WorkflowTaskScheduled state.
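For context, the workflow is roughly shaped like this (the interfaces, names, and timeouts below are placeholders for illustration, not our actual code):

```java
import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityOptions;
import io.temporal.activity.LocalActivityOptions;
import io.temporal.workflow.Workflow;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;
import java.time.Duration;

@WorkflowInterface
interface LoadTestWorkflow {
  @WorkflowMethod
  void run(String input);
}

@ActivityInterface
interface RemoteSteps {
  String stepOne(String in);
  String stepTwo(String in);
  void stepFour(String in);
}

@ActivityInterface
interface LocalSteps {
  String stepThree(String in);
}

class LoadTestWorkflowImpl implements LoadTestWorkflow {
  // 3 regular activities
  private final RemoteSteps remote =
      Workflow.newActivityStub(
          RemoteSteps.class,
          ActivityOptions.newBuilder()
              .setStartToCloseTimeout(Duration.ofSeconds(10))
              .build());

  // 1 local activity
  private final LocalSteps local =
      Workflow.newLocalActivityStub(
          LocalSteps.class,
          LocalActivityOptions.newBuilder()
              .setStartToCloseTimeout(Duration.ofSeconds(5))
              .build());

  @Override
  public void run(String input) {
    String a = remote.stepOne(input);
    String b = remote.stepTwo(a);
    String c = local.stepThree(b);
    remote.stepFour(c);
  }
}
```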
The temporal_workflow_task_schedule_to_start_latency_seconds metric also keeps growing for as long as the test runs.
Increasing maxConcurrentWorkflowTaskExecutionSize, maxConcurrentActivityExecutionSize, maxWorkflowThreadCount, and so on does not affect the result.
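For reference, here is roughly how we raise those limits when creating the worker (Java SDK; the task queue name and the values are illustrative):

```java
import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.worker.Worker;
import io.temporal.worker.WorkerFactory;
import io.temporal.worker.WorkerFactoryOptions;
import io.temporal.worker.WorkerOptions;

public class WorkerStarter {
  public static void main(String[] args) {
    WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
    WorkflowClient client = WorkflowClient.newInstance(service);

    // Factory-level option: maximum number of cached workflow threads.
    WorkerFactoryOptions factoryOptions =
        WorkerFactoryOptions.newBuilder()
            .setMaxWorkflowThreadCount(1200)
            .build();

    WorkerFactory factory = WorkerFactory.newInstance(client, factoryOptions);

    // Worker-level options: how many workflow/activity tasks may execute concurrently.
    WorkerOptions workerOptions =
        WorkerOptions.newBuilder()
            .setMaxConcurrentWorkflowTaskExecutionSize(400)
            .setMaxConcurrentActivityExecutionSize(400)
            .build();

    Worker worker = factory.newWorker("load-test-task-queue", workerOptions);
    // worker.registerWorkflowImplementationTypes(...);
    // worker.registerActivitiesImplementations(...);

    factory.start();
  }
}
```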
It should stay around 1 (100%). If you see it dip, or it never even reaches 100%, then most likely you need either more worker pods or more workflow task / activity task pollers on your workers.
If this graph shows values > 0, then again it seems you might have under-provisioned workers (this metric shows tasks written to the db when there are no pollers available).
On your worker side, please look at the CPU and memory utilization % and share it. We need to see whether to add more workers or increase the poller counts.
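If poller counts turn out to be the bottleneck, in the Java SDK they can be raised on WorkerOptions, roughly like this (the values here are just an example, tune them against your own metrics):

```java
import io.temporal.worker.WorkerOptions;

public class PollerTuningExample {
  // Example values only; adjust to your workload.
  static WorkerOptions tunedWorkerOptions() {
    return WorkerOptions.newBuilder()
        .setMaxConcurrentWorkflowTaskPollers(10)   // Java SDK default is 2
        .setMaxConcurrentActivityTaskPollers(10)   // Java SDK default is 5
        .setMaxConcurrentWorkflowTaskExecutionSize(400)
        .setMaxConcurrentActivityExecutionSize(400)
        .build();
  }
}
```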
If you can share results of all this and also share your dynamic config values of