Workflows Stuck in “Running” State for Several Days

The application has been idle since Aug 3rd UTC. All workflows are configured with a WorkflowExecution timeout of 45 seconds and a task timeout of 15 seconds.
The activities have 3 retries, with both the StartToClose timeout and the ScheduleToClose timeout set to 1 second.
The cluster has been idle with no incoming traffic, yet the server has been running about 4K “getTasks” operations (graphs are attached). The database is seeing this traffic, and in the UI all of these workflows still show up as “Running”.
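
For illustration, here is a minimal sketch of an equivalent configuration, assuming the Go SDK. Treating the task timeout as WorkflowTaskTimeout and “3 retries” as a maximum of 3 attempts is an assumption, and the ID and task queue names are placeholders:

```go
package example

import (
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/workflow"
)

// startOptions mirrors the workflow timeouts described above: the whole
// execution may run for at most 45 seconds, and each workflow task must
// complete within 15 seconds.
var startOptions = client.StartWorkflowOptions{
	ID:                       "example-workflow-id", // placeholder
	TaskQueue:                "example-task-queue",  // placeholder
	WorkflowExecutionTimeout: 45 * time.Second,
	WorkflowTaskTimeout:      15 * time.Second,
}

// withActivityOptions applies the activity timeouts described above:
// 1-second StartToClose and ScheduleToClose timeouts, with at most
// 3 attempts (assuming "3 retries" maps to MaximumAttempts: 3).
func withActivityOptions(ctx workflow.Context) workflow.Context {
	return workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout:    1 * time.Second,
		ScheduleToCloseTimeout: 1 * time.Second,
		RetryPolicy: &temporal.RetryPolicy{
			MaximumAttempts: 3,
		},
	})
}
```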

How can we get rid of these workflows, which the application is expected to discard once the workflow times out after 45 seconds?


Workflows from several days ago still show up as “Running”, which causes the system to query Cassandra unnecessarily for executions that the application has already discarded.

Hi @ramyamagham, sorry your question did not get a response so far. Can you let us know if this is still an issue?

You could use the Temporal CLI (tctl) to batch-terminate all workflows in a given namespace.
See the tctl docs page for more info.
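
For example, something along these lines (a sketch, not verified against your setup: the namespace name and reason are placeholders, a query like this requires advanced visibility, and the exact query syntax depends on your server version and visibility store):

```sh
# Start a batch job that terminates all executions still in Running state.
tctl --namespace your-namespace batch start \
  --query 'ExecutionStatus="Running"' \
  --reason 'Cleaning up stale executions' \
  --batch_type terminate

# Check on the batch job afterwards.
tctl --namespace your-namespace batch list
```

If advanced visibility is not enabled, individual executions can still be terminated one at a time with `tctl workflow terminate --workflow_id <id>`.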