Completed workflows appear as running in workflow list UI

Hi,
We have a weird issue with completed workflows still showing in the UI as running.

When I describe or query the workflow, everything seems OK:

temporal workflow describe --workflow-id analyze-ftp-SINA01398-demo-891dc036-9100-4023-bf57-e4b17921e9d7 --run-id 1404beab-1e4a-4a95-b249-288242e8c6ee

{
  "executionConfig": {
    "taskQueue": {
      "name": "worker-queue",
      "kind": "Normal"
    },
    "workflowExecutionTimeout": "0s",
    "workflowRunTimeout": "0s",
    "defaultWorkflowTaskTimeout": "10s"
  },
  "workflowExecutionInfo": {
    "execution": {
      "workflowId": "analyze-ftp-SINA01398-demo-891dc036-9100-4023-bf57-e4b17921e9d7",
      "runId": "1404beab-1e4a-4a95-b249-288242e8c6ee"
    },
    "type": {
      "name": "AnalyzeRemoteServerWorkflow"
    },
    "startTime": "2025-02-05T13:56:06.403057592Z",
    "closeTime": "2025-02-05T13:56:06.614598694Z",
    "status": "Completed",
    "historyLength": "18",
    "executionTime": "2025-02-05T13:56:06.403057592Z",
    "memo": {

    },
    "searchAttributes": {
      "indexedFields": {
        "BuildIds": "[\"unversioned\"]"
      }
    },
    "autoResetPoints": {

    },
    "stateTransitionCount": "11",
    "historySizeBytes": "2835",
    "mostRecentWorkerVersionStamp": {

    }
  }
}

temporal workflow query --workflow-id analyze-ftp-SINA01398-demo-891dc036-9100-4023-bf57-e4b17921e9d7 --run-id 1404beab-1e4a-4a95-b249-288242e8c6ee --type "__stack_trace"

Query result:
["Workflow is closed."]

We have to manually delete the workflow to “clean” the UI with temporal workflow delete ...
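
For reference, the full cleanup command looks roughly like this (using the IDs from the describe output above):

temporal workflow delete \
  --workflow-id analyze-ftp-SINA01398-demo-891dc036-9100-4023-bf57-e4b17921e9d7 \
  --run-id 1404beab-1e4a-4a95-b249-288242e8c6ee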

Is there anything we could be doing wrong here?
Thanks.

Including the other screenshot here.

I’m also facing this issue; not sure what to do about it other than just cleaning up.

If you are running visibility on a SQL db:

Upgrade to server version 1.27.1+ and see if you run into this again. See the release notes, specifically the “Fix out-of-order Visibility tasks with SQL database” section.
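
If you are not sure which server version you are running, the CLI can report it (this assumes the current temporal CLI; the command prints the server version among other cluster details):

temporal operator cluster system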

Hi @tihomir,
We are using Postgres with Temporal 1.27.2 and are still seeing such workflows.
Is there any chance the issue is caused by a wrong Postgres configuration?

I also ran into this using Temporal with Postgres. Not sure if the other posters here have the same DB.

FYI, I noticed that we had async replication in Postgres. That was the reason.

So try either getting rid of the read replicas or enabling sync replication (see the PostgreSQL documentation on the synchronous_commit parameter).
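
A minimal sketch of what that looks like in postgresql.conf on the primary, assuming the standby registers with the application_name replica1 (a placeholder for your actual replica):

# Wait for the synchronous standby to confirm each commit
synchronous_commit = on
# Require at least one synchronous standby; replica1 is a placeholder name
synchronous_standby_names = 'FIRST 1 (replica1)'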

Eh, it wasn’t the reason. All the rest of the issues were fixed by the sync replication, but the “running” workflow issue is still here.
So I’m still looking for a way to fix this irritating bug.

And again I was wrong. The rest of the issues are back, thanks to the Bitnami Postgres images with their buggy sync replication (Synchronous replication support is very limited · Issue #1011 · bitnami/containers · GitHub). So most probably the visibility issue results from this as well. I’m tired of these Postgres setups, so Temporal Cloud, wait for us :grinning_face_with_smiling_eyes:

How did you clean it up? I ran into this problem today. My current Temporal version is 1.24.1, but we cannot upgrade soon. What should I do in this situation for now?

Thank you.

Inside the workflow detail view, it shows as completed:

You can use the admin-tools pod and the Temporal CLI (Temporal CLI command reference | Temporal Platform Documentation) to remove it from history.
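
For example, assuming a Kubernetes deployment where the admin-tools pod is named temporal-admintools (the pod name, workflow ID, and run ID are placeholders for your own):

kubectl exec -it temporal-admintools -- \
  temporal workflow delete \
    --workflow-id <your-workflow-id> \
    --run-id <your-run-id>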
