Is it OK to use a Temporal Schedule to run infinitely repeating scheduled jobs?


The new Schedule feature allows us to schedule workflows to run at second-level intervals, whereas the cron scheduler only supports minute-level intervals.
My question is: is it OK to use the Schedule feature to run a workflow indefinitely (i.e., without limiting remaining actions) at a one-second interval? I ask because I noticed there is a dedicated temporal-sys-scheduler workflow (please refer to the attached screenshot), and that workflow accumulates events. I remember that Temporal workflows with long event histories suffer a performance impact, which is why cron-scheduled jobs use Continue-As-New to start a new scheduled workflow. Hence this question.

The temporal-sys-scheduler-workflow Continues-As-New after 500 actions, so there is no performance concern with the length of its history. It's fine to create a Schedule that runs every second without limiting its actions.
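
In case it's helpful, here's a minimal sketch of creating such a Schedule with the Go SDK; the Schedule ID, workflow type, and task queue names below are placeholders. Leaving RemainingActions unset should mean the Schedule never stops on its own:

package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/sdk/client"
)

func main() {
	// Connect to the Temporal frontend (default local address assumed).
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create client", err)
	}
	defer c.Close()

	// Create a Schedule that starts a workflow once per second.
	// RemainingActions is left at its zero value, i.e. unlimited actions.
	_, err = c.ScheduleClient().Create(context.Background(), client.ScheduleOptions{
		ID: "every-second-schedule", // placeholder Schedule ID
		Spec: client.ScheduleSpec{
			Intervals: []client.ScheduleIntervalSpec{
				{Every: time.Second},
			},
		},
		Action: &client.ScheduleWorkflowAction{
			ID:        "every-second-workflow", // workflow ID prefix for each action
			Workflow:  "EverySecondWorkflow",   // placeholder workflow type
			TaskQueue: "schedule-demo",         // placeholder task queue
		},
	})
	if err != nil {
		log.Fatalln("unable to create schedule", err)
	}
}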

If you have more than 10 actions per second across all Schedules in a Namespace, you'll need to raise config limits (or open a ticket if you're on Cloud). If you have more than 100 actions per second, it might be better to use a Cron Workflow, since the way Cron Workflows are implemented is more efficient overall at that volume.
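
For comparison, a Cron Workflow is just a regular workflow started with the CronSchedule option set. A minimal Go SDK sketch (workflow ID, task queue, and workflow type are placeholders), keeping in mind that cron specs only go down to minute granularity:

package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

// startCronWorkflow starts a classic Cron Workflow. Each cron iteration runs
// as a fresh execution, so history length is never a concern; the trade-off
// is minute-level granularity instead of seconds.
func startCronWorkflow(ctx context.Context, c client.Client) error {
	_, err := c.ExecuteWorkflow(ctx, client.StartWorkflowOptions{
		ID:           "cron-demo-workflow", // placeholder workflow ID
		TaskQueue:    "schedule-demo",      // placeholder task queue
		CronSchedule: "* * * * *",          // standard cron spec: every minute
	}, "MyCronWorkflow") // placeholder workflow type name
	return err
}

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create client", err)
	}
	defer c.Close()

	if err := startCronWorkflow(context.Background(), c); err != nil {
		log.Fatalln("unable to start cron workflow", err)
	}
}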


Thank you @loren! Is there a way to change this 500 value through config? I didn't find anything in the dynamic config.

I think it might not be configurable. Are you running into issues with it?

@loren We noticed our schedule ended up with a WorkflowExecutionTerminated error because it hit the event history size limit before Continuing-As-New. When checking the event history, we noticed a large number of SideEffect marker events between two actions (between WorkflowTaskStarted events, for example). Do you know what those events are, and is there any workaround or solution?

BTW, we are on Temporal server 1.19.1 if that matters; not sure if upgrading would help in this case.
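
For anyone debugging something similar, one way to count the markers is to walk the history with the Go SDK's GetWorkflowHistory. A rough sketch (the workflow ID below is a placeholder for the scheduler workflow on your Namespace):

package main

import (
	"context"
	"fmt"
	"log"

	enumspb "go.temporal.io/api/enums/v1"
	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create client", err)
	}
	defer c.Close()

	// Walk the full event history of the scheduler workflow and count
	// MarkerRecorded events whose marker name is "SideEffect".
	iter := c.GetWorkflowHistory(
		context.Background(),
		"temporal-sys-scheduler:my-schedule", // placeholder workflow ID
		"",                                   // empty run ID = latest run
		false,                                // not a long poll
		enumspb.HISTORY_EVENT_FILTER_TYPE_ALL_EVENT,
	)

	sideEffects := 0
	for iter.HasNext() {
		event, err := iter.Next()
		if err != nil {
			log.Fatalln("error reading history", err)
		}
		if attrs := event.GetMarkerRecordedEventAttributes(); attrs != nil && attrs.GetMarkerName() == "SideEffect" {
			sideEffects++
		}
	}
	fmt.Println("SideEffect markers:", sideEffects)
}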

Sample SideEffect event:

{
      "eventId": "41546",
      "eventTime": "2023-11-20T22:58:56.161406436Z",
      "eventType": "MarkerRecorded",
      "version": "0",
      "taskId": "112207209",
      "workerMayIgnore": false,
      "markerRecordedEventAttributes": {
        "markerName": "SideEffect",
        "details": {
          "data": {
            "payloads": [
              {
                "metadata": {
                  "encoding": "anNvbi9wbGFpbg=="
                },
                "data": "eyJOb21pbmFsIjoiMjAyMy0xMS0yMFQyMjo0NjowMFoiLCJOZXh0IjoiMjAyMy0xMS0yMFQyMjo0NjowMFoifQ=="
              }
            ]
          },
          "side-effect-id": {
            "payloads": [
              {
                "metadata": {
                  "encoding": "anNvbi9wbGFpbg=="
                },
                "data": "MzgzMjI="
              }
            ]
          }
        },
        "workflowTaskCompletedEventId": "33310",
        "header": null,
        "failure": null
      }
    }

Ah, apologies, you ran into a known bug. It should be fixed in the upcoming 1.22.3 release.

Cool, thanks for the reply! We will try upgrading to a newer version. BTW, do you have the GitHub issue link for this bug? Just curious to take a look, thanks!

Update: never mind, found the relevant PRs.

There isn’t one for this bug, sorry!