Callbacks for progress updates/intermediate results from activity/workflow to an external API

Hello, in my case I have the following:

  • A web backend, in python flask
  • A long-running computation graph made up of multiple Python executables. Some executables run in parallel, some are chained together. The computation graph (in other words, the orchestration of these executables) is written in Python as multiple Celery tasks that trigger the executables as subprocesses, merge their results in other Celery tasks, and so on.

The backend triggers the long-running tasks and Celery starts executing them. The backend then polls for progress every second (we use current_task.update_state inside the Celery tasks to expose it). When the computation graph finishes, the poll returns the end result instead. (See the image in the post below.)
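Roughly, a simplified sketch of what we do today (the task name, route, and broker URLs below are illustrative, not our real code):

```python
# Simplified sketch of the current Flask + Celery setup; names/URLs are illustrative.
import subprocess

from celery import Celery, current_task
from celery.result import AsyncResult
from flask import Flask, jsonify

celery_app = Celery("comp_graph",
                    broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/1")
flask_app = Flask(__name__)


@celery_app.task
def run_executable(executable, args):
    # One node of the computation graph: run the executable as a subprocess
    # and expose progress through the task state that the backend polls.
    current_task.update_state(state="PROGRESS", meta={"percent": 0})
    completed = subprocess.run([executable, *args], capture_output=True, text=True)
    current_task.update_state(state="PROGRESS", meta={"percent": 100})
    return completed.stdout


@flask_app.route("/status/<task_id>")
def status(task_id):
    # Polled every second; returns progress metadata or the final result.
    res = AsyncResult(task_id, app=celery_app)
    if res.successful():
        return jsonify({"state": res.state, "result": res.result})
    if res.state == "PROGRESS":
        return jsonify({"state": res.state, "meta": res.info})
    return jsonify({"state": res.state})
```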

We would like to move this entire operation to Temporal as a separate service, refactoring the computation graph and the communication mechanisms along the way (reasons include code maintainability, better separation of concerns, workflow-engine reliability, and so on).

Instead of polling, we need async callbacks to the backend that let it know about progress changes, intermediate results, and the finished result. See the picture below:
(image: backend_temporal)

Here are my questions:

  • Does it make sense to use a workflow, or an activity? (We’ll need some nesting here due to the nature of the computation graph, i.e. workflows that trigger multiple other workflows, activities that trigger other activities, or a mixture of both.) Does either of them support some kind of “workflow/activity metadata” for progress tracking?
  • How can we achieve this callback mechanism? We would like to deploy the backend and the computation graph in separate Docker containers. Do we also need to run a Temporal worker on the backend side?

Thank you in advance :pray:

Current state of the backend-Celery setup (couldn’t upload two images in the first post above):
(image: backend_celery)

I would model these executables as activities. A long-running Temporal activity should heartbeat periodically to indicate that it is still alive. Activity progress can be included in the heartbeat details. If an activity fails (because of a missed heartbeat, for example) and is retried, the last reported details are available to the new attempt.
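A minimal sketch of such an activity with the Python SDK (the subprocess handling and the progress numbers are just placeholders):

```python
# Minimal sketch of a heartbeating activity (Python SDK); details are illustrative.
import asyncio

from temporalio import activity


@activity.defn
async def run_executable(executable: str, args: list[str]) -> int:
    # If a previous attempt heartbeated before failing, its last reported
    # details are available here, so this attempt could resume from them.
    previous = activity.info().heartbeat_details
    if previous:
        activity.logger.info("Last reported progress: %s", previous[0])

    proc = await asyncio.create_subprocess_exec(executable, *args)
    percent = 0
    while True:
        try:
            # Wait a bit for the subprocess to finish, heartbeating in between.
            await asyncio.wait_for(proc.wait(), timeout=5)
            activity.heartbeat({"percent": 100})
            return proc.returncode
        except asyncio.TimeoutError:
            percent = min(percent + 1, 99)  # placeholder progress estimate
            # Liveness signal plus progress; the last details survive retries.
            activity.heartbeat({"percent": percent})
```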

  • Does it make sense to use a workflow, or an activity? (We’ll need some nesting here due to the nature of the computation graph, i.e. workflows that trigger multiple other workflows, activities that trigger other activities, or a mixture of both.) Does either of them support some kind of “workflow/activity metadata” for progress tracking?

Activities cannot directly call other activities; that is always done by workflows. So a workflow calls the first activity and then the second one. A workflow can call a child workflow if a sub-graph is complex or hosted by another process.
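For example, the orchestration could look roughly like this in the Python SDK (the activity/workflow names, timeouts, and the child workflow are illustrative):

```python
# Illustrative workflow orchestrating part of the computation graph (Python SDK).
import asyncio
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class CompGraphWorkflow:
    @workflow.run
    async def run(self, input_path: str) -> str:
        # Parallel branch of the graph: run two executables concurrently.
        branch_a, branch_b = await asyncio.gather(
            workflow.execute_activity(
                "run_executable",
                args=["executable_a", [input_path]],
                start_to_close_timeout=timedelta(hours=1),
                heartbeat_timeout=timedelta(seconds=30),
            ),
            workflow.execute_activity(
                "run_executable",
                args=["executable_b", [input_path]],
                start_to_close_timeout=timedelta(hours=1),
                heartbeat_timeout=timedelta(seconds=30),
            ),
        )
        # Chained step: merge the branch results in another activity.
        merged = await workflow.execute_activity(
            "merge_results",
            args=[branch_a, branch_b],
            start_to_close_timeout=timedelta(minutes=10),
        )
        # A complex or separately hosted sub-graph can be a child workflow.
        return await workflow.execute_child_workflow("PostProcessWorkflow", merged)
```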

  • How can we achieve this callback mechanism? We would like to deploy the backend and the computation graph in separate Docker containers. Do we also need to run a Temporal worker on the backend side?

You can deploy workflows and activities in different containers if you want. Temporal also supports workflows receiving async events (we call them signals), but I don’t think you need them here; activities alone would be enough.
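On the backend side you only need a Temporal client, not a worker, to start the workflow and get its result. A rough sketch (the address, workflow id, and task queue name below are placeholders):

```python
# Rough sketch: the Flask backend uses only a Temporal client, no worker.
import asyncio

from flask import Flask, jsonify
from temporalio.client import Client

app = Flask(__name__)
TEMPORAL_ADDRESS = "temporal:7233"  # placeholder for your deployment


async def start_comp_graph(input_path: str) -> str:
    # A real service would reuse one client/event loop instead of reconnecting.
    client = await Client.connect(TEMPORAL_ADDRESS)
    handle = await client.start_workflow(
        "CompGraphWorkflow",
        input_path,
        id=f"comp-graph-{input_path}",
        task_queue="comp-graph-task-queue",
    )
    return handle.id


async def fetch_result(workflow_id: str):
    # Blocks until the workflow completes (long-poll style).
    client = await Client.connect(TEMPORAL_ADDRESS)
    return await client.get_workflow_handle(workflow_id).result()


@app.route("/compute/<path:input_path>", methods=["POST"])
def compute(input_path):
    return jsonify({"workflow_id": asyncio.run(start_comp_graph(input_path))})


@app.route("/result/<workflow_id>")
def result(workflow_id):
    return jsonify({"result": asyncio.run(fetch_result(workflow_id))})
```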
