All of these scenarios are feasible and have been used by different Temporal users. Here are some reasons to choose one over another.
- I could first implement REST interfaces in each service that trigger the necessary steps, and then implement activities that call these endpoints in some central place. I would then create a workflow calling these activities and register it with a worker running in a separate service.
This approach has a serious drawback: there is no flow control between the Temporal activities and the REST service. For example, if the REST service is down, the activities that call it still execute, leading to errors in the logs, retries, load on the system, and so on. If the REST service is overloaded, the activities still execute at whatever rate they are scheduled. When a service hosts activities directly, they execute only when the service is available and has spare capacity, because activities are dispatched through Task Queues.
Another disadvantage of the REST-based approach is that it doesn't directly support long-running operations, while directly hosted Temporal Activities can be long running and support heartbeating.
The only time we recommend this approach is when you don't have control over the other services, for example when you call a third-party API or an existing service owned by another team that is not yet using Temporal.
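To make the flow-control point concrete, here is a minimal plain-Python model (this is not the Temporal SDK; the classes and method names are invented for illustration) contrasting push-style REST dispatch, where every call hits the service immediately, with pull-style Task Queue dispatch, where work waits until a worker polls for it:

```python
# Conceptual sketch only -- not Temporal API. It models why a pull-based
# Task Queue gives flow control that direct REST calls lack.
from collections import deque

class RestService:
    """Push model: every call reaches the service immediately."""
    def __init__(self, up=True):
        self.up = up
        self.errors = 0

    def call(self, request):
        if not self.up:
            self.errors += 1          # caller sees failures, retries, log noise
            raise ConnectionError("service down")
        return f"handled {request}"

class TaskQueueService:
    """Pull model: work sits in a queue until the worker polls for it."""
    def __init__(self, up=True):
        self.up = up
        self.queue = deque()

    def schedule(self, request):
        self.queue.append(request)    # scheduling succeeds even while down

    def poll_one(self):
        if self.up and self.queue:    # runs only when the service has capacity
            return f"handled {self.queue.popleft()}"
        return None

# While the service is down: REST calls fail loudly, queued tasks just wait.
rest = RestService(up=False)
tq = TaskQueueService(up=False)
for i in range(3):
    try:
        rest.call(i)
    except ConnectionError:
        pass
    tq.schedule(i)

# Once the service recovers, the queued work drains at the worker's own pace.
tq.up = True
results = [tq.poll_one() for _ in range(3)]
```

After the outage, `rest.errors` is 3 while the task-queue side has lost nothing: all three tasks complete as soon as the worker comes back.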
- I could implement activities and workflows for each of the services, where each workflow would only do what has to be done in that particular service. The workers would also run within the services, so I would be able to directly access the database within the activities (??). In this scenario, one of the workflows would be the parent workflow that is called first and that would trigger the other workflows as child workflows.
Temporal brings the maximum value with this approach. It serves as a service mesh for invocations of child workflows and activities hosted by different services. All invocations are flow controlled, and operations can be long running. This setup also has fewer moving parts and is more secure, as no direct RPC calls between services are needed.
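As a rough sketch of scenario 2 (plain Python, not real Temporal SDK code; the service names, steps, and order IDs are all made up for illustration), a parent workflow only sequences child workflows, while each child runs inside the service that owns its data, so its activities can touch that service's database directly:

```python
# Conceptual sketch only -- not Temporal API. Each "service" hosts its own
# child workflow; the parent just orchestrates.
class Service:
    def __init__(self, name):
        self.name = name
        self.db = {}                       # service-local database

class InventoryService(Service):
    def reserve(self, order_id):
        # activity: direct DB access, since it runs in this service's worker
        self.db[order_id] = "reserved"
        return True

class PaymentService(Service):
    def charge(self, order_id):
        self.db[order_id] = "charged"
        return True

def inventory_child_workflow(svc, order_id):
    # child workflow scoped to exactly one service's responsibilities
    return svc.reserve(order_id)

def payment_child_workflow(svc, order_id):
    return svc.charge(order_id)

def order_parent_workflow(inventory, payment, order_id):
    # the parent only sequences children; in real Temporal each child would
    # be dispatched through its own service's Task Queue, flow controlled
    ok = inventory_child_workflow(inventory, order_id)
    return ok and payment_child_workflow(payment, order_id)
```

In real Temporal code each child workflow would be started on the Task Queue polled by the worker embedded in its service, which is what makes every cross-service hop flow controlled without any direct RPC between the services.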
- Similar to scenario 2, but with a central parent workflow whose worker runs in some central place (as in scenario 1).
To me this is the same as scenario 2, just with a dedicated orchestration service. The decision on how to partition your application into specific services is very use-case specific.