Hi, I have a question regarding how to use temporal in a microservice environment. Given that I have a backend with 3 microservices (calendar, notifications, users) and each of the microservices has its own database. If I want to implement for instance the user deletion process as a workflow (let’s say this process requires us to alter data in all three services), where should my workflow, actions and workers live?
As far as I’ve understood, there are several possibilities.
I could first implement REST interfaces in each service that trigger the necessary steps, and then implement activities that call these endpoints from some central place. I would then create a workflow calling these activities and register it with a worker running in a separate service.
I could implement activities and workflows for each of the services, where each workflow would only do what has to be done in that particular service. The workers would also run within the services, so I would be able to access the database directly from within the activities (??). In this scenario, one of the workflows would be the parent workflow that is called first and that would trigger the other workflows as child workflows.
Similar to scenario 2, but with a central parent workflow whose worker runs in some central place (as in scenario 1).
Is one of the scenarios even feasible? Thanks for any help!
All of the scenarios are feasible and have been used by different Temporal users. Here are some reasons to choose one over another.
I could first implement REST interfaces in each service that trigger the necessary steps, and then implement activities that call these endpoints from some central place. I would then create a workflow calling these activities and register it with a worker running in a separate service.
This approach has a serious drawback: there is no flow control between the REST service and the Temporal activities. For example, if the REST service is down, the activities that call it are still executed, leading to errors in the logs, retries, load on the system, etc. If the REST service is overloaded, the activities are still executed at whatever rate they are scheduled. When a service hosts activities directly, they are executed only when the service is available and has spare capacity to execute them, because activities are dispatched through Task Queues.
Another disadvantage of the REST-based approach is that it doesn't work well for long-running operations, while hosted Temporal Activities can be long running and support heartbeating.
The only time we recommend this approach is when you don’t have control over other services. For example, you call some third party API or an existing service owned by another team that is not yet using Temporal.
I could implement activities and workflows for each of the services, where each workflow would only do what has to be done in that particular service. The workers would also run within the services, so I would be able to access the database directly from within the activities (??). In this scenario, one of the workflows would be the parent workflow that is called first and that would trigger the other workflows as child workflows.
Temporal brings the maximum value with this approach. It effectively serves as a service mesh for invocations of child workflows and activities hosted by different services. All invocations are flow controlled, and operations can be long running. This setup also has fewer moving parts and is more secure, as no direct RPC communication takes place.
Similar to scenario 2, but with a central parent workflow whose worker runs in some central place (as in scenario 1).
To me this is the same as scenario 2, just with a dedicated orchestration service. The decision on how to partition your application into specific services is very use-case specific.
Thank you for your detailed answer maxim. Things are a lot clearer to me now. I think I’ll try approach 2 and 3 to see which one ‘feels’ better for my system.
Hi there, is there any sample code for this? I don't quite get it: if we use option 2 (activities & workflows in each service), how do we trigger the sequence?
E.g.

```
serviceA.do1stSomething
serviceB.do2ndSomething
serviceC.do3rdSomething
serviceA.do4thSomething
```
What I mean is, in the existing examples everything is written in one service, so the workflow is implemented like a common imperative method.
So approach 1 (calling a sync REST API) is straightforward based on the examples, but I don't see an example of the model suggested in approach 2.
The trick is to use a different task queue name for each service, then specify the corresponding task queue in the ActivityOptions when invoking the activity.
This thread is relevant to my question. We have numerous microservices (mostly Go, some Java) that communicate exclusively over gRPC. Hence, a significant number of protos are defined and shared among the microservices in a monorepo. How do we expose/use our protos in a Temporal solution?
We are looking at Temporal to help mitigate some of the problems we have with eventing and long-running processes that can fail.
@maxim If we implement option 1, then we have REST, gRPC, or something else, and we can see the service contract in an explicit form. For example, through Swagger we can see that there is a service A with a createOrder API taking params (email: string). If we implement option 2 or 3, how can I as a developer know that there is some activity a in service A and activity b in service B? Is there any way to display a single registry of all activities, their descriptions, and parameters somewhere?
Great observation! We recognize this limitation and are actively working on a project called Nexus, which would allow defining RPC interfaces (using gRPC, Swagger, or any other IDL). Each operation could take as long as necessary and be implemented as a workflow or an activity.