Temporal & GCP Pub/Sub - should we build an adapter or replace our Pub/Sub interactions?

Hi there, our architecture currently uses GCP Pub/Sub heavily for async messaging between services. We currently have ‘saga-esque’ interactions between our microservices to perform long-running processes etc.

To adopt Temporal I see two options (perhaps there are others; please share if so).

Option 1:

Write a Pub/Sub adapter layer that listens to every topic any workflow could be concerned with. The adapter then forwards each message to the corresponding workflow as a signal (a sketch follows the pros/cons below).

Pro: Existing service code remains largely the same
Con: Mapping a Pub/Sub event → workflow requires including the workflow ID in the message attributes, which means changes to the publishing code.
Con: We lose flow control for activities that are started via HTTP.
Con: The workflow is more complex because it now depends on signals.
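
For illustration, a minimal sketch of what such an adapter process could look like, assuming publishers add a `workflowId` attribute to each message and that the workflow exposes an `assetUpdated` signal (the subscription and signal names are made up for this example):

```typescript
// pubsub-adapter.ts - a minimal sketch, not production code.
// Assumes publishers set a `workflowId` attribute on each message; the
// subscription name and `assetUpdated` signal name are made up.
import { PubSub, Message } from '@google-cloud/pubsub';
import { Client, Connection } from '@temporalio/client';

async function run() {
  const connection = await Connection.connect(); // defaults to localhost:7233
  const client = new Client({ connection });
  const pubsub = new PubSub();
  const subscription = pubsub.subscription('workflow-adapter-sub');

  subscription.on('message', async (message: Message) => {
    try {
      const { workflowId } = message.attributes;
      const payload = JSON.parse(message.data.toString());
      // Forward the Pub/Sub event to the waiting workflow as a signal.
      await client.workflow.getHandle(workflowId).signal('assetUpdated', payload);
      message.ack();
    } catch (err) {
      console.error('failed to forward event', err);
      message.nack(); // let Pub/Sub redeliver
    }
  });
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```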

Option 2:

Refactor each service's logic into its own (child) workflow that the parent workflow can await until it completes (see the sketch after the pros/cons below).

Pro: The workflow logic is now simple; we can just await the result of the child workflow.
Con: Work is required to create the child worker/workflows/activities, though these are mostly wrappers around the existing logic.
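
For comparison, a rough sketch of the parent side under this option, assuming the transload logic has been wrapped as a `transloadAssetWorkflow` hosted by an assets-service worker on its own task queue (workflow and queue names are made up):

```typescript
// parent workflow - a rough sketch, not production code.
// Assumes the assets service hosts a `transloadAssetWorkflow` child workflow
// on its own task queue; workflow and queue names are made up.
import { executeChild } from '@temporalio/workflow';

export async function docToVideoWorkflow(docUrl: string): Promise<void> {
  // The parent simply awaits the child; Temporal takes care of durability,
  // retries, and timeouts of the long-running step.
  const asset = await executeChild('transloadAssetWorkflow', {
    taskQueue: 'assets-service', // queue served by the assets-service worker
    args: [docUrl],
  });

  // ...subsequent steps (render, encode, publish, ...) would use `asset`
  // and follow the same pattern.
}
```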

Concrete example:

Doc-to-video service

  • Home of the workflow in question; its job is to convert a document to a video, which requires orchestrating calls to our existing microservices.
  • The first step in the workflow is to call the assets-service transload API.
  • There will be several other steps similar to the asset call: sometimes a step starts with an HTTP call and completion is notified via Pub/Sub, and sometimes we publish an event to a given topic to start the async task (see the sketch after the service descriptions below).

Assets service

  • Currently exposes an HTTP API to begin transloading an asset into our buckets and DB.
  • Completion of the task is currently only determined by listening for the asset-updated event on the assets-service-topic.
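
To make step 1 concrete, a rough sketch of how the doc-to-video workflow could wait for the asset-updated event under Option 1, assuming the adapter above forwards it as an `assetUpdated` signal and that `startTransload` is an activity wrapping the assets-service HTTP call (both names are assumptions):

```typescript
// doc-to-video workflow - a rough sketch of step 1 under Option 1.
// `startTransload` is an assumed activity wrapping the assets-service HTTP API;
// the `assetUpdated` signal is assumed to be sent by the Pub/Sub adapter.
import { proxyActivities, defineSignal, setHandler, condition } from '@temporalio/workflow';
import type * as activities from './activities';

export const assetUpdatedSignal = defineSignal<[{ assetId: string }]>('assetUpdated');

const { startTransload } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

export async function docToVideoWorkflow(docUrl: string): Promise<void> {
  let transloadedAssetId: string | undefined;
  setHandler(assetUpdatedSignal, ({ assetId }) => {
    transloadedAssetId = assetId;
  });

  // Step 1: kick off the transload via the assets-service HTTP API (activity)...
  await startTransload(docUrl);
  // ...then block until the adapter forwards the asset-updated event as a signal.
  await condition(() => transloadedAssetId !== undefined);

  // Steps 2..n would follow the same start-then-wait pattern.
}
```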

Other information that might be relevant:

  • All our microservices are written in TS/Node.js in one monorepo.
  • All are deployed to Cloud Run.
  • Each service creates pull-based subscriptions to the topics it wants to receive events from.

The final architecture doesn’t need Pub/Sub, as all communication can happen through Temporal. This assumes that the services host activities and child workflows directly.
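
Hosting activities and child workflows directly means each service also runs a Temporal worker; a minimal sketch for the assets service (task queue and module names are assumptions):

```typescript
// assets-service worker - a minimal sketch, assuming the existing transload
// logic is wrapped as activities in ./activities and the child workflows live
// in ./workflows. Names are made up.
import { Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'assets-service',
  });
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```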

I don’t advocate making such a big change from your current architecture all at once, so I would start by orchestrating services through Pub/Sub and then refactor them individually to the simpler architecture.

Note that you can create libraries in your workflow code to simplify it. For example, you can create a class that encapsulates the HTTP-request/async-reply interaction and gives the workflow the impression of a single synchronous call.
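
A rough sketch of what such a class might look like in the TypeScript SDK, again assuming the Option 1 adapter delivers completions as a signal carrying a request ID (every name here is made up, not an official API):

```typescript
// workflow-side helper - a rough sketch, not an official API.
// Assumes completions arrive as a `completion` signal carrying the requestId,
// and that `startAsyncTask` is an activity wrapping the HTTP call.
import { defineSignal, setHandler, condition, proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

export const completionSignal = defineSignal<[{ requestId: string; payload: unknown }]>('completion');

const { startAsyncTask } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

// Gives workflow code the impression of one synchronous call:
// "start via HTTP, then wait for the Pub/Sub-sourced completion signal".
export class AsyncServiceCall {
  private results = new Map<string, unknown>();

  constructor() {
    setHandler(completionSignal, ({ requestId, payload }) => {
      this.results.set(requestId, payload);
    });
  }

  async call(requestId: string, args: unknown): Promise<unknown> {
    await startAsyncTask(requestId, args);              // HTTP request, as an activity
    await condition(() => this.results.has(requestId)); // wait for the reply signal
    return this.results.get(requestId);
  }
}
```

Inside a workflow the transload step would then collapse to something like `const calls = new AsyncServiceCall(); const asset = await calls.call('transload-1', { docUrl });` (illustrative names only).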

Thanks for the reply!

I’m not sure what the class/libraries you described would look like. Would they use AsyncCompletionClient + taskId so that the activity itself can give the impression of a sync call?

Or do you mean encapsulating the signal/callback logic somehow?

If it’s the latter, do you have any examples of this? Even pseudo code would suffice to get the gist :slight_smile:

I meant encapsulating the request and reply through a signal. I don’t know TypeScript well enough to give a sample.

Take a look at this example of handling multiple signals coming into a workflow: Listening to event streams in a workflow - #2 by bergundy
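
For the gist, this is not the code from the linked post, just the general shape of the pattern it discusses: buffer incoming signals in the handler and drain them from the main workflow loop.

```typescript
// Rough sketch only - not the code from the linked post. Shows a workflow
// buffering a stream of incoming signals and processing them in order.
import { defineSignal, setHandler, condition } from '@temporalio/workflow';

export const eventSignal = defineSignal<[{ type: string; payload: unknown }]>('event');

export async function eventStreamWorkflow(): Promise<void> {
  const queue: Array<{ type: string; payload: unknown }> = [];
  let done = false;

  // The handler only buffers; the main loop below does the actual work.
  setHandler(eventSignal, (event) => {
    queue.push(event);
  });

  while (!done) {
    await condition(() => queue.length > 0);
    const event = queue.shift()!;
    if (event.type === 'finished') {
      done = true;
    }
    // ...dispatch other event types to the relevant handling logic here.
  }
}
```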