Temporal Dynamic Rate Limiting per Activity

Hello, I’m experimenting with Temporal to build a DSL-based workflow execution framework aimed at simplifying workflow creation for my company.
At the outset, I want to thank the team who have put together a wonderful piece of software that would otherwise be incredibly hard to build reliably from scratch.
Now, I have one generic workflow that accepts a DSL and dynamically calls one activity function.

The execution flow is driven dynamically by the DSL; the code simply acts as a generic interpreter of the flow the DSL defines.

This is pretty much a generic workflow calling a generic activity, depending on the DSL steps.
Within the activity, I call different microservices based on the values defined in the DSL (that’s the implementation part).

I always have only one activity function (and only one task queue).
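For concreteness, here is a minimal sketch of that interpreter idea with the Temporal specifics elided — the step schema (`Step`, `Target`) is hypothetical, and in the real framework each step would map to a single `workflow.ExecuteActivity` call on the one generic activity:

```go
package main

import "fmt"

// Step is a hypothetical DSL step: which microservice to call, and with what input.
type Step struct {
	Target string // microservice identifier from the DSL
	Input  string
}

// runStep stands in for the single generic activity. In the real framework this
// body would live in the activity and fan out to microservices based on Target.
func runStep(s Step) string {
	return fmt.Sprintf("called %s with %q", s.Target, s.Input)
}

// runWorkflow interprets the DSL sequentially, the way the generic workflow
// would invoke workflow.ExecuteActivity once per step.
func runWorkflow(steps []Step) []string {
	var results []string
	for _, s := range steps {
		results = append(results, runStep(s))
	}
	return results
}

func main() {
	dsl := []Step{
		{Target: "billing", Input: "invoice-42"},
		{Target: "email", Input: "notify"},
	}
	for _, r := range runWorkflow(dsl) {
		fmt.Println(r)
	}
}
```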

This seems to work well on my laptop, so that’s encouraging.

2 questions:

  1. Are there any downsides or caveats to this approach (in terms of scalability, recovery, etc.)? As a side note, I tried killing the worker and the Temporal server, and execution resumed from the point where it left off once they were back up - so that’s promising so far - but I’d still like to hear about any gotchas I need to know.

  2. I want rate limiting applied to certain activity calls (dynamically). In my case it’s always the one activity function, but depending on the values defined in my DSL, I want to be able to rate limit a given activity call. Is this possible?

  1. This is a very common and supported pattern, so I don’t see any apparent caveats.

  2. Temporal supports rate limiting per task queue, so you can dispatch the activities that require rate limiting to a separate task queue. This assumes the activity runs through its own worker.
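In the Go SDK this looks roughly like the sketch below — the queue name, rate, and timeout are made up, and the two pieces that matter are `TaskQueueActivitiesPerSecond` in the worker options (enforced server-side across all workers polling that queue) and routing the activity there via `TaskQueue` in its activity options:

```go
// Worker side: a dedicated worker polls the rate-limited queue.
// TaskQueueActivitiesPerSecond is a server-side limit shared by every
// worker polling this task queue.
w := worker.New(c, "rate-limited-queue", worker.Options{
	TaskQueueActivitiesPerSecond: 10, // hypothetical limit
})
w.RegisterActivity(GenericActivity)

// Workflow side: route only the DSL steps that need limiting to that queue.
ao := workflow.ActivityOptions{
	TaskQueue:           "rate-limited-queue",
	StartToCloseTimeout: time.Minute,
}
ctx = workflow.WithActivityOptions(ctx, ao)
err := workflow.ExecuteActivity(ctx, GenericActivity, stepInput).Get(ctx, &result)
```

Since the task queue is chosen per activity invocation, the workflow can decide at runtime — based on the DSL values — whether a step goes to the default queue or the rate-limited one.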

Great, thanks @maxim .

I’m wondering: is it a common pattern to run multiple worker instances within the same worker process (same program), even when they use the same task queue?
What’s the real advantage of this?

Or does every worker instance spin up its own long-polling loop against the task queue?

Sort of like a work-stealing pattern - the more instances, the more throughput?

Anything to watch out for?

I’m wondering: is it a common pattern to run multiple worker instances within the same worker process (same program), even when they use the same task queue?
What’s the real advantage of this?

We recommend using a single worker per process per task queue. There are only drawbacks to having more than one worker for the same task queue in a process.
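A sketch of the recommended shape, assuming the Go SDK — one worker per process for the queue, with throughput tuned via the worker’s concurrency options rather than extra worker instances (the queue name and tuning values are hypothetical):

```go
c, err := client.Dial(client.Options{}) // connects to localhost:7233 by default
if err != nil {
	log.Fatalln(err)
}
defer c.Close()

// One worker per process per task queue. A single worker already runs multiple
// pollers and executes tasks concurrently; raise these knobs for throughput
// instead of creating more workers on the same queue.
w := worker.New(c, "dsl-task-queue", worker.Options{
	MaxConcurrentActivityExecutionSize: 100, // hypothetical tuning values
	MaxConcurrentActivityTaskPollers:   4,
})
w.RegisterWorkflow(GenericWorkflow)
w.RegisterActivity(GenericActivity)
log.Fatalln(w.Run(worker.InterruptCh()))
```

To scale out, run more worker processes (each with its single worker on the queue), not more worker instances inside one process.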