Share resources between workers

Hi everyone,

I am developing a distributed port scanner. My Temporal cluster contains multiple scheduled workflows, each with an IP address to scan as its workflow argument.

I have multiple VPS instances set up for this purpose, each with multiple public network interfaces.
I am planning to deploy multiple workers on each of these VPS instances.

I am trying to share those network interfaces in such a way that, while a worker is using an interface to process a task, the other workers cannot use that interface.

I saw multiple posts linking to the mutex workflow example, but it does not look appropriate, as it targets a single shared resource (correct me if I’m wrong).

How would you design your workflows / activities in such a way that:

  • Each worker can pick a network interface as soon as one is available
  • Workers wait for an interface to become available if all of them are locked

We previously implemented this using a Redis instance shared between the workers, and I’m trying to see if we can get rid of it by moving that locking logic into workflows.

Looking forward to your response.

Use Sessions, as they allow you to limit the number of parallel sessions per worker.
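
Roughly like this in the Go SDK (an untested sketch; the task queue name, ScanWorkflow/ScanActivity, and the limit of 3 are placeholders, while EnableSessionWorker, MaxConcurrentSessionExecutionSize, and the session APIs are the real SDK surface):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
	"go.temporal.io/sdk/workflow"
)

// ScanActivity is a placeholder for the real port-scan activity.
func ScanActivity(ctx context.Context, ip string) error { return nil }

// ScanWorkflow runs its activities inside a session, so they all stay on one
// worker and count against that worker's concurrent-session limit.
func ScanWorkflow(ctx workflow.Context, ip string) error {
	so := &workflow.SessionOptions{
		CreationTimeout:  time.Minute,
		ExecutionTimeout: 30 * time.Minute,
	}
	sessionCtx, err := workflow.CreateSession(ctx, so)
	if err != nil {
		return err
	}
	defer workflow.CompleteSession(sessionCtx)

	ao := workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Minute}
	sessionCtx = workflow.WithActivityOptions(sessionCtx, ao)
	return workflow.ExecuteActivity(sessionCtx, ScanActivity, ip).Get(sessionCtx, nil)
}

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// One session "slot" per network interface this worker process owns.
	w := worker.New(c, "port-scan-task-queue", worker.Options{
		EnableSessionWorker:               true,
		MaxConcurrentSessionExecutionSize: 3, // placeholder: number of interfaces on this host
	})
	w.RegisterWorkflow(ScanWorkflow)
	w.RegisterActivity(ScanActivity)
	log.Fatal(w.Run(worker.InterruptCh()))
}
```

Each session counts against MaxConcurrentSessionExecutionSize of the worker that created it, so a worker that owns N interfaces can cap itself at N concurrent scans.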

AFAIK, Sessions are used to limit per-process concurrency or to execute multiple activities on the same host (like local activities), right?

I’m afraid it would not work if there are multiple worker containers on each VPS, am I right?

I was thinking of deploying multiple workers that share a pool of the VPS network interfaces, locking and unlocking interfaces in that pool accordingly.

I saw multiple posts linking to the mutex workflow example, but it does not look appropriate, as it targets a single shared resource (correct me if I’m wrong).

You can create a mutex workflow instance per resource.

The mutex workflow approach works as long as the rate of locking/unlocking per resource is not high.
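
Here is a rough, untested Go sketch of what I mean, loosely following the mutex sample: one lock workflow per interface, addressed by a workflow ID derived from the interface name and created on first use via signal-with-start. LockWorkflow, AcquireInterfaceActivity, the signal names, and the "iface-lock:" prefix are my own placeholders:

```go
package ifacelock

import (
	"context"
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/workflow"
)

const (
	acquireSignal = "acquire" // carries the requester's workflow ID
	releaseSignal = "release" // sent by the current holder when it is done
	grantedSignal = "granted" // sent back to the requester, carries the interface name
)

// LockWorkflow runs as one instance per network interface (workflow ID such as
// "iface-lock:vps1-eth1") and grants the interface to one requester at a time.
func LockWorkflow(ctx workflow.Context, ifaceName string) error {
	acquireCh := workflow.GetSignalChannel(ctx, acquireSignal)
	releaseCh := workflow.GetSignalChannel(ctx, releaseSignal)

	for i := 0; i < 1000; i++ { // bound the history, then continue-as-new
		var requesterID string
		acquireCh.Receive(ctx, &requesterID)

		// Tell the requester it now owns this interface.
		if err := workflow.SignalExternalWorkflow(
			ctx, requesterID, "", grantedSignal, ifaceName).Get(ctx, nil); err != nil {
			continue // requester is already gone; serve the next one
		}

		// Block until the holder releases the interface.
		// A production version would also time out here in case the holder dies.
		var released bool
		releaseCh.Receive(ctx, &released)
	}
	return workflow.NewContinueAsNewError(ctx, LockWorkflow, ifaceName)
}

// AcquireInterfaceActivity uses signal-with-start so the lock workflow for an
// interface is created lazily on first use.
func AcquireInterfaceActivity(ctx context.Context, ifaceName, requesterWorkflowID string) error {
	c, err := client.Dial(client.Options{}) // in practice, reuse a shared client
	if err != nil {
		return err
	}
	defer c.Close()
	_, err = c.SignalWithStartWorkflow(ctx,
		"iface-lock:"+ifaceName, // one workflow instance per resource
		acquireSignal, requesterWorkflowID,
		client.StartWorkflowOptions{TaskQueue: "port-scan-task-queue"},
		LockWorkflow, ifaceName)
	return err
}

// ScanWithLock shows the caller side: request the interface, wait until it is
// granted, run the scan, then release it.
func ScanWithLock(ctx workflow.Context, ifaceName, ip string) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	actx := workflow.WithActivityOptions(ctx, ao)
	myID := workflow.GetInfo(ctx).WorkflowExecution.ID

	if err := workflow.ExecuteActivity(
		actx, AcquireInterfaceActivity, ifaceName, myID).Get(ctx, nil); err != nil {
		return err
	}
	var granted string
	workflow.GetSignalChannel(ctx, grantedSignal).Receive(ctx, &granted) // waits for the lock

	// ... run the actual scan activity for `ip` on the granted interface here ...
	_ = ip

	return workflow.SignalExternalWorkflow(
		ctx, "iface-lock:"+ifaceName, "", releaseSignal, true).Get(ctx, nil)
}
```

Because the workflow ID encodes the resource, Temporal guarantees a single lock workflow per interface, and "wait until one is available" falls out of the FIFO signal handling instead of a shared Redis lock.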