This is my use case:
Activity 2 runs up to 45 iterations. In each iteration, it spawns up to 100 workflows that execute in parallel. Each workflow is fed a unique set of input IDs, based on which it fetches its data from the DB.
Once the workflows/activities complete, their outputs are put in a cache so they can be used in the next iteration. It's essentially a map-reduce use case.
In each iteration, each of the 100 workflows fetches its data from the DB. The data is too large to keep in Redis/Memcached, so I am trying to figure out: can I run the workflow at the jth position on a single machine for all 45 iterations, so that the data kept in memory on that machine for that element can be reused?
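One common way to get this kind of machine affinity is to route the jth workflow to a per-position task queue that only one worker machine listens on, and keep a process-local cache on that machine. The sketch below is a minimal, framework-free illustration of that idea, assuming such routing is available in whatever workflow engine is used; `task_queue_for`, `InMemoryCache`, and `fetch_from_db` are hypothetical names, not part of any SDK.

```python
# Sketch: pin the j-th workflow to one machine via a deterministic
# task-queue name, so a per-process cache survives all 45 iterations.
# All names here are illustrative assumptions, not a real SDK API.

def task_queue_for(position: int) -> str:
    """Deterministic task-queue name for the workflow at position j.
    The worker on machine j subscribes only to this queue, so every
    iteration's j-th workflow lands on the same host."""
    return f"shard-{position}"

class InMemoryCache:
    """Process-local cache. It outlives a single iteration because the
    same machine keeps serving the same position."""
    def __init__(self):
        self._store = {}

    def get_or_fetch(self, input_id, fetch_fn):
        if input_id not in self._store:
            self._store[input_id] = fetch_fn(input_id)  # DB hit only once
        return self._store[input_id]

cache = InMemoryCache()
db_calls = []

def fetch_from_db(input_id):
    """Stand-in for the expensive DB fetch done by each workflow."""
    db_calls.append(input_id)
    return {"id": input_id, "payload": "..."}

# Iterations 1..45 at position j=7 all reuse the cached row.
for iteration in range(45):
    row = cache.get_or_fetch("id-7", fetch_from_db)

print(task_queue_for(7))  # shard-7
print(len(db_calls))      # 1 -- fetched once, reused 44 times
```

If this is Temporal, the equivalent built-in mechanisms would be worker sessions or worker-specific task queues, which pin a sequence of activities to one host; the caching idea stays the same either way.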
