Hi folks. I am building a proof-of-concept using Temporal. At $employer, we need to write ~100 messages/s (peak) to a datastore asynchronously with retries. Typically, we would put these into RabbitMQ and have a worker consume them, then hand-roll retries, alerting, tracing, etc.
At a high level, some of the options I can think of are:
- Start a new workflow for each distinct business entity. Clients signal the corresponding workflow based on the business entity in each message. (EDITED)
- Create a pool of workflows that read from another queue (RabbitMQ). Each workflow reads a message from the queue, reliably writes it to a remote DB, then calls ContinueAsNew to pick up the next message.
- A long-running workflow that receives messages as signals and executes write activities. I suspect this needs to ContinueAsNew periodically to keep the event history bounded.
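For the third option, here is a rough sketch of what I have in mind using the Go SDK. This is only an illustration under my own assumptions: the signal name `write-message`, the `Message` type, the `maxPerRun` threshold, and the `WriteActivity` helper are all placeholders I made up, not anything settled.

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/workflow"
)

// Message is a placeholder for whatever the clients actually send.
type Message struct {
	EntityID string
	Payload  []byte
}

// WriteActivity performs the remote DB write (driver call omitted).
// Temporal retries it per the RetryPolicy, so no hand-rolled retry loop.
func WriteActivity(ctx context.Context, msg Message) error {
	return nil
}

// WriterWorkflow drains a signal channel, runs a write activity per
// message, and continues-as-new before the event history grows too large.
func WriterWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: 30 * time.Second,
		RetryPolicy: &temporal.RetryPolicy{
			InitialInterval: time.Second,
			MaximumAttempts: 10,
		},
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	ch := workflow.GetSignalChannel(ctx, "write-message")
	const maxPerRun = 1000 // arbitrary threshold before ContinueAsNew

	for processed := 0; processed < maxPerRun; processed++ {
		var msg Message
		ch.Receive(ctx, &msg)
		if err := workflow.ExecuteActivity(ctx, WriteActivity, msg).Get(ctx, nil); err != nil {
			return err
		}
	}

	// Drain any buffered signals before continuing as new so none are
	// stranded in this run's channel.
	for {
		var msg Message
		if !ch.ReceiveAsync(&msg) {
			break
		}
		if err := workflow.ExecuteActivity(ctx, WriteActivity, msg).Get(ctx, nil); err != nil {
			return err
		}
	}

	// Start a fresh run with an empty history.
	return workflow.NewContinueAsNewError(ctx, WriterWorkflow)
}
```

One thing I am unsure about here is whether signals can be lost around the ContinueAsNew boundary, hence the explicit drain loop, and whether ~100 signals/s to a single workflow is too hot for one run.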
Would love to hear your thoughts.