Can a workflow receive signals more than once?

Hi, I plan to use signals extensively in my workflow. I see there is a limit to how many signals a workflow can consume, and the recommendation seems to be to call ContinueAsNew periodically with an async signal drain, as Maxim showed here. (Async Drain Signals?)

I tried this example (Async Drain Signals?) in my local Cadence setup.

  1. I sent 15 signals to the workflow. (10 signals are read with a blocking Receive in a for loop, and then I drain the channel asynchronously using ReceiveAsync.)
  2. I see that some of the signals are received multiple times when I drain the channel using ReceiveAsync. Is this expected? Is this Cadence-specific, or does Temporal have the same behavior?
  3. Can a workflow receive the same signal multiple times?

Sorry, I have not tried this on Temporal yet; I am having issues with my local setup.

Welcome to the forum! Normally in Temporal, a Signal will be delivered once to the Workflow. It is possible in a rare edge case for it to be delivered more than once, which is why we recommend:

If multiple deliveries of a Signal would be a problem for your Workflow, add idempotency logic to your Signal handler that checks for duplicates.
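
For example, here is a minimal sketch of such a dedup check in Go. It assumes the signal payload carries a caller-supplied unique ID; the OrderSignal struct and its RequestID field are hypothetical, not from the thread, and any stable unique ID would work:

import "go.temporal.io/sdk/workflow"

// Hypothetical payload: callers attach a unique RequestID to every signal.
type OrderSignal struct {
	RequestID string
	Value     string
}

func IdempotentSignalsWorkflow(ctx workflow.Context) error {
	signalChan := workflow.GetSignalChannel(ctx, "Signal1")
	// IDs already processed in this run; duplicate deliveries are skipped.
	seen := make(map[string]bool)

	for i := 0; i < 25; i++ {
		var sig OrderSignal
		signalChan.Receive(ctx, &sig)
		if seen[sig.RequestID] {
			continue // duplicate delivery, already handled
		}
		seen[sig.RequestID] = true
		// PROCESS SIGNAL
	}
	return nil
}

Note that if the workflow calls ContinueAsNew, the dedup state would need to be passed to the next run as a workflow argument to keep catching duplicates across runs.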

Thanks @loren for the response. I am seeing slightly weird behavior when I send signals to the workflow.

	log.Println("Sending signals")
	// Send signals in random order
	for signal := 0; signal < 100; signal++ {
		signalName := fmt.Sprintf("Signal%d", signal)
		err = c.SignalWorkflow(context.Background(), we.GetID(), we.GetRunID(), "Signal1", string(signalName))
		if err != nil {
			log.Fatalln("Unable to signals workflow", err)
		}
		log.Println("Sent " + string(signalName))
	}

In the workflow I read 25 signals first, then read the remaining signals by draining the channel.

func AwaitSignalsWorkflow(ctx workflow.Context) (err error) {
	signalChan := workflow.GetSignalChannel(ctx, "Signal1")

	for i := 0; i < 25; i++ {
		var signalVal string
		signalChan.Receive(ctx, &signalVal)
		workflow.GetLogger(ctx).Info("Received signal!", "signal", "Signal1", "value", signalVal)
		// PROCESS SIGNAL
	}
	// Drain signal channel asynchronously to avoid signal loss
	for {
		var signalVal string
		ok := signalChan.ReceiveAsync(&signalVal)
		if !ok {
			break
		}
		workflow.GetLogger(ctx).Info("Received signal!", "signal", "Signal1", "value", signalVal)
		// PROCESS SIGNAL
	}
	return workflow.NewContinueAsNewError(ctx, AwaitSignalsWorkflow)
}

But here, in the draining part, it keeps receiving the same signals again and again. Eventually it finishes receiving all 100 signals, but the number of duplicates received during draining is quite high. Is this a bug in ReceiveAsync?

Log excerpt showing signals from Signal23 onwards being received again:

2023/06/26 09:40:13 INFO  Received signal! Namespace default TaskQueue await_signals WorkerID 97871@MKD7THQX4N@ WorkflowType AwaitSignalsWorkflow WorkflowID await_signals_6f37599e-3e27-41dd-93b5-c2d9e9b9f566 RunID 190b7bf4-9808-45af-a22b-212fc73d4f9f Attempt 1 signal Signal1 value Signal92
2023/06/26 09:40:13 INFO  Task processing failed with error Namespace default TaskQueue await_signals WorkerID 97871@MKD7THQX4N@ WorkerType WorkflowWorker Error UnhandledCommand
2023/06/26 09:40:13 INFO  Received signal! Namespace default TaskQueue await_signals WorkerID 97871@MKD7THQX4N@ WorkflowType AwaitSignalsWorkflow WorkflowID await_signals_6f37599e-3e27-41dd-93b5-c2d9e9b9f566 RunID 190b7bf4-9808-45af-a22b-212fc73d4f9f Attempt 1 signal Signal1 value Signal23
2023/06/26 09:40:13 INFO  Received signal! Namespace default TaskQueue await_signals WorkerID 97871@MKD7THQX4N@ WorkflowType AwaitSignalsWorkflow WorkflowID await_signals_6f37599e-3e27-41dd-93b5-c2d9e9b9f566 RunID 190b7bf4-9808-45af-a22b-212fc73d4f9f Attempt 1 signal Signal1 value Signal24

I’m not familiar with Go, maybe @tihomir knows? If not, you could submit the bug to sdk-go.

When a signal is received during the execution of a workflow task that decided to complete the workflow (continue-as-new counts as a completion for this behavior), the task completion fails with UnhandledCommand. The task is then retried, which allows the workflow to process the newly received signal. So what you are witnessing is expected and works as designed.

Note that a single workflow is not designed to process a sustained high rate of signals. That can lead to repeated UnhandledCommand failures, which would never let the workflow complete. In practice, it will complete once the total limit of allowed signals per workflow is reached.

Thanks Maxim and Loren for your input. I will think about a different architecture to avoid this problem.