Does using Signals to block Activity retries break determinism in workflows?

Hello,

I’m working with some very brittle APIs that cannot handle a barrage of requests, and I worry about the activities retrying too frequently. I could set an activity retry policy, but as an experiment I’d like to try an alternative: using Signals to pause any retries of an activity until I manually signal the workflow to try again. Here is the snippet of code I’m using:

for retry {
	err = workflow.ExecuteActivity(ctx, activities.QueryableActivity, wait, state.String()).Get(ctx, &result)
	if err != nil {
		// Log the failure and block until an operator sends a Signal.
		logger.Error("activity failed.", "Error", err)
		signalChan.Receive(ctx, &signalData)
		if signalData == "retry" {
			continue // Retry the activity
		} else if signalData == "die" {
			// Give up: mark the workflow as failed and return the activity error.
			state = FAILED
			return state.String(), err
		}
	} else {
		retry = false // Success: exit the loop
	}
}

I’m wondering whether the above code would break the determinism constraint for workflows. In my opinion it doesn’t, because I’ll either get the output I expect (the underlying activity is idempotent) or I’ll tell the workflow to fail.

Any insight, or scenarios y’all would suggest, to further test whether this could result in a non-deterministic workflow?

This code looks deterministic to me.
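Signal receives, like activity results, are recorded in the workflow’s event history and replayed in the same order, so blocking on the channel between attempts doesn’t introduce non-determinism. For reference, here’s a minimal sketch of how a loop like that could sit in a complete workflow function. The signal name "retry-control", the QueryableActivity stub, and capping the SDK’s automatic retries at one attempt (so your Signal is the only retry path) are all assumptions on my part, not your actual code:

package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/workflow"
)

// QueryableActivity is a stand-in for the real activity; hypothetical implementation.
func QueryableActivity(ctx context.Context, input string) (string, error) {
	return "ok", nil
}

func SignalGatedRetryWorkflow(ctx workflow.Context, input string) (string, error) {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
		// Assumption: cap automatic retries so the Signal loop is the only retry path.
		RetryPolicy: &temporal.RetryPolicy{MaximumAttempts: 1},
	})
	logger := workflow.GetLogger(ctx)

	// Hypothetical signal name; use whatever name your starter/operator actually sends.
	signalChan := workflow.GetSignalChannel(ctx, "retry-control")

	var result string
	for {
		err := workflow.ExecuteActivity(ctx, QueryableActivity, input).Get(ctx, &result)
		if err == nil {
			return result, nil
		}
		logger.Error("activity failed.", "Error", err)

		// Block until an operator decides: "retry" loops around, "die" gives up.
		var signalData string
		signalChan.Receive(ctx, &signalData)
		if signalData == "die" {
			return "", err
		}
	}
}

One thing to note with this shape: with automatic retries capped, the workflow simply sits blocked on Receive until someone signals it, which is presumably exactly the manual gate you want.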

By the way, have you considered using task queue rate limiting (worker.Options.TaskQueueActivitiesPerSecond) to avoid overloading the brittle APIs?
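A minimal sketch of what that could look like when constructing the worker; the task queue name and the rate of 2 activities per second are placeholders:

package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Connects to localhost:7233 by default.
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client", err)
	}
	defer c.Close()

	// Placeholder task queue name; the limit applies to activities on this queue.
	w := worker.New(c, "brittle-api-task-queue", worker.Options{
		// Throttle activity executions to roughly 2 per second so the brittle API
		// never sees a burst of retries, regardless of how workflows schedule them.
		TaskQueueActivitiesPerSecond: 2,
	})

	// Register your workflow and activities here, e.g.:
	// w.RegisterWorkflow(SignalGatedRetryWorkflow)
	// w.RegisterActivity(QueryableActivity)

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("unable to start worker", err)
	}
}

Unlike per-activity retry policies, this limit is enforced by the server across the whole task queue, so the API sees a smooth request rate no matter how many workflows are retrying.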