Allowing activity retries to be interrupted

I want to build a long-running workflow that responds to signals by making an API call to a remote service whenever a signal is received. I want a very long retry period in case the remote service is down for a while, but if a new signal comes in while I am still retrying, I want to give up on the previous API call and just process the new signal.

My idea for how to do this was to create a cancellable child context for the activity that makes the API call and keep references to the cancel function and the activity future. I then wait on that future (and handle any errors that arise) in a coroutine started with workflow.Go, and if a new signal comes in in the meantime, I call the cancel function before processing it.

Code looks like this:

func MyWorkflow(ctx workflow.Context) error {
	myChannel := workflow.GetSignalChannel(ctx, "my-channel-name")
	var inProgressCancel workflow.CancelFunc
	var inProgressFuture workflow.Future
	for {
		var msg MyMessageType
		myChannel.Receive(ctx, &msg)
		// Cancel any pending updates.
		if inProgressCancel != nil {
			inProgressCancel()
			inProgressCancel = nil
		}
		// If an update is in progress, wait until it is no longer happening
		// (finished or cancelled, doesn't matter which).
		if inProgressFuture != nil {
			inProgressFuture.Get(ctx, nil)
			inProgressFuture = nil
		}
		var activityContext workflow.Context
		activityContext, inProgressCancel = workflow.WithCancel(ctx)
		activityContext = workflow.WithActivityOptions(activityContext, workflow.ActivityOptions{
			// Allow very long retry period
		})
		workflow.Go(ctx, func(ctx workflow.Context) {
			inProgressFuture = workflow.ExecuteActivity(activityContext, MakeAPICall)
			err := inProgressFuture.Get(ctx, nil)
			inProgressFuture = nil
			// handle error
		})
	}
}
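
The “very long retry period” placeholder would be filled in with activity options along these lines (the values are purely illustrative, and the retry policy type comes from the go.temporal.io/sdk/temporal package):

activityContext = workflow.WithActivityOptions(activityContext, workflow.ActivityOptions{
	// Each attempt gets a short timeout, but retries can continue for up to a week.
	StartToCloseTimeout:    time.Minute,
	ScheduleToCloseTimeout: 7 * 24 * time.Hour,
	// Wait for the canceled activity to finish before the future resolves.
	WaitForCancellation: true,
	RetryPolicy: &temporal.RetryPolicy{
		InitialInterval:    time.Second,
		BackoffCoefficient: 2.0,
		MaximumInterval:    10 * time.Minute,
		MaximumAttempts:    0, // 0 means unlimited attempts within ScheduleToCloseTimeout
	},
})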

Should this code work? Is it the best way to accomplish what I want?

Thanks!

I believe your approach should work. If the number of such signals is unbounded, make sure to call continue-as-new periodically to limit history growth.
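
A minimal sketch of what that could look like, assuming you count signals per run and pick an arbitrary threshold (the loop body is the cancel/start logic from your snippet):

func MyWorkflow(ctx workflow.Context) error {
	// Arbitrary threshold; tune it so one run stays well under the history limits.
	const maxSignalsPerRun = 500
	myChannel := workflow.GetSignalChannel(ctx, "my-channel-name")
	for processed := 0; processed < maxSignalsPerRun; processed++ {
		var msg MyMessageType
		myChannel.Receive(ctx, &msg)
		// ... cancel any in-progress activity and start a new one, as in your snippet ...
	}
	// Return continue-as-new so the next run starts with an empty history.
	return workflow.NewContinueAsNewError(ctx, MyWorkflow)
}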

Thanks @maxim! By the way, is there a guideline for how big is “too big” for history? I could not find one.

I believe the limit is 100k events and 100 MB, but we don’t recommend getting close to it. For such use cases we also recommend calling continue-as-new periodically (for example, once a week). This makes updates simpler, as it guarantees that no workflow is being executed by code that is more than a week old.
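
One way to express the weekly rollover is to compare deterministic workflow time against the run’s start (sketch only, assuming the standard time package in addition to the SDK’s workflow package; the check runs only after a signal has been processed):

func MyWorkflow(ctx workflow.Context) error {
	started := workflow.Now(ctx) // deterministic workflow time, safe to use in workflow code
	myChannel := workflow.GetSignalChannel(ctx, "my-channel-name")
	for {
		var msg MyMessageType
		myChannel.Receive(ctx, &msg)
		// ... cancel any in-progress activity and start a new one, as in the snippet above ...

		// Roll over to a fresh run roughly once a week so no run is executed
		// by workflow code that is more than a week old.
		if workflow.Now(ctx).Sub(started) > 7*24*time.Hour {
			return workflow.NewContinueAsNewError(ctx, MyWorkflow)
		}
	}
}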
