Retrying on ScheduleToStartTimeout

Been reading the docs and I think I’m missing how to do this one.

I have workers set up that look like the example here.

When there’s a release and the pods are rolling in k8s, the worker behind that UUID gets shut down and all of the state saved to the worker-specific queue is gone. That’s absolutely foreseen and fine, but the problem is I’m having trouble finding how to get Temporal’s workflow to catch the resulting ScheduleToStartTimeout error. I don’t seem to be able to catch it in the workflow, since the activity failing kills the workflow. Do I have to register ScheduleToStartTimeout as an error that doesn’t fail the activity on the proxy, and THEN catch it and handle it in the workflow?

I see where I would add a retry but not really a “non-failure” list.

Edit to add: The recovery scenario would be to catch the error in the workflow, call the getUniqueTaskQueue activity to get a new UUID, create a new activity proxy, and continue from there.
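For anyone landing here later, a rough sketch of that recovery loop in the TypeScript SDK might look like the following. The `doWork` activity name, queue name, and timeout values are made up for illustration; `getUniqueTaskQueue` is the activity described above, and the exact error classes and enum names may vary by SDK version:

```typescript
import { proxyActivities } from '@temporalio/workflow';
import { ActivityFailure, TimeoutFailure, TimeoutType } from '@temporalio/common';
import type * as activities from './activities';

// getUniqueTaskQueue runs on the shared queue, so any live worker can serve it.
const { getUniqueTaskQueue } = proxyActivities<typeof activities>({
  taskQueue: 'shared-queue', // hypothetical queue name
  startToCloseTimeout: '1 minute',
});

export async function stickyWorkflow(): Promise<void> {
  let taskQueue = await getUniqueTaskQueue();
  for (;;) {
    // Re-proxy against the (possibly new) worker-specific queue.
    const { doWork } = proxyActivities<typeof activities>({
      taskQueue,
      scheduleToStartTimeout: '30 seconds',
      startToCloseTimeout: '1 minute',
    });
    try {
      // Must be awaited inside the try for the catch below to see the failure.
      await doWork();
      return;
    } catch (err) {
      if (
        err instanceof ActivityFailure &&
        err.cause instanceof TimeoutFailure &&
        err.cause.timeoutType === TimeoutType.TIMEOUT_TYPE_SCHEDULE_TO_START
      ) {
        // The sticky worker is gone: fetch a fresh queue UUID and retry.
        taskQueue = await getUniqueTaskQueue();
        continue;
      }
      throw err; // anything else still fails the workflow
    }
  }
}
```

Note that schedule-to-start timeouts are not retried by the server's retry policy (retrying would just reschedule onto the same dead queue), which is why the recovery has to happen in workflow code like this.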

It looks like I can catch that error in the workflow, but for some reason the code around the specific activity I’m trying to call can’t catch it; it’s only caught much further up. Is there any way for the code that calls the activity to catch the error?

Actually, I’m just lost. This works perfectly fine; I just wasn’t writing the JS right. Forgive me. I’m a Go dev.

Solution: await the right call in a try/catch block!
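To spell out the gotcha for other Go devs: if you create the activity promise inside the `try` but don’t `await` it there, the rejection happens later and sails right past your `catch`. A minimal stand-alone sketch (the `failingActivity` stand-in is hypothetical, simulating a rejected activity call):

```typescript
// A stand-in for an activity call that rejects, like a ScheduleToStartTimeout.
const failingActivity = (): Promise<string> =>
  Promise.reject(new Error('ScheduleToStartTimeout'));

// Broken: the promise is created inside try but awaited outside it,
// so the rejection escapes the catch entirely.
async function broken(): Promise<string> {
  let pending: Promise<string>;
  try {
    pending = failingActivity();
  } catch {
    return 'recovered';
  }
  return pending; // the rejection surfaces here, past the catch
}

// Fixed: await inside the try, so the catch sees the rejection.
async function fixed(): Promise<string> {
  try {
    return await failingActivity();
  } catch {
    return 'recovered';
  }
}
```

`fixed()` resolves to `'recovered'`, while `broken()` rejects even though it has a catch block.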
