I am building Temporal workflows in Python that accept a dataclass I defined. One issue I'm running into is that if bad input is ever passed to the workflow, the dataclass may raise a TypeError, ValueError, or RuntimeError, which causes Temporal to retry the workflow essentially forever.
I know I can list these as failure_exception_types on the workflow definition to avoid this, but all of those error types are very generic. Is there a better approach in Temporal that lets me fail the workflow only when the dataclass throws an exception for bad input?
There is an issue open here to make this a more specific error type. Usually these issues should not occur if you use type safety on the caller side; they only arise when you don't use the type-safe argument forms (or there is a code mismatch). The best approach currently is to validate before passing to Temporal, or to use the failure exception types. Technically there are other approaches, such as accepting a RawValue parameter instead and using workflow.payload_converter().from_payload(the_raw_value, MyDataClass) in code where you can try/except it, but that removes some of the type checking you get with proper parameter types.
Thanks Chad for sharing the ticket, I'll definitely keep an eye on that one because I think in some cases a mismatch can still happen. So it's definitely better to handle it appropriately, but you do have a good point that it would only happen in niche cases where maybe we also aren't handling arguments correctly.
I was also wondering whether it's possible to override the data converter to wrap the payload-to-dataclass conversion in a try/except and throw our own ApplicationError from there. That way we can implement our own workaround that applies to multiple workflows.
That said, I'm not familiar with data converters, so I'm not sure whether that's a path worth going down, or where I would find documentation on overriding one.
Yes, though in this case it's actually the payload converter you'd be overriding. This sample shows a custom payload converter for a custom type, but you can just make a custom payload converter that delegates to the default while catching and raising your own exception.
It took a bit of experimenting on my side to catch up in understanding; there's definitely a couple of things going on in that sample, but I think I got a simple example working. I'll have to experiment with it some more, but this is what I put together based on your last comment. Let me know if it seems off or like a bad idea:
import dataclasses
from typing import Any, List, Optional, Sequence, Type

import temporalio.api.common.v1
import temporalio.converter
from temporalio.client import Client
from temporalio.converter import DefaultPayloadConverter
from temporalio.exceptions import ApplicationError


class CustomPayloadConverter(DefaultPayloadConverter):
    def to_payloads(
        self, values: Sequence[Any]
    ) -> List[temporalio.api.common.v1.Payload]:
        try:
            return super().to_payloads(values)
        except TypeError as e:
            raise ApplicationError('Bad input could not be converted') from e

    def from_payloads(
        self,
        payloads: Sequence[temporalio.api.common.v1.Payload],
        type_hints: Optional[List[Type]] = None,
    ) -> List[Any]:
        try:
            return super().from_payloads(payloads, type_hints)
        except TypeError as e:
            raise ApplicationError('Bad input was received') from e


overwritten_data_converter = dataclasses.replace(
    temporalio.converter.default(),
    payload_converter_class=CustomPayloadConverter,
)

client = await Client.connect(
    ...some other params...,
    data_converter=overwritten_data_converter,
)
This is definitely how to do it if you want to fail a workflow on deserialization failure. I would still recommend trying to prevent the deserialization failure in the first place (use a type-safe call, etc.).
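One way to prevent the failure up front, as suggested earlier in the thread, is to validate eagerly on the caller side before the input ever reaches Temporal. A minimal sketch using a dataclass __post_init__ hook (the dataclass and its fields are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class GreetingInput:
    name: str
    count: int

    def __post_init__(self) -> None:
        # Reject bad input here, on the caller side, so it never
        # reaches Temporal's payload deserialization at all.
        if not isinstance(self.name, str) or not self.name:
            raise ValueError("name must be a non-empty string")
        if not isinstance(self.count, int) or self.count < 1:
            raise ValueError("count must be a positive integer")


# Constructing the dataclass before calling start_workflow surfaces
# the error immediately in the caller, not as a retrying workflow task.
good = GreetingInput(name="Ada", count=3)
try:
    GreetingInput(name="", count=3)
except ValueError:
    pass  # caught before any workflow was started
```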
I was looking at this again and realized that the PayloadConverter example I shared earlier doesn't fully work around the issue.
There is actually a case where, if no input is provided to a workflow that expects input, the Temporal SDK will raise a TypeError from this line in _workflow_instance.py rather than from the payload converter. I'm still mulling over whether there is a way to work around that as well, but I'm also curious if others have ideas.