Is there a way to increase the workflow deadlock detection timeout from the default 2 seconds in the Python SDK?

I’m getting [TMPRL1101] Potential deadlock detected: workflow didn’t yield within 2 second(s) and need a bit more time for some operations in my workflow interceptor.

Any configuration options or environment variables I can set?


Use case: Implementing distributed locking in workflow interceptors to limit concurrent activities across multiple workflow instances (e.g., max 5 database-heavy activities running simultaneously). The Redis calls for lock acquisition occasionally take >2s, triggering the deadlock detection.
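For reference, the concurrency-limiting logic described above (at most 5 database-heavy activities at once) is essentially a counting semaphore. A minimal sketch of that logic, with an in-memory dict standing in for Redis `INCR`/`DECR` (all names hypothetical; in Redis you would make the increment-and-check atomic, e.g. with a Lua script):

```python
def try_acquire(store: dict, key: str, limit: int) -> bool:
    """Increment the counter; roll back and refuse if over the limit.

    With Redis this would be INCR followed by a conditional DECR
    (or a single Lua script to keep the check atomic).
    """
    store[key] = store.get(key, 0) + 1
    if store[key] > limit:
        store[key] -= 1
        return False
    return True


def release(store: dict, key: str) -> None:
    """Decrement the counter (Redis DECR), never below zero."""
    store[key] = max(0, store.get(key, 0) - 1)
```

With `limit=5`, the sixth `try_acquire` returns `False` until a slot is released.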

Use local activities for this. Direct calls to Redis from workflow code are not allowed: they are not deterministic and break the other assumptions the SDK makes about workflow code.
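To illustrate the suggestion: a local activity moves the Redis call off the workflow thread, so the workflow only awaits a result instead of blocking on I/O. A rough, untested sketch against the Python SDK (the `acquire_redis_lock` activity, its body, and the timeout are hypothetical):

```python
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def acquire_redis_lock(resource: str) -> str:
    # Hypothetical: talk to Redis here. Activities, unlike workflow
    # code, are free to block on network I/O for as long as needed.
    raise NotImplementedError


@workflow.defn
class LimitedWorkflow:
    @workflow.run
    async def run(self, resource: str) -> None:
        # Awaiting the local activity yields control, so the deadlock
        # detector is not triggered even if Redis takes >2s.
        await workflow.execute_local_activity(
            acquire_redis_lock,
            resource,
            start_to_close_timeout=timedelta(seconds=10),
        )
```

This is a structural sketch only; it needs a registered worker and a running Temporal server to execute.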

Thanks for the response! I understand the determinism concerns, but I’m facing a catch-22:

  • Can’t use regular activities because I need the lock BEFORE spawning the activity (to control concurrency)

  • Don’t want separate lock activities because that would double the activity count and add complexity

  • Local activities still count toward activity limits and add overhead

Have you considered controlling the number of parallel activities by controlling the number of workers and limiting the parallelism per worker?
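The per-worker cap mentioned here is set when constructing the worker. A configuration fragment, assuming a client, workflows, and activities defined elsewhere (queue name and server address are placeholders; verify the parameter against your `temporalio` version):

```python
from temporalio.client import Client
from temporalio.worker import Worker

async def main() -> None:
    client = await Client.connect("localhost:7233")  # assumed address
    worker = Worker(
        client,
        task_queue="db-heavy",          # hypothetical queue name
        workflows=[MyWorkflow],         # placeholder registrations
        activities=[my_heavy_activity],
        # Caps how many activities THIS worker process runs at once.
        max_concurrent_activities=5,
    )
    await worker.run()
```

Note the limit is per worker process, not global, which is exactly the gap discussed below.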

The challenge is more granular: I need to limit specific activity types (not all activities) across multiple worker instances within a tenant.

Scenario:

  • Multiple activity types per tenant

  • Only certain activities need concurrency limits (e.g., resource-intensive ones)

  • Other activities should run without limits

  • Multiple worker instances per tenant

Worker-level limits don’t work because max_concurrent_activities applies to all activity types, but I need per-activity-type limits.

hello, any update on this?

Using a separate activity to acquire the lock is the way to go. Another option is to have a single task queue and then use a completely separate system like Redis to implement your rate limiting requirements.
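The "separate lock activity" shape suggested here reduces to acquire → run → release, with the release guaranteed even when the heavy work fails. A plain-Python sketch of that control flow (in a real workflow, each callable would be an activity invocation; the names are hypothetical):

```python
def run_with_lock(acquire, release, work):
    """Acquire a lock token, run the guarded work, always release.

    In a Temporal workflow, `acquire` and `release` would be short
    lock activities and `work` the resource-heavy activity.
    """
    token = acquire()
    try:
        return work()
    finally:
        # Runs on success AND on failure, so lock slots are not leaked.
        release(token)
```

The `try`/`finally` matters: without it, a failed heavy activity would permanently consume one of the 5 slots.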

I have an existing external Redis rate limiting system that I need to integrate. Is there a way to bypass deadlock detection for specific code blocks or increase the deadlock timeout without enabling full debug mode?

Context: I understand the determinism concerns, but I have a working Redis-based rate limiter that just needs a bit more time (3-5 seconds) for lock acquisition. Debug mode seems overkill and has performance implications.

Options I’m looking for:

  1. Selective deadlock bypass for specific operations

  2. Configurable deadlock timeout (per-workflow or globally)

  3. Any other workarounds that don’t require debug mode
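On option 2: recent versions of the Python SDK expose a per-worker `deadlock_detection_timeout` argument on `Worker` (check your SDK version's API reference before relying on it). Raising it only masks the blocking I/O rather than removing it, but as a configuration sketch:

```python
from datetime import timedelta

from temporalio.worker import Worker

# Assumes `client`, MyWorkflow, and my_activity are defined elsewhere.
worker = Worker(
    client,
    task_queue="my-queue",              # hypothetical queue name
    workflows=[MyWorkflow],
    activities=[my_activity],
    # Raise the deadlock detector's threshold from the 2s default.
    # Verify this parameter exists in your temporalio version.
    deadlock_detection_timeout=timedelta(seconds=5),
)
```

This applies to every workflow task the worker runs; there is no per-code-block bypass.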

Does your system guarantee that the lock is obtained within seconds?
What happens when the downstream system is down for a while and a large backlog has accumulated?