Timeline for python-client support

What is the timeline for a Python client that works with Temporal?

I’ve been looking at firdaus/cadence-python (Python framework for Cadence Workflow Service) on GitHub and assume it is not compatible with Temporal.

1 Like

@firdaus has been working hard on a Python experience for Temporal. He will know the exact status better than I do.

Been overloaded with work this month. Targeting September to be on par with cadence-python in terms of features.

2 Likes

Looking forward to this! Thank you for all of your efforts @firdaus

@firdaus What is the status of the Python experience for Temporal?
Really looking forward to this ^^

Current status:

  • A couple of weeks ago I got the first workflows running.
  • There were some typing issues that mypy didn’t discover, so I will be fixing those next.
  • Then it’s off to functional testing to see whether everything ported still works.
  • At the moment I know of only one item that definitely won’t work without code changes: Markers.

Hard to give a date at the moment because my workload varies quite a bit.

Just to set expectations, I won’t be making the repo public until all currently implemented features in cadence-python have been ported over.

You can sign up to participate in the beta test here:

EDIT: fixed URL.

1 Like

I like how @ryland does regular transparency updates to the community. I thought it would be a good idea to follow in his footsteps and update my progress on a regular basis as well.

I know a number of people are waiting on the Python SDK release, and really the only way to get the library into a production-quality state is for people to try it out for their use cases and report bugs. My feeling, though, is that if I publish before it’s on par with cadence-python, I’ll spend too much time doing support.

To manage expectations, I’m publishing my functional testing tasks. I’ll tick the items below as the functional tests get done, and once everything on this list has functional tests in place I’ll make the repo public and publish a beta to PyPI.

  • Workflow argument passing and return values
  • Activity invocation
  • Activity heartbeat and Activity.getHeartbeatDetails()
  • doNotCompleteOnReturn
  • ActivityCompletionClient
    • complete
    • complete_exceptionally
  • Activity get_namespace(), get_task_token(), get_workflow_execution()
  • Activity Retry
  • Activity Failure Exceptions
  • Workflow exceptions
  • Cron workflows
  • Workflow static methods:
    • await_till()
    • sleep()
    • current_time_millis()
    • now()
    • random_uuid()
    • new_random()
    • get_workflow_id()
    • get_run_id()
    • get_version()
    • get_logger()
  • Activity invocation parameters
  • Query method
  • Signal methods
  • Workflow start parameters - workflow_id etc…
  • Workflow client - starting workflows synchronously
  • Workflow client - starting workflows asynchronously
  • Get workflow result after async execution
  • Workflow client - invoking signals
  • Workflow client - invoking queries
6 Likes

Is this still under active development? We are starting a new project and Temporal would be perfect. We are using a Django stack, so a Python SDK would be ideal.

1 Like

Yes, still under development. Will be updating that list soon.

It seems that Discourse limits edits based on some criteria – so I’m not able to modify the original “todo list”.

The latest status is now at: https://gist.github.com/firdaus/4ec442f2c626122ad0c8d379a7ffd8bc

Current status: while testing complete_exceptionally I discovered some issues with serializing/deserializing exceptions, partly due to the changes I made in my betterproto fork.

The idea is that exceptions don’t need to be serialized at all; they are converted directly into an ApplicationFailure instead.
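
Roughly the shape of what that means, sketched without the SDK’s actual types: the failure only needs a few plain string fields, so nothing about the exception object itself has to be pickled.

    import traceback

    # Illustration only; the real conversion to ApplicationFailure lives in the
    # SDK and its proto definitions. The point is that plain strings suffice.
    def exception_to_failure_fields(exc: BaseException) -> dict:
        return {
            "type": type(exc).__name__,
            "message": str(exc),
            "stack_trace": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
        }

    try:
        raise ValueError("payment declined")
    except ValueError as e:
        print(exception_to_failure_fields(e))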

Eagerly waiting for Python support; would love to get started and see if Temporal can be part of our solutions.

1 Like

Made a lot of progress in the past few days. Have a look at the gist link above for updates.

Wanted to get some feedback on this:

In the new client, any functions that talk to the Temporal service are async def and therefore must be executed from an async def function.
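
In practice that is just the usual asyncio calling convention. A minimal sketch with a stand-in coroutine (not the real client API):

    import asyncio

    # Placeholder standing in for any SDK call that talks to the Temporal
    # service; the real client's function names will differ.
    async def start_workflow_stub() -> str:
        await asyncio.sleep(0)  # pretend this is a network round trip
        return "workflow-id-123"

    async def main():
        # Coroutines can only be awaited from inside another async def function...
        workflow_id = await start_workflow_stub()
        print(workflow_id)

    # ...and synchronous code drives them through an event loop entry point.
    asyncio.run(main())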

Activity functions have to be async def too, but activities run in their own thread and there is a separate event loop per thread, so you are free to call blocking code from an activity; it won’t interfere with other coroutines.
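
To illustrate what I mean by a separate event loop per thread (placeholder names, not the actual worker code):

    import asyncio
    import threading
    import time

    async def my_activity() -> str:
        time.sleep(1)  # blocking call, but it only stalls this thread's loop
        return "done"

    def activity_thread() -> None:
        loop = asyncio.new_event_loop()   # dedicated event loop for this thread
        asyncio.set_event_loop(loop)
        try:
            print(loop.run_until_complete(my_activity()))
        finally:
            loop.close()

    t = threading.Thread(target=activity_thread)
    t.start()
    t.join()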

Are there any GIL concerns with running an activity per thread? I suppose my activities can fork themselves so that the work isn’t GIL-limited, but I’m wondering if the SDK should do that as well, both for process isolation and to avoid GIL limitations.

Or, on the opposite end of the spectrum, stick to “pure” asyncio and let the user make the decisions about threading/forking for blocking code in the activity. I’d sort of prefer this, if we’re making people use async def anyway.

1 Like

@llchan

Good point, probably should launch activity workers in a separate process using multiprocessing.
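
Something along these lines, where the worker entry point is just a placeholder for the real worker loop:

    import multiprocessing

    # Hypothetical entry point; the actual worker interface will differ.
    def run_activity_worker(task_queue: str) -> None:
        # Each process has its own interpreter and its own GIL, so CPU-bound
        # activities in one worker don't throttle activities in the others.
        print(f"activity worker started, polling {task_queue}")

    if __name__ == "__main__":
        workers = [
            multiprocessing.Process(target=run_activity_worker, args=("my-task-queue",))
            for _ in range(4)
        ]
        for w in workers:
            w.start()
        for w in workers:
            w.join()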

1 Like

The downside of using a process pool is that it may be harder for the user to use/manage shared resources (caches, connection pools, etc.) between activities. It may still be possible if the SDK uses a process pool with worker reuse, but the user gives up some control there.

The more I think about it, the more I’d prefer the SDK to defer all of these decisions to the user and only manage the asyncio-level request handling :thinking:. Are we worried that it will be hard to use for people who don’t want to think about blocking calls? If so, perhaps we can have a high-level thread/process-pool-based API built on top of the lower-level asyncio single-threaded core?
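
For example, the higher-level layer could be as thin as a decorator. A sketch under that assumption (the helper name is made up, not an existing SDK API):

    import asyncio
    import functools
    import time
    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor(max_workers=8)  # owned and sized by the user

    def offload(fn):
        """Wrap a blocking function so the asyncio core only ever sees a coroutine."""
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            loop = asyncio.get_running_loop()
            return await loop.run_in_executor(pool, functools.partial(fn, *args, **kwargs))
        return wrapper

    @offload
    def blocking_activity(seconds: int) -> int:
        time.sleep(seconds)  # ordinary blocking code, runs on the pool
        return seconds

    print(asyncio.run(blocking_activity(1)))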

I agree with this.

FWIW, in new async packages like FastAPI / Starlette, there is often auto-detection (asyncio.iscoroutinefunction(..)) so that you can run a combination of standard / async functions. The standard functions are executed in a dedicated threadpool (e.g. https://github.com/encode/starlette/blob/master/starlette/concurrency.py), separate from the main loop.

See also Django’s asgiref: https://github.com/django/asgiref/blob/master/asgiref/sync.py#L224

1 Like

Yeah, here is the relevant code in temporal-python:

                if inspect.iscoroutinefunction(fn):
                    return_value = await fn(*args)
                else:
                    return_value = fn(*args)

If we are keeping the SDK asyncio-centric (which is what I think @mrsaints and I are both suggesting), we should probably push user-defined functions to a thread pool in case they are blocking. Something roughly equivalent to this:

                if inspect.iscoroutinefunction(fn):
                    return_value = await fn(*args)
                else:
                    # `loop` is the running event loop and `executor` would be a
                    # concurrent.futures executor (e.g. a ThreadPoolExecutor), so
                    # blocking user code never runs on the event loop itself.
                    return_value = await loop.run_in_executor(executor, fn, *args)
2 Likes