"Auto RPC" extension

Hi folks, I’m thinking about creating a general “Auto RPC” adaptor, and I wonder how the community thinks about this idea – is it feasible, is it good practice, should I make it an open source contrib, etc.
The context is that I’m moving a monolithic application towards a distributed microservice architecture, and one task is to expose the “MyDataStore” class as a microservice so that operations on the local datastore can be invoked remotely (RPC). As many of you may already feel, a Temporal workflow resembles an RPC method in many ways: clearly defined contract; serializable input/output types; can be executed from anywhere that’s connected to the Temporal cluster.

So I’ve found a way to make it really easy to adapt Temporal as a microservice platform, simply by
adding an @activity.defn decorator to the method that I want to expose, e.g.:

class MyDataStore:
  @activity.defn(name='MyDataStore.do_this')
  def do_this(self, input: InputType, ...) -> OutputType:
     ...

Then, I write a generic workflow that executes a single activity

@workflow.defn
class GenericSingleActivityWorkflow:
  @workflow.run
  async def run(self, rpc_method: str, *args):
    return await workflow.execute_activity(rpc_method, args=args, ...)

Then I generate a client stub class, i.e., a subclass of the MyDataStore class that intercepts method invocations and forwards them to execute this GenericSingleActivityWorkflow, passing the method name and args. BTW, I used the activity._Definition class to check whether a method has the @activity.defn decorator.
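A simplified sketch of that stub idea (not my exact implementation – the task queue name, workflow id scheme, and factory function are just placeholders, and activity._Definition is a private SDK API):

import uuid

from temporalio import activity
from temporalio.client import Client


def make_client_stub(service_cls, temporal_client: Client, task_queue: str):
    class _Stub(service_cls):
        def __init__(self):
            # The stub never touches the real datastore, so skip the real __init__.
            pass

        def __getattribute__(self, name):
            attr = getattr(service_cls, name, None)
            # Only intercept methods that carry the @activity.defn decorator.
            defn = activity._Definition.from_callable(attr) if callable(attr) else None
            if defn is None:
                return object.__getattribute__(self, name)

            async def remote_call(*args):
                # Forward to the generic workflow, passing activity name and args.
                return await temporal_client.execute_workflow(
                    "GenericSingleActivityWorkflow",
                    args=[defn.name, *args],
                    id=f"rpc-{defn.name}-{uuid.uuid4()}",
                    task_queue=task_queue,
                )

            return remote_call

    return _Stub()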

This has worked pretty well. Now that I’m re-thinking my prototype, I realize it can be made much simpler as a general extension to Temporal, turning it into a general microservice platform, e.g.:

@auto_rpc_service
class MyDataStore:
   def _private_method(self, ...):
       ...

   @auto_rpc_method
   def public_rpc_method(self, ...):
       ...

# "Worker" == "Server"
store = MyDataStore(...)
worker = auto_rpc_service.create_worker(store, worker_config)
worker.run()

# Client stub can be auto-generated
my_data_store_client = auto_rpc_service.client_stub(MyDataStore, temporal_client)
my_data_store_client.public_rpc_method(...)
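Under the hood, create_worker would mostly be a thin wrapper over the Temporal SDK – roughly something like this sketch, assuming the GenericSingleActivityWorkflow from above and a made-up task queue name:

import concurrent.futures

from temporalio import activity
from temporalio.client import Client
from temporalio.worker import Worker


async def run_auto_rpc_worker(instance, task_queue: str = "auto-rpc"):
    client = await Client.connect("localhost:7233")
    # Collect every bound method on the instance that carries @activity.defn.
    activities = [
        getattr(instance, name)
        for name in dir(instance)
        if callable(getattr(instance, name))
        and activity._Definition.from_callable(getattr(instance, name)) is not None
    ]
    worker = Worker(
        client,
        task_queue=task_queue,
        workflows=[GenericSingleActivityWorkflow],
        activities=activities,
        # Sync (non-async) activity methods need a thread pool to run in.
        activity_executor=concurrent.futures.ThreadPoolExecutor(max_workers=10),
    )
    await worker.run()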

Yes, many users have different ways to expose their workflows via RPC, but we do not provide an official way to do it yet. The method you have here works well. We’d be hesitant to adopt it officially because: 1) most of our SDK features need to be available in all languages, and 2) we have an upcoming specification/implementation, currently named “Nexus”, that will be geared towards exposing workflows/activities via RPC.

But for your use case, this is great. This is the main reason we make our library so simple and composable: so you can do things like this.

Thanks for the feedback! The upcoming “Nexus” sounds cool – are there any more details you could share, or will we have to wait for it?

I’ve just finished the “AutoRPC Framework” impl, with the motivation of transforming our monolithic Python application towards a microservice architecture. I’d like to share what the framework looks like. We might consider open-sourcing it as a third-party extension library (with a dependency on temporal.io), if there is enough interest in the open source community.

  1. By adding an @auto_rpc_service class decorator, a regular Python class becomes both a) the service interface definition – I’d call it a “code-as-IDL” approach, resembling the “code-as-workflow” approach :slight_smile: – and b) the service impl.
@auto_rpc_service
class FooBarService:
   def __init__(self, ...):
       ...

   def foo_method(self, ...) -> X:
       ...

   async def bar_async_method(self, ...) -> Y:
       ...
  2. Run the server with 4 lines of code in server/main.py:
if __name__ == "__main__":
   my_service_impl = FooBarService(...)
   server_config = AutoRpcServerWorkerConfig(...)
   asyncio.run(auto_rpc.run_server(my_service_impl, server_config))
  3. On the client side, create and use a type-safe client stub to make RPC calls in the exact same way as if using the service impl instance directly:
   config = AutoRpcClientConfig(...)
   foo_bar_client = auto_rpc.create_client(FooBarService, config)
   ...
   foo_result: X = foo_bar_client.foo_method(...)
   bar_result: Y = await foo_bar_client.bar_async_method(...)

I want to say that 3) is dramatically useful for migrating an existing monolithic Python application to a microservice architecture, assuming there is already a modular, service-like class abstraction and class instances are already being dependency-injected into a ton of places to make direct method invocations. With the AutoRPC framework, client-side migration requires nearly zero effort (just replacing the dependency with the client stub), compared to choosing another RPC framework such as gRPC/protobuf or Finagle/Thrift and rewriting pretty much the entire application code.
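To make that concrete, the migration basically only touches the wiring/DI layer (OrderProcessor here is just a made-up consumer class):

# Before (monolith): inject the real service impl into its consumers.
foo_bar_service = FooBarService(...)
order_processor = OrderProcessor(foo_bar_service)

# After (microservice): inject the auto-generated client stub instead.
# All call sites inside OrderProcessor stay exactly as they were.
foo_bar_client = auto_rpc.create_client(FooBarService, AutoRpcClientConfig(...))
order_processor = OrderProcessor(foo_bar_client)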

Additionally, our AutoRPC framework provides an @async_or_sync method decorator to support dual interfaces in the client stub, i.e., both sync & async call interfaces, just like gRPC does. This makes it possible to accommodate either an async (coroutine) or a sync programming style on the client side, regardless of the service impl. Example:

@auto_rpc_service
class MyService:
   @async_or_sync
   def bar_sync(self, param: str) -> str:
       ...

   @async_or_sync
   async def bar_async(self, param: str) -> str:
       ...

Client-side code can freely choose between calling the original interface, the sync interface, or the async (coroutine) interface (all with type safety):

service_client = create_client(MyService, config)

result = service_client.bar_sync('test')
result = service_client.bar_sync.sync('test')
result = await service_client.bar_sync.coroutine('test')

result = await service_client.bar_async('test')
result = service_client.bar_async.sync('test')
result = await service_client.bar_async.coroutine('test')
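
In case anyone is curious about the mechanism: each RPC method on the stub can be exposed as a small callable object carrying .sync and .coroutine variants. This is just a simplified sketch, not the actual AutoRPC internals (the invoke_async hook stands in for whatever the stub uses to run the workflow):

import asyncio


class DualCallable:
    """One of these per RPC method on the client stub (sketch only)."""

    def __init__(self, invoke_async, originally_async: bool):
        # invoke_async: placeholder hook that performs the remote call and
        # returns an awaitable yielding the result.
        self._invoke_async = invoke_async
        self._originally_async = originally_async

    def __call__(self, *args, **kwargs):
        # Mirror the original interface: async methods return an awaitable,
        # sync methods block until the result is available.
        if self._originally_async:
            return self.coroutine(*args, **kwargs)
        return self.sync(*args, **kwargs)

    async def coroutine(self, *args, **kwargs):
        return await self._invoke_async(*args, **kwargs)

    def sync(self, *args, **kwargs):
        # Naive blocking bridge; a real implementation would have to handle
        # being called from inside an already-running event loop.
        return asyncio.run(self.coroutine(*args, **kwargs))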