Release plan for increasing the gRPC message limit from the 4MB default?

Acknowledging that everything has a limit, there is a tantalizing suggestion in the following link that it will soon be possible to increase the gRPC message size limit to some larger value:

Support for setting max payload size · Issue #961 · temporalio/temporal · GitHub

We haven’t seen any configuration setting for it in the latest version. Is there still a plan to support an increase, and if so, what would the upper value be?

We are potentially facing having to build a “cache” for multi-MB objects that we receive or author (large existing REST messages), and to develop clients and activities to manage the I/O to this cached object tree, which might otherwise be tight deterministic logic in the workflow.

Many thanks

The Temporal Server internode max receive payload size has been increased to 128MB; see the PR here.
This also applies to the frontend service, so it can receive messages up to 128MB in size.

Which SDK are you using?
For example, the default max payload size in the Go SDK is 64MB, but you can raise it to a maximum of 128MB via

client.Options.ConnectionOptions.MaxPayloadSize

For large data, it's recommended to store it yourself (for example in an S3 bucket) and pass references to it as inputs to workflows/activities, or to use specific SDK features like sessions in Go. See the forum post here for reference.
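As a rough Java illustration of that pattern (the BlobStore helper, the bucket name, and the MyWorkflow interface are hypothetical, not part of the Temporal SDK), the client uploads the large object itself and hands Temporal only a small reference:

    // "client" is an io.temporal.client.WorkflowClient built elsewhere.
    // Hypothetical wrapper around your own storage, e.g. an S3 client (not a Temporal API).
    BlobStore blobStore = new BlobStore("my-bucket");

    // Upload the multi-MB object yourself and keep only a small key/reference.
    String blobKey = blobStore.put(largePayloadBytes);

    // Pass the reference, not the payload, as the workflow (or signal/activity) input.
    MyWorkflow workflow =
        client.newWorkflowStub(
            MyWorkflow.class,
            WorkflowOptions.newBuilder().setTaskQueue("my-task-queue").build());
    WorkflowClient.start(workflow::process, blobKey);

    // Inside an activity, resolve the reference back to the actual data:
    // byte[] data = blobStore.get(blobKey);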


That's great news!
We are using the Java SDK and have upgraded to the latest version (1.10.0, released 2022-04-19), but the problem persists, and we can't see a configuration option to change the default:
RESOURCE_EXHAUSTED: grpc: received message larger than max (6096488 vs. 4194304)
Do you have any further suggestions?
Many thanks


For Java it defaults to the gRPC 4MB max.
Our team is discussing a possible alignment and will get back to you with more info when it's available.

Currently you can increase the payload size via WorkflowServiceStubsOptions -> setChannelInitializer, for example:

    // set max to 50MB
    int MAX_INBOUND_MESSAGE_SIZE = 52_428_800;

    WorkflowServiceStubsOptions.ChannelInitializer channelInitializer =
        new WorkflowServiceStubsOptions.ChannelInitializer() {
          @Override
          public void initChannel(ManagedChannelBuilder builder) {
            ((NettyChannelBuilder) builder).maxInboundMessageSize(MAX_INBOUND_MESSAGE_SIZE);
          }
        };

    WorkflowServiceStubs service =
        WorkflowServiceStubs.newInstance(
            WorkflowServiceStubsOptions.newBuilder()
                .setChannelInitializer(channelInitializer)
                .build());

Give it a try and see if this helps.
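If it helps, here is also a minimal sketch of how those stubs would then be wired into the rest of the setup (the inbound limit applies on the side that receives a message, so both the external client and the worker process should be built on stubs configured this way if both receive large messages):

    // External client side, e.g. the process that sends signals and reads results.
    WorkflowClient client = WorkflowClient.newInstance(service);

    // Worker side, built on stubs created with the same option.
    WorkerFactory factory = WorkerFactory.newInstance(client);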

Hi Tihomir, we appreciate the time you are putting into this.

Your answer looks very promising, yet we still have the same problem.

The context is that we receive a large payload in an external client that we wrote (REST API), and then signal the payload data to a workflow instance (via the Temporal platform). At this stage it is an outbound transmission from our external client, but this is where we see the failure (see below). We will also face similar issues with activities, but that is not the case in hand.

2022-04-27T09:42:33,714 INFO [http-nio-x-exec-4] i.t.i.r.GrpcSyncRetryer: Retrying after failure
io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: grpc: received message larger than max (6096486 vs. 4194304)
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.signalWorkflowExecution(WorkflowServiceGrpc.java:2984)
at io.temporal.internal.client.external.GenericWorkflowClientExternalImpl.lambda$signal$1(GenericWorkflowClientExternalImpl.java:110)
at io.temporal.internal.retryer.GrpcRetryer.lambda$retry$0(GrpcRetryer.java:44)
at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:61)
at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:51)
at io.temporal.internal.retryer.GrpcRetryer.retry(GrpcRetryer.java:41)
at io.temporal.internal.client.external.GenericWorkflowClientExternalImpl.signal(GenericWorkflowClientExternalImpl.java:103)
at io.temporal.internal.client.RootWorkflowClientInvoker.signal(RootWorkflowClientInvoker.java:74)
at io.temporal.internal.sync.WorkflowStubImpl.signal(WorkflowStubImpl.java:90)
at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.signalWorkflow(WorkflowInvocationHandler.java:297)
at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:274)
at io.temporal.internal.sync.WorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:178)
at com.sun.proxy.$Proxy98.pxResult(Unknown Source)
at x.x.x.x.x.x.x.postPxResults(x.java:146)

We see nothing reported in the Temporal Server or in the worker, which would be receiving the message.
The Temporal Server is 1.16.1; the worker and external client are built with the latest Java SDK (1.10.0).

We note that use of the ChannelInitializer is deprecated but should still work; we cannot extend ServiceStubsOptions instead because it has package visibility only. In any event, we can see that the ChannelInitializer is called in both the worker and the external client (via debug log code we inserted), and a large receive size is set in both. We don't know if there are other channels to resize too?

Note we do not set any outbound message size, as we understand there is no need to (as no buffers are created).

Any more thoughts?
Many Thanks

We note that use of the ChannelInitializer is deprecated but should still work; we cannot extend ServiceStubsOptions instead because it has package visibility only.

You mean ServiceStubsOptions.ChannelInitializer, right? Yes, this will be fixed in the SDK in the future.

I think your client is hitting the 4MB gRPC limit, but even then I believe you would run into an issue on the server side due to BlobSizeLimit, which is set to 2MB.

Even though this can be configured on the server side, it's highly recommended not to increase this value.
Temporal is not a blob store 🙂

You could instead:

  1. Store your large objects in some external blob store like S3 and pass references as inputs and outputs of workflows/activities, and as signal inputs.

  2. Cache large objects in process memory or on local disk and route all tasks to the host that holds the cache; see this sample for more info, and the rough sketch after this list.
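For option 2, here is a rough Java sketch of the host-routing idea (interface and class names like CacheActivities are purely illustrative; the linked sample shows the real pattern): the worker also listens on a task queue unique to its host, the first activity caches the object and reports that queue, and later activities are stubbed against it so they run on the same host.

    // Worker side: in addition to the shared task queue, register the cache activities
    // on a queue unique to this host.
    String hostTaskQueue = "cache-host-" + UUID.randomUUID();
    Worker hostWorker = factory.newWorker(hostTaskQueue);
    hostWorker.registerActivitiesImplementations(new CacheActivitiesImpl(hostTaskQueue));

    // Workflow side: the first activity downloads/caches the object and returns the
    // task queue of the host that now holds it...
    String cacheHostQueue = defaultActivities.downloadToCache(blobKey);

    // ...then subsequent activities are stubbed against that queue so they land on the
    // same host and can read the cache directly.
    CacheActivities hostActivities =
        Workflow.newActivityStub(
            CacheActivities.class,
            ActivityOptions.newBuilder()
                .setTaskQueue(cacheHostQueue)
                .setStartToCloseTimeout(Duration.ofMinutes(5))
                .build());
    hostActivities.processCachedObject(blobKey);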

Right now the maximum limit on an incoming message on the Temporal Server is 4MB. One blob (one Payload, one activity input, one return value, etc.) is limited to 2MB.

The general advice is that Temporal shouldn't be used as blob storage. In our experience, large blobs tend to complicate maintenance of the system. Please use an external system to store blobs and pass IDs into workflows/activities/signals.

Even if we lift the 4MB limitation on the incoming gRPC message on the Temporal Server, your example will hit the 2MB limit on a single blob right behind it, which we are definitely not planning to change at the moment.

Regarding plans:
You can use this feature request to track decisions and changes regarding the incoming gRPC message limit, though with no specific timeline and no commitment that it will be delivered: Relax maximum inbound frontend gRPC message size enforcement · Issue #2786 · temporalio/temporal · GitHub
There are no plans to change the limit on a single blob (one Payload, one activity input, one return value, etc.); Temporal shouldn't be used as large blob storage.

Thanks for your help, all; we will restructure some workflows as you suggest.