RESOURCE_EXHAUSTED: Received message larger than max (4853862 vs. 4194304)

Hi team,

I got the error RESOURCE_EXHAUSTED: Received message larger than max (4853862 vs. 4194304) in Temporal Web. Can you please let me know how to fix it? Thanks in advance.

Hi @Gaoxin_Dai, thanks for opening the issue.

Could you provide more details: which exact page fails with this message (namespaces list, workflows list, workflow summary, workflow details, …)? Or, if you can check the browser's network tab in dev tools, which exact call fails with this message?

Thank you @Ruslan for looking into the issue. This is on the workflow summary page.

Here is the information from the browser's network inspector.

  1. Request URL:
    http://localhost:8088/api/namespaces/default/workflows/d6109fc7e6be49d2ab78ecc3122abc51__9d16dec1-b71e-4a8e-81f8-338f805efbff/156782b1-faea-434d-beaf-c7daace1077b/history?waitForNewEvent=true

  2. Request Method:
    GET

  3. Status Code:
    500 Internal Server Error

  4. Response:
    {"message":"8 RESOURCE_EXHAUSTED: Received message larger than max (4877314 vs. 4194304)"}

Could you also run this tctl command and check whether it throws the same error?
./tctl workflow observe -wid d6109fc7e6be49d2ab78ecc3122abc51__9d16dec1-b71e-4a8e-81f8-338f805efbff -rid 156782b1-faea-434d-beaf-c7daace1077b

Would you expect some specific data in the workflow history to exceed this limit, for example quite large payloads (inputs, results)?

I will speak to my team to see if we should increase the max gRPC message size, or possibly look deeper into why this happened with your workflow.

Yes, it shows the same error. My use case is that one activity downloads a few documents and the following activities process them, so the data might exceed the limit. Is there any resource/performance concern with increasing the message size? What are your recommendations/best practices for this kind of situation?

bash-5.0# ./tctl workflow observe -wid d6109fc7e6be49d2ab78ecc3122abc51__9d16dec1-b71e-4a8e-81f8-338f805efbff -rid 156782b1-faea-434d-beaf-c7daace1077b
Progress:

Time elapse: 60s
Error: Unable to read event.
Error Details: grpc: received message larger than max (4877314 vs. 4194304)
('export TEMPORAL_CLI_SHOW_STACKS=1' to see stack traces)

There is a sample for file processing:

Lmk if this helps

This is really helpful. At the same time, could you please advise what to do if the data exceeds the size limit? Is the message size configurable, or should the input/result simply never exceed the 4 MB limit?

@Gaoxin_Dai, in general we do not recommend storing large objects in the workflow history. It may not only hit limits like the gRPC one you see here, but it also has performance implications: it slows down history replay and queries, and it increases the memory footprint of your workers because of the growing cache size. It is preferable to keep only a pointer to the blob in your workflow and put the actual object in a blob store such as S3. Is that an option in your case?
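
Here is a minimal sketch of that pattern with the Go SDK, assuming hypothetical activities named DownloadActivity, ProcessActivity, and PublishActivity and an S3-style blob store; only a small DocumentRef pointer ever crosses the gRPC boundary or lands in the workflow history:

package fileprocessing

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// DocumentRef is a small pointer to a blob kept outside the workflow
// history, e.g. an S3 bucket/key, so only a few bytes travel over gRPC.
type DocumentRef struct {
	Bucket string
	Key    string
}

// ProcessDocumentsWorkflow passes DocumentRef values between activities
// instead of the document bytes themselves. Activity names are hypothetical.
func ProcessDocumentsWorkflow(ctx workflow.Context, sourceURL string) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// DownloadActivity fetches the document, uploads it to the blob store,
	// and returns only the reference.
	var ref DocumentRef
	if err := workflow.ExecuteActivity(ctx, "DownloadActivity", sourceURL).Get(ctx, &ref); err != nil {
		return err
	}

	// ProcessActivity loads the blob by reference, transforms it, writes the
	// result back to the blob store, and again returns only a reference.
	var processed DocumentRef
	if err := workflow.ExecuteActivity(ctx, "ProcessActivity", ref).Get(ctx, &processed); err != nil {
		return err
	}

	// PublishActivity performs whatever final upload or notification is needed.
	return workflow.ExecuteActivity(ctx, "PublishActivity", processed).Get(ctx, nil)
}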

@Vitaly @Ruslan, I seem to have hit this limit when many hundreds of activities were scheduled at the same time, which appears to translate into a single gRPC request. Reducing my payload size solved the issue; however, I was wondering if you have plans to handle this within Temporal itself, e.g. by splitting the commands up into separate gRPC requests?

@Vitaly @Ruslan Thank you for your suggestion; I'm not sure it solves my problems here. I have two use cases, for example:

  1. All activities should be able to access the results of previous activities. The input data could be large if a result has to be saved globally and then passed to the next activity.

  2. The first activity downloads an object; the following activities each modify part of the object and pass it to the next activity to process. The object is not uploaded until the workflow completes.

I would store the object in memory (or on disk) and route all the activities to that process (host). If the host fails, retry the whole sequence from the beginning on the same host.
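
In case it is useful, here is a rough sketch of that routing with the Go SDK sessions feature (assuming you are on the Go SDK and the worker is started with EnableSessionWorker: true; the activity names are hypothetical). Every activity executed on the session context runs in the same worker process, so the object can stay in that process's memory or on its local disk:

package fileprocessing

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// ProcessOnOneHostWorkflow routes a sequence of activities to a single
// worker host using a session, so the downloaded object never has to be
// passed through the workflow history at all.
func ProcessOnOneHostWorkflow(ctx workflow.Context, sourceURL string) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	so := &workflow.SessionOptions{
		CreationTimeout:  time.Minute,
		ExecutionTimeout: 30 * time.Minute,
	}
	sessionCtx, err := workflow.CreateSession(ctx, so)
	if err != nil {
		return err
	}
	// If the host dies, the session activities fail and the whole sequence
	// can be retried from the beginning.
	defer workflow.CompleteSession(sessionCtx)

	// All three activities run in the same worker process and share localPath.
	var localPath string
	if err := workflow.ExecuteActivity(sessionCtx, "DownloadToLocalDisk", sourceURL).Get(sessionCtx, &localPath); err != nil {
		return err
	}
	if err := workflow.ExecuteActivity(sessionCtx, "ModifyLocalFile", localPath).Get(sessionCtx, nil); err != nil {
		return err
	}
	return workflow.ExecuteActivity(sessionCtx, "UploadResult", localPath).Get(sessionCtx, nil)
}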
