Web UI pod does not respond

Hi! I have the following issue: I run the official Helm chart, which creates a temporal-web k8s deployment among others, and then try to connect to the web UI with

kubectl port-forward services/temporal-web 8088:8088

which starts forwarding successfully. Unfortunately, the server does not give any response:

curl -i http://localhost:8088/
curl: (52) Empty reply from server

Logs from the temporal-web pod are unhelpful:

「hot」: webpack: Compiling...
temporal-web up and listening on port 8088
webpack is compiling...
「hot」: WebSocket Server Listening at localhost:8081
[BABEL] Note: The code generator has deoptimised the styling of /usr/app/node_modules/lodash/lodash.js as it exceeds the max of 500KB.
「wdm」: wait until bundle finished: 
「wdm」: wait until bundle finished: /

How can I debug this?

Is this the end of the log? It looks like webpack hasn’t finished building the bundle and the web server hasn’t spun up yet.
Does the result differ a few minutes after the server starts?
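
If it helps, here is a rough way to check whether the pod is being restarted or killed before webpack finishes; the pod name below is a placeholder, substitute your own:

# list the web pods and watch the restart count
kubectl get pods | grep temporal-web

# look for OOMKilled, evictions, or failed probes in the events
kubectl describe pod <temporal-web-pod-name>

# follow the logs until webpack reports the bundle is built
kubectl logs -f <temporal-web-pod-name>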

I see. Yes, I needed to wait until the server finished compiling, but on low-resource nodes the pod was being killed before that happened. Apparently the web UI pod needs at least these resources to start up (see the Helm override sketched after the snippet):

resources:
  limits:
    cpu: 500m
    memory: 768Mi
  requests:
    cpu: 500m
    memory: 768Mi
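
In case it helps anyone, this is roughly how I passed those values through the chart. I’m assuming the chart exposes them under web.resources and that your release is named temporaltest, so double-check the chart’s values.yaml and your release name:

helm upgrade --install temporaltest . \
  --set web.resources.requests.cpu=500m \
  --set web.resources.requests.memory=768Mi \
  --set web.resources.limits.cpu=500m \
  --set web.resources.limits.memory=768Mi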

Thanks for providing the resources info. We are currently tracking a GitHub issue to address the load time of temporal-web. Updates will follow in the ticket: https://github.com/temporalio/temporal-web/issues/50

@Yuri just a follow-up update: this was addressed in 0.27.0 and Web now spins up almost instantly. Resource usage is also reduced :+1:

Thanks, I tried 0.27 out; I don’t need to specify resource requests anymore!

@Ruslan I’m currently running v0.23.1 and after a restart of the web pod I also encounter this issue. I didn’t have it before.

Is there a workaround without upgrading to v0.27?

I would like to wait two more days until the first code-complete release is available; otherwise I need to upgrade again.

@ringods there is no immediate workaround for v0.23.1

One possible workaround is to build a new Docker image based on the v0.23.1 web tag, but with the old Dockerfile replaced by the new Dockerfile definition (introduced in v0.27), since that is where the fix was implemented.
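
Roughly, that could look like the sketch below; the tag names and the Dockerfile location are assumptions, so check the temporal-web repo for the exact ones:

git clone https://github.com/temporalio/temporal-web.git
cd temporal-web

# check out the code at the version you are running
git checkout v0.23.1

# take only the newer Dockerfile, which contains the startup fix
git checkout v0.27.0 -- Dockerfile

# build and push your own image, then point the chart's web image at it
docker build -t <your-registry>/temporal-web:0.23.1-patched .
docker push <your-registry>/temporal-web:0.23.1-patched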