How do I set the TaskQueue name of a worker?

Hi, and thanks for Temporal. I successfully started a Temporal cluster in Kubernetes for testing, using the Helm chart.

My current struggle is that I don’t know how to tell my temporal-worker pod which Task Queue name to use. I can create Workflows with the TS SDK, but no workers in the Kubernetes cluster are polling the correct Task Queue. I have a worker pod, but I don’t know how to get it to poll a specific Task Queue name. I looked through the temporalio/server Docker README and searched the docs site, and I couldn’t find any configuration that would let me set the Task Queue name on the Temporal worker container.

I also looked through the default values of the Helm chart (temporal 0.37.0 · lemontech/lemontech), and I don’t see any Helm values related to setting the worker’s Task Queue name.

I want my temporal-worker pod to poll the ‘scout’ Task Queue, but it isn’t, and I see this error in the Web UI:

No Workers Running
There are no Workers polling the scout Task Queue.

I don’t think it’s true that there are no workers running. Here are some logs from my worker pod, temporal-worker-7cdbdc8ccf-5bw6k (k9s shows it is up and healthy):


Tracking new pod rollout (temporal-worker-7cdbdc8ccf-5bw6k):
     ┊ Scheduled       - <1s
     ┊ Initialized     - <1s
     ┊ Ready           - 1s
[event: pod futurenet/temporal-worker-7cdbdc8ccf-5bw6k] Container image "temporalio/server:1.22.4" already present on machine
TEMPORAL_ADDRESS is not set, setting it to 10.244.3.13:7233
2024/06/11 02:52:02 Loading config; env=docker,zone=,configDir=config
2024/06/11 02:52:02 Loading config files=[config/docker.yaml]
{"level":"info","ts":"2024-06-11T02:52:02.007Z","msg":"Build info.","git-time":"2024-01-12T20:57:30.000Z","git-revision":"ca74b18c7257fdb8a87a27afc89827901e152d15","git-modified":true,"go-arch":"amd64","go-os":"linux","go-version":"go1.20.10","cgo-enabled":false,"server-version":"1.22.4","debug-mode":false,"logging-call-at":"main.go:148"}
{"level":"info","ts":"2024-06-11T02:52:02.008Z","msg":"Updated dynamic config","logging-call-at":"file_based_client.go:195"}
{"level":"warn","ts":"2024-06-11T02:52:02.008Z","msg":"Not using any authorizer and flag `--allow-no-auth` not detected. Future versions will require using the flag `--allow-no-auth` if you do not want to set an authorizer.","logging-call-at":"main.go:178"}
{"level":"info","ts":"2024-06-11T02:52:02.041Z","msg":"Use rpc address 127.0.0.1:7933 for cluster active.","component":"metadata-initializer","logging-call-at":"fx.go:732"}
{"level":"info","ts":"2024-06-11T02:52:02.042Z","msg":"Service is not requested, skipping initialization.","service":"history","logging-call-at":"fx.go:373"}
{"level":"info","ts":"2024-06-11T02:52:02.042Z","msg":"Service is not requested, skipping initialization.","service":"matching","logging-call-at":"fx.go:421"}
{"level":"info","ts":"2024-06-11T02:52:02.042Z","msg":"Service is not requested, skipping initialization.","service":"frontend","logging-call-at":"fx.go:477"}
{"level":"info","ts":"2024-06-11T02:52:02.043Z","msg":"Service is not requested, skipping initialization.","service":"internal-frontend","logging-call-at":"fx.go:477"}
{"level":"info","ts":"2024-06-11T02:52:02.061Z","msg":"historyClient: ownership caching disabled","service":"worker","logging-call-at":"client.go:82"}
{"level":"info","ts":"2024-06-11T02:52:02.070Z","msg":"PProf listen on ","port":7936,"logging-call-at":"pprof.go:73"}
{"level":"info","ts":"2024-06-11T02:52:02.070Z","msg":"Starting server for services","value":{"worker":{}},"logging-call-at":"server_impl.go:94"}
{"level":"info","ts":"2024-06-11T02:52:02.090Z","msg":"RuntimeMetricsReporter started","service":"worker","logging-call-at":"runtime.go:138"}
{"level":"info","ts":"2024-06-11T02:52:02.090Z","msg":"worker starting","service":"worker","component":"worker","logging-call-at":"service.go:379"}
{"level":"info","ts":"2024-06-11T02:52:02.096Z","msg":"Membership heartbeat upserted successfully","address":"10.244.3.13","port":6939,"hostId":"942571bf-279d-11ef-bf7a-eabde9c4b9c4","logging-call-at":"monitor.go:256"}
{"level":"info","ts":"2024-06-11T02:52:02.098Z","msg":"bootstrap hosts fetched","bootstrap-hostports":"10.244.3.9:6934,10.244.2.4:6934,10.244.3.8:6933,10.244.3.10:6939,10.244.3.7:6935,10.244.4.3:6934,10.244.1.3:6934,10.244.1.6:6933,10.244.3.4:6934,10.244.1.2:6935,10.244.4.7:6933,10.244.3.13:6939,10.244.2.2:6935,10.244.4.5:6935,10.244.4.2:6935,10.244.3.6:6935,10.244.4.6:6934","logging-call-at":"monitor.go:298"}
{"level":"info","ts":"2024-06-11T02:52:03.100Z","msg":"Current reachable members","component":"service-resolver","service":"worker","addresses":["10.244.3.10:7239","10.244.3.13:7239"],"logging-call-at":"service_resolver.go:279"}
{"level":"info","ts":"2024-06-11T02:52:03.100Z","msg":"Current reachable members","component":"service-resolver","service":"frontend","addresses":["10.244.4.7:7233","10.244.1.6:7233","10.244.3.8:7233"],"logging-call-at":"service_resolver.go:279"}
{"level":"info","ts":"2024-06-11T02:52:03.101Z","msg":"Current reachable members","component":"service-resolver","service":"history","addresses":["10.244.1.3:7234","10.244.3.4:7234","10.244.4.3:7234","10.244.2.4:7234","10.244.3.9:7234","10.244.4.6:7234"],"logging-call-at":"service_resolver.go:279"}
{"level":"info","ts":"2024-06-11T02:52:03.103Z","msg":"Current reachable members","component":"service-resolver","service":"matching","addresses":["10.244.4.5:7235","10.244.3.6:7235","10.244.4.2:7235","10.244.1.2:7235","10.244.3.7:7235","10.244.2.2:7235"],"logging-call-at":"service_resolver.go:279"}
{"level":"info","ts":"2024-06-11T02:52:03.119Z","msg":"temporal-sys-tq-scanner-workflow workflow successfully started","service":"worker","logging-call-at":"scanner.go:292"}
{"level":"info","ts":"2024-06-11T02:52:03.120Z","msg":"temporal-sys-history-scanner-workflow workflow successfully started","service":"worker","logging-call-at":"scanner.go:292"}
{"level":"info","ts":"2024-06-11T02:52:03.274Z","msg":"Started Worker","service":"worker","Namespace":"temporal-system","TaskQueue":"temporal-sys-tq-scanner-taskqueue-0","WorkerID":"1@temporal-worker-7cdbdc8ccf-5bw6k@","logging-call-at":"scanner.go:239"}
{"level":"info","ts":"2024-06-11T02:52:03.282Z","msg":"Started Worker","service":"worker","Namespace":"temporal-system","TaskQueue":"temporal-sys-history-scanner-taskqueue-0","WorkerID":"1@temporal-worker-7cdbdc8ccf-5bw6k@","logging-call-at":"scanner.go:239"}
{"level":"info","ts":"2024-06-11T02:52:03.289Z","msg":"Started Worker","service":"worker","Namespace":"temporal-system","TaskQueue":"temporal-sys-processor-parent-close-policy","WorkerID":"1@temporal-worker-7cdbdc8ccf-5bw6k@","logging-call-at":"processor.go:98"}
{"level":"info","ts":"2024-06-11T02:52:03.300Z","msg":"Started Worker","service":"worker","Namespace":"temporal-system","TaskQueue":"temporal-sys-batcher-taskqueue","WorkerID":"1@temporal-worker-7cdbdc8ccf-5bw6k@","logging-call-at":"batcher.go:98"}
{"level":"info","ts":"2024-06-11T02:52:03.305Z","msg":"Started Worker","service":"worker","Namespace":"temporal-system","TaskQueue":"default-worker-tq","WorkerID":"1@temporal-worker-7cdbdc8ccf-5bw6k@","logging-call-at":"worker.go:101"}
{"level":"info","ts":"2024-06-11T02:52:03.306Z","msg":"none","component":"worker-manager","lifecycle":"Started","logging-call-at":"worker.go:106"}
{"level":"info","ts":"2024-06-11T02:52:03.306Z","msg":"none","component":"perns-worker-manager","lifecycle":"Starting","logging-call-at":"pernamespaceworker.go:166"}
{"level":"info","ts":"2024-06-11T02:52:03.306Z","msg":"none","component":"perns-worker-manager","lifecycle":"Started","logging-call-at":"pernamespaceworker.go:177"}
{"level":"info","ts":"2024-06-11T02:52:03.306Z","msg":"worker service started","service":"worker","component":"worker","address":"10.244.3.13:7239","logging-call-at":"service.go:418"}
{"level":"info","ts":"2024-06-11T02:52:03.310Z","msg":"Started Worker","service":"worker","Namespace":"futurenet","TaskQueue":"temporal-sys-per-ns-tq","WorkerID":"server-worker@1@temporal-worker-7cdbdc8ccf-5bw6k@futurenet","logging-call-at":"pernamespaceworker.go:483"}
{"level":"info","ts":"2024-06-11T02:52:03.314Z","msg":"Started Worker","service":"worker","Namespace":"temporal-system","TaskQueue":"temporal-sys-per-ns-tq","WorkerID":"server-worker@1@temporal-worker-7cdbdc8ccf-5bw6k@temporal-system","logging-call-at":"pernamespaceworker.go:483"}
{"level":"info","ts":"2024-06-11T02:52:07.913Z","msg":"Current reachable members","component":"service-resolver","service":"worker","addresses":["10.244.3.13:7239"],"logging-call-at":"service_resolver.go:279"}

It looks like the worker is only doing system-level work. Is there a way to configure it to poll the scout Task Queue and work on the pending Workflows in that Task Queue?

The Temporal Worker service role (the temporal-worker pod deployed by the Helm chart) is an internal server component and is unrelated to the application Worker that contains your Workflow and Activity code. You don’t want to change its configuration in any way, and it doesn’t need to know anything about your application. To have something poll the ‘scout’ Task Queue, you write and deploy your own Worker process with one of the SDKs and specify the Task Queue name there.
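To make that concrete, here is a minimal sketch of an application Worker using the TypeScript SDK (since you already use it to start Workflows). The module paths (./workflows, ./activities) and the frontend address (temporal-frontend:7233) are assumptions you’ll need to adapt to your project and to the service names your Helm release created; the namespace ‘futurenet’ is taken from your logs. The key part is taskQueue: 'scout'. You build this into your own image and run it as its own Deployment, separate from the Helm chart’s temporal-worker.

```ts
// worker.ts — a minimal application Worker sketch (TypeScript SDK).
// Assumptions: ./workflows and ./activities exist in your project, and the
// frontend service is reachable at temporal-frontend:7233 in-cluster.
import { NativeConnection, Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  // Connect to the frontend service exposed by your Helm release.
  const connection = await NativeConnection.connect({
    address: 'temporal-frontend:7233',
  });

  const worker = await Worker.create({
    connection,
    namespace: 'futurenet',   // namespace your Workflows run in (from your logs)
    taskQueue: 'scout',       // <-- the Task Queue this Worker polls
    workflowsPath: require.resolve('./workflows'),
    activities,
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Once a Worker like this is running and can reach the frontend service, the “No Workers Running” message for the scout Task Queue should go away and the pending Workflow Tasks will start being picked up. Just make sure the Task Queue name you pass when starting Workflows from the client matches the one the Worker polls.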
