Hi Scott,
What Maxim was saying in post #4 is that the Task Queue is specified in two places. One is the Worker (in this case, I can see from your screenshot in post #5 that your Worker uses the Task Queue named “default”). The other is the TaskQueue field in the StartWorkflowOptions used by the code that uses a Temporal Client to submit the execution request (here’s an example of that from a later exercise in the Temporal 101 with Go course).
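To make that concrete, here’s a minimal sketch of the Client-side half, assuming the Task Queue is named “default”; the Workflow ID and the “GreetSomeone” Workflow name are placeholders for illustration, not taken from your code:

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Connect to the Temporal Cluster (defaults to localhost:7233).
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create Temporal Client", err)
	}
	defer c.Close()

	options := client.StartWorkflowOptions{
		ID:        "greeting-workflow", // placeholder Workflow ID
		TaskQueue: "default",           // must match the Task Queue your Worker polls
	}

	// Submit the execution request. "GreetSomeone" is a placeholder
	// Workflow name, referenced here by its registered name.
	we, err := c.ExecuteWorkflow(context.Background(), options, "GreetSomeone", "World")
	if err != nil {
		log.Fatalln("Unable to execute Workflow", err)
	}
	log.Println("Started Workflow", "WorkflowID", we.GetID(), "RunID", we.GetRunID())
}
```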
If the Task Queue name isn’t the same in both places, the Workflow Execution does not progress and it will look exactly as depicted in your screenshot from post #2. The WorkflowExecutionStarted and WorkflowTaskScheduled events you see there do not involve the Worker. Task Queues are created dynamically, so submitting the Workflow to one Task Queue and having the Worker listen on a different one does not result in an error; it just creates two different Task Queues. In this case, the Worker is listening on a different Task Queue than the one used to start the execution, so the Worker will never see the queued Task (i.e., the one instructing it to run the Workflow code) and therefore does not do any work related to this execution.
When you run a Temporal Cluster via the CLI (as described in the course here), the default Namespace is created automatically. That’s also the case when running one using our docker-compose configuration (as described here). Since a k8s-based cluster is not a supported configuration for this training course, I’d strongly recommend following the instructions for running the Temporal Cluster via the CLI if you want to work on the exercises locally. To be clear, you absolutely can run a self-hosted Temporal Cluster on Kubernetes, and many of our users do so in production, but trying to do that while learning Temporal adds unnecessary complication.
Running Temporal with Kubernetes is not my personal area of expertise, so if you have follow-up questions about that, I’d recommend starting a new thread and using the kubernetes tag on your post and/or asking in the #kubernetes channel on our community Slack.