Worker Clarification

I wanted to clarify my understanding of what a worker is and where it runs. I have taken the following snippets from the definition in the glossary:

  1. A Worker is a service that hosts the Workflow and Activity implementations.
  2. Worker services are developed, deployed, and operated by Temporal customers.

Could you clarify:

  1. When a workflow is triggered and an Activity needs to be run, does that Activity run in the same environment where the customer Worker was created, or does it run in the worker pods created in K8s?
  2. If the Activity is run by the customer’s worker service and you want to scale to support more workflows, does that mean the environment the worker is running in needs to be scaled, or is it the worker pods that need to be scaled?
  1. Temporal workers are a Temporal-specific concept and have no inherent connection to infrastructure or Kubernetes. I understand this may be confusing, considering the term “worker” is used in so many different ways in this ecosystem. Your question also makes me think you might have a misconception about the way workers operate. Workers are pull-based (polling), not push-based, so there is no notion of a worker being “triggered”. Workers continuously poll for new tasks on the task queue they are listening on and execute the ones they pull from the queue (see the sketch after this list).

  2. I think the term “customers” in our docs is causing some confusion. Temporal does not have a notion of customers in the system. Right now, all of our users are expected to run both the core Temporal service and a number of workers that host their activity and workflow implementations. Temporal does not dictate how you run your workers: they can be containerized, but they can also run as plain processes on bare metal. That said, a single worker instance can only scale up to the biggest server you can buy, which is why most Temporal users end up running workers in containers orchestrated by Kubernetes. If we naively assume that every worker hosts the same workflow and activity implementations, scaling is simply a matter of adding another identical container to the Kubernetes deployment.
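
To make the polling model concrete, here is a minimal sketch of such a worker using the Go SDK. This is an illustration under assumptions, not a prescribed setup: it assumes a recent go.temporal.io/sdk release (where client.Dial is available), and the task queue name, GreetWorkflow, GreetActivity, and the concurrency values are all hypothetical placeholders.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
	"go.temporal.io/sdk/workflow"
)

// GreetActivity is a hypothetical activity implementation hosted by this worker.
func GreetActivity(ctx context.Context, name string) (string, error) {
	return "Hello, " + name, nil
}

// GreetWorkflow is a hypothetical workflow implementation that calls the activity.
func GreetWorkflow(ctx workflow.Context, name string) (string, error) {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})
	var greeting string
	err := workflow.ExecuteActivity(ctx, GreetActivity, name).Get(ctx, &greeting)
	return greeting, err
}

func main() {
	// Connect to the Temporal frontend service (defaults to localhost:7233).
	// The worker always calls out to the service; the service never pushes work to it.
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client:", err)
	}
	defer c.Close()

	// One worker, one task queue. Every process running this code with the same
	// task queue name is interchangeable, so horizontal scaling means running
	// more replicas of this process.
	w := worker.New(c, "greeting-task-queue", worker.Options{
		// Per-process ("vertical") knobs you can tune before adding replicas.
		// The values here are placeholders.
		MaxConcurrentActivityExecutionSize:     100,
		MaxConcurrentWorkflowTaskExecutionSize: 100,
	})

	w.RegisterWorkflow(GreetWorkflow)
	w.RegisterActivity(GreetActivity)

	// Blocks here, long-polling the task queue for workflow and activity tasks
	// and executing whatever it pulls, until interrupted.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited:", err)
	}
}
```

Nothing in that process is tied to Kubernetes: a Deployment of N identical pods running this binary is simply N pollers on the same task queue, and the Temporal service distributes workflow and activity tasks across all of them.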

I’m going to leave my answer at this for now and wait for your response, so I don’t write a bunch of stuff that isn’t relevant.


Thanks, Ryland, for the response.

I think part of my confusion was differentiating between the “worker” pods that are spun up as part of the core Temporal deployment in K8s and the “Worker”(s) that the user creates, and the roles each plays.


Yeah, that name is the cause of quite a bit of confusion. We may eventually rename it if we can think of something much better. Do you feel that things are clearer now? More than happy to answer any other questions or elaborate.

I think I’m clear now on the role of the user-created workers and how we will need to implement them.

If the user-created workers are responsible for executing the Activities, could you elaborate on the role of the core worker pods in K8s? One reason I ask is to better understand what needs to scale when we want to support additional workflow instances: is it primarily the workers’ capacity to process the Activities?