Default Cluster Worker OIDC auth

Hello,

We are running a local Temporal cluster for dev/POC and we are using OIDC auth with Keycloak. We have our custom workers set up and working correctly with a JWTHeaderProvider that resembles this:


import (
	"context"

	"github.com/pkg/errors"
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/opentracing"
	"go.temporal.io/sdk/interceptor"
	"go.temporal.io/sdk/workflow"
)

// log.Logger, NewTemporalLoggerAdapter, and NewContextPropagator come from
// our own packages (omitted here).

type JWTHeaderProvider struct {
	AuthClient AuthClient
}

func (j *JWTHeaderProvider) GetHeaders(ctx context.Context) (map[string]string, error) {
	tkn, err := j.AuthClient.Token()
	if err != nil {
		return nil, err
	}

	// Return header
	return map[string]string{"Authorization": "Bearer " + tkn}, nil
}

type AuthClient interface {
	Token() (string, error)
}

func NewClient(addr string, authClient AuthClient) (client.Client, error) {
	tracingInterceptor, err := opentracing.NewInterceptor(opentracing.TracerOptions{})
	if err != nil {
		return nil, errors.Wrap(err, "failed to create temporal trace interceptor")
	}

	opts := client.Options{
		HostPort:           addr,
		Logger:             NewTemporalLoggerAdapter(log.Logger),
		ContextPropagators: []workflow.ContextPropagator{NewContextPropagator()},
		Interceptors:       []interceptor.ClientInterceptor{tracingInterceptor},
	}

	if authClient != nil {
		opts.HeadersProvider = &JWTHeaderProvider{AuthClient: authClient}
	}

	return client.Dial(opts)
}
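
In case it helps anyone reproducing this, a concrete AuthClient implementation backed by Keycloak's client-credentials grant might look roughly like the sketch below. KeycloakAuthClient, its field names, and the realm URL layout are my assumptions (the standard Keycloak token endpoint path), not something from our actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// KeycloakAuthClient is a hypothetical AuthClient implementation that fetches
// access tokens from Keycloak's token endpoint via the client-credentials grant.
type KeycloakAuthClient struct {
	TokenURL     string
	ClientID     string
	ClientSecret string
}

func NewKeycloakAuthClient(baseURL, realm, clientID, clientSecret string) *KeycloakAuthClient {
	return &KeycloakAuthClient{
		// Standard Keycloak layout: <base>/realms/<realm>/protocol/openid-connect/token
		TokenURL:     fmt.Sprintf("%s/realms/%s/protocol/openid-connect/token", baseURL, realm),
		ClientID:     clientID,
		ClientSecret: clientSecret,
	}
}

// Token performs the client-credentials exchange and returns the raw access
// token. A production version would cache the token until it nears expiry
// instead of hitting the token endpoint on every call.
func (k *KeycloakAuthClient) Token() (string, error) {
	form := url.Values{
		"grant_type":    {"client_credentials"},
		"client_id":     {k.ClientID},
		"client_secret": {k.ClientSecret},
	}
	resp, err := http.Post(k.TokenURL, "application/x-www-form-urlencoded",
		strings.NewReader(form.Encode()))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("token endpoint returned %s", resp.Status)
	}
	var body struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return "", err
	}
	return body.AccessToken, nil
}

func main() {
	c := NewKeycloakAuthClient("https://keycloak.local", "dev", "temporal-worker", "secret")
	fmt.Println(c.TokenURL)
}
```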

This works. We were able to set up the claims with “permissions” and all is working as expected.

However, once we added the auth setup to the Temporal cluster/server, the default worker is failing on auth, which is expected:

{"level":"error","ts":"2024-05-12T00:14:51.224Z","msg":"error starting temporal-sys-tq-scanner-workflow workflow","service":"worker","error":"Request unauthorized.","logging-call-at":"scanner.go:289"

My question is: what is the recommended approach for adding the auth header to the default cluster worker? Also, for clarity, we are actually using the Temporal operator in Kubernetes to manage the Temporal cluster. I have gone through these docs here: v1beta1 - Temporal Operator

As well as these docs here: Temporal Cluster configuration reference | Temporal Documentation

I am wondering if there are some ENV vars that we can pass to the default cluster worker service (CLIENT_ID, CLIENT_SECRET, AUTH_URL, etc.) that the worker knows how to handle. If not, what is the recommended approach to setting up the default cluster worker with OIDC auth?

Hi,
I am seeing the same behavior with the Helm chart deployment of the Temporal cluster.

The built-in worker does not appear to be aware of the authorization step before trying to make contact with the frontend. I cannot find any authorization metadata or authorizer set in the scanner client options where the worker is created for the scanner workflow.

I suspect the worker would need to implement something like NewAPIKeyDynamicCredentials and be passed a callback URL in order to handle the auth flow. This could possibly be enabled through dynamic config and read by the worker config.