1.16.1 register namespace failure

Problem:
Failing to register a new namespace after installation.
Methods tried: 1) tctl via port-forward, 2) tctl from the admintools container

Temporal version: 1.16.1
Installation: Temporal Helm chart

Detailed output:

    bash-5.1# tctl --version
    tctl version 1.16.1
    bash-5.1# tctl --ns test namespace register
    Error: Register namespace operation failed.
    Error Details: rpc error: code = Unknown desc = Forbidden: Forbidden
    	status code: 403, request id: TCPQT7PM37T1SYE7, host id: MXbqQhAmHveGGYaD5eSjxZrDDApWlpSWgQzuXF4F23baLs705x4JPUG8Zhs9xIhD/paudghRUcU=
    Stack trace:
    goroutine 1 [running]:
    runtime/debug.Stack()
    	/usr/local/go/src/runtime/debug/stack.go:24 +0x65
    runtime/debug.PrintStack()
    	/usr/local/go/src/runtime/debug/stack.go:16 +0x19
    github.com/temporalio/tctl/cli_curr.printError({0x1f85452, 0x24}, {0x23a7600, 0xc00000e028})
    	/home/builder/tctl/cli_curr/util.go:392 +0x21e
    github.com/temporalio/tctl/cli_curr.ErrorAndExit({0x1f85452?, 0x23b7778?}, {0x23a7600?, 0xc00000e028?})
    	/home/builder/tctl/cli_curr/util.go:403 +0x28
    github.com/temporalio/tctl/cli_curr.(*namespaceCLIImpl).RegisterNamespace(0x0?, 0xc0001ab8c0)
    	/home/builder/tctl/cli_curr/namespaceCommands.go:157 +0x9e5
    github.com/temporalio/tctl/cli_curr.newNamespaceCommands.func1(0xc0001ab8c0?)
    	/home/builder/tctl/cli_curr/namespace.go:77 +0x2f
    github.com/urfave/cli.HandleAction({0x1b38b00?, 0x2012f38?}, 0x8?)
    	/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:526 +0x50
    github.com/urfave/cli.Command.Run({{0x1f3fd7c, 0x8}, {0x0, 0x0}, {0xc00049f6a0, 0x1, 0x1}, {0x1f6b0a5, 0x1b}, {0x0, ...}, ...}, ...)
    	/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:173 +0x652
    github.com/urfave/cli.(*App).RunAsSubcommand(0xc0002a96c0, 0xc0001ab4a0)
    	/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:405 +0x91b
    github.com/urfave/cli.Command.startApp({{0x1f41b2c, 0x9}, {0x0, 0x0}, {0xc00049faa0, 0x1, 0x1}, {0x1f6859c, 0x1a}, {0x0, ...}, ...}, ...)
    	/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:372 +0x6e7
    github.com/urfave/cli.Command.Run({{0x1f41b2c, 0x9}, {0x0, 0x0}, {0xc00049faa0, 0x1, 0x1}, {0x1f6859c, 0x1a}, {0x0, ...}, ...}, ...)
    	/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:102 +0x808
    github.com/urfave/cli.(*App).Run(0xc0002a9340, {0xc00003a0a0, 0x5, 0x5})
    	/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:277 +0x8a7
    main.main()
    	/home/builder/tctl/cmd/tctl/main.go:45 +0xa6

Helm config snippet for archival:

    archival:
      history:
        enableRead: true
        provider:
          s3store:
            region: us-east-2
        state: enabled
      visibility:
        enableRead: true
        provider:
          s3store:
            region: us-east-2
        state: enabled
    namespaceDefaults:
      archival:
        history:
          URI: s3://test-archive
          state: enabled
        visibility:
          URI: s3://test-archive
          state: enabled

Related issue: logging/exception stack trace not capturing underlying errors · Issue #983 · temporalio/temporal · GitHub

You should be able to follow the AWS docs, Configuring the AWS SDK for Go - AWS SDK for Go (version 1), on how to set up authentication against S3.
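For reference, the "Forbidden: Forbidden ... status code: 403" with an S3-style request/host id suggests the server is failing to authenticate against the archival bucket when it validates the namespace's archival URIs at registration time. With the SDK's default credential chain, injecting AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY into the server pods is one option. A minimal sketch, assuming a Secret named temporal-s3-creds and a chart value for extra server env vars (the additionalEnv key is an assumption, check your chart's values file):

    # Hypothetical Secret holding static S3 credentials; all names are illustrative.
    apiVersion: v1
    kind: Secret
    metadata:
      name: temporal-s3-creds
    type: Opaque
    stringData:
      AWS_ACCESS_KEY_ID: AKIA...         # replace with a real access key id
      AWS_SECRET_ACCESS_KEY: "..."       # replace with the matching secret key
    ---
    # Helm values fragment: expose the secret as env vars in the server pods,
    # where the AWS SDK for Go's default credential chain will pick them up.
    server:
      additionalEnv:                     # assumption: key name varies by chart version
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: temporal-s3-creds
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: temporal-s3-creds
              key: AWS_SECRET_ACCESS_KEY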

Thanks @tihomir, can you share how to configure the AWS key in this snippet? I would imagine the key would be mounted to a volume, but I cannot find the AWS key lookup logic in the history/visibility archiver code.

    archival:
      history:
        enableRead: true
        provider:
          s3store:
            region: us-east-2
        state: enabled

I believe the previously linked AWS doc gives you multiple ways to do this, for example using environment variables, a credentials file, a custom endpoint, etc.
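For the credentials-file route, a minimal sketch, assuming you wrap a standard shared credentials file in a Secret and point the SDK at it with AWS_SHARED_CREDENTIALS_FILE (the secret name, mount path, and container name are illustrative):

    # Hypothetical Secret wrapping a standard AWS shared credentials file.
    apiVersion: v1
    kind: Secret
    metadata:
      name: temporal-aws-credentials
    type: Opaque
    stringData:
      credentials: |
        [default]
        aws_access_key_id = AKIA...
        aws_secret_access_key = ...
    ---
    # Pod spec fragment: mount the file read-only and tell the SDK where it is.
    # AWS_SHARED_CREDENTIALS_FILE is honored by the AWS SDK for Go.
    spec:
      volumes:
        - name: aws-credentials
          secret:
            secretName: temporal-aws-credentials
      containers:
        - name: temporal-history         # illustrative container name
          volumeMounts:
            - name: aws-credentials
              mountPath: /etc/aws
              readOnly: true
          env:
            - name: AWS_SHARED_CREDENTIALS_FILE
              value: /etc/aws/credentials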

We are running in hosted k8s, not EC2 or ECS. A credentials file mounted from a Secret would be a good option, but I did not see the history/visibility archiver reading credentials from any file. In this case it is not client code but the history/visibility services reaching the S3 buckets, so the AWS credential-loading logic should reside in the history/visibility code. The EC2 case works because an IAM role can be attached to the instance profile, which itself is not safe, since every service running on the same host inherits that access. In k8s it is different and requires extra, non-trivial steps to attach a role to a service account; it would be great if we had a secret-mount option available.
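For completeness, the service-account route mentioned above (IRSA on EKS) boils down to an annotated ServiceAccount; the non-trivial part is the cluster OIDC provider and the IAM role trust policy, which this sketch assumes are already in place (role ARN and names are placeholders):

    # Hypothetical EKS service account annotated for IAM Roles for Service Accounts.
    # The role's trust policy must already reference the cluster's OIDC provider.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: temporal-server
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/temporal-s3-archival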

so the AWS credential-loading logic should reside in the history/visibility code

I think the original idea was not to do this in code, so as not to enforce a particular auth mechanism that may not work in all cases. AWS provides many different ways to set up auth to S3, letting users pick and choose what works best for them.
Feel free to open a feature request here and describe how you would like to see this.