Issue with Archival using the AWS S3 provider

We installed Temporal in our AWS EKS cluster with the Helm chart (Temporal version 1.22.4, Helm chart version 0.32.0). We would like to use Archival with the AWS S3 provider. Here is the part of our Helm values file related to Archival:

server:
  archival:
    history:
      state: "enabled"
      enableRead: true
      provider:
        s3store:
          region: "eu-west-2"
    visibility:
      state: "enabled"
      enableRead: true
      provider:
        s3store:
          region: "eu-west-2"

  namespaceDefaults:
    archival:
      history:
        state: "enabled"
        URI: "s3://dev-archival-bucket"
      visibility:
        state: "enabled"
        URI: "s3://dev-archival-bucket"

When I create a workflow in Temporal, I get these errors in the pod logs:

{"level":"error","ts":"2024-01-27T14:45:27.896Z","msg":"failed to archive target","shard-id":242,"archival-caller-service-name":"history","archival-request-namespace-id":"f6c90475-977f-4603-a949-f415732c1d50","archival-request-namespace":"ges","archival-request-workflow-id":"464aa4a4-ffdf-452f-b94d-006afa61840e","archival-request-run-id":"19b38e5d-79a1-4f11-8270-87960ea1c5e8","archival-URI":"file:///tmp/temporal_vis_archival/development","target":"visibility","error":"unable to find archiver config for the given scheme","logging-call-at":"archiver.go:276","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:156\ngo.temporal.io/server/service/history/archival.(*archiver).recordArchiveTargetResult\n\t/home/builder/temporal/service/history/archival/archiver.go:276\ngo.temporal.io/server/service/history/archival.(*archiver).archiveVisibility\n\t/home/builder/temporal/service/history/archival/archiver.go:235\ngo.temporal.io/server/service/history/archival.(*archiver).Archive.func3\n\t/home/builder/temporal/service/history/archival/archiver.go:188"}
{"level":"error","ts":"2024-01-27T14:45:27.896Z","msg":"failed to archive target","shard-id":242,"archival-caller-service-name":"history","archival-request-namespace-id":"f6c90475-977f-4603-a949-f415732c1d50","archival-request-namespace":"ges","archival-request-workflow-id":"464aa4a4-ffdf-452f-b94d-006afa61840e","archival-request-run-id":"19b38e5d-79a1-4f11-8270-87960ea1c5e8","archival-request-branch-token":"ZiQxOWIzOGU1ZS03OWExLTRqMvTtODI1MC06Ezk2MGVhMWM1ZTgSJDJlNzhkN2M5LWElY7ItVYhlMC1iNDRlLWVlYjcjPmRzOGXjYw==","archival-request-close-failover-version":0,"archival-URI":"file:///tmp/temporal_archival/development","target":"history","error":"unable to find archiver config for the given scheme","logging-call-at":"archiver.go:276","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:156\ngo.temporal.io/server/service/history/archival.(*archiver).recordArchiveTargetResult\n\t/home/builder/temporal/service/history/archival/archiver.go:276\ngo.temporal.io/server/service/history/archival.(*archiver).archiveHistory\n\t/home/builder/temporal/service/history/archival/archiver.go:211\ngo.temporal.io/server/service/history/archival.(*archiver).Archive.func2\n\t/home/builder/temporal/service/history/archival/archiver.go:182"}
{"level":"warn","ts":"2024-01-27T14:45:27.897Z","msg":"failed to archive workflow","shard-id":242,"archival-caller-service-name":"history","archival-request-namespace-id":"f6c90475-977f-4603-a949-f415732c1d50","archival-request-namespace":"ges","archival-request-workflow-id":"464aa4a4-ffdf-452f-b94d-006afa61840e","archival-request-run-id":"19b38e5d-79a1-4f11-8270-87960ea1c5e8","error":"unable to find archiver config for the given scheme; unable to find archiver config for the given scheme","errorCauses":[{"error":"unable to find archiver config for the given scheme"},{"error":"unable to find archiver config for the given scheme"}],"logging-call-at":"archiver.go:153"}
{"level":"error","ts":"2024-01-27T14:45:27.897Z","msg":"Fail to process task","shard-id":242,"address":"10.0.4.41:7234","component":"archival-queue-processor","wf-namespace-id":"f6c90475-977f-4603-a949-f415732c1d50","wf-id":"464aa4a4-ffdf-452f-b94d-006afa61840e","wf-run-id":"19b38e5d-79a1-4f11-8270-87960ea1c5e8","queue-task-id":60817439,"queue-task-visibility-timestamp":"2024-01-27T14:33:30.483Z","queue-task-type":"ArchivalArchiveExecution","queue-task":{"NamespaceID":"f6c90475-977f-4603-a949-f415732c1d50","WorkflowID":"464aa4a4-ffdf-452f-b94d-006afa61840e","RunID":"19b38e5d-79a1-4f11-8270-87960ea1c5e8","VisibilityTimestamp":"2024-01-27T14:33:30.483205Z","TaskID":60817439,"Version":0},"wf-history-event-id":0,"error":"unable to find archiver config for the given scheme; unable to find archiver config for the given scheme","errorCauses":[{"error":"unable to find archiver config for the given scheme"},{"error":"unable to find archiver config for the given scheme"}],"lifecycle":"ProcessingFailed","logging-call-at":"lazy_logger.go:68","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:156\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:347\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).executeTask.func1\n\t/home/builder/temporal/common/tasks/fifo_scheduler.go:224\ngo.temporal.io/server/common/backoff.ThrottleRetry.func1\n\t/home/builder/temporal/common/backoff/retry.go:119\ngo.temporal.io/server/common/backoff.ThrottleRetryContext\n\t/home/builder/temporal/common/backoff/retry.go:145\ngo.temporal.io/server/common/backoff.ThrottleRetry\n\t/home/builder/temporal/common/backoff/retry.go:120\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).executeTask\n\t/home/builder/temporal/common/tasks/fifo_scheduler.go:233\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).processTask\n\t/home/builder/temporal/common/tasks/fifo_scheduler.go:211"}

As I can see from the logs, the archival URIs still point to the local filesystem (file:///tmp/temporal_vis_archival for visibility and file:///tmp/temporal_archival for history), not to our S3 bucket.
Could anyone help me solve this problem? Thank you in advance!
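For reference, here is a sketch of how one could inspect the archival settings actually stored on the namespace (the namespace name `ges` is taken from the error logs above; this assumes tctl is configured against the cluster):

```shell
# Describe the namespace and check its history/visibility archival
# state and URI fields -- if they still show file:///tmp/..., the
# namespace kept the URI it was originally registered with.
if command -v tctl >/dev/null 2>&1; then
  tctl --namespace ges namespace describe
else
  # Guard so the snippet is harmless on a machine without tctl.
  echo "tctl not found; run this where the Temporal cluster is reachable"
fi
```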

Can you show the generated static config from one of your service pods (/etc/temporal/config/docker.yaml)?

Thanks for the quick response!
Here is the static config from /etc/temporal/config/docker.yaml. I only changed the RDS endpoint and the password.

log:
  stdout: true
  level: "debug,info"

persistence:
  defaultStore: default
  visibilityStore: visibility
  numHistoryShards: 512
  datastores:
    default:
      sql:
        pluginName: "mysql"
        driverName: "mysql"
        databaseName: "temporal"
        connectAddr: "endpoint_to_rds.eu-west-2.rds.amazonaws.com:3306"
        connectProtocol: "tcp"
        user: temporal_dev
        password: "mysuperpassword"
        maxConnLifetime: 1h
        maxConns: 20
        secretName: ""
    visibility:
      sql:
        pluginName: "mysql"
        driverName: "mysql"
        databaseName: "temporal_visibility"
        connectAddr: "endpoint_to_rds.eu-west-2.rds.amazonaws.com:3306"
        connectProtocol: "tcp"
        user: "temporal_dev"
        password: "mysuperpassword"
        maxConnLifetime: 1h
        maxConns: 20
        secretName: ""

global:
  membership:
    name: temporal
    maxJoinDuration: 30s
    broadcastAddress: 10.0.4.167

  pprof:
    port: 7936

  metrics:
    tags:
      type: history
    prometheus:
      timerType: histogram
      listenAddress: "0.0.0.0:9090"

services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnIP: "0.0.0.0"

  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
      bindOnIP: "0.0.0.0"

  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
      bindOnIP: "0.0.0.0"

  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
      bindOnIP: "0.0.0.0"
clusterMetadata:
  enableGlobalDomain: false
  failoverVersionIncrement: 10
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "temporal-frontend"
      rpcAddress: "127.0.0.1:7933"
dcRedirectionPolicy:
  policy: "noop"
  toDC: ""
archival:
  history:
    enableRead: true
    provider:
      s3store:
        region: eu-west-2
    state: enabled
  visibility:
    enableRead: true
    provider:
      s3store:
        region: eu-west-2
    state: enabled
namespaceDefaults:
  archival:
    history:
      URI: s3://dev-archival-bucket
      state: enabled
    visibility:
      URI: s3://dev-archival-bucket
      state: enabled

publicClient:
  hostPort: "temporal-swfy-temporal-frontend:7233"

dynamicConfigClient:
  filepath: "/etc/temporal/dynamic_config/dynamic_config.yaml"
  pollInterval: "10s"

I don’t see anything that’s obvious. Can you:

  1. check if this is the same on all service pods
  2. check your ring members with tctl:
    tctl adm cl d
    see if any unexpected members show up in the result list
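Step 1 can be sketched like this (the label selector and config path are assumptions based on a typical Helm install; substitute your own selector if it differs). Identical checksums mean identical rendered configs:

```shell
# Print a checksum of the rendered config on each Temporal service pod.
# app.kubernetes.io/name=temporal is a guess at the chart's pod label.
if command -v kubectl >/dev/null 2>&1; then
  for pod in $(kubectl get pods -l app.kubernetes.io/name=temporal -o name); do
    echo "$pod:"
    kubectl exec "$pod" -- md5sum /etc/temporal/config/docker.yaml
  done
else
  echo "kubectl not found; run this where the EKS cluster is reachable"
fi
```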

I’ve checked the generated service configs, and as far as I can see, they are the same on all service pods. The configs differ in only two fields: broadcastAddress, which holds the current pod’s address, and the metrics tag type, which holds the current pod’s service type.
I’ve also run tctl adm cl d, and as far as I can see, everything looks correct.

Hello!
Any other ideas on how to fix this issue?
Thank you!