AWS Keys updated every 10 hours in credential file

We are using AWS S3 for archival.
We are planning to use the AWS credential file for authentication.
The keys will be stored in the credential file at ~/.aws/credentials.

We rotate our keys every 10 hours; the rotated keys are written back into the credential file.
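For reference, the file follows the standard shared-credentials format (placeholder values, not real keys):

```ini
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
```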

Does the AWS SDK/Temporal always pull the latest keys from the credential file before every operation?

Say the credential file has been updated, but the AWS SDK is still using the old keys, and the archival fails. As part of the retry, will Temporal/the AWS SDK read the latest credential file contents?

It looks like a new session is created each time for both the S3 history and visibility archivers.

The aws-sdk-go docs for NewSession say:

// NewSession returns a new Session created from SDK defaults, config files,
// environment, and user provided config files. Once the Session is created
// it can be mutated to modify the Config or Handlers. The Session is safe to
// be read concurrently, but it should not be written to concurrently.
// If the AWS_SDK_LOAD_CONFIG environment variable is set to a truthy value
// the shared config file (~/.aws/config) will also be loaded in addition to
// the shared credentials file (~/.aws/credentials). Values set in both the
// shared config, and shared credentials will be taken from the shared
// credentials file. Enabling the Shared Config will also allow the Session
// to be built with retrieving credentials with AssumeRole set in the config.

So my best guess (I'm not an AWS SDK expert) is that it should pick up your stored credentials each time the archiver runs.


I tested it and we ran into a stale-keys issue.

Archival worked for 10 hours and then stopped working with the error InvalidAccessKeyId: StorageFabric: Client access key not found at Gateway, status code 403.

I had to restart Temporal to get archival working again.

Ideally, if a new session is created each time, this should not happen.

But when I dug deeper into the code,
I came across caching of archivers. It looks like the visibility and history archivers are created once and reused. This means the session is created once and the same session is used for the lifetime of the application.

File: provider.go, lines 130-132 and 180-182.

Could you please check the code at those lines and confirm whether the archivers are cached?

And if yes, what's the workaround?

Looks to me like you are correct there; let me check with the server team and get info on what they think can be done.


Any updates on this?

@yux this is regarding the archival question in server slack. Could you please help us?

@tihomir @yux

I have commented out the two lines below to prevent caching of the archivers.

p.historyArchivers[archiverKey] = historyArchiver (Line 174)

p.visibilityArchivers[archiverKey] = visibilityArchiver (Line 225)


  • How much overhead is it to create the S3 archiver every time?
  • Is there an elegant way to flush the cache every 8 hours?

Another Problem
I run a sidecar container which pulls the AWS keys every 8 hours and stores them in a shared volume at the path the AWS SDK expects.
The Temporal container also mounts the shared volume.

This works fine if the sidecar managed to pull the AWS keys and store them in the volume before the archiver was created in Temporal. Otherwise, the AWS session created by Temporal will be missing the credential file, and an error will be thrown saying so.

So I need a way for Temporal to check whether the credential file is present and, if not, sleep for 60 seconds before trying to create the session.

Created a bug card →