Adjust temporal logging level

Is it possible to adjust the level of log output for Temporal? When I run Temporal in a Docker container it defaults to emitting info-level logs, and I'd prefer to see only more severe levels such as warn and error.

At the moment, the Docker logs are flooded with Temporal log entries such as these:

temporal              | {"level":"info","ts":"2023-01-03T14:35:13.281Z","msg":"Get dynamic config","name":"system.clusterMetadataRefreshInterval","value":60,"default-value":60,"logging-call-at":"config.go:79"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.298Z","msg":"Membership heartbeat upserted successfully","service":"history","address":"172.18.0.3","port":6934,"hostId":"cc063654-8b73-11ed-99d7-0242ac120003","logging-call-at":"rpMonitor.go:222"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.303Z","msg":"bootstrap hosts fetched","service":"history","bootstrap-hostports":"172.18.0.3:6935,172.18.0.3:6934","logging-call-at":"rpMonitor.go:263"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.312Z","msg":"Current reachable members","service":"history","component":"service-resolver","service":"history","addresses":["172.18.0.3:7234"],"logging-call-at":"rpServiceResolver.go:266"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.317Z","msg":"RuntimeMetricsReporter started","service":"history","logging-call-at":"runtime.go:139"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.320Z","msg":"history starting","service":"history","logging-call-at":"service.go:96"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.320Z","msg":"Replication task fetchers started.","logging-call-at":"replicationTaskFetcher.go:141"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.321Z","msg":"Get dynamic config","name":"history.acquireShardConcurrency","value":10,"default-value":10,"logging-call-at":"config.go:79"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.322Z","msg":"Get dynamic config","name":"history.eventsCacheInitialSize","value":128,"default-value":128,"logging-call-at":"config.go:79"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.323Z","msg":"Get dynamic config","name":"history.eventsCacheMaxSize","value":512,"default-value":512,"logging-call-at":"config.go:79"}
temporal              | {"level":"info","ts":"2023-01-03T14:35:13.324Z","msg":"Get dynamic config","name":"history.eventsCacheTTL","value":3600,"default-value":3600,"logging-call-at":"config.go:79"}

This makes it harder to spot more severe (error) log entries.

Hello @dsc

Can you try setting the environment variable LOG_LEVEL?
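For example, if you're running Temporal via Docker Compose, you could pass it through the `environment` section. This is just a sketch — the service name and image tag here are placeholders, so adjust them to match your own setup:

```yaml
# Sketch only: service name and image are assumptions — adapt to your compose file.
services:
  temporal:
    image: temporalio/auto-setup:latest
    environment:
      - LOG_LEVEL=warn   # suppress info-level entries; keep warn and error
```

With a plain `docker run`, the equivalent would be passing `-e LOG_LEVEL=warn` on the command line.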

Let me know if it works.

Regards,
Antonio

That did it @antonio.perez, thank you very much.