Hello, we’re using Cassandra and the Go SDK, and we came across something odd. According to the SDK metrics | Temporal Documentation page, schedule-to-start latency is reported in milliseconds. In our case we usually get 0.021 (presumably milliseconds) for a worker that is doing nothing, which would mean 21 microseconds.
We’ve debugged the metrics code, and it actually seems that schedule-to-start latency is in seconds, which would make a lot more sense to us, since a latency in the microsecond range seems too small even for goroutines.
Could someone clarify this? We need it for our own documentation. Thanks!
Could you confirm whether you are looking at SDK metrics here or server metrics?
Asking because the worker service role also emits SDK metrics (for internal workflows running in the temporal-system namespace), and all server timer metric buckets should be reported in seconds.
We’re definitely looking at SDK metrics here.
Thanks, yes: the histogram buckets created (in Prometheus format) are in seconds / fractions of a second.
So if you are looking at a specific bucket, you would have to use fractions of a second, for example 0.3 for 300ms.
(This is shown in the Prometheus docs here too.)
I think we should remove the mention of milliseconds from the Temporal SDK metrics docs page, because when consumed in Prometheus format the histogram buckets will be in seconds regardless.
Do you mind opening an issue for this here?