Temporal cluster not emitting certain metrics

I am running a local Temporal cluster (v1.22) deployed in Kubernetes through Helm.

I port-forwarded the Web UI port and the frontend port to the host:

kubectl -n temporal port-forward services/temporal-web 8080:8080 & kubectl -n temporal port-forward services/temporal-frontend-headless 7233:7233

I created a default namespace:

tctl -ns default n register
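
As a sanity check that the registration took and that the port-forwarded frontend is reachable, I can describe the namespace from a throwaway Go snippet like this (not part of the sample; localhost:7233 is the frontend port-forward from above):

package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Namespace operations go through the frontend service;
	// localhost:7233 is the port-forwarded frontend from above.
	nc, err := client.NewNamespaceClient(client.Options{HostPort: "localhost:7233"})
	if err != nil {
		log.Fatalln("unable to create namespace client:", err)
	}
	defer nc.Close()

	// Describe the namespace that was just registered with tctl.
	resp, err := nc.Describe(context.Background(), "default")
	if err != nil {
		log.Fatalln("describe namespace failed (not registered yet, or frontend unreachable):", err)
	}
	log.Printf("namespace %q is registered, state: %v",
		resp.GetNamespaceInfo().GetName(), resp.GetNamespaceInfo().GetState())
}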

Then I ran the Temporal Go sample application for metrics (samples-go/metrics at main, temporalio/samples-go on GitHub):

go run metrics/worker/main.go (running in one terminal window)

go run metrics/starter/main.go (another terminal window)

2024/01/31 13:46:19 INFO  No logger configured for temporal client. Created default one.
2024/01/31 13:46:19 Started workflow. WorkflowID metrics_workflowID RunID 31698d47-09bd-4b44-b499-47c5197ad40b
2024/01/31 13:46:21 Check metrics at http://localhost:9090/metrics

I can see the worker SDK metrics at http://localhost:9090/metrics.
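
For context, the worker in that sample exposes SDK metrics by handing the Temporal client a tally-based metrics handler backed by a Prometheus reporter listening on :9090. Condensed, it looks roughly like this (my paraphrase of the sample's setup, not a verbatim copy; the listen address and prefix are the sample's defaults):

package main

import (
	"log"
	"time"

	prom "github.com/prometheus/client_golang/prometheus"
	"github.com/uber-go/tally/v4"
	"github.com/uber-go/tally/v4/prometheus"
	"go.temporal.io/sdk/client"
	sdktally "go.temporal.io/sdk/contrib/tally"
)

func main() {
	// The MetricsHandler is what makes the SDK emit its metrics; the tally
	// Prometheus reporter serves them on http://localhost:9090/metrics.
	c, err := client.Dial(client.Options{
		MetricsHandler: sdktally.NewMetricsHandler(newPrometheusScope(prometheus.Configuration{
			ListenAddress: "0.0.0.0:9090",
			TimerType:     "histogram",
		})),
	})
	if err != nil {
		log.Fatalln("unable to create Temporal client:", err)
	}
	defer c.Close()

	// ... the sample then registers the workflow/activity on a worker and runs it ...
	select {}
}

func newPrometheusScope(c prometheus.Configuration) tally.Scope {
	reporter, err := c.NewReporter(prometheus.ConfigurationOptions{
		Registry: prom.NewRegistry(),
		OnError:  func(err error) { log.Println("prometheus reporter error:", err) },
	})
	if err != nil {
		log.Fatalln("error creating prometheus reporter:", err)
	}
	scope, _ := tally.NewRootScope(tally.ScopeOptions{
		CachedReporter:  reporter,
		Separator:       prometheus.DefaultSeparator,
		SanitizeOptions: &sdktally.PrometheusSanitizeOptions,
		Prefix:          "temporal_samples",
	}, time.Second)
	// Rewrites metric names so the output follows Prometheus naming conventions.
	return sdktally.NewPrometheusNamingScope(scope)
}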

But I am looking for the activity_end_to_end_latency metric, which (as I understand it) is emitted by the Temporal cluster through the history service. When I port-forwarded the history service's 9090 port and looked at its metrics, I couldn't find activity_end_to_end_latency for the default namespace. In fact, I couldn't find any metrics for the default namespace at all; everything I see is for the temporal-system namespace.

kubectl -n temporal port-forward services/temporal-history-headless 9091:9090
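
To be concrete, here is the quick check I am doing against that port-forwarded endpoint (a throwaway snippet, equivalent to curl piped through grep; localhost:9091 is the local port from the port-forward above), and it prints nothing for namespace="default":

package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Scrape the port-forwarded history service metrics endpoint.
	resp, err := http.Get("http://localhost:9091/metrics")
	if err != nil {
		log.Fatalln("unable to fetch history metrics:", err)
	}
	defer resp.Body.Close()

	// Print only the series labelled with the default namespace.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, `namespace="default"`) {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalln("error reading metrics response:", err)
	}
}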

Here is my list of Kubernetes services:

kubectl get services -n temporal
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
postgresql                         ClusterIP   10.96.54.24     <none>        5432/TCP            18h
postgresql-hl                      ClusterIP   None            <none>        5432/TCP            18h
temporal-admintools                ClusterIP   10.96.133.126   <none>        22/TCP              18h
temporal-frontend                  ClusterIP   10.96.120.135   <none>        7233/TCP            18h
temporal-frontend-headless         ClusterIP   None            <none>        7233/TCP,9090/TCP   18h
temporal-history-headless          ClusterIP   None            <none>        7234/TCP,9090/TCP   18h
temporal-kube-state-metrics        ClusterIP   10.96.231.19    <none>        8080/TCP            18h
temporal-matching-headless         ClusterIP   None            <none>        7235/TCP,9090/TCP   18h
temporal-prometheus-alertmanager   ClusterIP   10.96.223.245   <none>        80/TCP              18h
temporal-prometheus-pushgateway    ClusterIP   10.96.59.153    <none>        9091/TCP            18h
temporal-prometheus-server         ClusterIP   10.96.114.139   <none>        80/TCP              18h
temporal-web                       ClusterIP   10.96.45.211    <none>        8080/TCP            18h
temporal-worker-headless           ClusterIP   None            <none>        7239/TCP,9090/TCP   18h

Does anyone know what I am missing?

activity_endtoend_latency is no longer emitted; it's been replaced. See "Activity success e2e latency metrics" by cretz, Pull Request #575 in temporalio/sdk-go on GitHub.

It’s an SDK metric, not a server metric.

I am able to see the SDK metrics, but there are some server metrics that should be emitted that I don't see.

For example, there are server metrics that are emitted as part of the history service, but I don't see them when I look at the history service's metrics. All I see are the metrics for the temporal-system namespace.