Prometheus metrics configuration confusion

I have a small doubt about the configuration of Prometheus metrics. From the docs I learned about 'listenAddress', which is the address Prometheus scrapes metrics from. But from the helm charts and other discussions I learned that each service exposes a metrics port, with 9090 as the default. So what is the difference between 'listenAddress' and the service's exposed port, and how should I configure it?

Is the "process-wide configuration" text in the docs maybe what's creating the confusion?
In the helm charts, each Temporal service pod defines its metrics format (prometheus) and a listenAddress defaulting to "0.0.0.0:9090". Each service deployment also sets up Prometheus auto-discovery, see:

prometheus.io/job: {{ $.Chart.Name }}-{{ $service }}
prometheus.io/scrape: 'true'
prometheus.io/port: '9090'

in server-deployment.yaml for example.

So each service role can emit its metrics on a different port that can then be scraped by Prometheus. My guess is that the docs assume the use of the auto-setup image, where all Temporal services are deployed in the same container and the same process, and you would use a single config.
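To make the auto-discovery part a bit more concrete, below is a rough sketch of the kind of Prometheus scrape job that picks up those pod annotations (the job name and relabeling rules are generic examples, not copied from any particular chart):

scrape_configs:
  - job_name: 'kubernetes-pods'    # generic example job
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: 'true'
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: 'true'
      # scrape the port given in prometheus.io/port (9090 for the Temporal services)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__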

Hi @tihomir, let me check whether my understanding is correct. According to your description, I can configure the Prometheus listen address of each Temporal service pod in server-deployment.yaml. However, server-configmap.yaml is used when everything runs in the same process with a single config:

global:
  metrics:
    tags:
      type: frontend
    prometheus:
      timerType: histogram
      listenAddress: "0.0.0.0:9090"

But I am still a little confused: will this global configuration affect each Temporal service deployment? What is the use of this global metrics configuration?

Currently in the helm charts, every Temporal service configures Prometheus and port 9090 for metrics, as you have shown. We should probably update this to be configurable from values.yaml; could you please open an issue in the helm-charts repo?

what is the use of this global metrics configuration?

global is only really "global" if you run all Temporal services (frontend, matching, history, worker) in the same process with a single config for all of them (this is the default unless you set the SERVICES env var in the pods to the specific role you want to start).
If you deploy each service individually, as with helm (or the same in docker), each pod/container has its own config, so it's not that "global", and you could set different configs for each role if you wanted.
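As a rough sketch of what that per-role setup looks like (the container name, image tag, and value are placeholders, assuming the image's start script reads SERVICES the same way the helm templates pass it):

# illustrative container spec for a single-role pod; names and tags are placeholders
containers:
  - name: temporal-history
    image: temporalio/server:1.17.0
    env:
      - name: SERVICES
        value: "history"   # start only the history role in this pod/process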

My ops team complained about this. In fact, it appears that after upgrading from 1.16 to 1.17 (or possibly later), the Prometheus integration is broken, in spite of having

prometheus.io/job: {{ $.Chart.Name }}-{{ $service }}
prometheus.io/scrape: 'true'
prometheus.io/port: '9090'

for each process in the pod, I don't see 9090 running.

netstat -a in each of the pods does not show this port at all.
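(For reference, this is the kind of check I mean; pod and namespace names below are placeholders, and a port-forward plus a curl against /metrics is another way to verify whether the endpoint is actually serving anything.)

kubectl exec -it temporal-history-xxxx -n temporal -- netstat -tlnp | grep 9090
# or from outside the pod:
kubectl port-forward pod/temporal-history-xxxx 9090:9090 -n temporal
curl -s http://localhost:9090/metrics | head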

I think it's now deprecated and removed; can you please confirm, @tihomir?

and

But if that's the case, it's certainly a bug in the helm charts, because the charts still seem to use per-service metrics configuration and have no definition for global values.

Looks like my helm charts were not up to date; upgrading the charts helped.