Configuring external Prometheus and Grafana

I am trying to configure Temporal with an external Prometheus and Grafana. I have been trying via Helm charts but am not able to make it work. Can you point me to the file that needs the Prometheus and Grafana configuration?


I have the same question.
From other discussions (How to monitor ScheduleToStart latency - #6 by tihomir) I know we need to add a Prometheus job to its configuration, like below:

scrape_configs:
  - job_name: 'temporal-to-custom-prometheus'
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          - 'temporal:8000'
          - 'worker:8001'
          - 'starter:8002'

Can anyone explain these configurations further? What exactly should the metrics path, scheme, and targets be?
I tried replacing the target with my Temporal service name and the gRPC port 7233, but it did not work.
Should I expose port 8000, or change the scheme to rpc?

Hi @whitecrow, are you deploying with Docker Compose or Helm charts?
If you are using the Temporal Helm charts with the minimal install:

helm install \
    --set server.replicaCount=1 \
    --set cassandra.config.cluster_size=1 \
    --set prometheus.enabled=false \
    --set grafana.enabled=false \
    --set elasticsearch.enabled=false \
    temporaltest . --timeout 15m

and your cluster is up, run:

kubectl get svc

You should see all the services (temporaltest-frontend-headless, temporaltest-history-headless, temporaltest-matching-headless, temporaltest-worker-headless) that expose port 9090.
Server metrics are enabled by default for these services on port 9090.

The easiest way to view the metrics (and to set them as targets in your Prometheus scrape config) is via port-forward, for example:

kubectl port-forward service/temporaltest-matching-headless 9090:9090

and view your matching service metrics at http://localhost:9090/metrics.
(You can do this for each individual service whose metrics you want to scrape.)
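If you run Prometheus inside the same cluster, a scrape config along these lines should work. This is only a sketch: the service names come from the temporaltest minimal install above, so adjust them for your release name and namespace.

```yaml
scrape_configs:
  - job_name: 'temporal-server-metrics'
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          # headless services created by the Temporal Helm chart,
          # each exposing server metrics on port 9090
          - 'temporaltest-frontend-headless:9090'
          - 'temporaltest-history-headless:9090'
          - 'temporaltest-matching-headless:9090'
          - 'temporaltest-worker-headless:9090'
```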

If you are deploying via Docker Compose, enable metrics in your config by setting a Prometheus endpoint under temporal->environment:

- PROMETHEUS_ENDPOINT=0.0.0.0:8000

and expose the port under temporal->ports:

- 8000:8000

and finally view your metrics at http://localhost:8000/metrics.
(The metrics will cover all Temporal services.)
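Putting the two settings together, the relevant part of a docker-compose.yml might look like this. This is a sketch showing only the metrics-related lines; your temporal service will have other environment variables and ports as well.

```yaml
services:
  temporal:
    environment:
      # expose server metrics in Prometheus format on port 8000
      - PROMETHEUS_ENDPOINT=0.0.0.0:8000
    ports:
      # make the metrics endpoint reachable from the host
      - 8000:8000
```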

If you are deploying your app and worker services via Docker Compose as well, see this demo app, especially the “deployment/prometheus” and “deployment/grafana” dirs, to see how you can configure scrape targets and Grafana with dashboards.

Hope this gets you started in the right direction.

Thanks for the detailed guidance. It works for me.
I translated the docker-compose scripts into Kubernetes deployments and ran them on our existing Kubernetes cluster, so the docker-compose configuration also works for me.

If the use case is Docker, do we need to make a container for Grafana and integrate it into the cluster?
How do we integrate it? Are there any config file entries for the same?

do we need to make a container for Grafana, make it integrate into cluster?

Not sure I understand the question. Yes, you would need to deploy Prometheus and Grafana: configure Prometheus to scrape your server metrics, and configure a Grafana data source to read from Prometheus.
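As a starting point, Grafana can be pointed at Prometheus with a data source provisioning file. This is a sketch assuming Prometheus is reachable at http://prometheus:9090; adjust the URL for your setup.

```yaml
# grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    # "proxy" means Grafana's backend queries Prometheus, not the browser
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```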

how do we integrate, any config file entries for the same

Here is one way to get some ideas, if it helps:


Thanks Tihomir!