The destination for health checks on the targets.
If the protocol version is HTTP/1.1 or HTTP/2, specify a valid URI (/ *path* ? *query* ). The default is /.
If the protocol version is gRPC, specify the path of a custom health check method with the format `/Package.Class/method` . The default is `/AWS.ALB/healthcheck` .
Probably the easiest way to go here is to pass the health check by opening a TCP connection to the gRPC service endpoint: if you can connect, it's "healthy".
We don't have a real gRPC health check right now, so the best recommendation is just to try to open a connection. The gRPC service endpoint I'm talking about is just the Temporal endpoint you'd point your client SDK to.
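As a sketch, a plain TCP connectivity check along those lines could look like this (the frontend address and port below are assumptions; substitute your own):

```shell
# Report "healthy" if a TCP connection to HOST PORT succeeds within 2 seconds.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are required.
check_tcp() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo healthy
  else
    echo unhealthy
  fi
}

# Hypothetical frontend address; use your Temporal frontend host:port.
check_tcp 127.0.0.1 7233
```

This is effectively what a network load balancer's TCP health check does for you, so you usually wouldn't need to script it yourself.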
Struggling to find the correct combination to make AWS Application Load Balancers happy. Looks like they expect an HTTP response, though… hmm
curl 172.17.0.3:7233/grpc.health.v1.Health/Check -v
* Trying 172.17.0.3:7233...
* Connected to 172.17.0.3 (172.17.0.3) port 7233 (#0)
> GET /grpc.health.v1.Health/Check HTTP/1.1
> Host: 172.17.0.3:7233
> User-Agent: curl/7.83.1
> Accept: */*
>
* Received HTTP/0.9 when not allowed
* Closing connection 0
curl: (1) Received HTTP/0.9 when not allowed
I’m not sure if it’s going to be possible to make this play nicely with an Application Load Balancer. A Network Load Balancer may work…
Each Temporal server service type has a default gRPC port (see here), which you can change if needed. Note that the worker service does not currently expose a health check.
The health check request should be gRPC, so I'm not sure curl would work. For the frontend service you could use grpcurl, for example:
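Something like this, assuming the frontend is reachable at 172.17.0.3:7233 as in the curl attempt above (the frontend registers its health check under the WorkflowService name):

```shell
# Query the standard gRPC health service on the frontend.
# -plaintext skips TLS; drop it if your frontend terminates TLS.
grpcurl -plaintext \
  -d '{"service": "temporal.api.workflowservice.v1.WorkflowService"}' \
  172.17.0.3:7233 grpc.health.v1.Health/Check
```

A healthy frontend should answer with a SERVING status.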
Don’t think this is possible with the matching / history services, as they don’t currently expose the reflection API. You can, however, use grpc_health_probe for all of frontend / matching / history:
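For example, something like the following; the default ports (frontend 7233, history 7234, matching 7235) and the per-service health check names here are my assumptions from the Temporal server defaults, so adjust them to your config:

```shell
# grpc_health_probe speaks the standard grpc.health.v1 protocol directly,
# so it works even where the reflection API is not exposed.
grpc_health_probe -addr=localhost:7233 \
  -service=temporal.api.workflowservice.v1.WorkflowService
grpc_health_probe -addr=localhost:7234 \
  -service=temporal.server.api.historyservice.v1.HistoryService
grpc_health_probe -addr=localhost:7235 \
  -service=temporal.server.api.matchingservice.v1.MatchingService
```

A zero exit code means the probed service reported SERVING, which makes it easy to wire into container health checks.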
OK, so if I make the gRPC ports available via a network load balancer, that part will work. And I can easily give them a consistent name via DNS.
What about the membership ports? I’m unclear on what the local networking requirements are and how the broadcast address comes into play when running these as separate containers on ECS.
I’m assuming that if I gave the membership ports their own network load balancers that may work, but then how does each service know how to find the others? I’m not sure how local broadcasting will work on ECS.
Can I pass an env var to each service that tells it how to find the other services based on their hostnames (and the associated ports exposed via the load balancer)?
Would setting TEMPORAL_BROADCAST_ADDRESS to the DNS name work?
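For reference, a sketch of the env vars involved, assuming the temporalio/auto-setup image (the variable names come from that image's config template; the values are placeholders I'm assuming for an ECS task, not a tested setup):

```shell
# Run a single Temporal service in this container, e.g. history.
export SERVICES=history

# Address this node advertises to the rest of the cluster for membership;
# it must be reachable from the other services' containers.
export TEMPORAL_BROADCAST_ADDRESS="$(hostname -i)"

# Interface to bind on inside the container.
export BIND_ON_IP=0.0.0.0
```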