Exception while trying to connect to cluster deployed on EKS/Istio gateway with SSL

My Temporal setup is deployed in a company-controlled EKS environment; I have attached all the manifests for it.
SSL offloading happens at the Istio gateway level (refer to the manifests below for the exact configuration).
Here is my worker code:

    WorkflowServiceStubs service =
        WorkflowServiceStubs.newInstance(
            WorkflowServiceStubsOptions.newBuilder()
                .setTarget("temporal-server.xyz.com")
                .build());

    WorkflowClient client = WorkflowClient.newInstance(service);
    WorkerFactory factory = WorkerFactory.newInstance(client);
    Worker worker = factory.newWorker("WAS_TASK_QUEUE_scenarioexpansion");
    worker.registerWorkflowImplementationTypes(
        WASscenarioexpansionImpl.class,
        WASgetDataForCreateScenarioPageImpl.class,
        WASgetDataForrfsetsPageImpl.class,
        WASevalrfsetsImpl.class);
    worker.registerActivitiesImplementations(
        new CommonActivityImpl(),
        new WASActivityscenarioexpansionImpl(),
        new WASActivitygetDataForCreateScenarioPageImpl(),
        new WASActivitygetDataForrfsetsPageImpl(),
        new WASActivityevalrfsetsImpl());
    factory.start();

On trying to start the worker I am getting the exception below. Please help me resolve it.

13:04:52.913 [main] WARN i.t.internal.retryer.GrpcSyncRetryer - Retrying after failure
io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 4.913540700s. [closed=, open=[[buffered_nanos=4921285600, waiting_for_connection]]]
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
at io.grpc.health.v1.HealthGrpc$HealthBlockingStub.check(HealthGrpc.java:252)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.lambda$checkHealth$2(WorkflowServiceStubsImpl.java:273)
at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:61)
at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:51)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.checkHealth(WorkflowServiceStubsImpl.java:266)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.&lt;init&gt;(WorkflowServiceStubsImpl.java:173)
at io.temporal.serviceclient.WorkflowServiceStubs.newInstance(WorkflowServiceStubs.java:51)
at io.temporal.serviceclient.WorkflowServiceStubs.newInstance(WorkflowServiceStubs.java:41)
at com.xyz.workflow.worker.WASWorkerscenarioexpansion.main(WASWorkerscenarioexpansion.java:39)
Exception in thread "main" io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 4.998587800s. [closed=, open=[[buffered_nanos=5008413300, waiting_for_connection]]]
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
at io.grpc.health.v1.HealthGrpc$HealthBlockingStub.check(HealthGrpc.java:252)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.lambda$checkHealth$2(WorkflowServiceStubsImpl.java:273)
at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:61)
at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:51)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.checkHealth(WorkflowServiceStubsImpl.java:266)
at io.temporal.serviceclient.WorkflowServiceStubsImpl.&lt;init&gt;(WorkflowServiceStubsImpl.java:173)
at io.temporal.serviceclient.WorkflowServiceStubs.newInstance(WorkflowServiceStubs.java:51)
at io.temporal.serviceclient.WorkflowServiceStubs.newInstance(WorkflowServiceStubs.java:41)
at com.xyz.workflow.worker.WASWorkerscenarioexpansion.main(WASWorkerscenarioexpansion.java:39)

Would you try adding the port to the target, i.e. "temporal-server.xyz.com:7233"? The target is passed to gRPC, which expects it in a specific format.

Same error even after setting the port.

What's the result of running

tctl --address temporal-server.xyz.com cl h

> The ssl offloading is happening at the istio gateway level

What does this mean? Is Istio adding TLS auth info to each gRPC request before forwarding it to one of the Temporal server frontend nodes?

If you need to pass TLS credentials with your request, see the sample here.
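For reference, a minimal sketch of passing an SslContext to the service stubs. It assumes the gateway terminates TLS on port 443 and that the JVM's default trust store can verify the gateway's certificate; the class wrapper is only there to make the snippet self-contained. `SimpleSslContextBuilder` lives in `io.temporal.serviceclient`:

```java
import io.temporal.serviceclient.SimpleSslContextBuilder;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

import javax.net.ssl.SSLException;

public class TlsStubsExample {
  public static void main(String[] args) throws SSLException {
    // noKeyOrCertChain(): client-side TLS without mTLS client certificates.
    // The JVM's default trust store is used to verify the server certificate.
    WorkflowServiceStubs service =
        WorkflowServiceStubs.newInstance(
            WorkflowServiceStubsOptions.newBuilder()
                .setTarget("temporal-server.xyz.com:443")
                .setSslContext(SimpleSslContextBuilder.noKeyOrCertChain().build())
                .build());
  }
}
```

Without `setSslContext`, the stubs speak plaintext gRPC, which a TLS-terminating gateway will reject before the HTTP/2 preface — consistent with the `waiting_for_connection` deadline errors above.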

The output of

tctl --address temporal-server.xyz.com:443 cl h

is

bash-5.1# tctl --address temporal-server.xyz.com:443 cl h
Error: Unable to get "temporal.api.workflowservice.v1.WorkflowService" health check status.
Error Details: rpc error: code = Unavailable desc = connection closed before server preface received
Stack trace:
goroutine 1 [running]:
runtime/debug.Stack()
/usr/local/go/src/runtime/debug/stack.go:24 +0x65
runtime/debug.PrintStack()
/usr/local/go/src/runtime/debug/stack.go:16 +0x19
github.com/temporalio/tctl/cli_curr.printError({0xc00014ce40, 0x54}, {0x25781c0, 0xc00011e4e8})
/home/builder/tctl/cli_curr/util.go:392 +0x21e
github.com/temporalio/tctl/cli_curr.ErrorAndExit({0xc00014ce40?, 0x25?}, {0x25781c0?, 0xc00011e4e8?})
/home/builder/tctl/cli_curr/util.go:403 +0x28
github.com/temporalio/tctl/cli_curr.HealthCheck(0x356bb40?)
/home/builder/tctl/cli_curr/clusterCommands.go:50 +0x174
github.com/temporalio/tctl/cli_curr.newClusterCommands.func1(0xc0004e18c0?)
/home/builder/tctl/cli_curr/cluster.go:36 +0x19
github.com/urfave/cli.HandleAction({0x1ca23a0?, 0x21c0fc8?}, 0x6?)
/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:526 +0x50
github.com/urfave/cli.Command.Run({{0x20de52a, 0x6}, {0x0, 0x0}, {0xc00011bd40, 0x1, 0x1}, {0x211fa7d, 0x20}, {0x0, …}, …}, …)
/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:173 +0x652
github.com/urfave/cli.(*App).RunAsSubcommand(0xc000464000, 0xc0004e1600)
/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:405 +0x91b
github.com/urfave/cli.Command.startApp({{0x20e0b84, 0x7}, {0x0, 0x0}, {0xc00011be20, 0x1, 0x1}, {0x2108ede, 0x18}, {0x0, …}, …}, …)
/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:372 +0x6e7
github.com/urfave/cli.Command.Run({{0x20e0b84, 0x7}, {0x0, 0x0}, {0xc00011be20, 0x1, 0x1}, {0x2108ede, 0x18}, {0x0, …}, …}, …)
/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:102 +0x808
github.com/urfave/cli.(*App).Run(0xc0007a3c00, {0xc00003a0a0, 0x5, 0x5})
/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:277 +0x8a7
main.main()
/home/builder/tctl/cmd/tctl/main.go:45 +0xa6

What might be the issue here? Is the traffic reaching my pod at all?

Also, running grpcurl against the endpoint gives me this:

docker run fullstorydev/grpcurl temporal-server.xyz.com:443 list
grpc.health.v1.Health
grpc.reflection.v1alpha.ServerReflection
temporal.api.workflowservice.v1.WorkflowService
temporal.server.api.adminservice.v1.AdminService

But if I list an individual service, it fails:

docker run fullstorydev/grpcurl temporal-server.xyz.com:443 list temporal.api.workflowservice.v1.WorkflowService
Failed to list methods for service "temporal.api.workflowservice.v1.WorkflowService": Symbol not found: temporal.api.workflowservice.v1.WorkflowService

Hi, my issue is resolved after I added setSslContext().

Final working code is:

    WorkflowServiceStubs service =
        WorkflowServiceStubs.newInstance(
            WorkflowServiceStubsOptions.newBuilder()
                .setTarget("temporal-server.xyz.com:443")
                .setSslContext(
                    SimpleSslContextBuilder.noKeyOrCertChain()
                        .setUseInsecureTrustManager(true)
                        .build())
                .build());
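Note that `setUseInsecureTrustManager(true)` disables server-certificate verification entirely. If the CA certificate that signed the gateway's certificate is available, a sketch of trusting it explicitly instead (the `/etc/certs/gateway-ca.pem` path and the class wrapper are assumptions for illustration, not from this thread; `setTrustManager` is a method on `SimpleSslContextBuilder`):

```java
import io.temporal.serviceclient.SimpleSslContextBuilder;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class TrustedGatewayExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical path to the CA certificate that signed the gateway's cert.
    try (InputStream caPem = new FileInputStream("/etc/certs/gateway-ca.pem")) {
      X509Certificate ca = (X509Certificate)
          CertificateFactory.getInstance("X.509").generateCertificate(caPem);

      // Build an in-memory trust store containing only that CA.
      KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
      trustStore.load(null, null);
      trustStore.setCertificateEntry("gateway-ca", ca);

      TrustManagerFactory tmf =
          TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
      tmf.init(trustStore);

      // Same stubs setup as above, but verifying the gateway's certificate
      // against the explicit CA instead of trusting everything.
      WorkflowServiceStubs service =
          WorkflowServiceStubs.newInstance(
              WorkflowServiceStubsOptions.newBuilder()
                  .setTarget("temporal-server.xyz.com:443")
                  .setSslContext(
                      SimpleSslContextBuilder.noKeyOrCertChain()
                          .setTrustManager(tmf.getTrustManagers()[0])
                          .build())
                  .build());
    }
  }
}
```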


    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: temporal-server-gateway
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        hosts:
        - temporal-server.xyz.com
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: temporal-server-vs
    spec:
      hosts:
      - temporal-server.xyz.com
      gateways:
      - temporal-server-gateway
      http:
      - match:
        - gateways:
          - temporal-server-gateway
        route:
        - destination:
            host: temporal
            port:
              number: 7233

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.26.0 (40646f47)
      creationTimestamp: null
      labels:
        io.kompose.service: temporal
      name: temporal
    spec:
      ports:
      - name: grpc
        protocol: TCP
        port: 7233
        targetPort: 7233
      selector:
        io.kompose.service: temporal