Jake
September 23, 2024, 12:43pm
1
Is there a good way to debug connection issues to a local Temporal Server? I’m working with the .NET SDK and am hitting the following error:
```
System.InvalidOperationException: Connection failed: Server connection error: tonic::transport::Error(Transport, ConnectError(ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))
   at Temporalio.Bridge.Client.ConnectAsync(Runtime runtime, TemporalConnectionOptions options)
   at Temporalio.Client.TemporalConnection.GetBridgeClientAsync()
   at Temporalio.Client.TemporalConnection.ConnectAsync(TemporalConnectionOptions options)
   at Temporalio.Client.TemporalClient.ConnectAsync(TemporalClientConnectOptions options)
   at Temporalio.Extensions.Hosting.TemporalWorkerService.ExecuteAsync(CancellationToken stoppingToken)
   at Microsoft.Extensions.Hosting.Internal.Host.TryExecuteBackgroundServiceAsync(BackgroundService backgroundService)
"EventId":{"Id":9,"Name":"BackgroundServiceFaulted"},"SourceContext":"Microsoft.Extensions.Hosting.Internal.Host","Application":"Blueprint.Web","Environment":"Development"
```
I’m running a local Temporal server using the latest Docker image, following the instructions here: Deploying a Temporal Service | Temporal Documentation.
I’m trying to establish a connection via the following in my Startup.cs:

```csharp
services.AddHostedTemporalWorker(
        clientTargetHost: "localhost:7233",
        TemporalConstants.TemporalNamespace,
        TemporalConstants.TemporalTaskQueue)
    .AddScopedActivities<MyWorkflowActivities>()
    .AddWorkflow<MyWorkflow>();
```
I’ve also tried swapping out “localhost:7233” for “127.0.0.1:7233”.
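OS error 111 (`ConnectionRefused`) means the TCP handshake itself is being rejected — from the client’s point of view, nothing is listening on `localhost:7233` — so one way to take the SDK out of the picture is a bare socket probe. A minimal sketch in plain Python (no Temporal dependency; the hosts and port are just the defaults from the setup above):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ConnectionRefused (errno 111), timeouts, DNS failures, etc.
        return False

if __name__ == "__main__":
    # 7233 is the Temporal frontend's default gRPC port.
    for host in ("localhost", "127.0.0.1"):
        print(host, tcp_reachable(host, 7233))
```

If this prints `False` for both addresses, the problem is in how the server is exposed to the host (port publishing, bind address), not in the worker registration code.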
Any help would be much appreciated!
Jake
September 23, 2024, 1:02pm
2
I’ve also included the docker.yaml file that was generated at /etc/temporal/config/docker.yaml:
```yaml
log:
  stdout: true
  level: info
persistence:
  numHistoryShards: 4
  defaultStore: default
  visibilityStore: es-visibility
  datastores:
    default:
      sql:
        pluginName: "postgres12"
        databaseName: "temporal"
        connectAddr: "postgresql:5432"
        connectProtocol: "tcp"
        user: "temporal"
        password: "temporal"
        maxConns: 20
        maxIdleConns: 20
        maxConnLifetime: 1h
        tls:
          enabled: false
          caFile:
          certFile:
          keyFile:
          enableHostVerification: false
          serverName:
    visibility:
      sql:
        pluginName: "postgres12"
        databaseName: "temporal_visibility"
        connectAddr: "postgresql:5432"
        connectProtocol: "tcp"
        user: "temporal"
        password: "temporal"
        maxConns: 10
        maxIdleConns: 10
        maxConnLifetime: 1h
        tls:
          enabled: false
          caFile:
          certFile:
          keyFile:
          enableHostVerification: false
          serverName:
    es-visibility:
      elasticsearch:
        version: v7
        url:
          scheme: http
          host: "elasticsearch:9200"
        username: ""
        password: ""
        indices:
          visibility: "temporal_visibility_v1_dev"
global:
  membership:
    maxJoinDuration: 30s
    broadcastAddress: ""
  pprof:
    port: 0
  tls:
    refreshInterval: 0s
    expirationChecks:
      warningWindow: 0s
      errorWindow: 0s
      checkInterval: 0s
    internode:
      # This server section configures the TLS certificate that internal temporal
      # cluster nodes (history, matching, and internal-frontend) present to other
      # clients within the Temporal Cluster.
      server:
        requireClientAuth: false
        certFile:
        keyFile:
        certData:
        keyData:
      # This client section is used to configure the TLS clients within
      # the Temporal Cluster that connect to an Internode (history, matching, or
      # internal-frontend)
      client:
        serverName:
        disableHostVerification: false
    frontend:
      # This server section configures the TLS certificate that the Frontend
      # server presents to external clients.
      server:
        requireClientAuth: false
        certFile:
        keyFile:
        certData:
        keyData:
      # This client section is used to configure the TLS clients within
      # the Temporal Cluster (specifically the Worker role) that connect to the Frontend service
      client:
        serverName:
        disableHostVerification: false
  authorization:
    jwtKeyProvider:
      keySourceURIs:
      refreshInterval: 1m
    permissionsClaimName: permissions
    authorizer:
    claimMapper:
services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnIP: "172.18.0.4"
      httpPort: 7243
  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
      bindOnIP: "172.18.0.4"
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
      bindOnIP: "172.18.0.4"
  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
      bindOnIP: "172.18.0.4"
clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 10
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "frontend"
      rpcAddress: 127.0.0.1:7233
      httpAddress: 127.0.0.1:7243
dcRedirectionPolicy:
  policy: "noop"
archival:
  history:
    state: "enabled"
    enableRead: true
    provider:
      filestore:
        fileMode: "0666"
        dirMode: "0766"
  visibility:
    state: "enabled"
    enableRead: true
    provider:
      filestore:
        fileMode: "0666"
        dirMode: "0766"
namespaceDefaults:
  archival:
    history:
      state: "disabled"
      URI: "file:///tmp/temporal_archival/development"
    visibility:
      state: "disabled"
      URI: "file:///tmp/temporal_vis_archival/development"
dynamicConfigClient:
  filepath: "config/dynamicconfig/development-sql.yaml"
  pollInterval: "60s"
```
Jake
September 24, 2024, 4:52pm
3
This ended up being a networking issue: the individual components were set up correctly, but the way they were stitched together was not.
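For anyone who lands here with the same error: the exact fix isn’t spelled out above, but one common way a containerized setup like this fails is that the frontend’s gRPC port is never published to the host, so a client dialing `localhost:7233` from outside Docker is refused. A compose sketch of the relevant wiring (image tags, service names, and the network name are illustrative assumptions, not taken from this thread):

```yaml
# Illustrative sketch only - images, service names, and the network
# name are assumptions, not the poster's actual files.
services:
  postgresql:
    image: postgres:12
    networks:
      - temporal-network

  temporal:
    image: temporalio/auto-setup:latest
    depends_on:
      - postgresql
    ports:
      # Publish the frontend gRPC port so a client on the host
      # can actually reach localhost:7233.
      - "7233:7233"
    networks:
      - temporal-network

networks:
  temporal-network:
    driver: bridge
```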