Deadline exceeded error with helm charts deployment

I am trying to run the Temporal server using helm-charts with a Yugabyte database. It deploys properly, but I hit the issue below when running the HelloActivity workflow from the samples code.

Exception in thread "main" io.temporal.client.WorkflowServiceException: workflowId='HelloActivityWorkflow', runId='', workflowType='GreetingWorkflow'}
	at io.temporal.internal.sync.WorkflowStubImpl.wrapStartException(WorkflowStubImpl.java:184)
	at io.temporal.internal.sync.WorkflowStubImpl.startWithOptions(WorkflowStubImpl.java:120)
	at io.temporal.internal.sync.WorkflowStubImpl.start(WorkflowStubImpl.java:138)
	at io.temporal.internal.sync.WorkflowInvocationHandler.startWorkflow(WorkflowInvocationHandler.java:192)
	at io.temporal.internal.sync.WorkflowInvocationHandler.access$300(WorkflowInvocationHandler.java:48)
	at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.startWorkflow(WorkflowInvocationHandler.java:314)
	at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:270)
	at io.temporal.internal.sync.WorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:178)
	at com.sun.proxy.$Proxy9.getGreeting(Unknown Source)
	at io.temporal.samples.hello.HelloActivity.main(HelloActivity.java:176)
Caused by: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9.999170042s. [closed=[], open=[[remote_addr=/127.0.0.1:7233]]]
	at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
	at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
	at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
	at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.startWorkflowExecution(WorkflowServiceGrpc.java:2627)
	at io.temporal.internal.external.GenericWorkflowClientExternalImpl.lambda$start$0(GenericWorkflowClientExternalImpl.java:88)
	at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:61)
Caused by: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9.999170042s. [closed=[], open=[[remote_addr=/127.0.0.1:7233]]]

	at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:51)
	at io.temporal.internal.external.GenericWorkflowClientExternalImpl.start(GenericWorkflowClientExternalImpl.java:81)
	at io.temporal.internal.client.RootWorkflowClientInvoker.start(RootWorkflowClientInvoker.java:55)
	at io.temporal.internal.sync.WorkflowStubImpl.startWithOptions(WorkflowStubImpl.java:113)
	... 8 more

I cloned the helm-charts repository from temporal.io.

git clone https://github.com/temporalio/helm-charts.git

Created a file in the values directory named "values.yugabyte.yaml" with the contents below:

server:
  config:
    persistence:
      default:
        driver: "sql"

        sql:
          driver: "postgres"
          host: _HOST_
          port: 5433
          database: temporal
          user: _USERNAME_
          password: _PASSWORD_
          maxConns: 20
          maxConnLifetime: "1h"

      visibility:
        driver: "sql"

        sql:
          driver: "postgres"
          host: _HOST_
          port: 5433
          database: temporal_visibility
          user: _USERNAME_
          password: _PASSWORD_
          maxConns: 20
          maxConnLifetime: "1h"

cassandra:
  enabled: false

mysql:
  enabled: false

postgresql:
  enabled: true

schema:
  setup:
    enabled: false
  update:
    enabled: false

Installation command:

helm install -f values/values.yugabyte.yaml temporaltest \
  --set prometheus.enabled=false \
  --set grafana.enabled=false \
  --set elasticsearch.enabled=false \
  --set server.config.persistence.default.sql.tls.caData=$TLS_CERT_FILE \
  --set server.config.persistence.default.sql.tls.enabled=true \
  --set server.config.persistence.visibility.sql.tls.caData=$TLS_CERT_FILE \
  --set server.config.persistence.visibility.sql.tls.enabled=true . --timeout 900s

It looks like the worker service is not able to communicate with the frontend service. Please guide.

Also, how can I check the logs being generated? Is any TLS-related configuration missing?

Are you able to execute any tctl commands against the service?

@maxim I am able to run the commands below:

> tctl --address temporaltest-frontend-headless:7233 cluster health
> tctl --address temporaltest-frontend-headless:7233 cluster get-search-attributes

But when I try to start a workflow using the tctl command, it fails:

> tctl wf start --tq <<task_queue_name>> --wt <<workflow_type>>

Error: Failed to create workflow.
Error Details: context deadline exceeded
Stack trace:

goroutine 1 [running]:
runtime/debug.Stack()
	/usr/local/go/src/runtime/debug/stack.go:24 +0x65
runtime/debug.PrintStack()
	/usr/local/go/src/runtime/debug/stack.go:16 +0x19
go.temporal.io/server/tools/cli.printError({0x1dbb50b, 0x1a}, {0x20dbf20, 0xc00000c3d8})
	/temporal/tools/cli/util.go:391 +0x22a
go.temporal.io/server/tools/cli.ErrorAndExit({0x1dbb50b, 0x24}, {0x20dbf20, 0xc00000c3d8})
	/temporal/tools/cli/util.go:402 +0x28
go.temporal.io/server/tools/cli.startWorkflowHelper.func1()
	/temporal/tools/cli/workflowCommands.go:228 +0x1a8
go.temporal.io/server/tools/cli.startWorkflowHelper(0xc00084e160, 0x0)
	/temporal/tools/cli/workflowCommands.go:265 +0x557
go.temporal.io/server/tools/cli.StartWorkflow(...)
	/temporal/tools/cli/workflowCommands.go:176
go.temporal.io/server/tools/cli.newWorkflowCommands.func3(0xc00084e160)
	/temporal/tools/cli/workflow.go:57 +0x1b
github.com/urfave/cli.HandleAction({0x19c7000, 0x1e4cbc8}, 0x5)
	/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:526 +0x50
github.com/urfave/cli.Command.Run({{0x1d8ea6b, 0x5}, {0x0, 0x0}, {0x0, 0x0, 0x0}, {0x1dc7658, 0x1e}, {0x0, …}, …}, …)
	/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:173 +0x652
github.com/urfave/cli.(*App).RunAsSubcommand(0xc00073b880, 0xc000207760)
	/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:405 +0x9ec
github.com/urfave/cli.Command.startApp({{0x1d95e01, 0x8}, {0x0, 0x0}, {0xc000818d50, 0x1, 0x1}, {0x1db918c, 0x19}, {0x0, …}, …}, …)
	/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:372 +0x6e9
github.com/urfave/cli.Command.Run({{0x1d95e01, 0x8}, {0x0, 0x0}, {0xc000818d50, 0x1, 0x1}, {0x1db918c, 0x19}, {0x0, …}, …}, …)
	/go/pkg/mod/github.com/urfave/cli@v1.22.5/command.go:102 +0x808
github.com/urfave/cli.(*App).Run(0xc00073b6c0, {0xc00003a070, 0x7, 0x7})
	/go/pkg/mod/github.com/urfave/cli@v1.22.5/app.go:277 +0x705
main.main()
	/temporal/cmd/tools/cli/main.go:37 +0x33

@maxim @tihomir Could you please guide me on this?

tctl cluster health checks the frontend service only, so this looks like a possible network/connectivity issue with the other Temporal services.
Every Temporal service supports the standard /grpc.health.v1.Health/Check gRPC endpoint. You should be able to use it to check all the services and see whether you get any errors.
Health checks can also be done via code; see clusterCommands.go in the temporalio/temporal repository on GitHub.

Note that fullWorkflowServiceName should be changed depending on which service you want to health check:

temporal.api.workflowservice.v1.WorkflowService (frontend)
temporal.server.api.historyservice.v1.HistoryService (history)
temporal.server.api.matchingservice.v1.MatchingService (matching)
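For example, a quick way to probe each service from Java is to call that health endpoint directly. The snippet below is only a rough sketch: it assumes the io.grpc:grpc-services artifact is on the classpath and uses the default helm-chart service names and gRPC ports, so adjust the targets to your deployment.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.health.v1.HealthCheckRequest;
import io.grpc.health.v1.HealthCheckResponse;
import io.grpc.health.v1.HealthGrpc;

public class ServiceHealthProbe {
    public static void main(String[] args) {
        // Assumed in-cluster addresses based on the default helm-chart naming and ports;
        // change these to match your release name and environment.
        check("temporaltest-frontend-headless:7233", "temporal.api.workflowservice.v1.WorkflowService");
        check("temporaltest-history-headless:7234", "temporal.server.api.historyservice.v1.HistoryService");
        check("temporaltest-matching-headless:7235", "temporal.server.api.matchingservice.v1.MatchingService");
    }

    private static void check(String target, String fullWorkflowServiceName) {
        ManagedChannel channel = ManagedChannelBuilder.forTarget(target).usePlaintext().build();
        try {
            // Standard gRPC health check; the service name selects which Temporal service to probe.
            HealthCheckResponse response = HealthGrpc.newBlockingStub(channel)
                    .check(HealthCheckRequest.newBuilder().setService(fullWorkflowServiceName).build());
            System.out.println(target + " -> " + response.getStatus());
        } catch (Exception e) {
            System.out.println(target + " -> health check failed: " + e.getMessage());
        } finally {
            channel.shutdownNow();
        }
    }
}

If the frontend check succeeds but history or matching fails, that would line up with a connectivity or persistence problem behind the frontend.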

@tihomir I am able to connect properly and create the default namespace via tctl as well. The issue appears when running the sample workflow after modifying the WorkflowServiceStubsOptions to point to the frontend service, which has been exposed as a LoadBalancer.
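For reference, the stubs are created roughly like this (just a sketch; the LoadBalancer address is a placeholder):

import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

public class FrontendConnection {
    public static void main(String[] args) {
        // Point the service stubs at the frontend LoadBalancer instead of the default 127.0.0.1:7233.
        // "<LOAD_BALANCER_IP>" is a placeholder; the SDK's default per-RPC timeout of about 10 seconds
        // is what surfaces as "deadline exceeded after 9.99...s" in the exception below.
        WorkflowServiceStubsOptions options = WorkflowServiceStubsOptions.newBuilder()
                .setTarget("<LOAD_BALANCER_IP>:7233")
                .build();
        WorkflowServiceStubs service = WorkflowServiceStubs.newInstance(options);
        WorkflowClient client = WorkflowClient.newInstance(service);
        System.out.println("WorkflowClient created: " + client);
    }
}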

Exception in thread "main" io.temporal.client.WorkflowServiceException: workflowId='HelloActivityWorkflow', runId='', workflowType='GreetingWorkflow'}
	at io.temporal.internal.sync.WorkflowStubImpl.wrapStartException(WorkflowStubImpl.java:184)
	at io.temporal.internal.sync.WorkflowStubImpl.startWithOptions(WorkflowStubImpl.java:120)
	at io.temporal.internal.sync.WorkflowStubImpl.start(WorkflowStubImpl.java:138)
	at io.temporal.internal.sync.WorkflowInvocationHandler.startWorkflow(WorkflowInvocationHandler.java:192)
	at io.temporal.internal.sync.WorkflowInvocationHandler.access$300(WorkflowInvocationHandler.java:48)
	at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.startWorkflow(WorkflowInvocationHandler.java:314)
	at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:270)
	at io.temporal.internal.sync.WorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:178)
	at com.sun.proxy.$Proxy9.getGreeting(Unknown Source)
	at io.temporal.samples.hello.HelloActivity.main(HelloActivity.java:179)
Caused by: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9.998204000s. [closed=[], open=[[remote_addr=/34.68.62.212:7233]]]
	at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
	at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
	at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
	at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.startWorkflowExecution(WorkflowServiceGrpc.java:2627)
	at io.temporal.internal.external.GenericWorkflowClientExternalImpl.lambda$start$0(GenericWorkflowClientExternalImpl.java:88)
	at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:61)
	at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:51)
	at io.temporal.internal.external.GenericWorkflowClientExternalImpl.start(GenericWorkflowClientExternalImpl.java:81)
	at io.temporal.internal.client.RootWorkflowClientInvoker.start(RootWorkflowClientInvoker.java:55)
	at io.temporal.internal.sync.WorkflowStubImpl.startWithOptions(WorkflowStubImpl.java:113)
	... 8 more
Caused by: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9.998204000s. [closed=[], open=[[remote_addr=/34.68.62.212:7233]]]

Logs from the history pod:

{"level":"error","ts":"2022-01-10T16:09:07.328Z","msg":"Operation failed with internal error.","error":"GetVisibilityTasks operation failed. Select failed. Error: context deadline exceeded","metric-scope":18,"logging-call-at":"persistenceMetricClients.go:1329","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/temporal/common/persistence/persistenceMetricClients.go:1329\ngo.temporal.io/server/common/persistence.(*executionPersistenceClient).GetVisibilityTasks\n\t/temporal/common/persistence/persistenceMetricClients.go:363\ngo.temporal.io/server/service/history.(*visibilityQueueProcessorImpl).readTasks\n\t/temporal/service/history/visibilityQueueProcessor.go:292\ngo.temporal.io/server/service/history.(*queueAckMgrImpl).readQueueTasks.func1\n\t/temporal/service/history/queueAckMgr.go:108\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/service/history.(*queueAckMgrImpl).readQueueTasks\n\t/temporal/service/history/queueAckMgr.go:112\ngo.temporal.io/server/service/history.(*queueProcessorBase).processBatch\n\t/temporal/service/history/queueProcessor.go:245\ngo.temporal.io/server/service/history.(*queueProcessorBase).processorPump\n\t/temporal/service/history/queueProcessor.go:209"}

{"level":"error","ts":"2022-01-10T16:09:07.348Z","msg":"Operation failed with internal error.","error":"GetTransferTasks operation failed. Select failed. Error: context deadline exceeded","metric-scope":14,"logging-call-at":"persistenceMetricClients.go:1329","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/temporal/common/persistence/persistenceMetricClients.go:1329\ngo.temporal.io/server/common/persistence.(*executionPersistenceClient).GetTransferTasks\n\t/temporal/common/persistence/persistenceMetricClients.go:335\ngo.temporal.io/server/service/history.(*transferQueueProcessorBase).readTasks\n\t/temporal/service/history/transferQueueProcessorBase.go:76\ngo.temporal.io/server/service/history.(*queueAckMgrImpl).readQueueTasks.func1\n\t/temporal/service/history/queueAckMgr.go:108\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/service/history.(*queueAckMgrImpl).readQueueTasks\n\t/temporal/service/history/queueAckMgr.go:112\ngo.temporal.io/server/service/history.(*queueProcessorBase).processBatch\n\t/temporal/service/history/queueProcessor.go:245\ngo.temporal.io/server/service/history.(*queueProcessorBase).processorPump\n\t/temporal/service/history/queueProcessor.go:209"}

{"level":"error","ts":"2022-01-10T16:09:07.362Z","msg":"Operation failed with internal error.","error":"UpdateShard failed. Failed to start transaction. Error: context deadline exceeded","metric-scope":2,"logging-call-at":"persistenceMetricClients.go:1329","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/temporal/common/persistence/persistenceMetricClients.go:1329\ngo.temporal.io/server/common/persistence.(*shardPersistenceClient).UpdateShard\n\t/temporal/common/persistence/persistenceMetricClients.go:173\ngo.temporal.io/server/service/history/shard.(*ContextImpl).renewRangeLocked\n\t/temporal/service/history/shard/context_impl.go:931\ngo.temporal.io/server/service/history/shard.(*ContextImpl).acquireShard.func1\n\t/temporal/service/history/shard/context_impl.go:1583\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/service/history/shard.(*ContextImpl).acquireShard\n\t/temporal/service/history/shard/context_impl.go:1616"}

Logs from the matching pod:

{"level":"info","ts":"2022-01-10T15:48:22.738Z","msg":"none","service":"matching","component":"matching-engine","wf-task-queue-name":"/_sys/HelloActivityTaskQueue/3","wf-task-queue-type":"Workflow","wf-namespace":"default","lifecycle":"Started","logging-call-at":"taskQueueManager.go:246"}

{"level":"error","ts":"2022-01-10T15:48:38.281Z","msg":"Operation failed with internal error.","error":"UpdateTaskQueue operation failed. Failed to commit transaction. Error: sql: transaction has already been committed or rolled back","metric-scope":42,"logging-call-at":"persistenceMetricClients.go:1329","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/temporal/common/persistence/persistenceMetricClients.go:1329\ngo.temporal.io/server/common/persistence.(*taskPersistenceClient).UpdateTaskQueue\n\t/temporal/common/persistence/persistenceMetricClients.go:762\ngo.temporal.io/server/service/matching.(*taskQueueDB).UpdateState\n\t/temporal/service/matching/db.go:110\ngo.temporal.io/server/service/matching.(*taskReader).persistAckLevel\n\t/temporal/service/matching/taskReader.go:268\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:184\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

{"level":"error","ts":"2022-01-10T15:48:38.281Z","msg":"Persistent store operation failure","service":"matching","component":"matching-engine","wf-task-queue-name":"/_sys/HelloActivityTaskQueue/2","wf-task-queue-type":"Activity","wf-namespace":"default","store-operation":"update-task-queue","error":"UpdateTaskQueue operation failed. Failed to commit transaction. Error: sql: transaction has already been committed or rolled back","logging-call-at":"taskReader.go:187","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:187\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

{"level":"error","ts":"2022-01-10T15:48:38.775Z","msg":"transaction rollback error","error":"sql: transaction has already been committed or rolled back","logging-call-at":"common.go:75","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence/sql.(*SqlStore).txExecute\n\t/temporal/common/persistence/sql/common.go:75\ngo.temporal.io/server/common/persistence/sql.(*sqlTaskManager).UpdateTaskQueue\n\t/temporal/common/persistence/sql/task.go:191\ngo.temporal.io/server/common/persistence.(*taskManagerImpl).UpdateTaskQueue\n\t/temporal/common/persistence/task_manager.go:174\ngo.temporal.io/server/common/persistence.(*taskRateLimitedPersistenceClient).UpdateTaskQueue\n\t/temporal/common/persistence/persistenceRateLimitedClients.go:526\ngo.temporal.io/server/common/persistence.(*taskPersistenceClient).UpdateTaskQueue\n\t/temporal/common/persistence/persistenceMetricClients.go:758\ngo.temporal.io/server/service/matching.(*taskQueueDB).UpdateState\n\t/temporal/service/matching/db.go:110\ngo.temporal.io/server/service/matching.(*taskReader).persistAckLevel\n\t/temporal/service/matching/taskReader.go:268\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:184\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

{"level":"error","ts":"2022-01-10T15:48:38.776Z","msg":"Operation failed with internal error.","error":"UpdateTaskQueue: driver: bad connection","metric-scope":42,"logging-call-at":"persistenceMetricClients.go:1329","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/temporal/common/persistence/persistenceMetricClients.go:1329\ngo.temporal.io/server/common/persistence.(*taskPersistenceClient).UpdateTaskQueue\n\t/temporal/common/persistence/persistenceMetricClients.go:762\ngo.temporal.io/server/service/matching.(*taskQueueDB).UpdateState\n\t/temporal/service/matching/db.go:110\ngo.temporal.io/server/service/matching.(*taskReader).persistAckLevel\n\t/temporal/service/matching/taskReader.go:268\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:184\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

{"level":"error","ts":"2022-01-10T15:48:38.776Z","msg":"Persistent store operation failure","service":"matching","component":"matching-engine","wf-task-queue-name":"/_sys/HelloActivityTaskQueue/1","wf-task-queue-type":"Activity","wf-namespace":"default","store-operation":"update-task-queue","error":"UpdateTaskQueue: driver: bad connection","logging-call-at":"taskReader.go:187","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:187\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

{"level":"error","ts":"2022-01-10T15:48:38.786Z","msg":"Operation failed with internal error.","error":"UpdateTaskQueue operation failed. Failed to commit transaction. Error: sql: transaction has already been committed or rolled back","metric-scope":42,"logging-call-at":"persistenceMetricClients.go:1329","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/temporal/common/persistence/persistenceMetricClients.go:1329\ngo.temporal.io/server/common/persistence.(*taskPersistenceClient).UpdateTaskQueue\n\t/temporal/common/persistence/persistenceMetricClients.go:762\ngo.temporal.io/server/service/matching.(*taskQueueDB).UpdateState\n\t/temporal/service/matching/db.go:110\ngo.temporal.io/server/service/matching.(*taskReader).persistAckLevel\n\t/temporal/service/matching/taskReader.go:268\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:184\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

{"level":"error","ts":"2022-01-10T15:48:38.786Z","msg":"Persistent store operation failure","service":"matching","component":"matching-engine","wf-task-queue-name":"HelloActivityTaskQueue","wf-task-queue-type":"Workflow","wf-namespace":"default","store-operation":"update-task-queue","error":"UpdateTaskQueue operation failed. Failed to commit transaction. Error: sql: transaction has already been committed or rolled back","logging-call-at":"taskReader.go:187","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/service/matching.(*taskReader).getTasksPump\n\t/temporal/service/matching/taskReader.go:187\ngo.temporal.io/server/internal/goro.(*Group).Go.func1\n\t/temporal/internal/goro/group.go:57"}

It looks like database operations are failing from the history and matching services, while no relevant logs were found in the frontend service pod. Please guide.

"Failed to commit transaction. Error: sql: transaction has already been committed or rolled back"

I think this indicates issues with your DB: possibly an undersized instance, network latency in its replies, or issues with the query dialect/parser.
Also, does Yugabyte support the Cassandra protocol? It might be worth trying Temporal's Cassandra persistence config instead of what you defined in your initial post, for example: https://github.com/temporalio/temporal/blob/master/config/development.yaml#L1

Thanks @tihomir. I will check if there is any network issue connecting to the database.